Detection of movement onset using EMG signals for upper-limb exoskeletons in reaching tasks


Trigili et al. Journal of NeuroEngineering and Rehabilitation (2019) 16:45

RESEARCH - Open Access

Detection of movement onset using EMG signals for upper-limb exoskeletons in reaching tasks

Emilio Trigili1*†, Lorenzo Grazi1†, Simona Crea1,2, Alessandro Accogli1, Jacopo Carpaneto1, Silvestro Micera1,3, Nicola Vitiello1,2† and Alessandro Panarese1†

Abstract

Background: To assist people with disabilities, exoskeletons must be provided with human-robot interfaces and smart algorithms capable of identifying the user's movement intentions. Surface electromyographic (sEMG) signals could be suitable for this purpose, but their applicability in shared control schemes for real-time operation of assistive devices in daily-life activities is limited by high inter-subject variability, which requires custom calibrations and training. Here, we developed a machine-learning-based algorithm for detecting the user's motion intention from electromyographic signals, and discussed its applicability for controlling an upper-limb exoskeleton for people with severe arm disabilities.

Methods: Ten healthy participants, sitting in front of a screen while wearing the exoskeleton, were asked to perform several reaching movements toward three LEDs, presented in a random order. EMG signals from seven upper-limb muscles were recorded. Data were analyzed offline and used to develop an algorithm that identifies the onset of the movement across two different events: moving from a resting position toward the LED (Go-forward), and going back to the resting position (Go-backward). A set of subject-independent time-domain EMG features was selected according to information theory, and their probability distributions corresponding to rest and movement phases were modeled by means of a two-component Gaussian Mixture Model (GMM). The detection of movement onset by two types of detectors was tested: the first type based on features extracted from single muscles, the second on features from multiple muscles.
Their performances in terms of sensitivity, specificity and latency were assessed for the two events with a leave-one-subject-out test method.

Results: The onset of movement was detected with a maximum sensitivity of 89.3% for Go-forward and 60.9% for Go-backward events. Best performances in terms of specificity were 96.2% and 94.3%, respectively. For both events the algorithm was able to detect the onset before the actual movement, while the computational load was compatible with real-time applications.

Conclusions: The detection performances and the low computational load make the proposed algorithm promising for the control of upper-limb exoskeletons in real-time applications. Fast initial calibration also makes it suitable for helping people with severe arm disabilities to perform assisted functional tasks.

Keywords: Upper-limb exoskeleton, Electromyography, Human-robot interface, Onset detection

* Correspondence: emilio.trigili@santannapisa.it
† Emilio Trigili and Lorenzo Grazi contributed equally to this work.
† Nicola Vitiello and Alessandro Panarese share the senior authorship.
1 The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
Full list of author information is available at the end of the article

Background

Exoskeletons are wearable robots exhibiting a close physical and cognitive interaction with their human users. Over the last years, several exoskeletons have been developed for different purposes, such as augmenting human strength [1], rehabilitating neurologically impaired individuals [2] or assisting people affected by neuromusculoskeletal disorders in activities of daily life [3]. For all these applications, the design of cognitive Human-Robot Interfaces (cHRIs) is paramount [4]; indeed, understanding the user's intention allows the device to be controlled with the final goal of facilitating the execution of the intended movement. The flow of information from the human user to the robot control unit is particularly crucial when exoskeletons are used to assist people with compromised movement capabilities (e.g. post-stroke or spinal-cord-injured people), by amplifying their movements with the goal of restoring functions.

In recent years, different approaches have been pursued to design cHRIs, based on both invasive and non-invasive methods. Implantable electrodes, placed directly into the brain or other electrically excitable tissues, record signals directly from the peripheral or central nervous system or muscles, with high resolution and high precision [5]. Non-invasive approaches exploit different bio-signals: some examples are electroencephalography (EEG) [6], electrooculography (EOG) [7], and brain-machine interfaces (BMIs) combining the two of them [8-10]. In addition, a well-consolidated non-invasive approach is based on surface electromyography (sEMG) [11], which has been successfully used for controlling robotic prostheses and exoskeletons due to its inherent intuitiveness and effectiveness [12-14]. Compared to EEG signals, sEMG signals are easy to acquire and process, and provide effective information on the movement that the person is executing or about to start executing.
Despite the above-mentioned advantages, the use of surface EMG signals still has several drawbacks, mainly related to their time-varying nature and high inter-subject variability, due to differences in the activity level of the muscles and in their activation patterns [11, 15], which require custom calibrations and specific training for each user [16]. For these reasons, notwithstanding the intuitiveness of EMG interfaces, their efficacy and usability in shared human-machine control schemes for upper-limb exoskeletons are still under discussion. Furthermore, the need for significant signal processing can limit the use of EMG signals in on-line applications, for which fast detection is paramount. In this scenario, machine learning methods have been employed to recognize the EMG onset in real time, using different classifiers such as Support Vector Machines, Linear Discriminant Analysis, Hidden Markov Models, Neural Networks, Fuzzy Logic and others [15-17]. In this process, a set of features is first selected in the time, frequency, or time-frequency domain [18]. Time-domain features extract information associated with signal amplitude in non-fatiguing contractions; when fatigue effects are predominant, frequency-domain features are more representative; finally, time-frequency domain features better capture transient effects of muscular contractions.

Before feeding the features into the classifier, dimensionality reduction is usually performed to increase classification performance while reducing complexity [19]. The most common strategies for reduction are: (i) feature projection, which maps the set of features into a new set with reduced dimensionality (e.g., linear mapping through Principal Component Analysis); (ii) feature selection, in which a subset of features is selected according to specific criteria aimed at optimizing a chosen objective function. All the above-mentioned classification approaches ensure good performance under controlled laboratory conditions.
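As a minimal sketch of the first strategy (feature projection via PCA), the snippet below projects a hypothetical window-by-feature matrix onto the principal components that explain at least 95% of the variance. The data, the 95% threshold, and the 500 x 14 dimensions are illustrative assumptions, not values from this study.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 500 sliding windows x 14 time-domain EMG features
X = rng.normal(size=(500, 14))

# PCA via SVD: center the data, decompose, keep the top-k components
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)          # variance ratio per component
k = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1
reduced = Xc @ Vt[:k].T                  # project onto the first k components

print(reduced.shape)  # (number of windows, reduced dimensionality)
```

The number of windows is preserved while the feature dimensionality can only shrink, which is the point of the projection step before classification.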
Nevertheless, in order to be used effectively in real-life scenarios, smart algorithms must be developed that are able to adapt to changes in environmental conditions and to intra-subject variability (e.g. changes of the background noise level of the EMG signals), as well as to inter-subject variability [20].

In this paper, we exploited a cHRI combining sEMG and an upper-limb robotic exoskeleton to quickly detect the user's motion intention. We implemented offline an unsupervised machine-learning algorithm, using a set of subject-independent time-domain EMG features selected according to information theory. The probability distributions of the rest and movement phases of the set of features were modelled by means of a two-component Gaussian Mixture Model (GMM). The algorithm simulates an online application and implements a sequential method to adapt the GMM parameters during the testing phase, in order to deal with changes of background noise levels during the experiment, or fluctuations in EMG peak amplitudes due to muscle adaptation or fatigue. Features were extracted from two different signal sources, namely onset detectors, which were tested offline, and their performance in terms of sensitivity (or true positive rate), specificity (or true negative rate) and latency (delay in onset detection) was assessed for two different events, i.e. two transitions from rest to movement phases at different initial conditions. The two events were selected in order to replicate a possible application scenario of the proposed system. Based on the results we obtained, we discussed the applicability of the algorithm to the control of an upper-limb exoskeleton used as an assistive device for people with severe arm disabilities.

Materials and methods

Experimental setup

The experimental setup includes: (i) an upper-limb powered exoskeleton (NESM), (ii) a visual interface, and (iii) a commercial EMG recording system (TeleMyo 2400R, Noraxon Inc., AZ, US).

NESM upper-limb exoskeleton

NESM (Fig. 1a) is a shoulder-elbow powered exoskeleton designed for the mobilization of the right upper limb [21, 22], developed at The BioRobotics Institute of Scuola Superiore Sant'Anna (Italy). The exoskeleton mechanical structure hangs from a standing frame and comprises four active and eight passive degrees of freedom (DOFs), along with different mechanisms for size regulation to improve the comfort and wearability of the device.

The four active DOFs are all rotational joints, mounted in a serial kinematic chain. Four actuation units, corresponding to the four active DOFs, allow shoulder adduction/abduction (sAA), flexion/extension (sFE) and internal/external rotation (sIE), and elbow flexion/extension (eFE). Each actuation unit is realized with a Series Elastic Actuation (SEA) architecture [23], employing a custom torsional spring [24] and two absolute encoders to measure the joint angle and the joint torque, as explained in [21]. SEAs reduce the mechanical stiffness of the actuator and allow easy implementation of position and torque control.

The NESM control system runs on a real-time controller, namely an sbRIO-9632 (National Instruments, Austin, TX, US), endowed with a 400 MHz processor running an NI real-time operating system and a Xilinx Spartan-3 field-programmable gate array (FPGA). The high-level layer runs at 100 Hz, whereas the low-level layer runs at 1 kHz.

NESM control modes

Low-level control

The low-level layer allows the exoskeleton to be operated in two control modalities, namely joint position and joint torque control modes. In the position control mode, each actuator drives the joint position to follow a reference angle trajectory: this control mode is used if the arm of the user has no residual movement capabilities and needs to be passively guided by the exoskeleton.
If the user has residual movement capabilities but is not able to entirely perform a certain motor task, the exoskeleton can be controlled in torque mode: each actuation unit can supply an assistive torque to help the user accomplish the movement; we refer to transparent mode when a null torque is commanded as reference. Both control modes are implemented by means of closed-loop controllers, independent for each actuation unit. The controllers are proportional-integral-derivative (PID) regulators, operating on the error between the desired control variable (angle or torque) and the measured control variable (joint angle or joint torque). Safety checks are implemented when switching from one control mode to the other, in order to avoid undesired movements of the exoskeleton.

Fig. 1 a Experimental setup, comprising NESM, EMG electrodes and the visual interface; b Location of the electrodes for EMG acquisition; c Timing and sequence of actions performed by the user during a single trial

High-level control

The high-level layer implements the control strategies that provide movement assistance. A graphical user interface (GUI) has been implemented in the LabVIEW environment. The GUI allows the operator to (i) set the desired control mode and control parameters, (ii) visualize joint angles, torques and EMG signals, (iii) launch the visual interface, and (iv) save data. The NESM high-level controller also implements a gravity compensation algorithm to counteract the gravity torque due to the exoskeleton's weight. A more detailed description of the control modes and their performance can be found in [21, 22].

Visual interface

A visual interface (Fig. 1a) displayed three LEDs (west - W, center - C, and east - E) for the reaching movements, placed at different positions on a computer screen (15 cm apart, at left, center, and right, respectively). The visual interface was implemented in LabVIEW and launched from the NESM GUI.

EMG recording and acquisition system

EMG signals from seven muscles of the right shoulder (Trapezius, Anterior and Posterior Deltoid), arm (Biceps and Triceps Brachii) and forearm (Flexor and Extensor Carpi Ulnaris) were amplified (1000x) and band-pass filtered (10-500 Hz) through a TeleMyo 2400R system (Noraxon Inc., AZ, US). The location of the electrodes is shown in Fig. 1b.
The sbRIO-9632 interfaced with the TeleMyo analog output channels: EMG signals were sampled by the FPGA layer at 1 kHz and sent to the real-time layer for visualization and data storage.

Subjects

A total of 10 healthy subjects (8 male, 2 female, age 26 ± 5 years) participated in the experiment, and they all provided written informed consent. The procedures were approved by the Institutional Review Board at The BioRobotics Institute, Scuola Superiore Sant'Anna, and complied with the principles of the Declaration of Helsinki.

Experimental protocol

Upon arrival, subjects were prepared for the experiment. Participants wore a t-shirt, and the EMG electrodes were applied to the skin according to the recommendations provided by SENIAM [25]. Then, subjects wore the exoskeleton with the help of the experimenter, and the size regulations were adjusted to fit the user's anthropometry. The subjects sat in front of a screen showing the visual interface, with the center of the right shoulder aligned with the central LED, in order to allow symmetric movements toward the left and right LEDs.

Seven sessions per subject were performed, each consisting of 24 reaching movements, with 5 min of rest between sessions to avoid muscular fatigue. The targets (i.e. the LEDs) were presented in random order. For each reaching trial, the subjects were instructed to:

- keep a resting position as long as all the LEDs were turned off,
- as soon as one LED turned on, move the arm towards it and touch the screen,
- keep the position (touching the screen) as long as the LED was turned on,
- as soon as the LED turned off, move back to the resting position.

Each trial was set to a duration of T = 12 s; within this duration, the LED was turned on for TON = 6 s (Fig. 1c). When the LED turned on, the exoskeleton control mode was automatically set to transparent mode, to allow the subject to start the movement and reach the target.
After TR1 = 2.5 s, the control mode was automatically set to position control for a duration of TR2 = 3.5 s; notably, TR1 was set long enough to ensure subjects could reach the target. When the LED turned off, subjects were asked to flex the elbow until the measured eFE torque exceeded the threshold τthr = 2 N·m; this value was used to discriminate a voluntary action of the user, to switch the exoskeleton control mode back to transparent mode and let the subject move the arm back to the resting position. The LED was off for TOFF = 6 s, and then a new trial was started.

EMG data processing and feature extraction

The EMG signals were hardware-filtered on the Noraxon TeleMyo device with high-pass and anti-aliasing low-pass filters for all channels, to achieve a pass band between 10 and 500 Hz. Digital signals were then converted to analog by the Noraxon TeleMyo and sent to the analog-digital converter of the NESM FPGA layer, operating at a sampling frequency of 1 kHz. Although the cut-off frequency of the anti-aliasing filter was close to the theoretical Nyquist frequency, it was the best filtering option available with our hardware setup. For offline analysis, an additional high-pass filter (Butterworth, 4th order) with a cut-off frequency of 10 Hz was necessary to remove low-frequency components from the data collected from the FPGA. A notch filter at 50 Hz was then used to eliminate residual powerline interference. We considered 14 time-domain features to extract information from the EMG signals [26]. Features were computed within a sliding window of 300 ms (10 ms update interval). A description of the features and their mathematical formulation can be found in the Appendix.

Motion intention detection

For each trial, within each reaching movement, the EMG signals were segmented into two phases: rest, corresponding to the phase in which the upper limb was kept still in the initial resting position, and movement, corresponding to the phase in which the upper limb was moving towards or was voluntarily touching the target. This transition from rest to movement was defined as the Go-forward event.

A similar approach was adopted for retracting movements. The EMG signals were segmented into two phases: rest, corresponding to the phase in which the upper limb was held fixed near the target by the exoskeleton (in position control), and movement, corresponding to the phase in which the upper limb was moving (or trying to move, when the exoskeleton was in position control) to return to the initial resting position. The transition from rest to movement was defined as the Go-backward event.
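The offline filtering and sliding-window feature extraction described above can be sketched as follows on a synthetic 1 kHz signal: a 4th-order Butterworth high-pass at 10 Hz, a 50 Hz notch, then two illustrative time-domain features (mean absolute value and waveform length) over 300 ms windows updated every 10 ms. The synthetic data and the choice of these two features are assumptions; the paper's full 14-feature set is defined in its Appendix.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

fs = 1000                        # sampling frequency (Hz)
t = np.arange(0, 2.0, 1 / fs)    # 2 s of synthetic "EMG"
rng = np.random.default_rng(1)
emg = rng.normal(scale=0.1, size=t.size)
emg += 0.5 * np.sin(2 * np.pi * 50 * t)    # powerline interference
emg += 0.3 * np.sin(2 * np.pi * 0.5 * t)   # low-frequency drift

# 4th-order Butterworth high-pass at 10 Hz, then a 50 Hz notch
b_hp, a_hp = butter(4, 10, btype="highpass", fs=fs)
b_n, a_n = iirnotch(50, Q=30, fs=fs)
clean = filtfilt(b_n, a_n, filtfilt(b_hp, a_hp, emg))

# Sliding 300 ms window with a 10 ms update (in samples at 1 kHz)
win, step = 300, 10
starts = range(0, clean.size - win + 1, step)
mav = np.array([np.mean(np.abs(clean[s:s + win])) for s in starts])         # mean absolute value
wl = np.array([np.sum(np.abs(np.diff(clean[s:s + win]))) for s in starts])  # waveform length

print(len(mav))  # one feature value per window position
```

Zero-phase filtering (`filtfilt`) is appropriate for this kind of offline analysis; a causal filter would be used instead in the online setting discussed later.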
Figure 2a shows, for a representative subject, the kinematic and kinetic data used for the discrimination of the two events, together with the raw EMG signals for two representative muscles.

To detect both events, the probability distribution of each feature corresponding to the rest and movement phases was modeled by a Gaussian Mixture Model (GMM), in which the density function of each feature is a linear mixture of two Gaussian curves, each representing the distribution of that feature within a given phase.

GMM training phase

The parameters of the two-component GMM were estimated using an unsupervised approach based on the Expectation Maximization (EM) algorithm [27]. The GMM probability density function is given by:

$$ p(x; \lambda_M) = w_{\mathrm{rest}} \frac{1}{\sqrt{2\pi\sigma^2_{\mathrm{rest}}}} \, e^{-\frac{(x-\mu_{\mathrm{rest}})^2}{2\sigma^2_{\mathrm{rest}}}} + w_{\mathrm{mov}} \frac{1}{\sqrt{2\pi\sigma^2_{\mathrm{mov}}}} \, e^{-\frac{(x-\mu_{\mathrm{mov}})^2}{2\sigma^2_{\mathrm{mov}}}} \quad (1) $$

or, equivalently:

$$ p(x; \lambda_M) = w_{\mathrm{rest}} \, p(x \mid \mathrm{rest}; \lambda_M) + w_{\mathrm{mov}} \, p(x \mid \mathrm{mov}; \lambda_M) \quad (2) $$

where μrest and μmov are the means, and σ²rest and σ²mov are the variances of the Gaussian distributions for the rest and movement phases, respectively. The parameters wrest and wmov represent the a priori probabilities of the rest and movement phases. The modeling problem involves estimating the parameter set λM = {wrest, wmov, μrest, μmov, σ²rest, σ²mov} from a training window of M · L samples of the observed signal. Given the tra
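A minimal numpy sketch of fitting the two-component GMM of Eqs. (1)-(2) with EM is shown below, on synthetic one-dimensional feature values with a low-amplitude "rest" mode and a higher-amplitude "movement" mode. The data, initialization, and iteration count are assumptions for illustration, not the paper's actual training procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic 1-D feature samples: "rest" mode around 0.1, "movement" mode around 0.8
x = np.concatenate([rng.normal(0.1, 0.05, 700),    # rest phase
                    rng.normal(0.8, 0.20, 300)])   # movement phase

def gauss(x, mu, var):
    """Gaussian density, as in each term of Eq. (1)."""
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# EM for lambda_M = {w, mu, sigma^2} of the two components
w = np.array([0.5, 0.5])
mu = np.array([x.min(), x.max()])
var = np.array([x.var(), x.var()])
for _ in range(200):
    # E-step: posterior responsibility of each component for each sample
    resp = w[:, None] * gauss(x, mu[:, None], var[:, None])
    resp /= resp.sum(axis=0)
    # M-step: re-estimate weights, means and variances
    n = resp.sum(axis=1)
    w = n / x.size
    mu = (resp * x).sum(axis=1) / n
    var = (resp * (x - mu[:, None]) ** 2).sum(axis=1) / n

print(np.round(mu, 2))  # estimated means of the two phases
```

With well-separated modes, the estimated means and weights recover the generating parameters; the posterior p(mov | x) from the E-step is the kind of quantity an onset detector can threshold.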
