
Evaluating the versatility of EEG models generated from motor imagery tasks: An exploratory investigation on upper-limb elbow-centered motor imagery tasks

  • Xin Zhang,

    Roles Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Menrva Research Group, Schools of Mechatronic Systems Engineering and Engineering Science, Simon Fraser University, Metro Vancouver, British Columbia, Canada

  • Xinyi Yong,

    Roles Writing – review & editing

    Affiliation Menrva Research Group, Schools of Mechatronic Systems Engineering and Engineering Science, Simon Fraser University, Metro Vancouver, British Columbia, Canada

  • Carlo Menon

    Roles Conceptualization, Funding acquisition, Project administration, Supervision, Writing – review & editing

    cmenon@sfu.ca

    Affiliation Menrva Research Group, Schools of Mechatronic Systems Engineering and Engineering Science, Simon Fraser University, Metro Vancouver, British Columbia, Canada

Abstract

Electroencephalography (EEG) has recently been considered for use in the rehabilitation of people with motor deficits. EEG data from the motor imagery of different body movements have been used, for instance, as an EEG-based control method to send commands to rehabilitation devices that assist people in performing a variety of motor tasks. However, going through data collection and model training for every rehabilitation task is both time-consuming and effortful. In this paper, we investigate the possibility of using an EEG model from one type of motor imagery (e.g., elbow extension and flexion) to classify EEG from other types of motor imagery activities (e.g., opening a drawer). To study the problem, we focused on the elbow joint. Specifically, nine kinesthetic motor imagery tasks involving the elbow were investigated in twelve healthy individuals who participated in the study. While models from goal-oriented motor imagery tasks had higher accuracy than models from simple joint tasks in intra-task testing (e.g., a model from the elbow extension and flexion task tested on EEG data collected from the elbow extension and flexion task), models from simple joint tasks had higher accuracies than the others in inter-task testing (e.g., a model from the elbow extension and flexion task tested on EEG data collected from the drawer opening task). Simple single-joint motor imagery tasks could, therefore, be considered for training models to potentially reduce the number of repetitive data acquisitions and model-training sessions in rehabilitation applications.

Introduction

Several brain-computer interfaces (BCIs) are based on electroencephalography (EEG). EEG measures the electrical brain activity generated by the flow of electric currents during the synaptic excitation of neuronal dendrites [1]. Recently, research on EEG-controlled systems has become particularly active, as EEG measurement is non-invasive and easy to set up [2–6].

Different EEG-based control approaches have been explored in different populations to assist individuals to reacquire basic abilities for communication [7] and mobility (e.g., control of neuroprostheses [8–10] and wheelchairs [11]). Recently, research groups have also explored the use of EEG-controlled systems in stroke rehabilitation, in order to encourage users to be actively engaged during the rehabilitation process [3,12]. A current challenge is to develop EEG-controlled systems for a large number of tasks with high accuracy [4]. To overcome this problem, the building of binary classification models for each task has been investigated [13]. However, repetitively acquiring EEG data and building EEG models for each task requires considerable effort on the part of the user and is also time-consuming. A possible solution is to build a general EEG model based on the EEG data of a specific movement, which can be reused in different but similar training tasks (general model approach, GM for short).

Motor imagery is a common method for EEG control in the literature [4,14]. Motor imagery can be either goal-oriented or related to a single joint. Goal-oriented motor imagery refers to imagery of context-specific movements, such as grasping a glass of water for drinking or eating with a spoon [15]. On the other hand, single joint motor imagery, as referred to in this paper, consists of imagining a single joint movement that is not goal-oriented and has no specific meaningful purpose. Examples of single joint motor imagery include imagining flexing or extending the elbow, the wrist, or another joint without grasping an object or performing any specific function [15].

Studies have shown that practice of goal-oriented tasks after stroke produces long-lasting cortical reorganization compared to traditional stroke rehabilitation [15–17]. Additionally, Boyd et al. demonstrated that goal-oriented task training with the hemiparetic arm resulted in functional reorganization of both motor cortices and a larger motor learning-related change after stroke [18].

Despite the importance of goal-oriented tasks in stroke rehabilitation, most existing EEG-controlled systems were developed to perform simple movements rather than goal-oriented tasks (see Table 1). Only a few studies considered goal-oriented tasks (e.g., Frisoli A. et al. [19], Royer A.S. et al. [20], Min B.K. et al. [21]).

Table 1. Examples of different EEG control setup and tasks used in the literature.

https://doi.org/10.1371/journal.pone.0188293.t001

Recent literature has shown that the motor imagery (MI) of goal-oriented movements is better than that of non-goal-oriented movements in terms of achieving higher EEG control accuracy [13]. However, in practical rehabilitation applications, participants would have to spend time and effort in repetitive data acquisition and model training for each different goal-oriented task. On the other hand, the use of a GM could potentially reduce the training time drastically, as training would be done on a single task. However, it is not known whether an EEG model trained using the EEG signals of the motor imagery of a single upper-extremity movement (e.g., elbow flexion and extension) could be used to classify the motor imagery of other similar movements (e.g., opening a door, combing hair, placing a ball into a basket, etc.). To the best of the authors’ knowledge, it is also not known which movement would work best to generate the GM. Investigating whether a model can be reused in different training tasks is an important problem to be addressed, especially in EEG-controlled rehabilitation applications, where each goal-oriented movement is generally functionally different from the others.

The main goal of this exploratory study is to determine which motor imagery task is the most suitable for making the EEG model versatile during EEG acquisition, i.e., which yields the highest inter-task test accuracy. Specifically, the versatility of nine different motor imagery tasks was considered in this paper. In this context, versatility means that the EEG model generated from one specific motor imagery task leads to good performance when tested on the EEG data of other motor imagery tasks. In this study, six classification methods were used to generate the EEG models of the nine predefined motor imagery tasks. Then, the EEG data from the other eight motor imagery tasks were used to test the inter-task test accuracy of each EEG model. Finally, a statistical analysis was performed to determine which motor imagery task was the most versatile when used as a GM.

Given the complexity of the problem, this exploratory study focuses only on upper-extremity movements to simplify the investigation. Specifically, all the tasks were selected to be centered on the elbow joint.

Methods

All the methods within this study were in compliance with the Declaration of Helsinki. The study was also approved by the Simon Fraser University (SFU) Office of Research Ethics.

Twelve participants (aged 20–33 years, 10 males and 2 females) agreed to join the study. All participants signed informed consent forms before taking part in the experiment. Each individual was seated in front of a computer monitor, which provided a simple Graphical User Interface (GUI) that displayed pictures or cues to the participant.

Experimental protocol

A 32-channel EGI Geodesic N400 system (Electrical Geodesics Inc., Eugene, OR, USA) was used to acquire the EEG data from the participants. EEG data were amplified and recorded at a sampling rate of 1 kHz. The electrode contact sites are shown in Fig 1. Seventeen channels were used in this study, as the remaining channels were located on the face (the EGI cap does not allow repositioning of the electrodes). All participants were requested to wear the EGI sensor net for approximately 40 minutes during the experiment, and they could take a break if desired.

Fig 1. Contact montage of the EEG system in the experiment; 17 channels were used.

Cz was defined as the reference contact by the EGI system, COM was the common ground contact.

https://doi.org/10.1371/journal.pone.0188293.g001

EEG data were collected using the Stimulus Presentation mode in BCI2000 [45]. During Stimulus Presentation, customized pictures were shown on the screen while the EEG signals were recorded and filtered with a 0.1–40 Hz bandpass filter. In this study, the pictures for ten different tasks were randomly selected and displayed on the screen. These pictures are presented in Fig 2. The participants were asked to repetitively perform the kinaesthetic motor imagery task displayed on the screen for 4 seconds without actually moving. Kinaesthetic motor imagery means that the participants were required to perform the imagined movement by focusing on the sensation of the movement [46].

Fig 2.

Pictures of the tasks used in the Stimulus Presentation, where: (a) Rest Task, rest and stay alert; (b) Elbow Task, imagine elbow flexion and extension; (c) Drawer Task, imagine opening and closing a drawer; (d) Soup Task, imagine drinking soup with a spoon; (e) Weight Task, imagine lifting and putting down a dumbbell; (f) Door Task, imagine opening and closing a door; (g) Plate Task, imagine cleaning a plate; (h) Comb Task, imagine combing hair; (i) Pizza Task, imagine cutting a pizza with a pizza cutter; and (j) Pick&Place Task, imagine picking up a ball and putting it into a basket.

https://doi.org/10.1371/journal.pone.0188293.g002

In this study, nine motor imagery tasks involving upper-limb movements were chosen. Tasks were selected to primarily involve the elbow joint. These motor imagery tasks can be divided into three main categories: 1) simple joint tasks that do not have any contextual meaning; in this paper, we chose the Elbow Task, Drawer Task, and Weight Task; 2) simple elbow joint tasks that are commonly executed in daily life and require a relatively low level of synergy of other joints; in this paper, we chose the Door Task, Plate Task, and Comb Task; and 3) goal-oriented tasks, which require trajectory planning and multiple joint synergies; in this paper, we chose the Soup Task, Pizza Task, and Pick&Place Task. The specific instructions given to the participants for the ten tasks are summarized below:

  1. Rest (Fig 2(A)): rest while looking at the center of the cross;
  2. Elbow task (Fig 2(B)): kinaesthetically imagine flexing and extending the elbow of the dominant arm;
  3. Drawer task (Fig 2(C)): kinaesthetically imagine opening and closing a drawer with the dominant hand;
  4. Soup task (Fig 2(D)): kinaesthetically imagine getting a spoonful of soup and drinking the soup using the dominant hand;
  5. Weight task (Fig 2(E)): kinaesthetically imagine lifting and putting down a dumbbell with the dominant hand;
  6. Door task (Fig 2(F)): kinaesthetically imagine opening and closing a door with the dominant hand on the door knob;
  7. Plate task (Fig 2(G)): kinaesthetically imagine cleaning a plate with only elbow extension and flexion movement;
  8. Comb task (Fig 2(H)): kinaesthetically imagine combing hair with the dominant hand;
  9. Pizza task (Fig 2(I)): kinaesthetically imagine cutting a pizza with a pizza cutter with the dominant hand;
  10. Pick&Place task (Fig 2(J)): kinaesthetically imagine picking up a ball and placing it into a basket with the dominant hand.

During the Stimulus Presentation, each picture was displayed on the screen for 4–6 seconds, followed by 4–6 seconds of rest; the timing was randomized by the software in order to prevent participants from adapting. When a picture was displayed on the screen, the participant was requested to perform the motor imagery of the corresponding task repetitively for 1–2 repetitions. For each participant, the test consisted of 15 consecutive runs. Each run consisted of 4 Rest, 4 Elbow Tasks, and 16 other tasks (2 for each of the remaining tasks), and lasted approximately 3 minutes. Participants could rest for as long as needed between two runs. The participants were required to follow the stimulus on the screen and, while the picture was shown, to perform the respective task repetitively for 2–3 repetitions. As in many MI studies reported in the literature, electromyography (EMG) was not recorded [47–49]. To ensure compliance with the protocol, one observer monitored the participants to ensure they were not moving during the task. In the case of the slightest movement, the recorded data were disregarded, and the participant was asked to repeat the experiment.

Participants

Twelve healthy participants, aged between 20 and 33, participated in this study. Their demographic data are presented in Table 2.

Feature extraction and classification

The acquired data were analyzed using BCILAB [50], a BCI toolbox based on Matlab. The data were first resampled at 250 Hz. Then, a finite impulse response (FIR) bandpass filter was used to extract the 6–35 Hz frequency band. By band-pass filtering the data, ocular artifacts and other undesired frequency components of the EEG were minimized. This frequency band covers the mu and beta rhythms, which have been reported to desynchronize during motor imagery [51]. According to the literature, the band power changes of the mu and beta rhythms have been used in BCI systems to classify EEG signals related to motor imagery [52–54]. These activities are localized in the mu (7–13 Hz) and beta (13–30 Hz) bands. Therefore, the band power (BP) of a certain frequency band can be used as a basic feature for classification [51,55]. However, ERD/ERS signals can overlap in time and space with multiple signals from different brain tasks. For this reason, in some cases, it may not be sufficient to use simple methods such as a band-pass filter to extract the desired band power. The literature suggests that spatial filters, such as the common spatial pattern (CSP), can be appropriate [56]. The performance of a spatial filter depends on its operational frequency band; therefore, we also included the filter bank CSP (FBCSP) to avoid this potential problem [57,58].
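For readers who want to reproduce this preprocessing step outside of BCILAB, the following minimal Python sketch illustrates the same chain (downsampling to 250 Hz followed by an FIR band-pass at 6–35 Hz). The filter length and the use of zero-phase filtering are assumptions chosen for illustration, not details taken from the original pipeline.

```python
# Minimal preprocessing sketch (not the original BCILAB code): downsample
# 1 kHz EEG to 250 Hz, then apply a zero-phase FIR band-pass of 6-35 Hz.
import numpy as np
from scipy.signal import resample_poly, firwin, filtfilt

FS_RAW, FS_NEW = 1000, 250        # acquisition and target sampling rates (Hz)
BAND = (6.0, 35.0)                # pass band covering the mu and beta rhythms
NUM_TAPS = 251                    # assumed FIR length (odd -> linear phase)

def preprocess(eeg_raw):
    """eeg_raw: array of shape (n_channels, n_samples) recorded at 1 kHz."""
    eeg = resample_poly(eeg_raw, up=1, down=FS_RAW // FS_NEW, axis=1)
    taps = firwin(NUM_TAPS, BAND, pass_zero=False, fs=FS_NEW)
    return filtfilt(taps, [1.0], eeg, axis=1)

# Example: 17 channels, 10 s of simulated data
filtered = preprocess(np.random.randn(17, 10 * FS_RAW))
```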

As each participant had a different reaction time to the stimulus, nine different epoch periods were extracted from the EEG data to find the optimal epoch that led to the best EEG control performance. The different epochs are presented in Table 3.

In this paper, BP [59], CSP [53], and FBCSP [57] were used as feature extraction algorithms for each EEG epoch. Detailed information is presented in Table 4.
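As an illustration of two of these feature extractors, the sketch below computes log band-power features and a textbook CSP decomposition from band-passed epochs. It is a generic formulation under assumed array shapes, not the exact BCILAB implementation; the number of spatial filters is likewise an assumption.

```python
# Generic band-power (BP) and CSP feature extraction from band-passed epochs.
import numpy as np
from scipy.linalg import eigh

def band_power_features(epochs):
    """epochs: (n_trials, n_channels, n_samples); log-variance per channel
    approximates band power within the 6-35 Hz filtered band."""
    return np.log(np.var(epochs, axis=2))

def fit_csp(epochs_a, epochs_b, n_csp=6):
    """Learn CSP spatial filters separating two classes (e.g. task vs rest)."""
    cov = lambda e: np.mean([x @ x.T / np.trace(x @ x.T) for x in e], axis=0)
    ca, cb = cov(epochs_a), cov(epochs_b)
    # Generalized eigenvectors of (Ca, Ca + Cb); the extreme eigenvalues give
    # directions of maximal variance for one class and minimal for the other.
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    pick = np.r_[order[: n_csp // 2], order[-(n_csp // 2):]]
    return vecs[:, pick].T                       # (n_csp, n_channels)

def csp_features(epochs, w):
    projected = np.einsum('fc,tcs->tfs', w, epochs)
    return np.log(np.var(projected, axis=2))     # (n_trials, n_csp)
```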

The features were then sent to classifiers. Since we wanted to evaluate the influence of the different motor imagery tasks in this paper, we limited ourselves to basic classifiers. In this study, linear discriminant analysis (LDA) and the dual-augmented Lagrangian (DAL) method were used for classification. All classifiers were regularized during training. For LDA, analytical covariance shrinkage was used for regularization [60]. For DAL, the dual-spectral logistic norm was used for regularization, with λ grid-searched from 2^(−15) to 2^(10) in multiplicative steps of 2 [61]. A binary classifier was generated for the EEG features obtained from the Rest Task data and each of Tasks (b)–(j), respectively. A 5×5 cross-validation method was used to validate the performance of the classifiers.
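A hedged sketch of the classification and validation step is shown below, using scikit-learn's shrinkage LDA as a stand-in for the analytically shrunk LDA used in the study and repeated stratified 5-fold cross-validation to approximate the 5×5 scheme. The DAL classifier is not reproduced here.

```python
# Regularized LDA with covariance shrinkage, evaluated with a repeated
# stratified 5-fold scheme approximating the 5x5 cross-validation above.
# scikit-learn's 'auto' (Ledoit-Wolf) shrinkage stands in for the analytic
# shrinkage used in the study.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

def evaluate_shrinkage_lda(features, labels):
    """features: (n_trials, n_features); labels: 0 = rest, 1 = task."""
    clf = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto')
    cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
    scores = cross_val_score(clf, features, labels, cv=cv)
    return scores.mean(), scores.std()

# Example with synthetic features for one task-vs-rest model (30 + 30 trials)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (30, 6)), rng.normal(0.8, 1.0, (30, 6))])
y = np.r_[np.zeros(30), np.ones(30)]
print(evaluate_shrinkage_lda(X, y))
```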

We used 3 feature types (i.e., BP, CSP, and FBCSP) and 2 classifiers (LDA, DAL), which resulted in 6 models per epoch for each participant. We considered 9 epochs, which resulted in 54 different models (3×2×9 = 54). We selected the best model for each motor imagery task and each participant. Each participant performed 9 different tasks, and we invited 12 participants; we therefore obtained 108 models in total (9×12 = 108). By doing this, we set a uniform, objective classification standard for all nine motor imagery tasks. The performance of the models from these motor imagery tasks is presented in the following sections.
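The selection of the best of the 54 candidate models per task and participant can be summarized by the following sketch; the callables and option lists are placeholders rather than actual toolbox functions.

```python
# Selection of the best of the 54 candidate models (9 epochs x 3 feature
# types x 2 classifiers) for one task and one participant.
from itertools import product

def select_best_model(epoch_windows, feature_extractors, classifiers, score_fn):
    """score_fn(epoch, feature, classifier) -> cross-validation accuracy."""
    best = None
    for epoch, feat, clf in product(epoch_windows, feature_extractors, classifiers):
        acc = score_fn(epoch, feat, clf)
        if best is None or acc > best[0]:
            best = (acc, epoch, feat, clf)
    return best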

Model training and testing

The main goal of this work was to assess the versatility of the EEG models derived from different motor imagery tasks. We studied this as an inter-task problem, where the model generated from one type of motor imagery task was tested with data from another motor imagery task. The data were collected to investigate this inter-task problem. Specifically, 30 trials (T) for each of the 9 motor imagery tasks (i.e., T1–T9) were collected. For each task, the data were randomized. Furthermore, 60 trials of rest were recorded; after randomization, they were divided into two groups: training (RTR) and testing (RTE). Therefore, a total of 330 trials (i.e., 30 trials × 9 motor imagery tasks + 30 rest trials for training (RTR) + 30 rest trials for testing (RTE)) were recorded.

During training, 9 two-class models were created for each participant. Each model, corresponding to a single task, was trained using the 30 trials of rest (RTR) collected for training purposes (class 1) and the 30 trials related to the single task in question (class 2). Specifically, Model 1 (m1_INTER) was trained using T1 and RTR, Model 2 (m2_INTER) was trained using T2 and RTR, etc. Table 5 shows the training datasets for each model. A 5-fold cross-validation was used to generate the models during training.

Table 5. Data usage in training models for inter-task problem.

https://doi.org/10.1371/journal.pone.0188293.t005

For testing, each model was tested with data collected for the other models. Specifically, m1 was tested with 8 testing datasets, the first being T2+RTE, the second T3+RTE, the third T4+RTE, etc. Table 6 shows the data usage in the testing datasets.
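The inter-task training and testing pairing described above and in Tables 5 and 6 can be summarized as follows; the data containers and the train/test callables are placeholders for whatever pipeline is used.

```python
# Inter-task scheme: model m_k is trained on task k plus the rest-training
# trials (RTR) and tested on every other task paired with the rest-testing
# trials (RTE).
def inter_task_evaluation(task_trials, rest_train, rest_test, train_fn, test_fn):
    """task_trials: dict {task_name: list of 30 trials}."""
    results = {}
    for name, trials in task_trials.items():
        model = train_fn(task=trials, rest=rest_train)        # mk_INTER
        results[name] = {
            other: test_fn(model, task=other_trials, rest=rest_test)
            for other, other_trials in task_trials.items() if other != name
        }
    return results      # results[model_task][tested_task] = test accuracy
```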

Before running the inter-task problem, the authors wanted to ensure that the considered BP/CSP/FBCSP+LDA/DAL methods were suitable for the motor imagery tasks considered. Therefore, an intra-task problem was first addressed. In this case, each model had to be tested with data collected from the same motor imagery task (e.g., a model trained with T1 could not be tested with T2 as in the inter-task case, since T1 and T2 are datasets related to different tasks and thus not suitable for the intra-task case). For this reason, the 30 trials of each task were divided into training and testing datasets for the intra-task case. Specifically, 24 trials of each motor imagery task (e.g., T1_TR) together with 24 trials of the Rest Task (Rintra_TR) were used for training. The remaining six trials of the same motor imagery task (e.g., T1_TE) together with 6 trials of the Rest Task (Rintra_TE) were used for testing. Table 7 shows the training and testing datasets for each model.
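A minimal sketch of the intra-task 24/6 split is given below; the shuffling seed and trial container are assumptions for illustration.

```python
# Intra-task split: 24 of the 30 trials of a task (plus 24 rest trials) are
# used for training and the remaining 6 (plus 6 rest trials) for testing.
import numpy as np

def intra_task_split(trials, n_train=24, seed=0):
    """trials: sequence of 30 trials for one task; returns (train, test)."""
    idx = np.random.default_rng(seed).permutation(len(trials))
    train = [trials[i] for i in idx[:n_train]]
    test = [trials[i] for i in idx[n_train:]]
    return train, test
```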

Table 7. Training and testing datasets for the intra-task problem.

https://doi.org/10.1371/journal.pone.0188293.t007

The coefficient of determination (R2 value)

The coefficient of determination (R2 value) is a statistical measure computed over a pair of sample distributions; it measures how strongly the means of the two distributions differ in relation to their variance [62]. In a BCI context, the R2 value is computed over signals that have been measured under two different task conditions. It represents the fraction of the total signal variance caused by the difference between tasks [62], and it is thus a measure of how well the task condition is reflected in the brain activity [62].

The R2 value at each electrode location was computed for all participants and all combinations of different tasks in order to investigate the topographical distribution, on the scalp, of the difference between rest and the other imagery tasks. The frequency that generated the highest R2 value was used to generate the topography. The 6–32 Hz frequency range was considered for this representation, as motor imagery was investigated.
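The r² measure described above can be computed as the squared point-biserial correlation between a per-trial feature (e.g., band power at one electrode and frequency) and the condition label, as sketched below under assumed input shapes.

```python
# r^2 as the squared point-biserial correlation between a per-trial feature
# value and the condition label (rest vs motor imagery).
import numpy as np

def r_squared(feature_rest, feature_task):
    """feature_rest, feature_task: 1-D arrays of per-trial feature values
    (e.g. band power at one electrode and one frequency)."""
    x = np.concatenate([feature_rest, feature_task])
    y = np.concatenate([np.zeros(len(feature_rest)), np.ones(len(feature_task))])
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2       # fraction of feature variance explained by the task

# A scalp topography is obtained by evaluating r_squared per electrode at the
# frequency with the largest value and mapping the result onto the montage.
```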

Results

This section reports the results of the inter-task problem, which is the main focus of this work, as well as the results of the intra-task problem, which was used to assess the validity of the BP/CSP/FBCSP+LDA/DAL method.

Inter-task problem: Cross-validation results using the training dataset

For the inter-task problem, the models were generated according to Table 5. Fig 3 summarizes the distribution of the feature-extraction algorithms and classifiers used to obtain the models. Among all the feature and classifier combinations, CSP together with LDA was the most common: it accounted for 35% of all 108 models. The BP feature with LDA accounted for 30% of the models.

Fig 3. Distribution of the classification method of the highest cross-validation accuracy.

https://doi.org/10.1371/journal.pone.0188293.g003

The cross-validation accuracy achieved for each of the nine EEG models and each participant is shown in Table 8. This table reports the highest cross-validation accuracy obtained from the optimal combination of epoch period, feature extraction method, and classifier discussed earlier.

Table 8. 5x5 cross-validation accuracy for each participant.

https://doi.org/10.1371/journal.pone.0188293.t008

As shown in Table 8, the task with the highest cross-validation accuracy was subject-specific. H10 achieved the highest mean cross-validation accuracy (0.935±0.033) among the participants; this participant achieved the highest cross-validation accuracy for the Pick&Place Task (0.997±0.023). H6, on the other hand, had the lowest mean cross-validation accuracy (0.739±0.037); for this participant, the motor imagery task with the highest cross-validation accuracy was the Comb Task (0.792±0.160). Fig 4 shows the 5×5 cross-validation accuracy averaged across participants. The average cross-validation accuracy ranged from 0.793±0.062 to 0.847±0.076, with the Pizza Task having the highest and the Drawer Task the lowest mean cross-validation accuracy. A one-way analysis of variance (ANOVA) was used to check for differences in cross-validation accuracy among tasks; no statistically significant difference was found (p = 0.536).
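The task-wise comparison reported here corresponds to a one-way ANOVA over the per-participant cross-validation accuracies, which can be reproduced with SciPy as sketched below; the input container is a placeholder.

```python
# One-way ANOVA across the nine tasks' cross-validation accuracies
# (one value per participant and task).
from scipy.stats import f_oneway

def compare_tasks_anova(acc_by_task):
    """acc_by_task: {task_name: list of per-participant accuracies}."""
    stat, p = f_oneway(*acc_by_task.values())
    return p
```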

Fig 4. Mean 5×5 cross-validation accuracy for different motor imagery tasks.

https://doi.org/10.1371/journal.pone.0188293.g004

Inter-task problem: Testing result

The models were generated and tested as described in Table 6 for the inter-task problem. The test accuracies obtained from the inter-task tests are summarized in Table 9. More specifically, the model for each motor imagery task was tested on the 30 trials of each of the eight other motor imagery tasks. For example, the model generated from the Elbow Task was tested with EEG data from all the other tasks, but not from the Elbow Task. All test accuracies for all EEG models were greater than 0.5. Table 9 also shows that the Weight Task model has the highest average inter-task test accuracy; that is, it has the highest average accuracy when tested on data from the other motor imagery tasks.

The mean values reported in the last column of Table 9 summarize the averaged inter-task test accuracy of the models generated from the nine motor imagery tasks; this indicates the ability of the models to classify EEG data from other motor imagery tasks. The mean values reported in the last row of Table 9 summarize the averaged inter-task test accuracy for the EEG data of the nine motor imagery tasks, which indicates how well the EEG data of each task are classified by models from the other tasks. The mean model test accuracy ranges from 0.543±0.023 to 0.605±0.022; the model generated from the Weight Task data has the highest mean inter-task test accuracy, while the model generated from the Plate Task data has the lowest. The mean data test accuracy ranges from 0.553±0.025 to 0.620±0.022; the data from the Elbow Task have the highest mean inter-task test accuracy and the data from the Drawer Task the lowest.

A Shapiro-Wilk test was performed to assess the normality of the test accuracies for the different task data in Table 9. The test accuracies for the Drawer, Soup, Plate, Pizza, and Pick&Place models were not normally distributed (p = 0.030, 0.002, 0.030, 0.012, and 0.006, respectively). A Kruskal-Wallis test showed that the inter-task test accuracies were statistically different across tasks (p = 2.6×10^−5); see Fig 5.
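The normality check and the non-parametric comparison can be reproduced with SciPy as in the following sketch; the input container is again a placeholder.

```python
# Shapiro-Wilk normality checks per task followed by a Kruskal-Wallis test
# across tasks, mirroring the analysis described above.
from scipy.stats import shapiro, kruskal

def inter_task_stats(acc_by_task):
    """acc_by_task: {task_name: per-participant inter-task accuracies}."""
    normality_p = {name: shapiro(vals)[1] for name, vals in acc_by_task.items()}
    h_stat, p_kw = kruskal(*acc_by_task.values())
    return normality_p, p_kw
```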

Fig 5. Box plot for the Kruskal-Wallis test result for the inter-task testing accuracy.

https://doi.org/10.1371/journal.pone.0188293.g005

In the post-hoc analysis, Dunn & Šidák's approach was used [63]. The model from the Weight Task has a statistically higher inter-task test accuracy than the models from the Soup Task, Door Task, Plate Task, and Pizza Task (p<0.05). No statistical difference was found among the Elbow Task, Drawer Task, and Weight Task (p>0.05); see Table 10.

Table 10. Dunn & Sidák post-hoc analysis of the inter-task testing accuracy.

Checkmarks indicate models whose inter-task accuracies are significantly different (p<0.05).

https://doi.org/10.1371/journal.pone.0188293.t010

Coefficient of determination analysis result

The averaged R2 values for the different tasks are shown in Fig 6. One of our participants (H5) was left-handed; the channels of his EEG were therefore flipped between the left and right hemispheres in this analysis.

Fig 6. EEG R2 analysis for different motor imagery tasks, averaged among participants.

(a) R2 value mapping for Rest Task vs Elbow Task; (b) R2 value mapping for Rest Task vs Drawer Task; (c) R2 value mapping for Rest Task vs Soup Task; (d) R2 value mapping for Rest Task vs Weight Task; (e) R2 value mapping for Rest Task vs Door Task; (f) R2 value mapping for Rest Task vs Plate Task; (g) R2 value mapping for Rest Task vs Comb Task; (h) R2 value mapping for Rest Task vs Pizza Task; (i) R2 value mapping for Rest Task vs Pick&Place Task. Motor-imagery-related activities with high R2 values are labeled with a black box.

https://doi.org/10.1371/journal.pone.0188293.g006

From Fig 6, we can see that most of the EEG activity is located in the central and parietal areas. Most of the EEG activity for the different motor imagery tasks (at the C3 channel) is located around 12–20 Hz. The peak activity for all the motor imagery tasks was always centered around 18 Hz in the C3 and P3 channels. In addition, some activity was found in the F8 channel between 6–16 Hz, which might be related to motor planning [64,65]. Since both types of activity were observed around 16 Hz, the topographical analysis at 16 Hz is shown in Fig 7 for H10, who had the highest cross-validation accuracy during training among our participants.

Fig 7. Topographical distribution of R2 value for H10 at 16Hz.

(1) R2 value for Rest vs Elbow Task; (2) R2 value for Rest vs Drawer Task; (3) R2 value for Rest vs Soup Task; (4) R2 value for Rest vs Weight Task; (5) R2 value for Rest vs Door Task; (6) R2 value for Rest vs Plate Task; (7) R2 value for Rest vs Comb Task; (8) R2 value for Rest vs Pizza Task; (9) R2 value for Rest vs Pick&Place Task.

https://doi.org/10.1371/journal.pone.0188293.g007

In Fig 7, large R2 values are observed at electrode locations near the contralateral motor cortex for all the motor imagery tasks. This is a result of the event-related desynchronization of the beta rhythms when motor imagery tasks are executed. The strength of activation and the topographical distribution, however, differed from task to task.

For H10, the topographical distributions for Rest vs Elbow Task and Rest vs Soup Task are similar (see Fig 7(2) and 7(3)). Similar topographical distributions were observed for the Door Task and Plate Task (Fig 7(5) and 7(6)), as well as for the Pizza Task and Pick&Place Task (Fig 7(8) and 7(9)). In particular, in Fig 7(8) and 7(9), while imagining performing the Pizza Task and the Pick&Place Task, EEG activity was recorded in the frontal lobe area (F8 channel), which might be related to motor planning activity in complex motor imagery tasks. These similarities suggest fundamental connections in the brain activity underlying some imagination tasks.

Assessing the validity of the BP/CSP/FBCSP+LDA/DAL method during intra-task testing

For the intra-task problem, the models were generated and tested as described in Table 7. Although we performed a 5-fold cross-validation during training, we report only the testing accuracy to keep the manuscript concise. The classification accuracy for each motor imagery task was averaged across participants (see Fig 8).

Fig 8. Average intra-task test accuracies for different motor imagery tasks.

https://doi.org/10.1371/journal.pone.0188293.g008

As shown in Fig 8, the Pick&Place Task had the highest average intra-task test accuracy (0.715±0.148) among all the motor imagery tasks, followed by the Elbow Task (0.711±0.128). However, the difference between tasks was not statistically significant (one-way ANOVA, p = 0.817). The Door Task, on the other hand, had the lowest average intra-task test accuracy (0.618±0.186). The average intra-task test accuracies were significantly higher than random (accuracy higher than 0.6359, p = 0.05, according to Müller-Putz et al. [66]) for all tasks except the Door Task, and all tasks showed accuracies higher than the chance level at the less conservative threshold (accuracy higher than 0.6141, p = 0.1).
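The chance-level thresholds cited from Müller-Putz et al. follow from the normal approximation of the binomial confidence interval of a random two-class classifier, as sketched below; the number of test trials is left as a parameter because it is not restated here.

```python
# Upper confidence bound of a random two-class classifier via the normal
# approximation of the binomial distribution (cf. Mueller-Putz et al. [66]).
from scipy.stats import norm

def chance_upper_bound(n_trials, p0=0.5, alpha=0.05):
    """Accuracy that a random classifier exceeds with probability alpha."""
    z = norm.ppf(1 - alpha / 2)                  # two-sided critical value
    return p0 + z * (p0 * (1 - p0) / n_trials) ** 0.5
```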

Discussion

As shown in Fig 2, all nine motor imagery tasks focused on upper-extremity activities centered on elbow joint movement. These tasks can arguably be divided into three main categories: i) simple joint tasks (SJM, i.e., Fig 2(B), Fig 2(C) and Fig 2(E)); ii) simple elbow joint tasks that are commonly executed in everyday life and require a relatively low level of synergy of other joints (DSJM, i.e., Fig 2(F), Fig 2(G) and Fig 2(H)); and iii) goal-oriented tasks (GOM, i.e., Fig 2(D), Fig 2(I) and Fig 2(J)), which require trajectory planning and multi-joint synergy.

EEG performance varied across participants and across types of motor imagery task. GOM tasks such as the Pick&Place Task and the Pizza Task had higher accuracy than the SJM tasks. However, not all GOM tasks investigated in this study had higher cross-validation accuracy (e.g., the Soup Task). In the Pizza Task and the Pick&Place Task, some activity was found in the F8 channel at lower frequencies, which might be related to motor planning activity [64,65]. More precise neural recordings would be needed to verify the brain regions involved in order to confirm the activities in these tasks. It is, however, surprising that the Soup Task did not induce similar activity in the same frequency band (Fig 6(C)). This phenomenon may be due to the task design: Fig 6(C) shows that the highest R2 value is located in the O2 area, which suggests the Soup Task may be primarily related to vision/target-related activity [67].

In the R2 analysis, the peak R2 values for the SJM tasks were generally smaller, and the contrast of the R2 mapping was lower, than for the DSJM and GOM tasks. This “low-contrast” feature may explain the lower accuracy of models generated from the SJM tasks in the cross-validation and intra-task tests. While the difference was not statistically significant, this “low-contrast” feature might be a general pattern for upper-extremity motor imagery, and it could explain why the SJM tasks had the highest inter-task test accuracy among all tasks (i.e., the EEG models generated from the SJM tasks were more versatile). For the SJM tasks, only the elbow joint was involved, and all three SJM tasks were similar; the only difference among them was the imagined resistance feedback. For example, because of the imagined weight, the Weight Task showed higher P3 activity than C3 activity. That might explain why the EEG model from the Weight Task exhibited higher versatility than the DSJM and GOM tasks. For the Weight Task, there was only a 6% mean accuracy decrease between testing with data from its own task and testing with data from the other tasks.

It is interesting to see how imagined interaction with objects induces parietal lobe activity [68], as shown by the way the R2 value mapping differs between the Elbow Task and the Weight Task. The movements are physically almost the same; however, simply imagining a dumbbell in the hand excites brain activity around the P3 area.

In the future, it would also be important to investigate the possibility of multi-class classification using the tasks described in this paper.

Conclusion

In this study, we found that EEG models generated from single-joint motor imagery tasks show higher versatility than those from other tasks. Among all the tested tasks, the Weight Task showed statistically higher versatility than several of the other tasks (p<0.05), with an average inter-task testing accuracy of 0.605±0.022. The other two single-joint motor imagery tasks (i.e., the Elbow Task and the Drawer Task) also showed higher versatility than the non-single-joint tasks, although the difference was not statistically significant (p>0.05); their inter-task testing accuracies were 0.594±0.022 and 0.590±0.022, respectively. Among the single-joint motor imagery tasks, the differences were not statistically significant (ANOVA, p>0.05). For applications such as rehabilitation, it would therefore be possible for individuals to go through an EEG training session that involves only the motor imagery of simple one-joint movements; the EEG model generated could then be reused to classify other goal-oriented motor imagery tasks.

References

  1. 1. Baillet S, Mosher JC, Leahy RM. Electromagnetic brain mapping. IEEE Signal Process Mag. 2001;18: 14–30.
  2. 2. Wolpaw JR. Brain-computer interfaces as new brain output pathways. J Physiol. 2007;579: 613–619. pmid:17255164
  3. 3. Silvoni S, Ramos-Murguialday A, Cavinato M, Volpato C, Cisotto G, Turolla A, et al. Brain-Computer Interface in Stroke: A Review of Progress. Clin EEG Neurosci. 2011;42: 245–252. pmid:22208122
  4. 4. Nicolas-Alonso LF, Gomez-Gil J. Brain computer interfaces, a review. Sensors (Basel). 2012;12: 1211–1279. pmid:22438708
  5. 5. Choi I, Rhiu I, Lee Y, Yun MH, Nam CS. A systematic review of hybrid brain-computer interfaces: Taxonomy and usability perspectives. PloS one. 2017.
  6. 6. Remsik A, Young B, Vermilyea R, Kiekoefer L, Abrams J, Evander Elmore S, et al. A review of the progression and future implications of brain-computer interface therapies for restoration of distal upper extremity motor function after stroke. Expert Rev Med Devices. 2016. https://doi.org/10.1080/17434440.2016.1174572
  7. 7. Sellers EW, Vaughan TM, Wolpaw JR. A brain-computer interface for long-term independent home use. Amyotroph Lateral Scler. 2010;11: 449–455. pmid:20583947
  8. 8. Müller-Putz GR, Pfurtscheller G. Control of an electrical prosthesis with an SSVEP-based BCI. IEEE Trans Biomed Eng. 2008;55: 361–364. pmid:18232384
  9. 9. Wang H, Li T, Huang Z. Remote control of an electrical car with SSVEP-Based BCI. Proceedings 2010 IEEE International Conference on Information Theory and Information Security, ICITIS 2010. 2010. pp. 837–840. https://doi.org/10.1109/ICITIS.2010.5689710
  10. 10. Meng J, Zhang S, Bekyo A, Olsoe J, Baxter B, He B. Noninvasive Electroencephalogram Based Control of a Robotic Arm for Reach and Grasp Tasks. Sci Rep. 2016;6: 38565. pmid:27966546
  11. 11. Müller SMT, Bastos TF, Filho MS. Proposal of a SSVEP-BCI to command a robotic wheelchair. J Control Autom Electr Syst. 2013;24: 97–105.
  12. 12. Ang KK, Guan C. Brain-Computer Interface in Stroke Rehabilitation. J Comput Sci Eng. 2013;7: 139–146.
  13. 13. Yong X, Menon C. EEG Classification of Different Imaginary Movements within the Same Limb. PLoS One. 2015;10: e0121896. pmid:25830611
  14. 14. He B, Baxter B, Edelman BJ, Cline CC, Ye WW. Noninvasive brain-computer interfaces based on sensorimotor rhythms. Proc IEEE. 2015;103: 907–925.
  15. 15. Hubbard IJ, Parsons MW, Neilson C, Carey LM. Task-specific training: Evidence for and translation to clinical practice. Occupational Therapy International. 2009. pp. 175–189. pmid:19504501
  16. 16. Kwakkel G, Wagenaar RC, Koelman TW, Lankhorst GJ, Koetsier JC. Effects of Intensity of Rehabilitation After Stroke: A Research Synthesis. Stroke. 1997;28: 1550–1556. pmid:9259747
  17. 17. Krebs HI, Volpe B, Hogan N. A working model of stroke recovery from rehabilitation robotics practitioners. J Neuroeng Rehabil. 2009;6: 6. pmid:19243615
  18. 18. Boyd LA, Vidoni ED, Wessel BD. Motor learning after stroke: Is skill acquisition a prerequisite for contralesional neuroplastic change? Neurosci Lett. 2010;482: 21–25. pmid:20609381
  19. 19. Frisoli A, Loconsole C, Leonardis D, Banno F, Barsotti M, Chisari C, et al. A new gaze-BCI-driven control of an upper limb exoskeleton for rehabilitation in real-world tasks. IEEE Trans Syst Man Cybern Part C Appl Rev. 2012;42: 1169–1179.
  20. 20. Royer AS, He B. Goal selection versus process control in a brain-computer interface based on sensorimotor rhythms. J Neural Eng. 2009;6: 16005. pmid:19155552
  21. 21. Min BK, Chavarriaga R, Millán J del R. Harnessing Prefrontal Cognitive Signals for Brain-Machine Interfaces. Trends in Biotechnology. 2017. pmid:28389030
  22. 22. Wolpaw JR, McFarland DJ. Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans. Proc Natl Acad Sci U S A. 2004;101: 17849–17854. pmid:15585584
  23. 23. Meng F, Tong K, Chan S, Wong W, Lui K, Tang K, et al. BCI-FES training system design and implementation for rehabilitation of stroke patients. 2008 IEEE International Joint Conference on Neural Networks (IJCNN 2008). 2008. pp. 4103–4106. https://doi.org/10.1109/IJCNN.2008.4634388
  24. 24. Buch E, Weber C, Cohen LG, Braun C, Dimyan MA, Ard T, et al. Think to move: A neuromagnetic brain-computer interface (BCI) system for chronic stroke. Stroke. 2008;39: 910–917. pmid:18258825
  25. 25. Daly JJ, Cheng R, Rogers J, Litinas K, Hrovat K, Dohring M. Feasibility of a new application of noninvasive Brain Computer Interface (BCI): a case study of training for recovery of volitional motor control after stroke. J Neurol Phys Ther. 2009;33(4): 203–211. pmid:20208465
  26. 26. Gu Y, Dremstrup K, Farina D. Single-trial discrimination of type and speed of wrist movements from EEG recordings. Clin Neurophysiol. International Federation of Clinical Neurophysiology; 2009;120: 1596–1600. pmid:19535289
  27. 27. Prasad G, Herman P, Coyle D, McDonough S, Crosbie J. Applying a brain-computer interface to support motor imagery practice in people with stroke for upper limb recovery: a feasibility study. J Neuroeng Rehabil. 2010;7: 60. pmid:21156054
  28. 28. Tan HG, Kong KH, Shee CY, Wang CC, Guan CT, Ang WT. Post-acute stroke patients use brain-computer interface to activate electrical stimulation. 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC’10. 2010. pp. 4234–4237. https://doi.org/10.1109/IEMBS.2010.5627381
  29. 29. Ang KK, Guan C, Chua KSG, Ang BT, Kuah C, Wang C, et al. Clinical study of neurorehabilitation in stroke using EEG-based motor imagery brain-computer interface with robotic feedback. 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC’10. 2010. pp. 5549–5552. https://doi.org/10.1109/IEMBS.2010.5626782
  30. 30. Broetz D, Braun C, Weber C, Soekadar SR, Caria A, Birbaumer N. Combination of brain-computer interface training and goal-directed physical therapy in chronic stroke: a case report. Neurorehabil Neural Repair. 2010;24: 674–679. pmid:20519741
  31. 31. Tam WK, Tong KY, Meng F, Gao S. A minimal set of electrodes for motor imagery BCI to control an assistive device in chronic stroke subjects: A multi-session study. IEEE Trans Neural Syst Rehabil Eng. 2011;19: 617–627. pmid:21984520
  32. 32. Gomez-Rodriguez M, Peterst J, Hin J, Schölkopf B, Gharabaghi A, Grosse-Wentrup M. Closing the sensorimotor loop: Haptic feedback facilitates decoding of arm movement imagery. Conference Proceedings—IEEE International Conference on Systems, Man and Cybernetics. 2010. pp. 121–126. https://doi.org/10.1109/ICSMC.2010.5642217
  33. 33. Shindo K, Kawashima K, Ushiba J, Ota N, Ito M, Ota T, et al. Effects of neurofeedback training with an electroencephalogram-based brain-computer interface for hand paralysis in patients with chronic stroke: A preliminary case series study. J Rehabil Med. 2011;43: 951–957. pmid:21947184
  34. 34. Ortner R, Irimia D-C, Scharinger J, Guger C. A motor imagery based brain-computer interface for stroke rehabilitation. Stud Health Technol Inform. 2012;181: 319–23. pmid:22954880
  35. 35. Kaiser V, Kreilinger A, Müller-Putz GR, Neuper C. First steps toward a motor imagery based stroke BCI: New strategy to set up a classifier. Front Neurosci. 2011; pmid:21779234
  36. 36. Cincotti F, Pichiorri F, Arico P, Aloise F, Leotta F, De Vico Fallani F, et al. EEG-based brain-computer interface to support post-stroke motor rehabilitation of the upper limb. Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS. 2012. pp. 4112–4115. https://doi.org/10.1109/EMBC.2012.6346871
  37. 37. Vuckovic A, Sepulveda F. A two-stage four-class BCI based on imaginary movements of the left and the right wrist. Med Eng Phys. 2012;34: 964–971. pmid:22119365
  38. 38. Ramos-Murguialday A, Broetz D, Rea M, Läer L, Yilmaz Ö, Brasil FL, et al. Brain-machine interface in chronic stroke rehabilitation: A controlled study. Ann Neurol. 2013;74: 100–108. pmid:23494615
  39. 39. Young BM, Williams J, Prabhakaran V. BCI-FES: could a new rehabilitation device hold fresh promise for stroke patients? Expert Rev Med Devices. 2014;11: 537–9. pmid:25060658
  40. 40. Ang KK, Chua KSG, Phua KS, Wang C, Chin ZY, Kuah CWK, et al. A Randomized Controlled Trial of EEG-Based Motor Imagery Brain-Computer Interface Robotic Rehabilitation for Stroke. Clin EEG Neurosci. 2015;46: 310–320. pmid:24756025
  41. 41. Pinto RD, Ferreira HA. Development of a Non-invasive Brain Computer Interface for Neurorehabilitation. Proceedings of the 3rd 2015 Workshop on ICTs for improving Patients Rehabilitation Research Techniques. Lisbon, Portugal: ACM; 2015. pp. 126–130. https://doi.org/10.1145/2838944.2838975
  42. 42. Ibáñez J, Serrano JI, Del Castillo MD, Monge E, Molina F, Pons JL. Heterogeneous BCI-Triggered Functional Electrical Stimulation Intervention for the Upper-Limb Rehabiliation of Stroke Patients. In: Guger C, Müller-Putz G, Allison B, editors. Brain-Computer Interface Research. Springer International Publishing; 2015. pp. 67–77. https://doi.org/10.1007/978-3-319-25190-5_7
  43. 43. Elnady AM, Zhang X, Xiao ZG, Yong X, Randhawa BK, Boyd L, et al. A Single-Session Preliminary Evaluation of an Affordable BCI-Controlled Arm Exoskeleton and Motor-Proprioception Platform. Front Hum Neurosci. 2015;9: 168. pmid:25870554
  44. 44. Edelman BJ, Baxter B, He B. EEG source imaging enhances the decoding of complex right-hand motor imagery tasks. IEEE Trans Biomed Eng. 2016;63: 4–14. pmid:26276986
  45. 45. Schalk G, McFarland DJ, Hinterberger T, Birbaumer N, Wolpaw JR. BCI2000: a general-purpose brain-computer interface (BCI) system. Biomed Eng IEEE Trans. 2004;51: 1034–1043.
  46. 46. Neuper C, Scherer R, Reiner M, Pfurtscheller G. Imagery of motor actions: Differential effects of kinesthetic and visual–motor mode of imagery in single-trial EEG. Cogn Brain Res. 2005;25: 668–677. pmid:16236487
  47. 47. Pfurtscheller G, Brunner C, Schlögl A, Lopes da Silva FH. Mu rhythm (de)synchronization and EEG single-trial classification of different motor imagery tasks. Neuroimage. 2006;31: 153–159. pmid:16443377
  48. 48. Ince NF, Arica S, Tewfik A. Classification of single trial motor imagery EEG recordings with subject adapted non-dyadic arbitrary time–frequency tilings. J Neural Eng. 2006;3: 235–244. pmid:16921207
  49. 49. Ang KK, Chua KSG, Phua KS, Wang C, Chin ZY, Kuah CWK, et al. A Randomized Controlled Trial of EEG-Based Motor Imagery Brain-Computer Interface Robotic Rehabilitation for Stroke. Clin EEG Neurosci. 2015;46: 310–320. pmid:24756025
  50. 50. Christian Andreas K, Scott M. BCILAB: a platform for brain–computer interface development. J Neural Eng. 2013;10: 56014. Available: http://stacks.iop.org/1741-2552/10/i=5/a=056014
  51. 51. Pfurtscheller G, Lopes Da Silva FH. Event-related EEG/MEG synchronization and desynchronization: Basic principles. Clinical Neurophysiology. 1999. pp. 1842–1857. pmid:10576479
  52. 52. Dornhege G, Blankertz B, Krauledat M, Losch F, Curio G, Müller K-R. Optimizing spatio-temporal filters for improving brain-computer interfacing.
  53. 53. Ramoser H, Müller-Gerking J, Pfurtscheller G. Optimal spatial filtering of single trial EEG during imagined hand movement. IEEE Trans Rehabil Eng. 2000;8: 441–446. pmid:11204034
  54. 54. Wang Y, Gao S, Gao X. Common spatial pattern method for channel selelction in motor imagery based brain-computer interface. Engineering in Medicine and Biology Society, 2005 IEEE-EMBS 2005 27th Annual International Conference of the. IEEE; 2006. pp. 5392–5395. https://doi.org/10.1109/IEMBS.2005.1615701
  55. 55. Pfurtscheller G, Neuper C. Motor imagery and direct brain- computer communication. Proc IEEE. 2001;89: 1123–1134.
  56. 56. Blankertz B, Tomioka R, Lemm S, et al. Optimizing Spatial Filters for Robust EEG Single-Trial Analysis. IEEE Signal Process Mag. 2008.
  57. 57. Kai Keng Ang, Zheng Yang Chin, Haihong Zhang, Cuntai Guan. Filter Bank Common Spatial Pattern (FBCSP) in Brain-Computer Interface. 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence). 2008. pp. 2390–2397. https://doi.org/10.1109/IJCNN.2008.4634130
  58. 58. Ang KK, Chin ZY, Wang C, Guan C, Zhang H. Filter bank common spatial pattern algorithm on BCI competition IV datasets 2a and 2b. Front Neurosci. 2012;6: 1–9.
  59. 59. Pfurtscheller G, Neuper C. Motor imagery and direct brain-computer communication. Proc IEEE. 2001;89: 1123–1134.
  60. 60. Schäfer J, Strimmer K. A Shrinkage Approach to Large-Scale Covariance Matrix Estimation and Implications for Functional Genomics. Stat Appl Genet Mol Biol. 2005;4. pmid:16646851
  61. 61. Tomioka R, Müller KR. A regularized discriminative framework for EEG analysis with application to brain-computer interface. Neuroimage. 2010;49: 415–432. pmid:19646534
  62. 62. Wolpaw JR, Birbaumer N, McFarland DJ, Pfurtscheller G, Vaughan TM. Brain–computer interfaces for communication and control. Clin Neurophysiol. 2002;113: 767–791. pmid:12048038
  63. 63. Hochberg Y, Tamhane AC. Multiple Comparison Procedures. Wiley Ser Probab Stat. 1987;312: 2014–2015.
  64. 64. Hanakawa T, Dimyan MA, Hallett M. Motor planning, imagery, and execution in the distributed motor network: A time-course study with functional MRI. Cereb Cortex. 2008;18: 2775–2788. pmid:18359777
  65. 65. Goldman-Rakic PS, Schwartz ML. Interdigitation of contralateral and ipsilateral columnar projections to frontal association cortex in primates. Science. 1982;216: 755–757. pmid:6177037
  66. 66. Müller-Putz GR, Scherer R, Brunner C, Leeb R, Pfurtscheller G. Better than random? A closer look on BCI results. Int J Bioelectromagn. 2008;10: 52–55.
  67. 67. Kandel ER, Schwartz JH, Jessell TM, Siegelbaum SA, Hudspeth AJ. Principles of Neural Science, Fifth Edition [Internet]. Neurology. 2014.
  68. 68. Fogassi L, Luppino G. Motor functions of the parietal lobe. Current Opinion in Neurobiology. 2005. pp. 626–631. pmid:16271458