
Introducing chaotic codes for the modulation of code modulated visual evoked potentials (c-VEP) in normal adults for visual fatigue reduction

  • Zahra Shirzhiyan,

    Roles Data curation, Formal analysis, Methodology, Software, Validation, Visualization, Writing – original draft

    Affiliations Medical Physics & Biomedical Engineering Department, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran, Research Center for Biomedical Technologies and Robotics (RCBTR), Tehran University of Medical Sciences, Tehran, Iran

  • Ahmadreza Keihani,

    Roles Data curation, Formal analysis, Investigation, Methodology, Visualization

    Affiliations Medical Physics & Biomedical Engineering Department, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran, Research Center for Biomedical Technologies and Robotics (RCBTR), Tehran University of Medical Sciences, Tehran, Iran

  • Morteza Farahi,

    Roles Data curation, Investigation, Methodology, Visualization

    Affiliations Medical Physics & Biomedical Engineering Department, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran, Research Center for Biomedical Technologies and Robotics (RCBTR), Tehran University of Medical Sciences, Tehran, Iran

  • Elham Shamsi,

    Roles Data curation, Formal analysis, Investigation

    Affiliations Medical Physics & Biomedical Engineering Department, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran, Research Center for Biomedical Technologies and Robotics (RCBTR), Tehran University of Medical Sciences, Tehran, Iran

  • Mina GolMohammadi,

    Roles Data curation, Investigation

    Affiliation Research Center for Biomedical Technologies and Robotics (RCBTR), Tehran University of Medical Sciences, Tehran, Iran

  • Amin Mahnam,

    Roles Conceptualization, Investigation, Methodology, Supervision, Validation, Writing – review & editing

    Affiliation Department of Biomedical Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran

  • Mohsen Reza Haidari ,

    Roles Conceptualization, Investigation, Methodology, Project administration, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    h_jafari@tums.ac.ir (AHJ); drmohsinraza2012@yahoo.com (MRH)

    Affiliation Section of Neuroscience, Department of Neurology, Faculty of Medicine, Baqiyatallah University of Medical Sciences, Tehran, Iran

  • Amir Homayoun Jafari

    Roles Conceptualization, Data curation, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Writing – review & editing

    h_jafari@tums.ac.ir (AHJ); drmohsinraza2012@yahoo.com (MRH)

    Affiliations Medical Physics & Biomedical Engineering Department, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran, Research Center for Biomedical Technologies and Robotics (RCBTR), Tehran University of Medical Sciences, Tehran, Iran

Abstract

Code modulated visual evoked potential (c-VEP) based BCI studies usually employ m-sequences as modulating codes because of their broadband spectrum and correlation properties. However, subjective fatigue caused by the presented codes has been a problem. In this study, we introduce chaotic codes that have a broadband spectrum and similar correlation properties. We examined whether the introduced chaotic codes could be decoded from EEG signals and compared their subjective fatigue level with that of m-sequence codes in normal subjects. We generated a chaotic code from a one-dimensional logistic map and used it alongside a conventional 31-bit m-sequence code. In a c-VEP based study in normal subjects (n = 44, 21 females), we presented four lagged versions of each code visually and recorded the corresponding EEG responses. Canonical correlation analysis (CCA) and spatiotemporal beamforming (STB) methods were used for target identification and comparison of responses. Additionally, we compared the subjective self-declared fatigue caused by the presented m-sequence and chaotic codes using a visual analog scale (VAS). The introduced chaotic code was decoded from EEG responses with both the CCA and STB methods. The maximum total accuracies of 93.6 ± 11.9% and 94 ± 14.4% across all subjects were achieved with the STB method for the chaotic and m-sequence codes respectively. The accuracies achieved for the m-sequence and chaotic codes were not significantly different. In summary, the m-sequence and chaotic codes yielded similar accuracies as evaluated by the CCA and STB methods, and the chaotic codes significantly reduced subjective fatigue compared to the m-sequence codes.

Introduction

Visual evoked potentials (VEPs) are EEG responses to visual stimuli. Brain-computer interfaces (BCIs) based on these potentials are becoming popular because of their short training time and high information transfer rate (ITR) [1]. VEP-based BCI systems can be classified into three categories: time modulated, frequency modulated and code modulated stimuli [2]. In systems with time modulated stimuli, the sequence of target stimuli is coded in non-overlapping time windows, as in P300-based BCI systems; this, however, usually leads to a low ITR [2]. In systems with frequency modulated stimuli, different targets are defined by distinct frequencies that can be recognized by detecting the target frequencies and their harmonics [2] and the phase information of the evoked responses [3, 4]. In code modulated BCI systems, the flashing pattern is determined by a pseudo-random sequence such as an m-sequence [5]. This modality works by using different shifts of the modulating code. These codes have a Dirac-like auto-correlation function, which allows shifted versions of the modulating code to be used as different targets for evoking different VEPs. A simple and short calibration yields a specific EEG response to the m-sequence, with which all targets that are lagged versions of the same m-sequence can be distinguished [2, 6].

Signals transmitted via broadband codes are robust to noise and show lower cross-interference from other stimuli because the auto-correlation of a broadband code approximates a Dirac function [7].

Code modulated visual evoked potentials (c-VEPs) exploit the characteristics of broadband codes used as stimuli. Broadband codes can evoke VEPs with appropriate auto- and cross-correlation properties [8, 9]. c-VEP based BCIs can therefore offer better system performance and target identification. They also show low cross-interference when a high number of commands is presented simultaneously, leading to a significantly high ITR [10].

c-VEPs evoked by different lags of non-periodic binary codes can be demodulated from brain responses such as EEG using template matching [6]. High ITR in c-VEP based BCI applications has been achieved by combining canonical correlation analysis (CCA) with template matching [11]. Using an m-sequence code, a c-VEP based BCI was built for amyotrophic lateral sclerosis (ALS) patients that achieved a significantly higher communication rate using eye gaze alone [12]. The approach has been successfully tested in online applications such as spelling [13], with error-related potentials and unsupervised learning for online adaptation, and continues to be employed in the control of mobile robots [14, 15].

Novel paradigms for c-VEP based BCIs include a generative framework for predicting responses to Gold codes [16] and spatial separation with boundary positioning for decoupling responses to different targets [17]. In addition, target identification in c-VEP based BCIs has been improved by using a Support Vector Machine (SVM), with accuracy increased by a linear kernel [18]. More recently, the spatiotemporal beamforming (STB) method was used for target identification from c-VEP responses and was found to be significantly better than SVM [9]. Additional measures in this regard include optimization of stimulus presentation parameters such as LED color and size, code length, stimulus proximity and the lag between stimuli [19], and the use of dry electrodes [20]. Recent c-VEP based BCI studies have increased the number of selectable targets by using different pseudo-random codes, such as m-sequences, that have low cross-correlation with each other [21, 22].

Chaos has been widely observed in various biological systems [23–25]. Chaotic behavior has also been observed in several neuronal structures such as cells, synapses [26, 27], and neural networks [28–32]. Chaotic dynamics have been attributed to large-scale brain activities and physiological processes [33] such as information processing [34], synaptic plasticity [35], memory [36], perceptual processing and recognition of unknown odors [37], and brain state transitions [38].

While randomness has been observed in various aspects of the neural system, such as rapid random fluctuations in membrane potentials [39], spontaneous activity of neurons [40] and neural spiking [41], non-randomness, nonlinearity and chaotic dynamics exist at all levels of brain function, from the simplest to the most complex systems [42–47]. In summary, chaos provides the ability to react adaptively to the outside world, leading to new patterns and fresh ideas, and contributes to the complex behavior of brain functions [48–50].

Nonlinear and chaotic dynamics of neural activities also manifest themselves in EEG signals [51–56]. Nonlinear and chaotic analysis methods have been utilized in EEG signal processing [57] and in feature extraction and analysis for BCI applications [58–60]. Deviation from the normal chaotic behavior of EEG signals is observed in neurological disorders [61, 62] such as epileptic seizures [62–64], depression [65, 66], Alzheimer’s disease [53, 67] and autism [68]. However, so far no BCI study has employed a chaotic code for visual stimulation.

Reduction of visual fatigue has been a challenging issue in VEP based BCI applications [69–71]. Continuous exposure to changes in luminance is highly uncomfortable for users gazing at the stimulus [72]. Therefore, designing stimuli that cause less visual fatigue and discomfort is valuable for a suitable and ergonomic BCI setup. Visual patterns are encoded efficiently and with a low discomfort level when images or flicker have the statistical characteristics of natural scenes, i.e., are close to a 1/f spectral property in temporal or spatial frequency [73–75]. Interestingly, the spectrum of chaotic behavior is reported to be close to the 1/f spectral property [76, 77] seen in most natural scenes and phenomena. As m-sequence codes have the inherent properties of a random process with a flat spectrum [78, 79], chaotic codes generated by a nonlinear dynamical system may be superior to m-sequences for reducing visual discomfort.

Chaotic behavior has been employed to generate codes in spread-spectrum communication because chaotic maps provide an essentially unlimited number of uncorrelated signals with good correlation properties [80], suitable for Code Division Multiple Access (CDMA) modulation applications [81, 82]. Interleaving a binary chaotic sequence with its 1’s complement also helps in generating a broadband chaotic code [83]. As a result, chaotic codes have favorable correlation properties, and using them can lead to accuracies as high as those of m-sequences.

Despite the suitability of chaotic codes for c-VEP based BCI applications, they have not yet been used as visual stimuli for c-VEP generation. Therefore, in this study we used chaotic codes and the widely used m-sequence codes to evoke c-VEPs in EEG signals and compared their accuracies using the CCA and STB methods. In addition, we used a visual analog scale (VAS) to compare subjective fatigue rates between these two codes in normal subjects.

Material and method

Study participants

Forty-four volunteers (21 females), aged 20–33 years (26.09 ± 3.67), with normal or corrected-to-normal vision (6/6) participated in this study. The subjects were recruited via announcements on the notice boards of the faculties of medicine and biomedical engineering and by word of mouth. Subjects with a history of visual or neurological disorders, head trauma, or use of any drugs that would affect nervous system function were excluded. Before the experiment began, participants signed a written informed consent form, and the entire procedure of signal recording and the experiment was described to them. The experimental protocol was approved by the office of the research review board and the research ethics committee of Tehran University of Medical Sciences.

Experimental design

Stimuli.

In this study, we used a 31-bit m-sequence code, which is commonly used in c-VEP based BCIs for its favorable correlation properties [2] and is also used in other medical fields such as the study of visual receptive field properties [84] and fMRI [85]. We generated a 31-bit chaotic code using the logistic map; this code has a good auto-correlation property [83] that makes it suitable for CDMA based BCIs. The generation algorithm is described below. Bit ‘0’ was presented as dark and bit ‘1’ as light stimulation.
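The generator polynomial of the 31-bit m-sequence is not specified here; the sketch below is a minimal illustration, assuming a standard 5-stage linear-feedback shift register (LFSR) with feedback taps at stages 5 and 3 (primitive polynomial x^5 + x^3 + 1), which yields a maximal-length sequence of 2^5 − 1 = 31 bits.

```python
import numpy as np

def lfsr_msequence(taps=(5, 3), seed=None):
    """Generate a maximal-length binary sequence (m-sequence) from a
    Fibonacci linear-feedback shift register. With 5 stages the period
    is 2**5 - 1 = 31 bits."""
    n = max(taps)
    state = list(seed) if seed is not None else [1] * n   # any nonzero seed works
    out = []
    for _ in range(2 ** n - 1):
        out.append(state[-1])                 # output of the last stage
        fb = 0
        for t in taps:                        # XOR of the tapped stages
            fb ^= state[t - 1]
        state = [fb] + state[:-1]             # shift and insert feedback
    return np.array(out, dtype=int)

m_seq = lfsr_msequence()
print(len(m_seq), m_seq)   # 31 bits of 0/1 -> dark/light stimulation
```

Any nonzero seed produces the same m-sequence up to a circular shift, which is immaterial here because lagged versions of the code are in any case used as separate targets.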

Generation of chaotic code using logistic map.

Chaotic signals have the potential for designing codes whose auto-correlation is close to a Dirac-like function, so that correlation-based identification performs well, making them appropriate for code modulation applications [86]. The logistic map is a one-dimensional map that can model many natural phenomena and the population growth of biological species [87], as defined in (1), where x lies in the interval [0, 1] and indicates the ratio of the existing population to the maximum possible population, x(0) is the initial value of x, and A is the rate of reproduction and starvation, which lies in the interval [0, 4]. This simple map generates chaotic dynamics for some values of the parameter A, generally between 3.5 and 4 [88]. An example of a chaotic sequence generated from the logistic map is shown in Fig 1 for A = 3.882 and initial value x(0) = 0.15.

x(i+1) = A · x(i) · (1 − x(i))    (1)
Fig 1. The chaotic sequence.

The sequence derived from the logistic map for x(0) = 0.15 and A = 3.882.

https://doi.org/10.1371/journal.pone.0213197.g001

The algorithm of chaotic code generation, shown in Fig 2, is as follows; a minimal code sketch is given after the list.

  1. Selection of the initial value x(0) and the parameter A in Eq 1 (we chose x(0) = 0.015 and A = 3.882).
  2. Calculation of x(i+1) from Eq (1).
  3. Generation of a binary code bit from x(i+1):
    If x(i+1) > 0.5 then C(n) = 0, else C(n) = 1.
  4. Taking the 1’s complement of C(n) to generate C(n+1) = C’(n).
  5. Checking the condition (i ≤ 16). If it is satisfied, increase i by 1 and n by 2 and proceed to step 2; if not, proceed to the next step.
  6. Selection of the first 31 bits of the generated code.
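
A minimal transcription of steps 1–6 above, using the parameter values quoted in step 1 (the variable and function names are illustrative only):

```python
import numpy as np

def chaotic_code(x0=0.015, A=3.882, n_bits=31):
    """Binary chaotic code from the logistic map (Eq 1), following
    steps 1-6: each iterate yields one bit by thresholding at 0.5,
    immediately followed by its 1's complement; the first n_bits bits
    are kept."""
    bits = []
    x = x0
    i = 0
    while i <= 16:                     # step 5: loop while i <= 16
        x = A * x * (1.0 - x)          # step 2: logistic map iterate
        c = 0 if x > 0.5 else 1        # step 3: threshold at 0.5
        bits.extend([c, 1 - c])        # step 4: bit and its 1's complement
        i += 1
    return np.array(bits[:n_bits])     # step 6: keep the first 31 bits

code = chaotic_code()
print(code)
```
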
Fig 2. Flowchart for the generation of orthogonal chaotic code.

https://doi.org/10.1371/journal.pone.0213197.g002

The auto-correlations of the m-sequence and chaotic codes are shown as a function of delay in Fig 3, where the delay is expressed in samples (bits) of the codes. It can be seen that the auto-correlations of the m-sequence code and the generated chaotic code are almost Dirac-like, so the chaotic code generated by the proposed algorithm is appropriate for use in code modulation.
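The normalization and bit-to-amplitude mapping used for Fig 3 are not stated; a minimal sketch of a circular auto-correlation check, assuming the bits are mapped to ±1, is:

```python
import numpy as np

def circular_autocorr(binary_code):
    """Normalized circular auto-correlation of a binary code after
    mapping bits {0, 1} to {-1, +1} (mapping assumed, not stated in the text)."""
    s = 2 * np.asarray(binary_code, dtype=float) - 1
    n = len(s)
    return np.array([np.dot(s, np.roll(s, k)) / n for k in range(n)])

# Using the code arrays from the two sketches above (m_seq, code), a
# Dirac-like result has a single peak of 1 at zero delay and small values elsewhere:
# print(circular_autocorr(m_seq).round(2))
# print(circular_autocorr(code).round(2))
```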

Fig 3.

The auto-correlation of the m-sequence code (top) and the chaotic code (bottom). The generated chaotic code follows the correlation property required for code modulation.

https://doi.org/10.1371/journal.pone.0213197.g003

The one-sided amplitude spectra of the presented m-sequence and chaotic code stimuli are shown in Fig 4. Both stimuli are broadband. The dashed lines separate the low, medium and high frequency regions. Significant peaks of the m-sequence code appear at low and medium frequencies, whereas the spectrum of the chaotic code shows dominant peaks at high-frequency components.

Fig 4. One-sided amplitude spectrum of the presented stimuli (blue: Spectrum of the m-sequence codes stimuli, red: Spectrum of the chaotic code stimuli).

Dashed lines separate low, medium and high frequencies. Low: frequencies from 0 to 10 Hz; medium: 10 to 30 Hz; high: above 30 Hz. Compared to the m-sequence codes, the chaotic codes have fewer frequency components in the low and medium ranges and more in the high frequency range.

https://doi.org/10.1371/journal.pone.0213197.g004

Stimuli presentation paradigm.

The m-sequence and chaotic codes were presented at a rate of 90 Hz (each bit presented for 1/90 second). This is a relatively high presentation rate among c-VEP studies; only a few studies have used presentation rates between 80 Hz [18] and 120 Hz [9]. Four different versions of the m-sequence and chaotic codes were generated by circularly shifting the original code by eight bits, temporally equal to approximately 0.088 second (as shown in Fig 5). The circularly shifted versions of the m-sequence and chaotic codes are denoted M1–M4 and Ch1–Ch4 respectively.
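A brief sketch of how the lagged targets can be derived from one code by circular shifting, with the lag converted to seconds at the 90 Hz bit rate (names are illustrative):

```python
import numpy as np

BIT_RATE_HZ = 90   # each bit presented for 1/90 s
SHIFT_BITS = 8     # shift between adjacent targets

def shifted_targets(base_code, n_targets=4, shift=SHIFT_BITS):
    """Return the circularly shifted target codes (e.g. M1-M4 or Ch1-Ch4)."""
    return [np.roll(base_code, k * shift) for k in range(n_targets)]

print(SHIFT_BITS / BIT_RATE_HZ)   # lag between adjacent targets, in seconds (8 bits at 90 Hz)
```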

Fig 5. Time domain representation of m-sequence and chaotic codes.

Left and right columns show the m-sequence and chaotic codes respectively. M1–M4 and Ch1–Ch4 are the 4 shifted versions of the m-sequence and chaotic codes respectively. Each box (shown in pink) represents a temporal shift of 0.088 second (8 bits) ahead with respect to the previous one.

https://doi.org/10.1371/journal.pone.0213197.g005

The stimulus specifications are presented in Table 1. Each code presentation lasted 0.344 seconds (a single epoch) and was repeated 18 times in each trial (6.2 seconds). One session (90 seconds) of stimulus presentation consisted of 10 trials with a 2-second break between trials. Supporting files S1 Video and S2 Video, recorded with a Canon 750D DSLR camera, show playback videos of the m-sequence and chaotic code respectively. Each video is approximately 17 seconds long and contains two trials with a 2-second break in between. Stimulus presentation started 10 seconds after the start of EEG recording. Fig 6 shows the stimulus presentation diagram for a single session.

Fig 6. The stimuli presentation diagram for a single session: a single session consisted of 10 trials presenting m-sequence or chaotic code visual stimuli.

Each trial had 18 consecutively presented epochs, each presenting a single visual stimulus code, and was followed by a 2-second break. As there were 4 m-sequence codes (M1–M4) and 4 chaotic codes (Ch1–Ch4), each subject had a total of 8 sessions of stimulus presentation (see Fig 7 and text for details).

https://doi.org/10.1371/journal.pone.0213197.g006

Table 1. The stimuli specifications for both m-sequence and chaotic codes.

https://doi.org/10.1371/journal.pone.0213197.t001

Subjective fatigue evaluation.

After each session, all participants answered self-reported questions measuring the fatigue and discomfort caused by the presented stimuli. Fatigue was evaluated with a VAS score [89]. Before the start of each session, subjects were instructed to rate their fatigue by considering how tired they were of gazing at the stimuli and how uncomfortable the stimuli felt. At the end of each session, subjects gave a VAS score between 0 (no fatigue at all) and 10 (extremely fatigued). To avoid cumulative fatigue from previously presented stimuli, subjects rested for 2 minutes between sessions, and another session was recorded only if the subject answered ‘No’ to the question “Do you need more time for rest?”.

The order of presentation of the four shifted versions of the m-sequence (M1–M4) and chaotic codes (Ch1–Ch4), comprising a total of 8 stimulus codes, was randomized for all subjects. Randomizing the presentation order of the stimulus codes across the 8 sessions helped to avoid bias from possible cumulative fatigue in our analysis. The time sequence of the eight sessions of stimulus presentation with EEG recording and subjective fatigue evaluation is shown in Fig 7.

Fig 7.

Time sequences of activities of the m-sequence (A) and chaotic code (B) presentation sessions. Each subject was presented with eight sessions. Each session started with 10 seconds of rest and EEG recording and consisted of 10 trials. Each trial consisted of 18 epochs of consecutive m-sequence or chaotic codes, with a 2-second break after each trial. At the end of each session, the subjective fatigue rate was evaluated. The order of presentation of the eight sessions was random for each subject.

https://doi.org/10.1371/journal.pone.0213197.g007

Signal recording setup.

EEG signals were recorded using a g.USBAmp amplifier with a sampling rate of 4800 Hz. Four active g.LADYbird electrodes were placed at the Oz, O1, O2 and Pz positions on the scalp, where visual evoked potentials such as the c-VEP have maximum amplitude [11, 18]. Fpz and the right earlobe were used as the ground and reference electrodes respectively, as shown in Fig 8. An online band-pass filter with cutoff frequencies of 0.05 Hz and 120 Hz was applied.

Fig 8. EEG recording electrodes placement according to 10–20 system.

Four active g.LADYbird electrodes were placed at Oz, O1, O2 and Pz. A2 and Fpz were selected as the reference and ground electrodes respectively.

https://doi.org/10.1371/journal.pone.0213197.g008

All stimuli were generated using MATLAB (Release 2016b, The MathWorks Inc., Massachusetts, United States) and sent to a custom-made DAC board and LED driver (shown as the stimulator in Fig 9). The LED used in this study was square, red, 4 × 4 cm in size, and was placed approximately 70 cm from the subject.

Fig 9. Signal recording setup: stimuli were selected on the stimulus-presenting PC and sent to the stimulator for presentation to the subject via the LED.

The NI DAQ recorded the trigger pulse coming from the g.USBAmp, the optic sensor output and the output of the stimulator box. The data were sent to a PC for further analysis.

https://doi.org/10.1371/journal.pone.0213197.g009

An optical sensor (Texas Instruments) was used to record the light of the stimuli presented via the LED, and a National Instruments (NI) DAQ was used to record the trigger pulse from the digital input-output port of the g.USBAmp amplifier that indicated the beginning of EEG recording. Additionally, the optic sensor output and the analog output of the LED driver were recorded by the NI DAQ to synchronize stimulus presentation with the EEG recording. Finally, the data recorded by the g.USBAmp and the NI DAQ were sent to a personal computer for further analysis. The onsets of EEG recording and stimulus presentation were detected from the data recorded by the NI DAQ, and the lag between the start of EEG recording and the start of stimulus presentation was identified. The EEG signal recorded after this lag was used for analysis. The beginning of each trial was detected from the trigger pulse sent by the LED driver at the start of the trial.

The EEG recording and stimulus presentation setup is shown in Fig 9. Details of the signal recording setup are reported in our previous study [90].

Preprocessing.

The trigger pulse from the g.tec amplifier and the optic sensor output recorded with the NI DAQ were extracted and used to detect and extract synchronized trials from the EEG signals. The extracted EEG signal of each trial was filtered with a zero-phase-shift Butterworth band-pass filter of order 8 with cutoff frequencies of 2 and 40 Hz, and detrended for baseline correction. For each trial, the epochs corresponding to each code were extracted; for each stimulus, 10 trials were obtained, each containing the responses to 18 consecutive epochs.
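A minimal sketch of this preprocessing step, assuming the reported parameters (4800 Hz sampling, order-8 Butterworth design, 2–40 Hz pass band) and SciPy filtering routines; the epoch length in samples is rounded, and the exact filter implementation used in the study may differ:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, detrend

FS = 4800          # EEG sampling rate (Hz)
EPOCH_S = 0.344    # single code presentation (31 bits at 90 Hz)

def preprocess_trial(trial, fs=FS, order=8, band=(2, 40)):
    """Zero-phase band-pass filter and detrend one trial.
    trial: array of shape (n_channels, n_samples)."""
    sos = butter(order, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, trial, axis=-1)   # zero phase shift
    return detrend(filtered, axis=-1)             # baseline correction

def split_epochs(trial, n_epochs=18, fs=FS, epoch_s=EPOCH_S):
    """Cut a filtered trial into its 18 consecutive code epochs."""
    n = int(round(epoch_s * fs))
    return np.stack([trial[:, k * n:(k + 1) * n] for k in range(n_epochs)])
```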

Feature extraction and target identification.

Canonical correlation analysis (CCA) and spatiotemporal beamforming (STB) were used for feature extraction and target identification. To evaluate their performance, 10-fold cross-validation was used: all trials of a subject were divided into 10 folds, 9 folds were used as the training set and the remaining fold as the test set. There were 10 trials for each of the four shifted codes whenever a single trial was tested during the 10-fold cross-validation. Finally, the mean of the 10 target identification accuracies was reported as the final accuracy for each subject. All procedures were carried out separately for the responses to the m-sequence and chaotic codes.
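A generic scaffold for this cross-validation scheme; the exact fold assignment over the 10 trials per code is an assumption, and `fit` and `predict` stand for either the CCA or the STB pipeline described below:

```python
import numpy as np
from sklearn.model_selection import KFold

def crossvalidate(trials, labels, fit, predict, n_folds=10, seed=0):
    """10-fold cross-validation over trials.
    trials: array (n_trials, n_channels, n_samples); labels: array of
    target indices; fit(train_trials, train_labels) -> model;
    predict(model, trial) -> predicted label."""
    accs = []
    for tr_idx, te_idx in KFold(n_folds, shuffle=True, random_state=seed).split(trials):
        model = fit(trials[tr_idx], labels[tr_idx])
        preds = [predict(model, t) for t in trials[te_idx]]
        accs.append(np.mean(np.array(preds) == labels[te_idx]))
    return float(np.mean(accs))   # mean accuracy across the 10 folds
```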

Canonical Correlation Analysis (CCA).

CCA is a multivariate data processing method that reveals the underlying correlation between two multidimensional variables by maximizing the correlation of linear combinations of the two variables [91]. This method has been successfully used for the analysis of visual evoked potentials such as the SSVEP [2, 11, 91]. CCA finds two vectors Wx and Wy, called the canonical correlation vectors, for the two multidimensional variables X and Y that maximize the correlation of their canonical variates x and y, defined as x = X^T Wx and y = Y^T Wy.

Wx and Wy are derived by maximizing the correlation coefficient ρ:

ρ = max over Wx, Wy of (Wx^T X Y^T Wy) / sqrt[(Wx^T X X^T Wx)(Wy^T Y Y^T Wy)]    (2)

In this study, X is the template matrix and Y is the averaged response matrix, both of size m × n, where m denotes the number of channels and n the number of samples in each epoch.

The steps of using CCA for feature extraction and target identification are as follows and are also shown in Fig 10; a code sketch is given after the target identification steps.

Fig 10. Schematic representation of using CCA for template generation and target identification.

Template generation included extraction of the epochs for each target and averaging them to generate templates. Target identification included extraction of the epochs of the testing trial, calculation of the canonical correlation between the templates generated in the training stage and the averaged epochs to form the feature vector, and finally selection of the maximum value of the feature vector.

https://doi.org/10.1371/journal.pone.0213197.g010

Template generation.

  1. Extraction of the epochs of the i-th target in the training data set, where i = 1, 2, …, 4 (the index i represents the i-th target in the m-sequence and chaotic codes separately) and k is the total number of epochs in the training dataset.
  2. Averaging over the k epochs to generate the template for that target.

In online applications of code modulated BCIs, templates are generally obtained for a single delay of the code as a calibration target, and the templates for the other targets are obtained by shifting the original template [10]. In this offline study, because the training data set was available, we preferred to obtain the template for each target separately; this prevented the discontinuity introduced by circular shifting and allowed even minute differences between the templates to be taken into account.

Target identification.

  1. Extraction of the epochs of each target in the test trial, where r is the number of epochs in the single trial.
  2. Averaging over the r epochs to yield the averaged response matrix.
  3. Calculation of the canonical correlation between each template and the averaged response to obtain the correlation vector P_i.
  4. Calculation of the mean value of the correlation vector P_i to create the feature vector.
  5. Selection of the target with the maximum feature value ρ_i.
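A sketch of the template-matching CCA pipeline in the template generation and target identification steps above, assuming epochs organized as (channels × samples) arrays and using scikit-learn’s CCA; function names are illustrative:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def build_templates(train_epochs):
    """Average the training epochs of each target into its template.
    train_epochs: dict {target_i: array (k_epochs, n_channels, n_samples)}."""
    return {i: ep.mean(axis=0) for i, ep in train_epochs.items()}

def cca_feature(template, response):
    """Mean of the canonical correlations between a template and an
    averaged response, both (n_channels, n_samples); CCA is fitted on
    samples-by-channels matrices (steps 3-4 above)."""
    n_comp = template.shape[0]                     # one component per channel
    u, v = CCA(n_components=n_comp).fit_transform(template.T, response.T)
    return float(np.mean([np.corrcoef(u[:, k], v[:, k])[0, 1] for k in range(n_comp)]))

def identify_target_cca(templates, test_epochs):
    """Average the r test epochs, compute the feature for every template,
    and return the target with the maximum value (steps 1, 2 and 5)."""
    response = test_epochs.mean(axis=0)
    scores = {i: cca_feature(tpl, response) for i, tpl in templates.items()}
    return max(scores, key=scores.get)
```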

Spatiotemporal beamforming.

STB was initially used as a spatial filter for analyzing radar and sonar data [92]. It has also been used in EEG signal processing for source localization [93] and optimal estimation of ERP sources [94]. An extended form of beamforming was introduced as STB for single-trial detection of evoked potentials to meaningful stimuli (N400) [95]. Recently, researchers in VEP based BCIs have used this approach to decode the message of each stimulus from the EEG synchronized with it, such as in P300 based [96], SSVEP based [97, 98] and c-VEP based BCIs [9, 99].

The procedure for using STB is described in the following steps and shown in Fig 11 [99]; a code sketch is given after the target identification steps.

Fig 11. Schematic representation of using STB for building beamformers and target identification.

Building beamformers included extraction of epochs for all the targets and generation of the activation patterns for each target and in parallel calculation of covariance matrix of concatenated epochs. The beamformers were calculated from Eq 4. The target identification included multiplying the beamformers with concatenated channels of averaged epochs in testing trials for the generation of feature vector and selecting the maximum score.

https://doi.org/10.1371/journal.pone.0213197.g011

Building beamformers.

  1. Extraction of all the epochs of all the targets in the training trials to create the matrix S of size h × m × n, where h is the total number of epochs of all targets acquired from the training trial data.
  2. Extraction of the epochs corresponding to each target in the training data, where k is the number of training epochs for each target.
  3. Generation of the spatiotemporal activation pattern A_i (of size m × n) for each target by averaging over its k epochs.
  4. Concatenation of the rows of A_i to generate the vector a_i of length mn.
  5. Generation of the matrix X (of size h × mn) by concatenating the channels of the training epochs in S.
  6. Calculation of the covariance matrix Σ (of size mn × mn) of X.
  7. Generation of the beamformers w_i (vectors of length mn) by solving the linearly-constrained minimum-variance (LCMV) problem:

w_i = argmin_w (w^T Σ w)  subject to  w^T a_i = 1    (3)

The LCMV beamformers were calculated with the Lagrange multipliers method under constraint (3), which yields w_i = Σ^{-1} a_i / (a_i^T Σ^{-1} a_i).

Note that, because the training data were available, the activation pattern for each target was obtained separately, analogous to generating a separate template for each target in the CCA method.

Target identification.

  1. Extraction of all epochs of the test trial, where r is the number of epochs in the test trial.
  2. Averaging the r epochs and concatenating the channels of the averaged signal to generate the vector s of length mn.
  3. Calculation of y_i = s · w_i, where i = 1, 2, …, 4.
  4. Selection of the maximum score y_i in the feature vector.
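A sketch of the LCMV spatiotemporal beamforming procedure in the two step lists above, with the same epoch layout as the CCA sketch; the small diagonal regularization of the covariance matrix is an added assumption for numerical stability and is not described in the text:

```python
import numpy as np

def build_beamformers(train_epochs, reg=1e-6):
    """LCMV spatiotemporal beamformers (building steps 1-7).
    train_epochs: dict {target_i: array (k_epochs, n_channels, n_samples)}."""
    all_ep = np.concatenate(list(train_epochs.values()), axis=0)   # (h, m, n)
    X = all_ep.reshape(all_ep.shape[0], -1)                        # channel concatenation -> (h, mn)
    cov = np.cov(X, rowvar=False) + reg * np.eye(X.shape[1])       # Sigma (mn x mn), regularized
    cov_inv = np.linalg.inv(cov)
    beamformers = {}
    for i, ep in train_epochs.items():
        a = ep.mean(axis=0).reshape(-1)            # activation pattern a_i (length mn)
        w = cov_inv @ a / (a @ cov_inv @ a)        # LCMV solution, satisfies w^T a_i = 1
        beamformers[i] = w
    return beamformers

def identify_target_stb(beamformers, test_epochs):
    """Average the r test epochs, concatenate channels, and pick the
    target whose beamformer gives the maximum score y_i = s . w_i."""
    s = test_epochs.mean(axis=0).reshape(-1)
    scores = {i: float(s @ w) for i, w in beamformers.items()}
    return max(scores, key=scores.get)
```

With 4 channels and roughly 1650 samples per epoch, mn is several thousand, so in practice the covariance estimate may need stronger regularization or a reduced window than this sketch suggests.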

Statistical analysis.

The averaged VAS score for each stimulation type (m-sequence or chaotic codes) was calculated by averaging the scores across the 4 sessions (shifted versions of the code) for each subject.

For the evaluation of subjective fatigue between the m-sequence and chaotic code groups, the averaged VAS scores were used, and the results were expressed as mean ± SE. For the analysis of within-group changes, repeated measures ANOVA was carried out separately on the individual VAS scores of the m-sequence and chaotic codes, using a Greenhouse-Geisser correction with a significance level of α = 0.05. Post hoc analysis with Bonferroni correction (α = 0.008) was then used for pairwise comparisons within the m-sequence and chaotic code groups.

The Wilcoxon signed-rank test was employed for the analysis of the accuracy results obtained from the 10-fold cross-validation and for comparing the accuracy changes between the CCA and STB results over stimulation times from 0.344 seconds (single epoch) to 6.2 seconds (18 epochs, a single trial); the threshold was set at α = 0.05 for these analyses.
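A minimal sketch of the paired between-code comparisons with SciPy; the repeated measures ANOVA with Greenhouse-Geisser correction is not sketched, as it needs a dedicated package, and the variable names are illustrative:

```python
import numpy as np
from scipy.stats import wilcoxon, ttest_rel

def compare_codes(acc_m, acc_ch, vas_m, vas_ch, alpha=0.05):
    """Paired comparisons between m-sequence and chaotic codes:
    Wilcoxon signed-rank test on per-subject accuracies and a paired
    t-test on per-subject averaged VAS scores. SciPy reports the W
    statistic for the Wilcoxon test, whereas the text reports Z values."""
    w_stat, p_acc = wilcoxon(acc_m, acc_ch)
    t_stat, p_vas = ttest_rel(vas_m, vas_ch)
    return {"accuracy": (w_stat, p_acc, p_acc < alpha),
            "vas": (t_stat, p_vas, p_vas < alpha)}
```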

Results

Figs 12 and 13 show the grand averages of the evoked responses to the m-sequences and chaotic codes. The grand average response for each stimulus was calculated by averaging all epochs of the 10 trials, then across all channels, and finally across all subjects. To illustrate the delay between the m-sequence responses, the auto-correlation of the response to M1 and its cross-correlations with the other responses are shown in Fig 12. The corresponding results for the chaotic code responses are presented in Fig 13.

Fig 12. Grand average and cross-correlations of evoked responses to m-sequences.

The grand average responses to the codes Mi (i = 1–4) are shown (top), with their corresponding standard errors shown as dotted plots, and the auto-correlation of the response to M1 and its cross-correlations with the other responses are shown (bottom). The delay between responses can be decoded from the lags at which the cross-correlation waveforms reach their maxima.

https://doi.org/10.1371/journal.pone.0213197.g012

Fig 13. Grand average and cross-correlations of evoked responses to chaotic codes.

The grand average responses to the codes Chi (i = 1–4) are shown (top), with their corresponding standard errors shown as dotted plots, and the auto-correlation of the response to Ch1 and its cross-correlations with the other responses are shown (bottom). The delay between responses can be decoded from the lags at which the cross-correlation waveforms reach their maxima.

https://doi.org/10.1371/journal.pone.0213197.g013

Fig 14 shows the results of the 10-fold cross-validation over the stimulation time for the m-sequence and chaotic codes. Increasing the stimulation time means increasing the number of averaged epochs (code repetitions) in the test trials, from 1 to 18 epochs, i.e., from 0.344 to 6.2 seconds.

Fig 14. Accuracies of target identification for the m-sequence and chaotic codes obtained from 10-fold cross-validation with CCA and STB methods over stimulation time.

Time duration for each epoch was 0.344 seconds and the total stimulation time for all 18 epochs was 6.2 seconds. The accuracy increased over stimulation time in both the methods. The dashed line shows that the STB is faster than CCA in reaching 70% accuracy.

https://doi.org/10.1371/journal.pone.0213197.g014

The accuracies of target identification from the 10-fold cross-validation for the full stimulation time of a trial (6.2 seconds) are reported in Table 2. The maximum mean accuracies of 93.6 ± 11.9% and 94 ± 14.4% across all subjects were achieved with the STB method for the chaotic and m-sequence codes respectively.

Table 2. Accuracies of target identification results of the 10-fold cross-validation for a trial for m-sequence and chaotic code for all subjects.

https://doi.org/10.1371/journal.pone.0213197.t002

Statistical analysis results

The Wilcoxon signed-rank test showed significantly higher accuracy rates for the STB method than for the CCA method at different stimulation times, for both the m-sequence and chaotic codes. Table 3 compares the accuracy results of the CCA and STB methods for different stimulation times.

Table 3. Statistical results for accuracy values of paired t-test in the comparison between STB and CCA methods for m-sequence and chaotic codes.

https://doi.org/10.1371/journal.pone.0213197.t003

The Wilcoxon signed-rank test showed no significant difference in the accuracy rates of the STB method for single-trial target identification between the m-sequence and chaotic code groups (Z = -1.016, p = 0.31). Similarly, no significant difference was observed when the single-trial accuracies of the m-sequence and chaotic code groups were compared using the CCA method (Z = -1.204, p = 0.22).

Between group fatigue analysis results

The chaotic codes resulted in a significantly lower VAS score (4.9076 ± 2.1981) than the m-sequence codes (5.8152 ± 2.6207), as analyzed by paired t-test (t(43) = 4.054, p = 0.0005) and shown in Fig 15.

Fig 15. Averaged subjective fatigue scores of all m-sequence and chaotic codes of all the subjects.

The chaotic codes VAS score was significantly lower than the m-sequence codes,*p = 0.0005, n = 44.

https://doi.org/10.1371/journal.pone.0213197.g015

Within group fatigue rate analysis results

No statistically significant changes were seen in the within-group comparison of VAS scores with repeated measures ANOVA in the m-sequence group (F(1.765, 79.434) = 0.754, p = 0.45).

Repeated measures ANOVA showed significant changes in the VAS scores within the chaotic code group (F(2.523, 113.521) = 5.345, p = 0.003). Post hoc analysis using Bonferroni correction with α = 0.008 showed a significant difference between Ch3 and Ch1; the mean VAS scores of Ch1 and Ch3 were 4.58 ± 2.32 and 5.19 ± 2.34 respectively (p = 0.002). No other pair of chaotic codes differed significantly.

Fig 16 shows the subjective fatigue scores of individual m-sequence and chaotic codes.

Fig 16. Subjective fatigue scores of individual m-sequence and chaotic codes.

There was significant difference between VAS score of chaotic codes Ch1 and Ch3 (for each code n = 44 and *p = 0.002).

https://doi.org/10.1371/journal.pone.0213197.g016

Discussion

In this study, we successfully used chaotic codes to evoke c-VEPs and found that the chaotic codes significantly reduced subjective fatigue compared to the conventional m-sequence code. We showed that the proposed code evoked distinctive, identifiable responses in the EEG, comparable with those of the m-sequence code currently employed in c-VEP generation and code modulated BCIs.

For the first time in code modulated studies, chaotic codes presented as visual stimuli were identified successfully from their corresponding VEPs. The four shifted versions of the m-sequence and chaotic codes used in this study each had a circular delay of 8 bits ahead of the previous code. From Figs 12 and 13, it can be seen that the imposed delays of 0.088 seconds between the presented chaotic code stimuli, as for the m-sequence code, were preserved in their corresponding grand average responses. This delay could be observed and detected in the peaks of the auto-correlation and cross-correlations of the responses to each code (Figs 12 and 13, lower panels). The time at which the cross-correlation function was maximal determined the lag between the intended stimulus and the non-shifted version of the code. For example, the lag between the responses to Ch1 (zero-bit shift) and Ch2 (8-bit shift) was 0.088 seconds, corresponding to 8 bits between their stimuli (note that each bit shift is 1/90 seconds).

For the target identification of c-VEPs at the corresponding lag times in each group (m-sequence and chaotic code), we used CCA, a common method for c-VEP analysis, as well as the STB method recently introduced for target identification in code modulated evoked potentials [9]. By increasing the stimulation time (increasing the number of epochs averaged), the target identification accuracy increased; for the m-sequence code, total accuracies of 91.13 ± 13.8% and 94 ± 14.4% were achieved by the CCA and STB methods respectively. For the chaotic codes, total accuracies of 89.5 ± 11.7% and 93.6 ± 11.9% were achieved by the CCA and STB methods respectively (Table 2). The analysis showed no significant differences in target identification between the m-sequence and chaotic codes for either method.

Our results show that, for both the m-sequence and chaotic codes, the total accuracy with the CCA method exceeded 70% after approximately 2 seconds (6 epochs). With the STB method, the total accuracy reached 70% after approximately 1 second (3 epochs) for the m-sequence codes and 1.5 seconds (4 epochs) for the chaotic codes (Fig 14), which is acceptable for BCI applications [100].

Our results indicate that the STB method was significantly better than the CCA method, especially at shorter stimulation times for the m-sequence codes (Fig 14 and Table 3), whereas for the chaotic codes the advantage of STB over CCA was more pronounced at longer stimulation times (Fig 14 and Table 3). In addition, accuracy increased faster with STB than with CCA (Fig 14). We can therefore conclude that STB is faster than CCA, as 70% accuracy was reached sooner with it. In a previous c-VEP based study, the STB method was also shown to be better than the SVM method [9].

The most important result of our study is the significant reduction of subjective fatigue in the chaotic code group compared to the m-sequence codes (Fig 15). The reason for the higher subjective fatigue in the m-sequence group is that, while both codes had a broadband frequency spectrum, the spectral content of the m-sequence codes used in our study was concentrated at lower frequencies (Fig 4), which causes more subjective fatigue and visual discomfort than higher-frequency visual stimuli [69, 90] and carries a higher risk of photosensitive epileptic seizures [101].

In addition, the reduction of fatigue and visual discomfort seen in the chaotic group can be attributed to the higher-frequency spectral distribution of the chaotic stimuli, as shown in Fig 4. Frequency components higher than 30 Hz reduce the probability of fatigue and visual discomfort because they are hardly visible and largely imperceptible to the human eye [69].

Additionally, visual stimuli with excessive contrast energy in the medium frequency range of 10 to 30 Hz, such as the m-sequence codes used in our study, can increase the level of eye discomfort [75]. The comparison of the spectral content of the m-sequence and chaotic codes in Fig 4 shows that the m-sequence code had more dominant peaks within the medium frequency range, while the chaotic codes had more dominant peaks above 30 Hz. Given these reasons for the significant reduction of subjective fatigue by the chaotic codes, we suggest their use for designing ergonomic c-VEP based BCI applications.

Another possible reason for the significant reduction of subjective fatigue with the chaotic codes used in our study is the closeness of chaotic behavior to the 1/f spectral property [76, 77] observed in natural scenes and phenomena. It is widely reported that most natural phenomena exhibit 1/f-type spectral properties [75, 102–104]. Interestingly, visual system encoding is more efficient for stimuli with spatial and temporal patterns resembling 1/f amplitude spectral features [75, 105]. Visual stimuli with these characteristics, such as chaotic codes, generate sparse cortical responses in the receptive fields of neurons in the primary visual cortex [106]. As hemodynamic responses mainly reflect the local field activity of neurons [107], the sparseness in the number of firing neurons may lead to a lower demand for oxygenated blood and hence less fatigue.

fMRI and near-infrared spectroscopy (NIRS) show that oxygenation is more prominent when visual stimuli are relatively uncomfortable [107, 108], as with the m-sequence codes, whose pseudo-random behavior and flat wideband spectrum [78, 79] increase the probability of discomfort.

Our within-group comparisons of the individual VAS scores of the m-sequence and chaotic codes show that the m-sequences (M1–M4) did not cause significantly different fatigue levels. However, in the chaotic code group, the VAS score of Ch1 was significantly lower than that of Ch3 (Fig 16). This within-group difference is much smaller than the overall difference in fatigue level between the m-sequence and chaotic code groups. We have no explanation for this result and suggest further chaotic-code c-VEP studies to find its exact cause.

Importance of chaotic visual stimuli and suggestions for future works

Research over the last few years has shown that in several areas of the visual system, information processing involves dynamical and nonlinear processes, as seen in retinal ganglion cells [109, 110], the retina [111], the lateral geniculate nucleus [104] and the visual cortex [112]. In addition, spatial integration of information in retinal ganglion cells [109, 110] and the processing of colored visual stimuli in the primary visual cortex [113] also involve nonlinear dynamics. Visual stimuli with chaotic dynamics involve not only the primary visual cortex but also parieto-occipital and parietal areas of the brain [114]. We thus suggest the use of chaotic visual stimuli in future c-VEP based studies, as they conform to the biological reality of the nervous system. Further research on the neural processes in the visual cortex underlying the reduced fatigue with chaotic dynamical stimuli is also suggested.

The results of this study also suggest the use of chaotic codes together with nonlinear analysis, since the underlying nonlinear dynamics of the chaotic stimuli may be decoded better by nonlinear methods than by the conventional analysis methods used for target identification.

As this study is the first of its kind in c-VEP based investigations, a limitation is that we did not study the effect of changing the logistic map parameters on target identification accuracy and subjective fatigue. We therefore suggest that future studies determine optimal parameters for generating the chaotic code. We also suggest the use of visual stimuli that are closer to the 1/f spectral property.

Finally, as our results show that chaotic visual stimuli are identifiable by the CCA and STB methods and cause less fatigue than the conventional m-sequence codes, we suggest further c-VEP studies using these and other methods for designing better CDMA based BCIs in the future.

Conclusion

This study examined, for the first time, a chaotic code used for evoking c-VEPs in CDMA based BCIs and compared the results with the conventional m-sequence code widely used in code modulated BCIs. Our results show that the chaotic code was decoded successfully from the recorded EEG responses and complied with the requirements for use as a modulating code in c-VEP generation. Fatigue was reduced more by the chaotic code than by the m-sequence code. We suggest the use of chaotic codes in c-VEP based studies for better BCI applications.

Supporting information

S1 File. m-sequence code presentation.

https://doi.org/10.1371/journal.pone.0213197.s001

Two consecutive trials of m-sequence code.

S2 File. Chaotic code presentation.

https://doi.org/10.1371/journal.pone.0213197.s002

Two consecutive trials of chaotic code.

Acknowledgments

The authors thank the subjects for their cooperation and the staff of the biomedical engineering department of Tehran University of Medical Sciences (TUMS). This work was approved by the research ethics committee of Tehran University of Medical Sciences.

References

  1. 1. Wang Y, Gao X, Hong B, Jia C, Gao S. Brain-computer interfaces based on visual evoked potentials. IEEE Engineering in medicine and biology magazine. 2008;27(5).
  2. 2. Bin G, Gao X, Wang Y, Hong B, Gao S. VEP-based brain-computer interfaces: time, frequency, and code modulations [Research Frontier]. IEEE Computational Intelligence Magazine. 2009;4(4).
  3. 3. Kluge T, Hartmann M, editors. Phase coherent detection of steady-state evoked potentials: experimental results and application to brain-computer interfaces. Neural Engineering, 2007 CNE'07 3rd International IEEE/EMBS Conference on; 2007: IEEE.
  4. 4. Lee P-L, Sie J-J, Liu Y-J, Wu C-H, Lee M-H, Shu C-H, et al. An SSVEP-actuated brain computer interface using phase-tagged flickering sequences: a cursor system. Annals of biomedical engineering. 2010;38(7):2383–97. pmid:20177780
  5. 5. Viterbi AJ, Viterbi AJ. CDMA: principles of spread spectrum communication: Addison-Wesley Reading, MA; 1995.
  6. 6. Sutter EE, editor The visual evoked response as a communication channel. Proceedings of the IEEE Symposium on Biosensors; 1984.
  7. 7. Pickholtz R, Schilling D, Milstein L. Theory of spread-spectrum communications—a tutorial. IEEE transactions on Communications. 1982;30(5):855–84.
  8. 8. Wei Q, Liu Y, Gao X, Wang Y, Yang C, Lu Z, et al. A novel c-VEP BCI paradigm for increasing the number of stimulus targets based on grouping modulation with different codes. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2018.
  9. 9. Wittevrongel B, Van Wolputte E, Van Hulle MM. Code-modulated visual evoked potentials using fast stimulus presentation and spatiotemporal beamformer decoding. Scientific reports. 2017;7(1):15037. pmid:29118386
  10. 10. Bin G, Gao X, Yan Z, Hong B, Gao S. An online multi-channel SSVEP-based brain–computer interface using a canonical correlation analysis method. Journal of neural engineering. 2009;6(4):046002. pmid:19494422
  11. 11. Bin G, Gao X, Wang Y, Li Y, Hong B, Gao S. A high-speed BCI based on code modulation VEP. Journal of neural engineering. 2011;8(2):025015. pmid:21436527
  12. 12. Sutter EE. The brain response interface: communication through visually-induced electrical brain responses. Journal of Microcomputer Applications. 1992;15(1):31–45.
  13. 13. Spüler M, Rosenstiel W, Bogdan M. Online adaptation of a c-VEP brain-computer interface (BCI) based on error-related potentials and unsupervised learning. PloS one. 2012;7(12):e51077. pmid:23236433
  14. 14. Kapeller C, Hintermüller C, Abu-Alqumsan M, Prückl R, Peer A, Guger C, editors. A BCI using VEP for continuous control of a mobile robot. Engineering in medicine and biology society (EMBC), 2013 35th annual international conference of the IEEE; 2013: IEEE.
  15. 15. Riechmann H, Finke A, Ritter H. Using a cVEP-based Brain-Computer Interface to control a virtual agent. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2016;24(6):692–9. pmid:26469340
  16. 16. Thielen J, van den Broek P, Farquhar J, Desain P. Broad-Band visually evoked potentials: re (con) volution in brain-computer interfacing. PloS one. 2015;10(7):e0133797. pmid:26208328
  17. 17. Waytowich NR, Krusienski DJ. Spatial decoupling of targets and flashing stimuli for visual brain–computer interfaces. Journal of neural engineering. 2015;12(3):036006. pmid:25875047
  18. 18. Aminaka D, Makino S, Rutkowski TM, editors. Eeg filtering optimization for code–modulated chromatic visual evoked potential–based brain–computer interface. International Workshop on Symbiotic Interaction; 2015: Springer.
  19. 19. Wei Q, Feng S, Lu Z. Stimulus specificity of brain-computer interfaces based on code modulation visual evoked potentials. PloS one. 2016;11(5):e0156416. pmid:27243454
  20. 20. Spüler M. A high-speed brain-computer interface (BCI) using dry EEG electrodes. PloS one. 2017;12(2):e0172400. pmid:28225794
  21. 21. Wei Q, Liu Y, Gao X, Wang Y, Yang C, Lu Z, et al. A novel c-VEP BCI paradigm for increasing the number of stimulus targets based on grouping modulation with different codes. 2018.
  22. 22. Liu Y, Wei Q, Lu Z. A multi-target brain-computer interface based on code modulated visual evoked potentials. PloS one. 2018;13(8):e0202478. pmid:30118504
  23. 23. Strogatz SH. Nonlinear dynamics and chaos: with applications to physics, biology, chemistry, and engineering: CRC Press; 2018.
  24. 24. Camazine S, Deneubourg J-L, Franks NR, Sneyd J, Bonabeau E, Theraula G. Self-organization in biological systems: Princeton University Press; 2003.
  25. 25. Saha T, Galic M. Self-organization across scales: from molecules to organisms. Phil Trans R Soc B. 2018;373(1747):20170113. pmid:29632265
  26. 26. Hoebeek FE, Witter L, Ruigrok TJ, De Zeeuw CI. Differential olivo-cerebellar cortical control of rebound activity in the cerebellar nuclei. Proceedings of the National Academy of Sciences. 2010:200907118.
  27. 27. Ishikawa T, Shimuta M, Häusser M. Multimodal sensory integration in single cerebellar granule cells in vivo. Elife. 2015;4:e12916. pmid:26714108
  28. 28. Aihara K. Chaotic Neural Networks (Bifurcation Phenomena in Nonlinear Systems and Theory of Dynamical Systems). 1989.
  29. 29. Freeman WJ. Tutorial on neurobiology: from single neurons to brain chaos. International journal of bifurcation and chaos. 1992;2(03):451–82.
  30. 30. Nobukawa S, Nishimura H. Chaotic resonance in coupled inferior olive neurons with the Llinás approach neuron model. Neural computation. 2016;28(11):2505–32.
  31. 31. Potapov A, Ali M. Robust chaos in neural networks. Physics Letters A. 2000;277(6):310–22.
  32. 32. Rössert C, Dean P, Porrill J. At the edge of chaos: how cerebellar granular layer network dynamics can provide the basis for temporal filters. PLoS computational biology. 2015;11(10):e1004515. pmid:26484859
  33. 33. Breakspear M. Dynamic models of large-scale brain activity. Nat Neurosci. 2017;20(3):340–52. Epub 2017/02/24. pmid:28230845.
  34. 34. Tsuda I. Chaotic itinerancy as a dynamical basis of hermeneutics in brain and mind. World Futures: Journal of General Evolution. 1991;32(2–3):167–84.
  35. 35. Pittorino F, Ibáñez-Berganza M, di Volo M, Vezzani A, Burioni R. Chaos and correlated avalanches in excitatory neural networks with synaptic plasticity. Physical review letters. 2017;118(9):098102. pmid:28306273
  36. 36. Tsuda I. Dynamic link of memory—chaotic memory map in nonequilibrium neural networks. Neural networks. 1992;5(2):313–26.
  37. 37. Freeman WJ. Simulation of chaotic EEG patterns with a dynamic model of the olfactory system. Biological cybernetics. 1987;56(2–3):139–50. pmid:3593783
  38. 38. Rasmussen R, Jensen MH, Heltberg ML. Chaotic dynamics mediate brain state transitions, driven by changes in extracellular ion concentrations. Cell Systems. 2017;5(6):591–603.e4. pmid:29248375
  39. 39. Hong D, Man S, Martin JV. A stochastic mechanism for signal propagation in the brain: Force of rapid random fluctuations in membrane potentials of individual neurons. J Theor Biol. 2016;389:225–36. Epub 2015/11/12. pmid:26555846.
  40. 40. Kostal L, Lansky P. Randomness of spontaneous activity and information transfer in neurons. Physiol Res. 2008;57 Suppl 3:S133–8. Epub 2008/05/17. pmid:18481907.
  41. 41. Kostal L, Lansky P, Rospars JP. Neuronal coding and spiking randomness. Eur J Neurosci. 2007;26(10):2693–701. Epub 2007/11/16. pmid:18001270.
  42. 42. Dotko P, Hess K, Levi R, Nolte M, Reimann M, Scolamiero M, et al. Topological analysis of the connectome of digital reconstructions of neural microcircuits. arXiv preprint arXiv:160101580. 2016.
  43. 43. Bassett DS, Sporns O. Network neuroscience. Nat Neurosci. 2017;20(3):353–64. Epub 2017/02/24. pmid:28230844; PubMed Central PMCID: PMCPMC5485642.
  44. 44. Deco G, Jirsa VK, Robinson PA, Breakspear M, Friston K. The dynamic brain: from spiking neurons to neural masses and cortical fields. PLoS Comput Biol. 2008;4(8):e1000092. Epub 2008/09/05. pmid:18769680; PubMed Central PMCID: PMCPMC2519166.
  45. 45. Korn H, Faure P. Is there chaos in the brain? II. Experimental evidence and related models. Comptes rendus biologies. 2003;326(9):787–840. pmid:14694754
  46. 46. Vegue M, Perin R, Roxin A. On the structure of cortical micro-circuits inferred from small sample sizes. Journal of Neuroscience. 2017:0984–17.
  47. 47. Shimono M, Beggs JM. Functional clusters, hubs, and communities in the cortical microconnectome. Cerebral Cortex. 2014;25(10):3743–57. pmid:25336598
  48. 48. Bob P. Chaos, cognition and disordered brain. Activitas Nervosa Superior. 2008;50(4):114–7.
  49. 49. Freeman WJ. Consciousness, intentionality and causality. Journal of Consciousness Studies. 1999;6(11–12):143–72.
  50. 50. Freeman WJ, Kozma R, Werbos PJ. Biocomplexity: adaptive behavior in complex stochastic dynamical systems. Biosystems. 2001;59(2):109–23. pmid:11267739
  51. 51. Stam C, Pijn J, Suffczynski P, Da Silva FL. Dynamics of the human alpha rhythm: evidence for non-linearity? Clinical Neurophysiology. 1999;110(10):1801–13. pmid:10574295
  52. 52. Andrzejak RG, Lehnertz K, Mormann F, Rieke C, David P, Elger CE. Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: Dependence on recording region and brain state. Physical Review E. 2001;64(6):061907.
  53. 53. Stam CJ. Nonlinear dynamical analysis of EEG and MEG: review of an emerging field. Clinical Neurophysiology. 2005;116(10):2266–301. pmid:16115797
  54. 54. Cerquera A, Vollebregt MA, Arns M. Nonlinear Recurrent Dynamics and Long-Term Nonstationarities in EEG Alpha Cortical Activity: Implications for Choosing Adequate Segment Length in Nonlinear EEG Analyses. Clinical EEG and Neuroscience. 2018;49(2):71–8.
  55. 55. Baravalle R, Rosso OA, Montani F. Rhythmic activities of the brain: Quantifying the high complexity of beta and gamma oscillations during visuomotor tasks. Chaos. 2018;28(7):075513. Epub 2018/08/03. pmid:30070505.
56. Acharya UR, Bhat S, Faust O, Adeli H, Chua EC-P, Lim WJ, et al. Nonlinear dynamics measures for automated EEG-based sleep stage detection. European neurology. 2015;74(5–6):268–87. pmid:26650683
57. Stam CJ. Nonlinear dynamical analysis of EEG and MEG: review of an emerging field. Clinical neurophysiology. 2005;116(10):2266–301. pmid:16115797
58. Coyle D, Prasad G, McGinnity TM. A time-series prediction approach for feature extraction in a brain-computer interface. IEEE transactions on neural systems and rehabilitation engineering. 2005;13(4):461–7. pmid:16425827
59. Gysels E, Celka P. Phase synchronization for the recognition of mental tasks in a brain-computer interface. IEEE Transactions on neural systems and rehabilitation engineering. 2004;12(4):406–15. pmid:15614996
60. Uribe LF, Fazanaro FI, Castellano G, Suyama R, Attux R, Cardozo E, et al. A Recurrence-Based Approach for Feature Extraction in Brain-Computer Interface Systems. Translational Recurrences: Springer; 2014. p. 95–107.
61. Sarbadhikari S, Chakrabarty K. Chaos in the brain: a short review alluding to epilepsy, depression, exercise and lateralization. Medical engineering & physics. 2001;23(7):447–57.
62. Birbaumer N, Flor H, Lutzenberger W, Elbert T. Chaos and order in the human brain. Electroencephalography and Clinical Neurophysiology/Supplement. 1995;44:450–9.
63. Litt B, Echauz J. Prediction of epileptic seizures. The Lancet Neurology. 2002;1(1):22–30. pmid:12849542
64. Amengual-Gual M, Fernández IS, Loddenkemper T. Patterns of epileptic seizure occurrence. Brain research. 2018.
65. Hosseinifard B, Moradi MH, Rostami R. Classifying depression patients and normal subjects using machine learning techniques and nonlinear features from EEG signal. Computer Methods and Programs in Biomedicine. 2013;109(3):339–45. pmid:23122719
66. Ahmadlou M, Adeli H, Adeli A. Fractality analysis of frontal brain in major depressive disorder. International Journal of Psychophysiology. 2012;85(2):206–11. pmid:22580188
67. Besthorn C, Zerfass R, Geiger-Kabisch C, Sattel H, Daniel S, Schreiter-Gasser U, et al. Discrimination of Alzheimer's disease and normal aging by EEG data. Electroencephalography and Clinical Neurophysiology. 1997;103(2):241–8. pmid:9277627
68. Catarino A, Churches O, Baron-Cohen S, Andrade A, Ring H. Atypical EEG complexity in autism spectrum conditions: a multiscale entropy analysis. Clinical Neurophysiology. 2011;122(12):2375–83. pmid:21641861
69. Volosyak I, Valbuena D, Luth T, Malechka T, Graser A. BCI demographics II: How many (and what kinds of) people can use a high-frequency SSVEP BCI? IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2011;19(3):232–9. pmid:21421448
70. Chang MH, Baek HJ, Lee SM, Park KS. An amplitude-modulated visual stimulation for reducing eye fatigue in SSVEP-based brain–computer interfaces. Clinical Neurophysiology. 2014;125(7):1380–91. pmid:24368034
71. Won D-O, Hwang H-J, Dähne S, Müller K-R, Lee S-W. Effect of higher frequency on the classification of steady-state visual evoked potentials. Journal of neural engineering. 2015;13(1):016014. pmid:26695712
72. Xie J, Xu G, Wang J, Li M, Han C, Jia Y. Effects of mental load and fatigue on steady-state evoked potential based brain computer interface tasks: a comparison of periodic flickering and motion-reversal based visual attention. PloS one. 2016;11(9):e0163426. pmid:27658216
73. Field DJ. Relations between the statistics of natural images and the response properties of cortical cells. JOSA A. 1987;4(12):2379–94.
74. O'Hare L, Hibbard PB. Spatial frequency and visual discomfort. Vision research. 2011;51(15):1767–77. pmid:21684303
75. Yoshimoto S, Garcia J, Jiang F, Wilkins AJ, Takeuchi T, Webster MA. Visual discomfort and flicker. Vision research. 2017;138:18–28. pmid:28709920
76. Relano A, Gómez J, Molina R, Retamosa J, Faleiro E. Quantum chaos and 1/f noise. Physical review letters. 2002;89(24):244102. pmid:12484946
77. Molina R, Relaño A, Retamosa J, Muñoz L, Faleiro E, Gómez J, editors. Perspectives on 1/f noise in quantum chaos. Journal of Physics: Conference Series; 2010: IOP Publishing.
78. Kumar VA, Mitra A, Prasanna SM. On the effectivity of different pseudo-noise and orthogonal sequences for speech encryption from correlation properties. International journal of information technology. 2007;4(2):455–62.
79. Li X, Ritcey J. M-sequences for OFDM peak-to-average power ratio reduction and error correction. Electronics letters. 1997;33(7):554–5.
80. Heidari-Bateni G, McGillem CD. A chaotic direct-sequence spread-spectrum communication system. IEEE Transactions on communications. 1994;42(234):1524–7.
81. Tse C, Lau F. Chaos-Based Digital Communication Systems: Operating Principles, Analysis Methods and Performance Evaluation. Berlin: Springer; 2003.
82. Kurian AP, Puthusserypady S, Htut SM. Performance enhancement of DS/CDMA system using chaotic complex spreading sequence. IEEE Transactions on wireless communications. 2005;4(3):984–9.
83. Sarma A, Sarma KK, Mastorakis N. Orthogonal Chaotic Sequence for Use in Wireless Channels. International Journal of Computers, Communications and Control. 2015;9:21–9.
84. Reid RC, Victor J, Shapley R. The use of m-sequences in the analysis of visual neurons: linear receptive field properties. Visual neuroscience. 1997;14(6):1015–27. pmid:9447685
85. Buracas GT, Boynton GM. Efficient design of event-related fMRI experiments using M-sequences. Neuroimage. 2002;16(3):801–13.
86. Abel A, Beder A, Kerber K, Schwarz W, editors. Chaotic codes for CDMA application. Proc ECCTD; 1997.
87. Costantino R, Desharnais R, Cushing J, Dennis B. Chaotic dynamics in an insect population. Science. 1997;275(5298):389–91. pmid:8994036
88. May RM. Simple mathematical models with very complicated dynamics. Nature. 1976;261(5560):459. pmid:934280
89. Shahid A, Wilkinson K, Marcu S, Shapiro CM. Visual analogue scale to evaluate fatigue severity (VAS-F). STOP, THAT and one hundred other sleep scales: Springer; 2011. p. 399–402.
90. Keihani A, Shirzhiyan Z, Farahi M, Shamsi E, Mahnam A, Makkiabadi B, et al. Use of sine shaped high-frequency rhythmic visual stimuli patterns for SSVEP response analysis and fatigue rate evaluation in normal subjects. Frontiers in Human Neuroscience. 2018;12.
91. Lin Z, Zhang C, Wu W, Gao X. Frequency recognition based on canonical correlation analysis for SSVEP-based BCIs. IEEE transactions on biomedical engineering. 2006;53(12):2610–4.
92. Van Veen BD, Buckley KM. Beamforming: A versatile approach to spatial filtering. IEEE ASSP Magazine. 1988;5(2):4–24.
93. Van Veen BD, Van Drongelen W, Yuchtman M, Suzuki A. Localization of brain electrical activity via linearly constrained minimum variance spatial filtering. IEEE Transactions on biomedical engineering. 1997;44(9):867–80. pmid:9282479
94. Treder MS, Porbadnigk AK, Avarvand FS, Müller K-R, Blankertz B. The LDA beamformer: optimal estimation of ERP source time series using linear discriminant analysis. Neuroimage. 2016;129:279–91. pmid:26804780
95. Van Vliet M, Chumerin N, De Deyne S, Wiersema JR, Fias W, Storms G, et al. Single-trial ERP component analysis using a spatiotemporal LCMV beamformer. IEEE Transactions on Biomedical Engineering. 2016;63(1):55–66. pmid:26285053
96. Wittevrongel B, Van Hulle MM. Faster P300 classifier training using spatiotemporal beamforming. International journal of neural systems. 2016;26(03):1650014.
97. Wittevrongel B, Van Hulle MM, editors. Hierarchical online SSVEP spelling achieved with spatiotemporal beamforming. Statistical Signal Processing Workshop (SSP), 2016 IEEE; 2016: IEEE.
98. Wittevrongel B, Van Hulle MM. Frequency- and phase-encoded SSVEP using spatiotemporal beamforming. PloS one. 2016;11(8):e0159988. pmid:27486801
99. Wittevrongel B, Van Hulle MM. Spatiotemporal beamforming: A transparent and unified decoding approach to synchronous visual Brain-Computer Interfacing. Frontiers in neuroscience. 2017;11:630. pmid:29187809
100. Kubler A, Mushahwar V, Hochberg LR, Donoghue JP. BCI meeting 2005-workshop on clinical issues and applications. IEEE Transactions on neural systems and rehabilitation engineering. 2006;14(2):131–4. pmid:16792277
101. Fisher RS, Harding G, Erba G, Barkley GL, Wilkins A. Photic- and pattern-induced seizures: a review for the Epilepsy Foundation of America Working Group. Epilepsia. 2005;46(9):1426–41. pmid:16146439
102. Isherwood ZJ, Schira MM, Spehar B. The tuning of human visual cortex to variations in the 1/fα amplitude spectra and fractal properties of synthetic noise images. Neuroimage. 2017;146:642–57. pmid:27742601
103. Ellemberg D, Hansen BC, Johnson A. The developing visual system is not optimally sensitive to the spatial statistics of natural images. Vision Research. 2012;67:1–7. pmid:22766478
104. Tan Z, Yao H. The spatiotemporal frequency tuning of LGN receptive field facilitates neural discrimination of natural stimuli. Journal of Neuroscience. 2009;29(36):11409–16. pmid:19741147
105. Atick JJ, Redlich AN. What does the retina know about natural scenes? Neural computation. 1992;4(2):196–210.
106. Olshausen BA, Field DJ. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision research. 1997;37(23):3311–25. pmid:9425546
107. Logothetis NK, Pauls J, Augath M, Trinath T, Oeltermann A. Neurophysiological investigation of the basis of the fMRI signal. Nature. 2001;412(6843):150. pmid:11449264
108. Haigh SM, Barningham L, Berntsen M, Coutts LV, Hobbs ES, Irabor J, et al. Discomfort and the cortical haemodynamic response to coloured gratings. Vision research. 2013;89:47–53. pmid:23867567
109. Bölinger D, Gollisch T. Closed-loop measurements of iso-response stimuli reveal dynamic nonlinear stimulus integration in the retina. Neuron. 2012;73(2):333–46. pmid:22284187
110. Takeshita D, Gollisch T. Nonlinear spatial integration in the receptive field surround of retinal ganglion cells. Journal of Neuroscience. 2014;34(22):7548–61. pmid:24872559
111. Godfrey KB, Swindale NV. Retinal wave behavior through activity-dependent refractory periods. PLoS Comput Biol. 2007;3(11):e245. Epub 2007/12/07. pmid:18052546; PubMed Central PMCID: PMC2098868.
112. Wang Y, Wang Y. Neurons in primary visual cortex represent distribution of luminance. Physiological reports. 2016;4(18).
113. Nunez V, Shapley RM, Gordon J. Nonlinear dynamics of cortical responses to color in the human cVEP. Journal of Vision. 2017;17(11):9. pmid:28973563
114. Fokin V, Shelepin YE, Kharauzov A, Trufanov G, Sevost'yanov A, Pronin S, et al. Localization of human cortical areas activated on perception of ordered and chaotic images. Neuroscience and behavioral physiology. 2008;38(7):677–85. pmid:18720013