Method for decoding an individual's visual attention from a brainwave signal

Publication date: 19-08-2020
Number:
KR1020200097681A
Author:
Assignee:
Contacts:
Application number: 70-20-102009509
Application date: 06-09-2018

[1]

Other advantages and features of the techniques presented above will become apparent upon reading the detailed description given below. FIG. 1a schematically illustrates a system for determining the focus of an individual's visual attention from EEG signals according to one example of an embodiment. FIG. 1b schematically illustrates a computing device according to an embodiment. FIG. 2a schematically depicts data and signals used in a system and method for determining the focus of visual attention according to an example of an embodiment. FIGS. 2b to 2e are diagrams each schematically illustrating an example of a set of modulation signals usable in a method or system for determining the focus of visual attention. FIG. 3 schematically illustrates aspects of a method and system for determining the focus of visual attention. FIG. 4a is a flowchart of a method for generating an EEG signal reconstruction model according to an embodiment. FIG. 4b is a flowchart of a method for determining the focus of visual attention of an individual according to one example of an embodiment. FIG. 5 illustrates an example of an animated graphical object. FIG. 6 illustrates an example of a visual stimulus. FIG. 7 illustrates an example of an application of a system and method for determining the focus of visual attention. Various embodiments are described with reference to the drawings. Similar or identical elements are referenced using the same reference numbers.

[2]

The present disclosure relates to a method and system for determining the focus of visual attention of an individual from a brainwave signal.

[3]

Description of the Related Art. Various portable systems that can be used to record and exploit brainwave signals in various fields of application are known. In particular, the substantial evolution of systems for recording EEG signals and decoding them in real time means that new applications requiring fast and reliable use can now be envisaged.

[4]

Particular decoding techniques are based on the extraction of electrophysiological features from EEG signals that make it possible to establish a persistent relationship between brain activity and visual stimuli in an environment. The difficulty consists in acquiring EEG signals in real time and identifying, in real time, the particular visual stimulus to which the individual is attending. The decoding should moreover be robust, i.e. it should be possible to determine the particular content to which the individual is attending with sufficient speed to trigger a command corresponding to that visual stimulus.

[5]

Document US8391966B2 discloses a technique for analyzing EEG signals produced by an individual observing stimuli, each stimulus comprising a light source flashing at a given frequency. To identify the visual stimulus observed at a given time, various features are computed in order to classify the EEG signals into classes corresponding to the different stimuli.

[6]

The EEG signals are divided, for example, into contiguous segments, and the correlation coefficients between segment pairs of a given signal are calculated to produce a first set of features. The average correlation coefficient is calculated and compared with a threshold to determine whether the user is observing the stimulus. Furthermore, the correlation between the EEG signal and the stimulus can be analyzed to produce a second set of features: the degree of correlation with a stimulus is higher when the individual is actually observing that stimulus. The coefficients of an autoregressive model may also be calculated from the average EEG signal, these coefficients forming a third set of features.
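
By way of illustration only (this code does not appear in the patent; the segment count, sampling rate and test signals are arbitrary choices), the first feature set described above can be sketched as follows:

```python
import numpy as np

def mean_segment_correlation(signal, n_segments):
    """Split one EEG channel into contiguous segments and average the
    Pearson correlation coefficients over all pairs of segments."""
    segments = np.array_split(np.asarray(signal, dtype=float), n_segments)
    length = min(len(s) for s in segments)          # equal-length segments
    segments = np.vstack([s[:length] for s in segments])
    corr = np.corrcoef(segments)                    # n_segments x n_segments
    iu = np.triu_indices(n_segments, k=1)           # off-diagonal pairs only
    return corr[iu].mean()

# A strongly periodic signal (as produced while observing a flickering
# stimulus) yields a high mean segment correlation; pure noise does not.
t = np.arange(0, 4, 1 / 250)                        # 4 s at 250 Hz
periodic = np.sin(2 * np.pi * 10 * t)
noise = np.random.default_rng(0).standard_normal(t.size)
```

Comparing the resulting average against a threshold then indicates whether the user is observing the flickering stimulus.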

[7]

This technique requires prior classification of the EEG signals on the basis of a plurality of feature sets combined with classification methods such as thresholding, nearest-neighbor search, neural networks, and the like. The performance of this technique therefore depends on the combination of features and classification methods used.

[8]

Furthermore, this technique is limited to stimuli that take the form of flashing lights, which greatly limits its range of application.

[9]

Other methods are known. For example, canonical correlation analysis (CCA) is used by Guangyu Bin et al. in "An online multi-channel SSVEP-based brain-computer interface using a canonical correlation analysis method", Journal of Neural Engineering, 2009.

[10]

The present disclosure is provided with reference to functions, functional units, entities, block diagrams and flowcharts that describe various embodiments of methods, systems, and programs. Each function, functional unit, entity, and step may be implemented by software, hardware, firmware, microcode or any combination of these technologies. When software is used, the functions, functional units, entities, or steps may be implemented by computer program instructions or software code. These instructions may be stored or transmitted on a computer-readable storage medium and/or may be executed by a computer to implement these functions, functional units, entities, or steps.

[11]

The various embodiments and aspects described below may be combined or simplified in various ways. In particular, the steps of the various methods may be repeated for the respective graphical objects and/or users in question, and steps may be reordered, executed in parallel, and/or executed by various computing entities. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.

[12]

FIG. 1a schematically shows an example of an embodiment of a system 100 for determining the focus of visual attention of an individual 101 from a brainwave signal.

[13]

In one or more embodiments, the system 100 includes a display screen 105 configured to display animated graphical objects, a device 110 for generating a display signal, a signal processing device 120, equipment 130 for acquiring EEG signals, and a device 140 for controlling the equipment 130.

[14]

In one or more embodiments, the device 110 for generating a display signal is configured to generate display signals to be displayed by the display screen 105. These display signals encode a plurality of visual stimuli intended to be presented to the user 101 by the display screen 105.

[15]

In one or more embodiments, the equipment 130 is configured to acquire EEG signals. This equipment takes the form of, for example, a headcap equipped with electrodes intended to be placed in contact with the skull of the user 101. This headcap is, for example, a headcap produced by BioSemi®, on which 64 electrodes are mounted. Other types of equipment, typically comprising between 16 and 256 electrodes, may also be used, for example a Geodesic™ EEG device sold by Electrical Geodesics, Inc. (EGI)®, or equipment sold by Compumedics NeuroScan®. In the rest of the description, it is assumed by way of example that the equipment 130 takes the form of a headcap.

[16]

In one or more embodiments, the signal processing device 120 is configured to process the EEG signals acquired by the headcap 130.

[17]

In one or more embodiments, the device 140 for controlling the equipment 130 is an interface between the headcap 130 and the signal processing device 120. The control device 140 is configured to control the acquisition of EEG signals and to receive the EEG signals acquired by the headcap 130. In particular, the device 140 for controlling the headcap 130 is configured to send a command triggering the acquisition of EEG signals.

[18]

All or part of the functionality described herein with respect to the device 110 for generating the display signal, the signal processing device 120 and the control device 140 may be implemented in software and/or hardware, in at least one computing device comprising a data processor and at least one memory for storing data.

[19]

In one or more embodiments, each of the devices 110, 120, 140 and each step of the described methods is implemented by one or more physically separate computing devices. Alternatively, the various devices 110, 120, 140 may be integrated into one and the same computing device. Likewise, the steps of the methods described herein may be implemented by one and the same computing device. The equipment 130 for acquiring EEG signals may also include a computing device configured to implement all or a portion of the steps of the methods described herein.

[20]

Each computing device has an overall architecture of a computer including one or more data memories, one or more processors, communication buses, one or more user interfaces, one or more hardware interfaces for connecting the computing device to a network or other equipment, and the like.

[21]

An example of an embodiment of this architecture is shown in FIG. 1b. The architecture includes a processing unit 180 comprising at least one data processor 183, at least one memory 181, one or more data storage media 182, and at least one user interface 184 including one or more input/output devices, such as a mouse, keyboard, display, and the like. The data storage media 182 store the code instructions of a computer program 186. Such storage media 182 may be optical storage media such as compact discs (CD, CD-R or CD-RW), DVDs (DVD-ROM or DVD-RW) or Blu-ray discs, flash memories, magnetic tapes or floppy disks, USB keys, or removable storage media such as SD or micro-SD memory cards. The memory 181 may be a random access memory (RAM), a read-only memory (ROM), a cache memory, a non-volatile memory, a backup memory (e.g. flash or programmable read-only memory), or any combination of these types of memory. The processing unit 180 may be any microprocessor, integrated circuit, or central processing unit comprising at least one hardware-based processor.

[22]

FIG. 2a schematically shows data and signals used in a system and method for determining the focus of visual attention. These data and signals will now be described.

[23]

In one or more embodiments, a plurality of graphical objects O1, O2, ..., ON, intended to be presented to the user 101, are used. Each of these graphical objects can be an alphanumeric character, a logo, an image, a text, an element of a user interface menu, a user interface button, an avatar, a 3D object, and the like. Each of these graphical objects may be coded via a bitmap or vector image.

[24]

In one or more embodiments, one or more elementary transformations T1, T2, ..., TP are defined to be applied to the graphical objects O1, O2, ..., ON. An elementary transformation may be a change in luminosity, a change in contrast, a colorimetric transformation, a geometric deformation, a rotation, a vibration, a motion along a planar or three-dimensional path, a change in shape, or a change of the graphical object itself. For example, a change of a graphical object may correspond to a transformation that replaces a character, e.g. A, with another character, replaces a number with another number, or replaces a logo with another graphical object of the same category. An elementary transformation may also be a combination of a plurality of the aforementioned elementary transformations.

[25]

Each of these elementary transformations can be parameterized by at least one application parameter.

[26]

In one or more embodiments, the application parameter defines the degree of transformation of the elementary transformation on a preset scale. For example, a scale between 0 and 100, or between -100 and +100, may be used.

[27]

For example, when the elementary transformation is a change in luminosity, this transformation may be applied with a degree of transformation varying between 0 and 100: a degree of transformation equal to 0 means that the image coding for the graphical object is not modified, while a degree of transformation equal to 100 indicates that the image is entirely black (or entirely white). When the degree of transformation alternates between 0 and 100, an image flickering effect is obtained.

[28]

In another example, when the elementary transformation is a contrast change, it may be applied with a degree of transformation varying between 0 and 100: a degree of transformation equal to 0 means that the image coding for the graphical object is not modified, while a degree of transformation equal to 100 indicates that the contrast of the image is maximal.

[29]

For a geometric transformation of the morphing type, the degree of transformation may correspond to the degree of morphing. In the case of a rotation, the degree of transformation may correspond to a rotation angle. In the case of a vibration, the degree of transformation may correspond to the oscillation speed and/or amplitude. For movement along a path, the degree of transformation may correspond to the distance traveled and/or the speed of travel along the path. For a change in shape, the degree of transformation may correspond to a speed and/or amplitude reflecting the passage from one shape to another.

[30]

For each graphical object O1, O2, ..., ON, a corresponding modulation signal SM1, SM2, ..., SMN is generated. The modulation signal serves to define the variation over time of one or more application parameters of the elementary transformation applied to the graphical object in question. For example, the degree of transformation di(t) at time t is defined by the amplitude SMi(t) of the modulation signal SMi at time t.

[31]

In one or more embodiments, animated graphical objects OA1, OA2, ..., OAN are generated, for each corresponding graphical object O1, O2, ..., ON, from the corresponding elementary transformation(s) and the corresponding modulation signal. The animated graphical objects OA1, OA2, ..., OAN generated in this way are presented on the display screen 105.

[32]

In one or more embodiments, each visual stimulus is an animated graphical object OAi obtained by applying to a graphical object Oi a temporal succession STi of the elementary transformation parameterized by the corresponding modulation signal SMi, i being an integer comprised between 1 and N. At each time tz of a discrete sequence of times (t0, t1, ..., tz, ...) in a time interval [tmin, tmax], a modified graphical object OAi(tz) is generated by applying to the graphical object Oi the corresponding elementary transformation Ti parameterized by the amplitude SMi(tz). Thus, the animated graphical object corresponds to the temporal succession of the modified graphical objects OAi(tz) as tz varies in the time interval [tmin, tmax].
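
By way of illustration only, a minimal sketch of this frame-by-frame construction, assuming a luminosity elementary transformation and a hypothetical 60 Hz refresh rate (none of the concrete values below come from the patent):

```python
import numpy as np

def luminosity_transform(image, degree):
    """Elementary transformation Ti: darken the image by `degree` percent
    (0 = image unmodified, 100 = image entirely black)."""
    return image * (1.0 - degree / 100.0)

def animate(image, modulation):
    """Animated graphical object OAi as the temporal succession of modified
    objects OAi(tz) = Ti(Oi, SMi(tz)), one per frame time tz."""
    return [luminosity_transform(image, modulation[tz])
            for tz in range(len(modulation))]

# Hypothetical example: an 8x8 grayscale object modulated over one second
# by a 7 Hz sinusoid sampled at a 60 Hz refresh rate.
obj = np.ones((8, 8))
t = np.arange(60) / 60.0
sm = 50.0 * (1.0 + np.sin(2 * np.pi * 7 * t))   # amplitude between 0 and 100
frames = animate(obj, sm)
```

Displaying the successive frames at the refresh rate produces the flickering visual stimulus.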

[33]

In one or more embodiments, EEG signals denoted E1, E2, ..., EX are obtained by the equipment 130 for acquiring EEG signals. A reconstructed modulation signal SMR is formed from the EEG signals E1, E2, ..., EX.

[34]

In one or more embodiments, the reconstructed modulation signal SMR is generated by applying a reconstruction model MR to the signals E1, E2, ..., EX. The parameters of the reconstruction model MR are denoted P1, P2, ..., PK.

[35]

The reconstruction model MR may be a linear model, in which case the reconstructed modulation signal is a linear combination of the signals E1, E2, ..., EX.

[36]

Other more sophisticated models, in particular models based on neural networks, may also be used. In one or more embodiments, each modulation signal is composed of fundamental signals. These fundamental signals may be square-wave signals, triangular-wave signals, sinusoidal signals, and the like. The modulation signals may have different durations. In one or more embodiments, each modulation signal is periodic, a time pattern being periodically repeated by each modulation signal. For example, the modulation signal has a periodic time pattern repeated at a frequency comprised between 2 and 20 Hz, and is sampled at a sampling frequency corresponding to the refresh frequency (typically higher than 60 Hz) of the screen on which the visual stimulus generated from the modulation signal is displayed.

[37]

The amplitude of the modulation signal SMi serves to define the degree of transformation. The relationship between the amplitude of the modulation signal SMi and the degree of transformation may or may not be linear. The amplitude of the modulation signal SMi may vary between a minimum value (corresponding to a first degree of transformation) and a maximum value (corresponding to a second degree of transformation).

[38]

In one or more embodiments, the modulation signals are pairwise decorrelated: the modulation signals are configured so that the statistical dependence between any two distinct modulation signals is minimal (e.g. 0) or lower than a given threshold SC1 in the time and/or frequency domain. The dependence between two signals can be quantified by a measure of statistical dependence. The statistical dependence between two modulation signals can be calculated in the time domain, for example by means of a temporal correlation coefficient, and/or in the frequency domain, for example by means of a degree of spectral coherence.

[39]

In one or more embodiments, the modulation signals are temporally decorrelated for each pair of modulation signals corresponding to distinct visual stimuli.

[40]

In one or more embodiments, the statistical dependence can be calculated for each pair of modulation signals corresponding to distinct visual stimuli, and an overall statistical dependence over all pairs of modulation signals can then be calculated. The modulation signals are determined by minimizing this overall statistical dependence, or by searching for modulation signals for which an overall statistical dependence lower than a preset threshold SC1 is obtained.

[41]

In one or more embodiments, the statistical dependence computed for each pair of modulation signals corresponding to distinct visual stimuli is 0 or lower than the threshold SC1.

[42]

In one or more embodiments, the statistical dependence between two modulation signals can be calculated as the temporal correlation coefficient between them, the correlation coefficient ρ(X, Y) between two signals X and Y being obtained using Pearson's formula:

[43]

ρ(X, Y) = E[(X − μX) (Y − μY)] / (σX σY)

[44]

Here, μ denotes the expected value of a signal and σ its standard deviation. The absolute value of the temporal correlation coefficient lies between 0 and 1, and 0 corresponds to temporally decorrelated signals.
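
The formula above can be evaluated numerically; a minimal sketch (the test signals are illustrative only):

```python
import numpy as np

def pearson(x, y):
    """Temporal correlation coefficient per Pearson's formula:
    rho(X, Y) = E[(X - muX)(Y - muY)] / (sigmaX * sigmaY)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return cov / (x.std() * y.std())

# Two sinusoids with different integer frequencies are decorrelated
# over a whole number of periods (the frequencies here are illustrative).
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
a = np.sin(2 * np.pi * 5 * t)
b = np.sin(2 * np.pi * 7 * t)
```

A signal is perfectly correlated with itself (coefficient 1), while the two distinct-frequency sinusoids give a coefficient close to 0 over this window.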

[45]

The statistical dependence between modulation signals may be determined by way of other mathematical criteria, such as Spearman's correlation coefficient or Kendall's correlation coefficient, or replaced with a measure of statistical dependence such as mutual information.

[46]

The degree of spectral coherence between two signals x(t) and y(t) is a real-valued function of the frequency f that may be defined, for example, by the following ratio:

[47]

Cxy(f) = |Gxy(f)|² / (Gxx(f) Gyy(f))

[48]

Here, Gxy(f) is the cross-spectral density between x and y, and Gxx(f) and Gyy(f) are the power spectral densities of x and y, respectively.
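
A hedged sketch of this computation using SciPy's magnitude-squared coherence estimator (the sampling rate, frequencies and noise level are illustrative, not values from the patent):

```python
import numpy as np
from scipy.signal import coherence

# Two signals sharing a 6 Hz component, each with independent noise.
fs = 250.0
t = np.arange(0, 8, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 6 * t) + 0.1 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 6 * t + 0.5) + 0.1 * rng.standard_normal(t.size)

# Magnitude-squared coherence Cxy(f) = |Gxy(f)|^2 / (Gxx(f) Gyy(f)),
# a value between 0 and 1 at each frequency f.
f, cxy = coherence(x, y, fs=fs, nperseg=512)
peak = cxy[np.argmin(np.abs(f - 6.0))]   # coherence near the shared 6 Hz tone
```

The coherence is close to 1 near the shared frequency and low elsewhere, which is what makes it usable as a measure of spectral dependence between modulation signals.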

[49]

In one or more embodiments, the statistical dependence is computed over a reference period, over the duration of a reconstruction window (see step 414) and/or over the duration of a decoding window (see step 415).

[50]

If the overall statistical dependence calculated over all pairs of modulation signals corresponding to distinct visual stimuli is 0 (e.g. in the case of temporally decorrelated signals) or lower than a threshold SC1 equal, for example, to 0.2 (i.e. 20% when this degree is expressed as a percentage), effective discrimination between the modulation signals is possible. The lower the overall statistical dependence, the easier and more effective the identification, from the set of modulation signals, of the modulation signal that served to generate the visual stimulus observed by the individual, and the lower the probability of identification error (corresponding to the percentage of wrongly identified visual stimuli in step 415). The threshold value SC1 may depend on the choice of the modulation signal type. In practice, it is possible to set a maximum identification error rate and to adjust the modulation signals so that the error rate remains below this maximum. If the reconstruction (see step 414) were ideal (i.e. the reconstructed signal identical to one of the modulation signals SMi) but the modulation signals SMi entirely dependent, it would be impossible to distinguish one modulation signal SMi from another. It will thus be understood that the quality of the decoding (see step 415) depends on the statistical dependence between the modulation signals.

[51]

FIGS. 2b, 2c, 2d and 2e each schematically show an example of a set of modulation signals usable in a system for determining the focus of visual attention to generate visual stimuli.

[52]

FIG. 2b shows a first example of a set of 10 modulation signals, signal 1 to signal 10, which are pairwise temporally decorrelated. These 10 signals are periodic sinusoidal signals having different frequencies, spaced in steps of 0.2 Hz, so that no two signals in this set have the same frequency. The phase of these signals is not critical. In the example shown in FIG. 2b, the amplitude of these signals varies between 0% and 100%, which means that the corresponding degree of transformation varies between a minimum and a maximum. For all pairs of modulation signals of this set, the degree of spectral overlap is 0, and the maximum temporal correlation coefficient over all pairs of modulation signals is 0.2, this temporal correlation coefficient being calculated over a correlation window of 4 seconds.
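
A minimal sketch of constructing such a set of sinusoidal modulation signals and checking their pairwise temporal correlation; the frequency band (6.0 to 7.8 Hz) and the refresh rate are assumptions, only the 0.2 Hz spacing, the 0-100% amplitude range and the 4-second correlation window come from the text:

```python
import numpy as np

refresh = 60.0                      # screen refresh rate in Hz (assumed)
t = np.arange(0, 4, 1 / refresh)    # 4-second correlation window
freqs = 6.0 + 0.2 * np.arange(10)   # 10 distinct frequencies, 0.2 Hz apart
                                    # (the 6.0-7.8 Hz band is illustrative)
# Amplitude between 0% and 100%: degree of transformation from min to max.
signals = [50.0 * (1.0 + np.sin(2 * np.pi * f * t)) for f in freqs]

def max_pairwise_corr(sigs):
    """Maximum absolute temporal correlation over all signal pairs."""
    worst = 0.0
    for i in range(len(sigs)):
        for j in range(i + 1, len(sigs)):
            worst = max(worst, abs(np.corrcoef(sigs[i], sigs[j])[0, 1]))
    return worst

m = max_pairwise_corr(signals)      # low pairwise dependence over the window
```

The maximum pairwise correlation stays small over the 4-second window, consistent with the order of magnitude stated for FIG. 2b.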

[53]

FIG. 2c shows a second example of a set of 10 modulation signals, signal 1 to signal 10, which are pairwise temporally decorrelated. These 10 signals are periodic sinusoidal signals having different frequencies, spaced in steps of 0.2 Hz, so that no two signals in this set have the same frequency. The phase of these signals is not critical. As in FIG. 2b, the amplitude of these signals varies between 0% and 100%, which means that the corresponding degree of transformation varies between a minimum and a maximum. Since certain harmonic components are common, the degree of spectral overlap is not 0 for all pairs of modulation signals of this set; the maximum temporal correlation coefficient over all pairs of modulation signals is 0.17, computed over a correlation window of 4 seconds.

[54]

FIG. 2d shows a third example of a set of 10 modulation signals, signal 1 to signal 10, which are pairwise temporally decorrelated. These 10 signals are periodic signals having the same period (referred to as the reference period in FIG. 2d), and the time pattern of each signal is composed of fundamental square-wave signals, so that the time patterns of any two signals differ over the reference period. In this case, the phase of each signal is important, in that it should be adjusted so as to limit the temporal correlation coefficient, and hence the statistical dependence, to a maximum value for each pair of distinct modulation signals selected from the set of 10 modulation signals. As in FIG. 2b, the amplitude of these signals varies between 0% and 100%, which means that the corresponding degree of transformation varies between a minimum and a maximum.

[55]

FIG. 2e shows a fourth example of a set of 10 modulation signals, signal 1 to signal 10, which are pairwise temporally decorrelated. These signals are periodic signals having the same period (referred to as the reference period in FIG. 2e). Each modulation signal comprises a time pattern composed of a short square-wave pulse of 100% amplitude followed by a longer duration at 0% amplitude. The square-wave pulses of the various modulation signals are temporally offset relative to one another, so that at a given time a single modulation signal has an amplitude of 100% while the other modulation signals have an amplitude of 0%. These modulation signals all have the same time pattern; adjusting the phase of each signal controls the temporal correlation coefficient, and thus the statistical dependence, between any two signals. When the elementary transformation used is a luminosity change, the graphical object is displayed when the modulation signal is at 100% and hidden when it is at 0%; the animated graphical objects obtained from these modulation signals and elementary transformations therefore flash, appearing and disappearing in a given order, such that a single visual stimulus appears at a given time. For every pair of modulation signals of this set of 10 signals, the temporal correlation coefficient is 0.
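
A minimal sketch of such time-multiplexed square-pulse signals; the reference period, pulse duration and refresh rate are illustrative assumptions, not values from the patent:

```python
import numpy as np

refresh = 60                   # frames per second (assumed)
period = 600                   # reference period: 10 s at 60 Hz (illustrative)
n = 10                         # number of modulation signals
slot = period // n             # each signal gets one time slot per period

signals = []
for i in range(n):
    s = np.zeros(period)
    # one short square-wave pulse at 100% amplitude, offset per signal,
    # followed by a longer duration at 0% amplitude
    s[i * slot : i * slot + slot // 3] = 100.0
    signals.append(s)

# At any given time at most one signal is at 100%, so the stimuli
# appear and disappear one after another in a fixed order.
active = np.sum([s > 0 for s in signals], axis=0)
```

Because the pulses never overlap, no two signals are ever simultaneously non-zero, which is what makes the stimuli flash one at a time.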

[56]

Thus, pairwise temporally decorrelated modulation signals can be obtained using, for example, different frequencies, phases or time patterns:

[57]

- signals composed of the same periodic time pattern but having different frequencies, and therefore different periods (cases of FIGS. 2b and 2c);

[58]

- signals each composed of a different periodic time pattern, with a phase specific to each time pattern (case of FIG. 2d);

[59]

- signals composed of the same periodic time pattern having the same period, but phase-shifted with respect to one another (case of FIG. 2e).

[60]

FIG. 3 schematically illustrates aspects of a method and system for determining the focus of visual attention.

[61]

In one or more embodiments, the reconstructed modulation signal SMR is compared with each modulation signal SM1, SM2, ..., SMN to find the modulation signal with which it has the maximum statistical dependence. For example, if the statistical dependence is maximal with the modulation signal SM4, this means that the visual attention of the individual is focused on the visual stimulus OA4 generated from the modulation signal SM4.

[62]

In the example shown in FIG. 3, the visual stimuli are the digits 0 to 9, and the visual attention of the individual is focused on the animated digit OA4 corresponding to the digit 4: the maximum degree of statistical dependence is found with the corresponding modulation signal SM4.
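
A hedged sketch of this decoding step, using the absolute temporal correlation as the measure of statistical dependence; the signals and noise level are synthetic and illustrative:

```python
import numpy as np

def identify_stimulus(smr, modulation_signals):
    """Return the index of the modulation signal SMi having maximum
    statistical dependence (here, absolute temporal correlation) with
    the reconstructed modulation signal SMR."""
    scores = [abs(np.corrcoef(smr, sm)[0, 1]) for sm in modulation_signals]
    return int(np.argmax(scores))

# Hypothetical example: 10 sinusoidal modulation signals; SMR is a noisy
# copy of SM4, as when the individual attends the animated digit OA4.
t = np.arange(0, 4, 1 / 60)
sms = [np.sin(2 * np.pi * (6.0 + 0.2 * i) * t) for i in range(10)]
rng = np.random.default_rng(2)
smr = sms[4] + 0.5 * rng.standard_normal(t.size)
```

Because the modulation signals are pairwise decorrelated, the argmax remains correct even with substantial reconstruction noise.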

[63]

An example of an embodiment of a method for generating a reconstruction model MR is shown schematically in FIG. 4 a. Although the steps of the method are presented sequentially, at least one particular step may be omitted, or in fact be executed in another order, or may be executed in parallel or even in combination to form only a single step.

[64]

At step 401, a test i (i ∈ [1; N]) is performed with the visual stimulus generated from the modulation signal SMi: the visual stimulus is presented on the display screen, and the individual is invited to observe it, i.e. to focus its visual attention on this visual stimulus. Each visual stimulus is an animated graphical object obtained by applying to a graphical object a temporal succession of an elementary transformation parameterized by the corresponding modulation signal. Test EEG signals Ei,j are recorded while the individual is paying attention to the visual stimulus in question, where i is an index identifying the test and the corresponding modulation signal, and j is an index identifying the recorded EEG channel. Each EEG signal Ei,j is composed of a plurality of EEG segments Ei,j,k, where k is an index identifying the segment.

[65]

In one implementation of step 401, 10 stimuli are displayed on the screen, each taking the form of a flickering digit (the digits 0 through 9), each flickering at a slightly different frequency. The individual is equipped with the EEG headcap and observes the 10 digits flickering at 10 different frequencies. A series of tests is performed; each test lasts, for example, about 10 seconds, and the interval between two tests is, for example, 1 or 2 seconds. In each test, the individual is instructed to fixate one of the digits and to ignore the other digits until the next test. Thus, the individual switches attention from one stimulus to the next, generating EEG signals E1, E2, ..., EX at different frequencies depending on the focus of visual attention.

[66]

In a first variant, the EEG segments are time-stamped in step 401. The modulation signals are also time-stamped in step 401. The time-stamping may be performed using any method.

[67]

In this first variant, the clock of the acquisition equipment 130 is used to time-stamp the EEG segments and to generate a time code ti' for each EEG segment. The clock of the control device 140 is used to time-stamp the modulation signals and to generate a time code whenever a preset event occurs in the stimulation.

[68]

In a second variant, an additional EEG channel is used, through which a short electrical pulse of known amplitude is transmitted every time a predetermined event occurs in the stimulation. This additional EEG channel containing the short pulses is stored with the EEG data segments.

[69]

Step 401 is repeated multiple times for each visual stimulus of the plurality of visual stimuli, recording the corresponding EEG signals produced by the individual when looking at the visual stimulus in question.

[70]

At step 402 EEG segments Ei, j, k and the modulation signal SMi are aligned in time. This synchronization can be performed using any method.

[71]

This synchronization may use the double time-stamping of the EEG segments and the modulation signals, or indeed the additional EEG channel.

[72]

When the time codes are generated using two different clocks, it is necessary to correct these values, so as to correct the potential temporal drift between the clocks and obtain time stamps effectively generated by one and the same reference clock. For example, when the clock of the control device is used as the reference clock, the time codes ti' of the time-stamped EEG segments Ei,j,k generated by the clock of the acquisition equipment are corrected into time codes ti resynchronized with respect to the reference clock. By associating these corrected time codes of the EEG segments with the time codes generated for the modulation signals, the alignment between the EEG segments Ei,j,k and the signals SMi can be achieved.

[73]

The difference between the reference clock t of the control device 140 and the clock t' of the equipment 130 for acquiring EEG data is modeled by a linear equation: diff = a * (t' − t0') + b = t' − t, where a is the drift between the two clocks and b is the offset at t0'. To estimate these coefficients a and b, a series of points (t', diff(t')) is acquired before step 401, and the coefficients a and b are estimated using the least-squares method. To compensate for any variation in the time taken to execute commands and to transfer data between the control device 140 and the acquisition equipment 130, each point (t', diff(t')) is obtained by having the control device 140 transmit n successive time codes tk to the acquisition equipment 130, the acquisition equipment 130 generating a time code tk' each time a time code tk is received. The point (t', diff(t')) retained for the calculation of the coefficients a and b corresponds to t' = tk' and diff(t') = tk' − tk. Once the coefficients a and b have been obtained, each time code ti' is corrected as follows:

[74]

ti = ti' − a * (ti' − t0') − b
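
A minimal numerical sketch of this drift estimation and correction; the drift and offset values are synthetic, and `np.polyfit` stands in for the least-squares method:

```python
import numpy as np

def fit_drift(t_prime, diff):
    """Least-squares estimate of a and b in diff = a * (t' - t0') + b."""
    t0 = t_prime[0]
    a, b = np.polyfit(t_prime - t0, diff, 1)
    return a, b, t0

def correct(ti_prime, a, b, t0):
    """Correct an acquisition-clock time code: ti = ti' - a*(ti' - t0') - b."""
    return ti_prime - a * (ti_prime - t0) - b

# Synthetic example (drift and offset values are illustrative):
true_a, true_b = 50e-6, 0.020                 # 50 ppm drift, 20 ms offset
t_acq = np.linspace(0.0, 100.0, 50)           # acquisition-clock time codes t'
diff = true_a * (t_acq - t_acq[0]) + true_b   # measured differences t' - t
t_ref = t_acq - diff                          # reference-clock times t

a, b, t0 = fit_drift(t_acq, diff)
```

Applying `correct` to the acquisition-clock time codes then recovers the reference-clock times, so the EEG segments and modulation signals can be aligned.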

[75]

However, the time-stamping and synchronization steps are optional; in particular, they are not needed when the acquisition equipment 130 and the control device 140 use the same clock.

[76]

In step 403, the EEG segments Ei,j,k are concatenated to generate the EEG signals Ei,j.

[77]

In step 404, preprocessing and denoising may be applied to the EEG signals to optimize the signal-to-noise ratio. In particular, EEG signals may be significantly contaminated by artifacts of cerebral or non-cerebral origin, such as electrical artifacts (e.g. at 50 Hz, the mains-grid frequency in Europe) or biological artifacts. In one or more embodiments, the signals E1, E2, …, EX are therefore denoised prior to generation of the reconstructed modulation signal SMR. This denoising may consist simply in filtering out all frequencies higher than 40 Hz, so as to remove high-frequency electrical noise, e.g. that generated by the mains grid, from the signals E1, E2, …, EX. Multivariate statistical approaches, in particular principal component analysis (PCA), independent component analysis (ICA) and canonical correlation analysis (CCA), may also be used, allowing the useful components of the EEG signals to be separated from the irrelevant components.
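The simple 40 Hz cut mentioned above can be sketched as a crude frequency-domain low-pass. This is a minimal illustration only; a practical pipeline would use a proper zero-phase filter and the PCA/ICA/CCA artifact-rejection methods cited in the description.

```python
import numpy as np

def lowpass_40hz(eeg, fs, cutoff=40.0):
    """Zero all FFT components above `cutoff` Hz to suppress mains
    noise (50/60 Hz) and other high-frequency artifacts in a
    1-D EEG signal sampled at `fs` Hz (illustrative sketch)."""
    spectrum = np.fft.rfft(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    spectrum[freqs > cutoff] = 0.0          # remove everything above 40 Hz
    return np.fft.irfft(spectrum, n=len(eeg))
```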

[78]

At step 405, the parameters of the reconstruction model are determined. This determination may be performed so as to minimize the reconstruction error. For example, the reconstructed modulation signal may be a combination of EEG signals. The combination parameters are then determined by seeking optimum values for them, i.e. by solving mathematical equations to determine values such that applying the reconstruction model to the plurality of test EEG signals Ei,j recorded for a visual stimulus generates a modulation signal close to the modulation signal corresponding to the visual stimulus in question, i.e. a reconstructed modulation signal with a reconstruction error close to the minimum.

[79]

In one or more embodiments, the values of the combination parameters (αj) can be fixed once determined. In other embodiments, these values can be adjusted in real time to account for potential adaptation of the brain activity of the user 101, or for fluctuations in the signal-to-noise ratio of the EEG signals during the recording session.

[80]

The reconstruction model MR may be a linear model that generates a modulation signal through a linear combination of the signals Ei,j. In this case, the combination parameters are the parameters αj of the linear combination, and the mathematical equations are linear equations of the following type.

[81]

∀i ∈ [1; N], SMi = Σj αj · Ei,j
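Under this linear model, the combination parameters αj can be estimated in the learning phase (step 405) by ordinary least squares over the test recordings. The following is a minimal sketch with hypothetical function and variable names; it pools the N recordings so that a single parameter vector minimizes the total reconstruction error.

```python
import numpy as np

def fit_combination_params(E_list, SM_list):
    """Estimate the linear-combination parameters alpha_j of the
    model SMi = sum_j alpha_j * Ei,j by least squares.
    E_list: list of (X, T) arrays (X EEG channels, T samples),
            one per visual stimulus i.
    SM_list: list of (T,) modulation signals SMi."""
    # Stack all recordings: one alpha vector for all N stimuli
    A = np.hstack(E_list).T                 # shape (N*T, X)
    y = np.concatenate(SM_list)             # shape (N*T,)
    alpha, *_ = np.linalg.lstsq(A, y, rcond=None)
    return alpha
```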

[82]

Other, more sophisticated models, particularly models based on neural networks, can also be used, the modulation signal then being obtained by applying a multistage nonlinear mathematical operation to the signals Ei,j. For example, Siamese networks may be mentioned: such a network associates with any EEG signal E a 1-dimensional time signal R, such that the signals R1 and R2 produced from two EEG signals E1 and E2 are similar when the individual's attention is focused on the same animated graphical object, and dissimilar when the individual's attention is focused on two different animated graphical objects. The notion of similarity between two signals is defined here in the mathematical sense and corresponds to a function that quantifies the similarity between two objects. A number of mathematical measures of similarity may be used, for example the reciprocal of a distance or the 'cosine similarity' (see, for example, https://en.wikipedia.org).

[83]

The reconstructed modulation signal is then the 1-dimensional signal R produced by the neural network from a newly acquired EEG sample E.

[84]

In one or more embodiments, the step of generating the reconstruction model is implemented by the system 100 of FIG. 1a, for example by the signal processing device 120.

[85]

An example of an embodiment of a method for determining the focus of visual attention of an individual is shown schematically in FIG. 4b. Although the steps of the method are presented sequentially, at least some of these steps may be omitted, may be executed in another order, may be executed in parallel, or may be combined to form only a single step.

[86]

In one or more embodiments, the step of determining the focus of visual attention is implemented by the system 100 of FIG. 1a, for example by the signal processing device 120 and the device 110 for generating the display signal.

[87]

At step 411, the apparatus 110 for generating a display signal generates a plurality of visual stimuli from a plurality of graphical objects (O1, O2, …, ON), a plurality of elementary transformations (T1, T2, …, TP) and a plurality of modulation signals (SM1, SM2, …, SMN). The visual stimulus corresponding to the modulation signal SMi is an animated graphical object OAi obtained by applying a temporal sequence STi of temporally parameterized elementary transformations to the corresponding graphical object Oi, i being comprised between 1 and N.

[88]

In one or more embodiments, the number N of visual stimuli, modulation signals and graphical objects is equal to 1.

[89]

In one or more embodiments, the number P of elementary transformations is equal to 1. Each elementary transformation of the temporal sequence STi can then correspond to a given elementary transformation whose application parameters change over time.

[90]

Over time, the individual may shift his visual attention from one animated graphical object to another. During this time, at step 412, the brainwave signals (E1, E2, …, Ej, …, EX) generated by the individual are recorded by the acquisition equipment 130.

[91]

In one or more embodiments, the signal processing device 120 is configured to obtain a plurality of brainwave signals (E1, E2, …, Ej, …, EX) generated by the individual while his visual attention is focused on one of the visual stimuli OAi.

[92]

At step 413, the brainwave signals E1, E2, …, Ej, …, EX are preprocessed and denoised to improve the reliability of the method of determining the focus of visual attention. The preprocessing may consist in synchronizing segments of the brainwave signals E1, E2, …, Ej, …, EX as described above in relation to step 402, in concatenating segments of the brainwave signals as described above in relation to step 403, and/or in denoising the brainwave signals as described above in relation to step 404.

[93]

In one or more embodiments, the signal processing device 120 is configured to reconstruct, in step 414, a modulation signal from the plurality of brainwave signals E1, E2, …, Ej, …, EX to obtain a reconstructed modulation signal SMR.

[94]

In one or more embodiments, the signal processing device 120 is configured to reconstruct the modulation signal and generate a reconstructed modulation signal SMR from the plurality of brainwave signals obtained in step 412 or 413. In one or more embodiments, the reconstruction is performed by applying a reconstruction model to the plurality of brainwave signals obtained in step 412 or 413. This reconstruction may be performed in a given moving time window, referred to here as the reconstruction window, and may be repeated periodically for each temporal position of the reconstruction window.

[95]

For example, if the reconstruction model MR is a linear model that produces a modulation signal via a linear combination of the signals E1, E2, …, Ej, …, EX, the combination parameters being the parameters αj of the linear combination obtained in step 405, the reconstructed modulation signal SMR is calculated via a linear combination of the signals E1, E2, …, Ej, …, EX.

[96]

SMR = Σj αj · Ej

[97]

In one or more embodiments, the signal processing device 120 is configured to calculate, in step 415, a statistical dependence between the reconstructed modulation signal and each modulation signal of the modulation signal set, and to identify at least one visual stimulus corresponding to a modulation signal for which this statistical dependence is higher than a threshold SC2, for example a value comprised between 0.2 and 0.3. The fact that the statistical dependence with a modulation signal is higher than the threshold SC2 indicates that the visual attention of the individual is preferentially directed to the corresponding visual stimulus, i.e. that this one or more visual stimuli appear in the area of the display screen observed by the individual. This identification can be used to trigger a display change and/or to detect that a change in the focus of visual attention has occurred. The statistical dependence can be determined as described above in this document; it is, for example, the temporal correlation coefficient between the reconstructed modulation signal and a modulation signal of the modulation signal set.

[98]

In one or more embodiments, the number N of visual stimuli, modulation signals and graphical objects is greater than 1, and the signal processing device 120 is further configured to identify, at step 415, the modulation signal SMi of the set SM1, SM2, …, SMN having the maximum statistical dependence with the reconstructed modulation signal SMR, and to identify the visual stimulus OAi corresponding to this modulation signal SMi. The visual attention of the individual is then considered to be preferentially focused on the identified visual stimulus OAi. The search is performed, for example, by calculating the statistical dependencies between the reconstructed modulation signal SMR and each signal of the plurality of modulation signals SM1, SM2, …, SMN. This decoding step may be performed in a given moving time window, referred to herein as the decoding window, and may be repeated periodically for each temporal position of the decoding window. The duration of the decoding window may be equal to the duration of the reconstruction window.
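The decoding step described above can be sketched as follows. This is illustrative only: the use of the Pearson correlation coefficient as the statistical dependence is consistent with the description, and the default threshold value of 0.25 is merely one value within the 0.2–0.3 range mentioned for SC2.

```python
import numpy as np

def decode_attention(smr, modulation_signals, sc2=0.25):
    """Identify the attended stimulus: correlate the reconstructed
    modulation signal SMR with each candidate SMi and pick the one
    with maximum correlation, provided it exceeds the threshold SC2.
    Returns the index i of the identified stimulus, or None when no
    correlation passes the threshold (illustrative sketch)."""
    corrs = [np.corrcoef(smr, sm)[0, 1] for sm in modulation_signals]
    best = int(np.argmax(corrs))
    return best if corrs[best] > sc2 else None
```

In practice this function would be called repeatedly, once per temporal position of the sliding decoding window.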

[99]

In one or more embodiments, when the number N of visual stimuli, modulation signals and graphical objects is strictly greater than 1, only one or some of the visual stimuli may be displayed on the display screen 105 at a given time. Nonetheless, the decoding step 415 may be the same regardless of the number of visual stimuli displayed at a given time, the statistical dependencies being determined with all of the modulation signals SM1, SM2, …, SMN that correspond to visual stimuli liable to be displayed. It is thus not necessary to dynamically modify and synchronize the processing operations performed in the decoding step 415 with respect to changes in the content actually being displayed. This may be very useful when the visual stimuli are incorporated into a video, or when the visual stimuli are dynamically modified through interaction with the user interface.

[100]

In one exemplary embodiment, 10 visual stimuli taking the form of blinking numbers (the numbers in the range 0 to 9) are displayed on the screen, each flickering at a slightly different frequency, or at the same frequency but alternately. The individual is equipped with an EEG headset and sees the 10 numbers flickering at different frequencies on the display screen.
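Such a set of flicker stimuli can be sketched by generating one modulation signal per digit. The frequency values, sampling rate and duration below are illustrative assumptions, not values from the description; the amplitude of each signal at time t would drive the luminosity of the corresponding digit.

```python
import numpy as np

def digit_modulation_signals(fs=60.0, duration=2.0,
                             freqs=(6.0, 6.5, 7.0, 7.5, 8.0,
                                    8.5, 9.0, 9.5, 10.0, 10.5)):
    """Illustrative modulation signals for 10 blinking digits (0-9),
    each flickering at a slightly different frequency. Returns a list
    of luminosity profiles, each rescaled to [0, 1]."""
    n = int(round(duration * fs))        # number of display frames
    t = np.arange(n) / fs                # frame time stamps in seconds
    return [0.5 * (1.0 + np.sin(2.0 * np.pi * f * t)) for f in freqs]
```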

[101]

In one embodiment, it is possible to temporally modify the modulation signal of a visual stimulus during determination of the focus of visual attention, for example in the event of ambiguity between two or more visual stimuli due to movement of the user and/or perturbation of the recorded EEG signals. Such modifications may consist, for example, in displaying only the ambiguous visual stimuli and/or in modifying the modulation signals of the ambiguous visual stimuli.

[102]

The modification may consist in making the ambiguous visual stimuli blink at different frequencies and/or with different total visibility durations and/or with increased degrees of deformation. The modification of a modulation signal may consist in modifying the frequency or the temporal pattern of the modulation signal. The modification may also consist in permuting the modulation signals among each other, without changing their temporal patterns or their frequencies. This permutation may be random. Such permutations allow the order of appearance of visual stimuli that appear and disappear on the display screen in a given order (see FIG. 2e) to be modified, for example randomly, causing the ambiguous stimuli to appear more often. The permutations may be combined with modifications of the frequency and/or total visibility duration and/or degree of deformation of the ambiguous visual stimuli.
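The random permutation of modulation signals described above can be sketched as follows. This is a hypothetical helper: only the assignment of signals to stimuli changes, while the signals' temporal patterns and frequencies stay intact.

```python
import numpy as np

def permute_modulation_signals(assignment, rng=None):
    """Randomly permute which modulation signal drives which visual
    stimulus. `assignment` maps stimulus index -> signal index; the
    returned dict keeps the same stimuli and the same set of signals,
    only the pairing changes (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    stimuli = list(assignment.keys())
    signals = list(assignment.values())
    rng.shuffle(signals)                 # in-place random permutation
    return dict(zip(stimuli, signals))
```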

[103]

The signal processing device 120 thus allows the observed visual stimulus to be identified automatically, without any information other than the EEG. The reconstruction model allows a reconstructed modulation signal to be generated from the raw EEG. This reconstructed signal is correlated with the modulation signals corresponding to the various animated graphical objects, the observed visual stimulus being the one corresponding to the modulation signal having the maximum statistical dependence.

[104]

Tests, obtained by pooling the results of a plurality of individuals, have shown that the method of determining the observed visual stimulus is very robust (error rate lower than 10%), including with short decoding durations of a few minutes (e.g. less than 5 minutes), a few seconds (e.g. less than about 5 seconds) or even less than one second, with a single EEG signal (X = 1), and for several types of stimulation.

[105]

The method of determining the focus of visual attention is applicable to numerous man-machine interfaces, for example to a full alphanumeric keyboard comprising 26 or more characters.

[106]

FIG. 5 shows another example of a man-machine interface to which the method can be applied. The dynamic stimuli in this example consist of logos or icons animated by applying movements that displace the logo itself in a plane or in 3-dimensional space, rather than elementary deformations affecting the light intensity of the logo. These movements are, for example, oscillations or rotations at various frequencies that can be decoded in real time. In this case, the amplitude of the corresponding modulation signal represents the degree of transformation at a given time, i.e. the angle of rotation to be applied at a given time. These movements are, for example, periodic.

[107]

In FIG. 5, 12 logos APP1 to APP12 are shown, arranged in a grid of 4 × 3 logos. Although the figure is shown in black and white, the logos may also be in color. In the example of FIG. 5, the logos rotate about themselves, these rotational oscillations occurring at different frequencies for the different logos. The periodic rotation applied to an icon induces a brain response that can be detected in the EEG signals at the specific frequencies and decoded in real time using the described technique. This type of interface is very flexible and can be used in particular on smartphones or tablets. Such a man-machine interface may constitute the graphical interface of any type of computing device, for example a mobile terminal, or of a software application on a computer, whether the display screen is a touch screen or not.

[108]

FIG. 6 illustrates another example of a technique that can be used to facilitate the focusing of an individual's visual attention on a visual stimulus. In order to minimize the impact on the EEG signals of neighboring visual stimuli (crowding), it may be necessary to reduce visual interference. This technique consists in surrounding each visual stimulus with an optional animated lateral mask that reduces the visual perturbations associated with the animation of neighboring visual stimuli, allowing the individual to focus his attention more effectively on one stimulus, and thus allowing the decoding to be improved.

[109]

FIG. 7 illustrates another example of a man-machine interface, in which feedback on the visual stimulus identified as being observed is presented to the user. The man-machine interface consists of the numbers 0 to 9. In the example of FIG. 7, the user is observing the number 6, and the feedback consists in magnifying the number identified as being observed by applying a method for determining the focus of visual attention in accordance with the present description. The feedback provided to the user may also highlight the identified visual stimulus by, for example, increasing its brightness, making it flash, or changing its position, its size or its color.

[110]

In one or more embodiments, the visual stimuli form part of a human-machine interface of a software application or computing device, and, after identification of the observed visual stimulus through implementation of the method for determining the focus of visual attention in accordance with the present description, a command may be transmitted to trigger execution of one or more operations associated with the identified visual stimulus.

[111]

In one or more embodiments, the various steps of one or more methods described herein are implemented by a software package or a computer program.

[112]

Thus, the present description relates to a software package or computer program comprising a software instruction or program code instruction readable and/or executable by a computer or data processor, which instructions are configured to command execution of a step of one or more methods described herein when the computer program is executed by a computer or data processor.

[113]

These instructions may use any programming language, and take the form of source code, object code, or a form intermediate between source code and object code. In order to implement the steps of one or more methods described in this document, these instructions are stored in a memory of a computing device or computing system, and are then executed after being loaded by a processing unit or data processor of the computing device or computing system. Some or all of these instructions may be stored, temporarily or permanently, on a non-transitory computer-readable medium of a local or remote storage device including one or more storage media.

[114]

The present disclosure also relates to a data carrier readable by a data processor and comprising instructions of a software package or computer program as described above. The data carrier may be any entity or device capable of storing such instructions. Examples of computer-readable media include, but are not limited to, data storage media and communication media, including any medium that facilitates transfer of a computer program from one location to another. Such storage media may be optical storage media such as compact discs (CD, CD-R or CD-RW) or DVDs (DVD-ROM or DVD-RW), removable storage media such as floppy disks, USB keys, or SD or micro-SD memory cards, or memories such as random access memory (RAM), read-only memory (ROM), cache memory, non-volatile memory, backup memory, and the like.

[115]

The description also relates to a computing device or computing system comprising means for implementing the steps of one or more methods described in this document. These means are software and/or hardware for implementing one or more method steps described in this document.

[116]

The present disclosure also relates to a computing device or computing system comprising at least one memory for storing a code instruction of a computer program for executing all or a portion of one or more of the methods described herein, and at least one data processor configured to execute such a computer program.



[117]

A method for determining the focus of the visual attention of an individual from a brainwave signal. At least one visual stimulus to be displayed is generated (411) from at least one graphical object, the visual stimulus being an animated graphical object obtained by applying to the graphical object a temporal sequence of elementary transformations parameterized by a corresponding modulation signal. A modulation signal is reconstructed (414) from a plurality of brainwave signals generated by an individual focusing his visual attention on one of the visual stimuli. A visual stimulus corresponding to a modulation signal whose statistical dependence with the reconstructed modulation signal is higher than a first threshold is identified (415).



1. A method for determining the focus of the visual attention of an individual from a brainwave signal, the method comprising: generating (411), from at least one graphical object, at least one elementary transformation and at least one modulation signal set, at least one visual stimulus set to be displayed; obtaining a plurality of brainwave signals generated by the individual; reconstructing (414) a modulation signal from the plurality of brainwave signals to obtain a reconstructed modulation signal; and identifying (415) at least one visual stimulus corresponding to a modulation signal whose statistical dependence with the reconstructed modulation signal is higher than a first threshold.

2. The method according to claim 1, wherein the at least one visual stimulus set comprises a plurality of visual stimuli and the at least one modulation signal set comprises a plurality of modulation signals, the modulation signals being configured such that the overall statistical dependence, determined in the time and/or frequency domain, is lower than a second threshold for all pairs of modulation signals corresponding to two individual visual stimuli.

3. The method according to claim 1 or 2, wherein the reconstruction is performed by applying a reconstruction model to the plurality of brainwave signals.

4. The method according to claim 3, wherein the reconstruction model comprises a plurality of parameters of a combination of brainwave signals, and wherein the method comprises determining values of the plurality of parameters of the combination of brainwave signals in an initial learning phase.

5. The method according to claim 4, comprising, in an initial learning phase applied to a subset of at least one of the plurality of visual stimuli, for each visual stimulus of the subset: obtaining a plurality of test brainwave signals generated by an individual focusing his attention on the visual stimulus in question; and determining optimum values for the plurality of parameters of the combination of brainwave signals such that applying the reconstruction model to the plurality of test brainwave signals recorded for a visual stimulus generates a modulation signal close to the modulation signal corresponding to the visual stimulus in question.

6. The method according to any one of the preceding claims, wherein each modulation signal defines a variation, as a function of time, in one or more application parameters of an elementary transformation for a graphical object.

7. The method according to claim 6, wherein one application parameter is an application rate of the elementary transformation or a degree of transformation.

8. The method according to any one of the preceding claims, wherein the elementary transformation is a variation selected from a set of variations consisting of a light-intensity variation, a contrast variation, a colorimetric transformation, a geometric deformation, a rotation, a vibration, a movement along a path, a change of shape and an alteration of the graphical object, or a combination of variations selected from the set of variations.

9. A computer program comprising program code instructions for executing the steps of the method according to any one of the preceding claims when the computer program is executed by a data processor.

10. A computing device comprising at least one memory for storing code instructions of a computer program configured to execute the method according to any one of claims 1 to 8, and at least one data processor configured to execute the computer program.

11. A system (100) for determining the focus of the visual attention of an individual from a brainwave signal, the system comprising: an apparatus (110) for generating a display signal, configured to generate (411) at least one visual stimulus set to be displayed from at least one graphical object, at least one elementary transformation and at least one modulation signal set; and a signal processing device (120) configured to: obtain a plurality of brainwave signals generated by the individual; reconstruct (414) a modulation signal from the plurality of brainwave signals to obtain a reconstructed modulation signal; and identify (415) at least one visual stimulus corresponding to a modulation signal whose statistical dependence with the reconstructed modulation signal is higher than a first threshold.

12. The system according to claim 11, wherein the at least one visual stimulus set comprises a plurality of visual stimuli and the at least one modulation signal set comprises a plurality of modulation signals; the signal processing device (120) is further configured to determine (415) the visual stimulus corresponding to the modulation signal having the maximum statistical dependence; and the modulation signals are configured such that the overall statistical dependence, determined in the time and/or frequency domain, is lower than a second threshold for all pairs of modulation signals corresponding to two individual visual stimuli.