FACIAL EXPRESSION-BASED CONSECUTIVE EMOTION RECOGNITION METHOD OF ROBOT, AND RECORDING MEDIUM AND DEVICE FOR PERFORMING METHOD

Publication date: 07-10-2016
Number: KR1020160116311A
Assignee:
Contacts:
Application number: 01-16-102021858
Application date: 23-09-2016

[1]

The present invention relates to a facial expression-based continuous emotion recognition method for a robot, and to a recording medium and device for performing the method. More particularly, it relates to a facial expression-based continuous emotion recognition method that enables a robot to continuously estimate human emotion from facial expressions during human-robot interaction, and to a recording medium and device for performing the method.

[2]

Generally, a robot is an artificial mechanism that resembles a human in shape or performs human work by itself. Until a few decades ago, such robots belonged only to the daydreams of science-fiction films, and real robots were little more than jointed machines operating in factories. With the development of modern computer technology and artificial intelligence, however, intelligent robots are now being actively researched in a wide variety of fields.

[3]

Recently, owing to the quantitative and qualitative growth of robotics, robots usable not only in public institutions but also in daily living at home have been manufactured, and emotionally friendly interaction with robots is increasingly desired. Beyond robots that unidirectionally perform commands defined by humans, there is growing demand for robots that have an emotion model for representing the robot's own emotion, and that recognize a human's emotional state and respond to it.

[4]

Meanwhile, a human's emotional state is exhibited through facial expression and voice, and may even be estimated from biometric signals such as heart rate, blood pressure, and brain waves. However, a speech signal is not independent of the speaker, the context, and ambient noise, and methods using biometric signals require mounted measurement equipment, so these approaches are difficult to apply to human-robot interaction.

[5]

In addition, many studies have estimated emotional state using facial expressions, but most either classify expressions into a few basic types based on static image information, or classify divided segments of a moving picture based on continuous image information. However, when a robot must respond to a human's emotional state in a real-time interaction situation, such frame-by-frame expression classification can fail to track the face and can produce recognition errors.

[6]

Wilhelm Wundt, "Grundriss der Psychologie [Outlines of Psychology]", Leipzig: Engelmann, 1896. James A. Russell, Albert Mehrabian, "Distinguishing anger and anxiety in terms of emotional response factors", Journal of Consulting and Clinical Psychology, 42, 1974, pp. 79-83.

[7]

Accordingly, an object of the present invention is to provide a facial expression-based continuous emotion recognition method that enables a robot to continuously estimate human emotion from facial expressions during human-robot interaction.

[8]

Another object of the present invention is to provide a recording medium storing a computer program for carrying out the facial expression-based continuous emotion recognition method.

[9]

Still another object of the present invention is to provide a device for carrying out the facial expression-based continuous emotion recognition method.

[10]

According to an embodiment for realizing the above object of the invention, a facial expression-based continuous emotion recognition method for a robot includes: establishing a database that stores images classified by emotion; calculating independent variables through PCA (Principal Component Analysis) based on the distribution of the images stored in the database; learning a linear regression equation using the calculated independent variables; and estimating emotion state values by applying an input image to the learned linear regression equation.

[11]

In an embodiment of the present invention, in the step of learning the linear regression equation, the valence (Valence) value and arousal (Arousal) value corresponding to each emotion may be used as the dependent variables, and the linear regression equation may take the form of a two-dimensional model with an arousal axis (A axis) and a pleasant-unpleasant valence axis (V axis).

[12]

In an embodiment of the present invention, the step of learning the linear regression equation may calculate the coefficients of the regression equation.

[13]

In an embodiment of the present invention, the step of estimating emotion state values by applying the input image to the learned linear regression equation may include: calculating independent variables by projecting the input image onto the eigenvectors obtained from the database; and estimating the emotion state of the input image by applying the independent variables to the learned linear regression equation.

[14]

In an embodiment of the present invention, the facial expression-based continuous emotion recognition method may further include, before the step of calculating independent variables through PCA, a step of pre-processing the images stored in the database.

[15]

In an embodiment of the present invention, the step of pre-processing the images stored in the database may include: vectorizing the grayscale images stored in the database; normalizing the vectorized images; obtaining the average vector of the normalized images; and obtaining difference vectors between the average vector and each image vector.

[16]

In an embodiment of the present invention, the step of calculating independent variables through PCA may include: obtaining the eigenvectors of the covariance matrix of the difference vectors; and calculating PCA values by projecting the difference vectors onto the eigenvectors in a reduced dimension.

[17]

In an embodiment of the present invention, the step of normalizing the vectorized images may adjust the mean/variance of each vector to the mean/variance of the overall vector set.

[18]

In an embodiment of the present invention, the step of establishing the database may include storing, together with each image, emotion state values numerically evaluated in advance by evaluators.

[19]

In an embodiment of the present invention, the step of establishing the database may further include, beforehand, a step of detecting the face region in the images for each emotion.

[20]

In an embodiment of the present invention, the step of estimating emotion state values by applying the input image to the learned linear regression equation may, following the continuous emotion model, calculate emotion state values for every frame of the successive input images.

[21]

According to an embodiment for realizing another object of the present invention, a computer-readable storage medium records a program for carrying out the facial expression-based continuous emotion recognition method for a robot.

[22]

According to an embodiment for realizing still another object of the present invention, a facial expression-based continuous emotion recognition device for a robot includes: a database that stores images classified by emotion; a first PCA unit for calculating independent variables through PCA (Principal Component Analysis) based on the distribution of the images stored in the database; a learning unit for learning a linear regression equation using the calculated independent variables; and an expression recognition unit for estimating emotion state values by applying an input image to the learned linear regression equation.

[23]

In an embodiment of the present invention, when the learning unit learns, the valence (Valence) value and arousal (Arousal) value corresponding to each emotion may be used as the dependent variables; the linear regression equation may take the form of a two-dimensional model with an arousal axis (A axis) and a valence axis (V axis); and the learning unit may calculate the coefficients of the regression equation by learning the linear regression equation.

[24]

In an embodiment of the present invention, the expression recognition unit may include: a second PCA unit for calculating independent variables by projecting the input image onto the eigenvectors obtained from the database; and an emotion state output unit for estimating the emotion state of the input image by applying the independent variables to the learned linear regression equation.

[25]

In an embodiment of the present invention, the expression recognition unit may further include a face region detector for detecting the face region of the input image.

[26]

In an embodiment of the present invention, the facial expression-based continuous emotion recognition device may further include a pre-processing unit for pre-processing the images stored in the database before the PCA.

[27]

In an embodiment of the present invention, the pre-processing unit may include: a vectorization unit for vectorizing the grayscale images stored in the database; a normalization unit for normalizing the vectorized images; a mean calculating unit for obtaining the average vector of the normalized images; and a difference vector unit for obtaining difference vectors between the average vector and each image vector.

[28]

In an embodiment of the present invention, the database may store emotion state values of each image numerically evaluated in advance by evaluators.

[29]

In an embodiment of the present invention, the expression recognition unit may estimate emotion state values for every frame of the successive input images.

[30]

According to the facial expression-based continuous emotion recognition method for a robot, PCA and linear regression analysis are used so that, instead of simply classifying emotion into a few basic types, the emotion state is calculated numerically as values on each axis of the two-dimensional A-V emotion model, allowing the robot to react to finely subdivided emotion states of the subject. The robot can thereby continuously estimate human emotion from facial expressions, enabling emotion to be shared between the robot and the human. This may contribute to the development of the emotional-robot and home-robot fields.

[31]

Figure 1 is a block diagram of a facial expression-based continuous emotion recognition device for a robot according to one embodiment of the present invention. Figure 2 is a diagram showing examples of the key emotion regions on the A-V plane stored in the database of Figure 1. Figure 3 is a diagram showing pre-evaluated emotion state values stored in the database of Figure 1. Figure 4 is a detailed block diagram of the pre-processing unit of Figure 1. Figure 5 is a diagram explaining the vectorization of a grayscale image performed by the pre-processing unit of Figure 4. Figure 6 is a diagram showing an example of the regression analysis modeling for expression recognition. Figures 7 and 8 are flowcharts of a facial expression-based continuous emotion recognition method for a robot according to one embodiment of the present invention. Figure 9 is a graph showing the relative RMS errors of the A and V values with respect to the reduced dimension when the PCA method is used.

[32]

The following detailed description of the present invention refers to the accompanying drawings, which show by way of illustration specific embodiments in which the invention can be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. It should be understood that the various embodiments of the present invention are different but need not be mutually exclusive. For example, a particular shape, structure, and characteristic described herein in relation to one embodiment may be embodied in other embodiments without departing from the spirit and scope of the invention. In addition, it should be understood that the position or arrangement of individual components within each disclosed embodiment can be changed without departing from the spirit and scope of the invention. Thus, the following detailed description is not to be taken in a limiting sense, and the scope of the present invention, if appropriately described, is limited only by the appended claims, along with the full range of equivalents to which the claims are entitled. Like reference numerals in the drawings refer to the same or similar functions throughout the several views.

[33]

Hereinafter, preferred embodiments of the present invention will be described in more detail with reference to the drawings.

[34]

Figure 1 is a block diagram of a facial expression-based continuous emotion recognition device for a robot according to one embodiment of the present invention.

[35]

Recently, robots that can be used at home and interact with humans have been developed, and robots that recognize human emotion are desired. Most prior studies on robots capable of emotional interaction either classify emotion into several defined basic emotions, or attempt to reproduce continuous human emotion. Accordingly, the present invention provides a device that recognizes emotion continuously by measuring a human's emotional state sequentially from facial expressions based on a linear model.

[36]

Referring to Figure 1, the facial expression-based continuous emotion recognition device for a robot according to the present invention (10, hereinafter the device) includes an offline part (100) that stores collected images and learns a predetermined linear regression equation, and an expression recognition unit (300) that estimates emotion online from a new expression.

[37]

Software (an application) for carrying out facial expression-based continuous emotion recognition for a robot may be installed and executed in the device (10) of the present invention, and the configurations of the offline part (100) and the expression recognition unit (300) may be controlled by the continuous emotion recognition software executed in the device (10).

[38]

The device (10) may be a separate terminal or a module of a terminal. In addition, the offline part (100) and the expression recognition unit (300) of the device (10) may be formed as an integrated module or may consist of at least one module. Conversely, however, each configuration may be a separate module.

[39]

The device (10) may be mobile or fixed. The device (10) may be in the form of a server or an engine, and may be referred to by other terms such as device, apparatus, terminal, UE (user equipment), MS (mobile station), wireless device, or handheld device.

[40]

The offline part (100) first forms a linear model for estimating a human's emotional state, before emotion is estimated in real time from input image data. To this end, the offline part (100) includes a database (110), a first PCA unit (150), and a learning unit (170). The offline part (100) may further include a pre-processing unit (130) that performs pre-processing before the first PCA unit (150).

[41]

The database (110) stores images for each emotion.

[42]

Emotion is a mental state associated with thinking and behavior, a subjective experience accompanied by physiological changes, and is generally related to mood, temperament, personality, and disposition. Human emotion can be classified into separable categories, but because emotion is subjective and abstract and is influenced by personality, its classification is difficult; many psychologists have therefore established generally accepted classifications and outlines of emotion, which are in current use.

[43]

In the present invention, emotion is grounded on the circular emotion model of James Russell, which maps emotion onto a pleasant-unpleasant axis and an arousal axis. Generally, the arousal axis is called the Arousal axis and the pleasant-unpleasant axis is called the Valence axis. That is, emotion is modeled on the basic two-dimensional arousal-valence (hereinafter A-V) emotion model, with an emotion state value assigned to each axis. The emotion model can also represent the change of the emotion state over time by means of differential equations.

[44]

The emotion states stored in the database (110) are represented in the form of the two-dimensional A-V model, i.e., the arousal axis and the pleasant-unpleasant (valence) axis, with a V value and an A value describing each emotion. For example, a number of collected expression images corresponding to each emotion may be stored together with the emotion values corresponding to principal points on the emotion plane.

[45]

The database (110) collected in the present invention serves two purposes. First, the database (110) is used in PCA to find the eigenvectors that best represent the distribution of the data; second, the PCA values obtained are used for learning the regression equation. Since expressions are recognized on the basis of the two-axis A-V emotion model, the database divides the A-V plane into blocks and collects images corresponding to 9 key regions. The key regions are: neutral (0, 0), pleasure (1, 0), laughter (1, 1), fright (0, 1), fear (-1, 1), anger (-1, 0), grief (-1, -1), drowsiness (0, -1), and comfort (1, -1), as shown in Figure 2.
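The nine key regions above can be written down as a small lookup table. A minimal sketch (the English emotion labels and function name are this sketch's own choices, not the patent's):

```python
# Key emotion points on the 2-D A-V plane, as (valence, arousal)
# coordinates in [-1, 1] x [-1, 1], following Figure 2's layout.
KEY_EMOTIONS = {
    "neutral":    (0, 0),
    "pleasure":   (1, 0),
    "laughter":   (1, 1),
    "fright":     (0, 1),
    "fear":       (-1, 1),
    "anger":      (-1, 0),
    "grief":      (-1, -1),
    "drowsiness": (0, -1),
    "comfort":    (1, -1),
}

def nearest_emotion(v, a):
    """Name the key emotion closest to an estimated (V, A) point."""
    return min(KEY_EMOTIONS,
               key=lambda e: (KEY_EMOTIONS[e][0] - v) ** 2
                           + (KEY_EMOTIONS[e][1] - a) ** 2)
```

Such a lookup is useful only for labeling a continuous (V, A) estimate after the fact; the method itself works with the continuous values.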

[46]

In addition, the database (110) separately stores, for each image, emotion state values numerically evaluated in advance by evaluators. For example, the database (110) may hold 10 face images per emotion, 90 in total, with the V value and A value of each image numerically evaluated in advance by the evaluators. Examples of emotion images and their evaluated values for learning are shown in Figure 3.

[47]

The database (110) may store images from which the face region has been extracted in advance. To this end, the device (10) may further include a face region detector (not shown) for detecting the face region in the images.

[48]

Expression analysis uses the mouth region and the eye region of the face. Human facial expressions are represented by combinations of motions of muscles such as the eyebrows, eyelids, brow, lips, and the muscles around the eyes. Thus, analyzing those specific areas sharpens the expression analysis. Since a person's face generally keeps similar proportions regardless of size, the face is adjusted to an identical size, for example 100x100 (pixels); the eye region may then be selected as, for example, columns 11 to 90 horizontally and rows 21 to 50 vertically, and the mouth region as columns 21 to 80 horizontally and rows 71 to 100 vertically.
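Under the example geometry above (a 100x100 face with 1-indexed, inclusive pixel ranges), cutting out the two analysis regions might look like this sketch (function name illustrative):

```python
import numpy as np

def crop_regions(face):
    """Cut the eye and mouth analysis regions out of a 100x100 face image.

    Uses the example ranges from the text (1-indexed, inclusive):
    eyes  -> columns 11..90, rows 21..50
    mouth -> columns 21..80, rows 71..100
    """
    assert face.shape == (100, 100)
    eyes = face[20:50, 10:90]    # rows 21-50, cols 11-90 (0-indexed slices)
    mouth = face[70:100, 20:80]  # rows 71-100, cols 21-80
    return eyes, mouth

face = np.zeros((100, 100))
eyes, mouth = crop_regions(face)
```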

[49]

The pre-processing unit (130) pre-processes the images stored in the database (110) before the first PCA unit (150) performs PCA.

[50]

Referring to Figure 4, the pre-processing unit (130) includes a vectorization unit (131), a normalization unit (133), a mean calculating unit (135), and a difference vector unit (137).

[51]

The vectorization unit (131) vectorizes the grayscale images stored in the database (110). Grayscale is used so that each pixel of the image has a one-dimensional value, which facilitates the analysis. A grayscale image is a two-dimensional array of horizontal and vertical pixels; as illustrated for a 2x2 pixel image in Figure 5, it is vectorized by scanning the pixels while moving up and down through the columns.

[52]

The normalization unit (133) normalizes the vectorized images. Since the analysis is affected by the brightness of the image information, the images must be normalized with respect to the whole image set before use. The normalization adjusts the mean/variance of each vector to the mean/variance of the overall vector set, and may be performed by Equation 1 below.

[53]

[54]

Here, following the description above, the normalized vector x̂ is obtained from each image vector x as x̂ = ((x − μ_x)/σ_x)·σ_DB + μ_DB, where μ_x and σ_x are the mean and standard deviation of x, and μ_DB and σ_DB are those of the overall DB vectors.

[55]

The mean calculating unit (135) obtains the average vector of the normalized vectors, and the difference vector unit (137) obtains the difference vector between the average vector and each image vector. This process removes the component common to all faces, so that the first PCA unit (150) can perform PCA on the differences more efficiently.
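The whole pre-processing chain (vectorize the grayscale images, normalize each vector to the DB-wide statistics, subtract the mean vector) can be sketched as follows; the function name is illustrative, and the normalization follows Equation 1 as reconstructed above:

```python
import numpy as np

def preprocess(images):
    """images: list of 2-D grayscale arrays of identical shape.

    Returns the difference vectors fed to PCA, plus the mean vector.
    """
    # 1. Vectorize: flatten each image into a 1-D vector (cf. Fig. 5).
    X = np.stack([img.astype(float).ravel() for img in images])

    # 2. Normalize: shift each vector's mean/std to the overall DB
    #    mean/std (Equation 1, as reconstructed here).
    mu_db, sigma_db = X.mean(), X.std()
    mu = X.mean(axis=1, keepdims=True)
    sigma = X.std(axis=1, keepdims=True)
    Xn = (X - mu) / sigma * sigma_db + mu_db

    # 3./4. Mean vector and per-image difference vectors.
    mean_vec = Xn.mean(axis=0)
    diffs = Xn - mean_vec
    return diffs, mean_vec

imgs = [np.arange(4.).reshape(2, 2), np.arange(4.).reshape(2, 2) * 2 + 1]
diffs, mean_vec = preprocess(imgs)
```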

[56]

The first PCA unit (150) calculates independent variables based on PCA (Principal Component Analysis) of the distribution of the images stored in the database. PCA models the distribution of high-dimensional data vectors with a small number of principal components, reducing the dimension so that the data can be handled in a simpler, lower-dimensional form.

[57]

PCA first obtains the covariance matrix of the data and finds its eigenvectors, then selects several eigenvectors in order of largest eigenvalue, i.e., the directions of largest variance. Each difference vector in the database is then projected onto the selected eigenvectors to obtain reduced-dimensional PCA values. When m eigenvectors are used, each datum has a reduced m-dimensional value. The calculated PCA values are used as independent variables in the linear regression analysis for expression recognition.
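The covariance-eigenvector PCA just described can be written as a minimal numpy sketch (random stand-in data; not the patent's code):

```python
import numpy as np

def pca_project(diffs, m):
    """Project difference vectors onto the m largest-variance eigenvectors.

    diffs: (n_samples, dim) difference vectors (already mean-subtracted).
    Returns (pca_values, eigenvectors) with shapes (n, m) and (dim, m).
    """
    cov = np.cov(diffs, rowvar=False)         # covariance matrix of the data
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:m]     # pick the m largest
    W = eigvecs[:, order]
    return diffs @ W, W

rng = np.random.default_rng(0)
diffs = rng.normal(size=(90, 50))
diffs -= diffs.mean(axis=0)
Z, W = pca_project(diffs, m=9)
```

Each row of Z is then an m-dimensional independent-variable vector for the regression step.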

[58]

The learning unit (170) learns the linear regression equation using the independent variables calculated by the first PCA unit (150).

[59]

Regression analysis (Regression analysis) is a method of estimating a mathematical relation between independent variables and a dependent variable based on the distribution of the data, and of analyzing the degree to which the estimated relation fits. Given data, regression analysis is used to find the relation between the two quantities, assess its suitability, and predict the dependent variable for a given value of the independent variable. Equation 2 below has the form y = β₀ + β₁x + ε, where y is the dependent variable, x is the independent variable, β₀ and β₁ are the coefficients of the regression equation, and ε is the residual.

[60]

[61]

In the present invention, for each of the two emotion axes, the PCA values of the mouth region and the eye region corresponding to an expression are used as the independent variables, and the Arousal and Valence emotion state values corresponding to the expression are used as the dependent variables. The regression analysis model for expression recognition may be represented as in Figure 6.

[62]

Independent variables are obtained by PCA analysis of the mouth and eye regions of each image in the database (110). Here, the k-th PCA value of the mouth and eye regions of the i-th image in the database represents the independent variables, and the pre-evaluated Valence and Arousal values of the i-th expression represent the dependent values.

[63]

PCA analysis of each image in the database (110) yields the independent variables, and the pre-evaluated values give the dependent-variable data set. Thus, as modeled in Figure 6, the coefficients of the linear regression equation satisfying the relation are identified through linear regression analysis. In this way the learning unit (170) learns the regression equation.
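With the PCA values as independent variables and the evaluated (V, A) pairs as dependent variables, the regression coefficients can be found by ordinary least squares. A sketch with random stand-in data (function name illustrative):

```python
import numpy as np

def fit_av_regression(X, V, A):
    """Least-squares fit of two linear regressions, one per emotion axis.

    X: (n, m) PCA values; V, A: (n,) evaluated valence/arousal values.
    Returns coefficient vectors (with intercept) for the V and A axes.
    """
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])   # prepend intercept term
    beta_v, *_ = np.linalg.lstsq(Xb, V, rcond=None)
    beta_a, *_ = np.linalg.lstsq(Xb, A, rcond=None)
    return beta_v, beta_a

rng = np.random.default_rng(1)
X = rng.normal(size=(90, 9))
true_v = np.concatenate([[0.2], rng.normal(size=9)])
V = np.hstack([np.ones((90, 1)), X]) @ true_v       # noiseless toy targets
A = rng.normal(size=90)
beta_v, beta_a = fit_av_regression(X, V, A)
```

On the noiseless toy targets, least squares recovers the generating coefficients exactly, which is a convenient sanity check of the fitting step.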

[64]

The expression recognition unit (300) calculates emotion state values by applying an input image to the learned linear regression equation. Once the offline part (100) has learned the linear regression equation, when a new expression is input, it is projected onto the eigenvectors obtained from the database (110) to obtain independent variables, and the same linear regression equation is applied to estimate the emotion state on the two axes of the A-V emotion model.

[65]

To this end, referring again to Figure 1, the expression recognition unit (300) may include a second PCA unit (330) that calculates independent variables, i.e., PCA values, by projecting the input image onto the eigenvectors obtained from the database, and an emotion state output unit (350) that estimates the emotion state of the input image by applying the independent variables to the learned linear regression equation.

[66]

In addition, the expression recognition unit (300) may further include a face region detector (310) that detects the face region of the input image before the PCA analysis. The input image may be captured through a camera, and may include an image for each time instant.

[67]

The expression recognition unit (300) estimates emotion state values for every frame of the successive input images, and can thus estimate a continuous emotion state.
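Per-frame online estimation then reduces to: subtract the stored mean vector, project onto the stored eigenvectors, and apply the learned coefficients. A sketch with hypothetical names for the learned quantities and toy values just to exercise the function:

```python
import numpy as np

def estimate_frame(frame_vec, mean_vec, W, beta_v, beta_a):
    """Estimate the (V, A) emotion state of one preprocessed frame vector."""
    z = (frame_vec - mean_vec) @ W          # PCA values (independent vars)
    zb = np.concatenate([[1.0], z])         # intercept term
    return float(zb @ beta_v), float(zb @ beta_a)

# Toy learned quantities (stand-ins, not learned from real data).
dim, m = 16, 3
mean_vec = np.zeros(dim)
W = np.eye(dim)[:, :m]
beta_v = np.array([0.0, 1.0, 0.0, 0.0])    # V = first PCA value
beta_a = np.array([0.5, 0.0, 0.0, 0.0])    # A = constant 0.5
frame = np.zeros(dim); frame[0] = 0.7
v, a = estimate_frame(frame, mean_vec, W, beta_v, beta_a)
```

Running this per frame yields a (V, A) trajectory over time, which is what the continuous emotion model consumes.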

[68]

As such, in the present invention the robot continuously models a human's emotional state linearly on the basis of the human facial expression. Emotion state values are calculated from the facial expression based on a linear regression (Linear regression) model, and the calculated emotion values are represented as two coordinates, arousal (Arousal) and pleasant-unpleasant (Valence).

[69]

[70]

Figure 7 is a flowchart of a facial expression-based continuous emotion recognition method for a robot according to one embodiment of the present invention.

[71]

The facial expression-based continuous emotion recognition method according to the present embodiment can be carried out in substantially the same configuration as the device (10) of Figure 1. Thus, elements identical to those of the device (10) of Figure 1 are given the same reference numerals, and repeated description is omitted. In addition, the facial expression-based continuous emotion recognition method according to the present embodiment can be executed by software (an application) for carrying out facial expression-based continuous emotion recognition.

[72]

Referring to Figure 7, the facial expression-based continuous emotion recognition method according to the present embodiment first establishes a database that stores images classified by emotion (step S10). The emotion states stored in the database are represented in the form of the two-dimensional A-V model, i.e., the arousal axis and the pleasant-unpleasant (valence) axis, with a V value and an A value representing each emotion. For example, a number of collected expression images corresponding to each emotion may be stored together with the emotion values corresponding to principal points on the emotion plane.

[73]

In the step of establishing the database (step S10), the face may be detected beforehand in the images classified by emotion, and the emotion state values of each image, numerically evaluated by evaluators, may then be stored.

[74]

Once the database is established, independent variables are calculated through PCA (Principal Component Analysis) based on the distribution of the images stored in the database (step S30).

[75]

Before the step of calculating independent variables through PCA (step S30), the images stored in the database may be pre-processed.

[76]

Referring to Figure 8, the pre-processing may include a step of vectorizing the grayscale images stored in the database (step S21), a step of normalizing the vectorized images (step S22), a step of obtaining the average vector of the normalized images (step S23), and a step of obtaining difference vectors between the average vector and each image vector (step S24). The step of normalizing the vectorized images (step S22) adjusts the mean/variance of each vector to the mean/variance of the overall vector set.

[77]

In the step of calculating independent variables through PCA (step S30), the eigenvectors of the covariance matrix of the difference vectors obtained during pre-processing are calculated, and reduced-dimensional PCA values are computed by projecting the difference vectors onto the eigenvectors.

[78]

Thereafter, a linear regression equation is learned using the PCA values calculated as independent variables (step S50). In the step of learning the linear regression equation (step S50), the valence (Valence) value and arousal (Arousal) value corresponding to each emotion are used as the dependent variables. In this case, the linear regression equation takes the form of a two-dimensional model with an arousal axis (A axis) and a valence axis (V axis). The coefficients of the regression equation are calculated through the learning.

[79]

Once the regression learning is finished, the emotion state can be estimated in real time from an input video. To this end, emotion state values are calculated by applying the input image to the learned linear regression equation (step S70). In this case, following the continuous emotion model, emotion state values are estimated for every frame of the successive input images, so that a continuous emotion state can be estimated.

[80]

Specifically, the input image is projected onto the eigenvectors obtained from the database, i.e., PCA values are calculated as independent variables, and the independent variables are applied to the learned linear regression equation to estimate the emotion state of the input image. The input image may be captured through a camera and may include an image for each time instant; before the PCA analysis, the face region of the input image may be detected so that only the detected face region is analyzed by PCA.

[81]

As such, in the present invention the robot continuously models a human's emotional state linearly on the basis of the human facial expression. Emotion state values are calculated from the facial expression based on a linear regression (Linear regression) model, and the calculated emotion values are represented as two coordinates on the arousal axis (A axis) and the pleasant-unpleasant axis (V axis).

[82]

Hereinafter, the effect of the facial expression-based consecutive emotion recognition method of a robot according to the present invention is verified.

[83]

Emotion is a very abstract concept whose state and boundaries are difficult to define; in this respect, it is difficult to accurately evaluate the emotional state of a subject numerically. Even when a continuous emotion system is assumed, it is difficult to determine the parameters of the system.

[84]

In the present invention, accuracy was assessed by comparing the emotional state value obtained from the facial expression analysis with the emotional state value evaluated and quantified a priori. The real-time continuous emotional state estimation was evaluated by examining the response characteristics of the continuous emotion model, that is, whether the estimation result based on the facial expression analysis follows the change of the emotional state over time.

[85]

Figure 9 is a graph showing the relative magnitude of the RMS error of the A and V values according to the dimension reduced through the PCA method.

[86]

To build the facial expression analysis system, a facial expression database was constructed with 10 expression images for each of 9 emotional states, for a total of 90 images. Each image was projected onto the m eigenvectors that best describe the distribution of the data and represented as an m-dimensional value for the eye region and an m-dimensional value for the mouth region; these were used as independent variables, and the a priori evaluated emotional state values were used as dependent variables, to learn one set of linear regression equations.
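The data layout of this experiment can be illustrated as follows; the descriptor values and (A, V) labels are synthetic placeholders, and only the counts (9 emotions, 10 images each, two m-dimensional region descriptors per image) follow the description above.

```python
import numpy as np

# Illustrative reconstruction of the experimental data set: 9 emotions x 10
# expression images = 90 samples, each with an m-dimensional eye descriptor
# and an m-dimensional mouth descriptor; the a-priori (A, V) labels are the
# dependent variables. All values here are synthetic.
rng = np.random.default_rng(2)

n_emotions, n_per, m = 9, 10, 5
n = n_emotions * n_per                        # 90 images in total
eye = rng.normal(size=(n, m))                 # eye-region PCA values
mouth = rng.normal(size=(n, m))               # mouth-region PCA values
labels_av = rng.uniform(-1, 1, size=(n, 2))   # a-priori evaluated (A, V) per image

X = np.hstack([eye, mouth, np.ones((n, 1))])  # 2m independent variables + bias
W, *_ = np.linalg.lstsq(X, labels_av, rcond=None)
```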

[87]

Of the 90 images, one was used for testing while the linear regression equation was learned from the PCA analysis of the remaining 89 images, and the errors on the A axis and the V axis were calculated in this manner for all images. Here, the error value is expressed as a relative magnitude, with the maximum value of each emotion dimension taken as 1.
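The leave-one-out protocol and the relative error normalization described above can be sketched as follows, again on synthetic data:

```python
import numpy as np

# Sketch of the leave-one-out evaluation: for each of the 90 images, learn
# the regression from the other 89 and measure the A/V error on the held-out
# image; the RMS error is then scaled so the maximum label magnitude is 1.
rng = np.random.default_rng(3)

n, p = 90, 6
X = np.hstack([rng.normal(size=(n, p)), np.ones((n, 1))])
Y = rng.uniform(-1, 1, size=(n, 2))           # synthetic (A, V) labels

errors = []
for i in range(n):
    keep = np.arange(n) != i                  # train on the remaining 89 images
    W, *_ = np.linalg.lstsq(X[keep], Y[keep], rcond=None)
    errors.append(X[i] @ W - Y[i])            # error on the held-out image
errors = np.array(errors)

# Relative RMS error per axis, normalized by the maximum absolute label value.
rel_rms = np.sqrt((errors ** 2).mean(axis=0)) / np.abs(Y).max(axis=0)
```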

[88]

As shown in Figure 9, the accuracy tends to increase as the number of eigenvectors used as independent variables in the linear regression equation increases. The average RMS (root-mean-square) error of Arousal and Valence is 0.3335, demonstrating good estimation performance.

[89]

The present invention recognizes the emotional state through facial expression analysis using PCA and linear regression analysis. Instead of simply classifying facial expressions into basic emotion categories, the state value on each axis of the two-dimensional A-V emotion model is calculated numerically, so that the subdivided emotional state of the subject can be recognized.

[90]

In addition, since emotions are recognized in real time, the method can be applied to interactions requiring the instantaneous emotional state, and by introducing a continuous emotion model over time, a continuously varying emotional state can be estimated.

[91]

Meanwhile, the facial expression-based consecutive emotion recognition method of a robot may be implemented in the form of program instructions executable through various computer components and recorded on a computer-readable recording medium. The computer-readable recording medium may include program instructions, data files, data structures, and the like, alone or in combination.

[92]

The program instructions recorded on the computer-readable recording medium may be those specially designed and constructed for the present invention, or may be those known and available to those skilled in the field of computer software.

[93]

Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks (floptical disk); and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.

[94]

Examples of program instructions include not only machine language code such as that produced by a compiler, but also high-level language code that can be executed by a computer using an interpreter. The hardware device may be configured to operate as one or more software modules to perform the processing according to the present invention, and vice versa.

[95]

Although the present invention has been described above with reference to the embodiments, those skilled in the art will understand that the present invention can be variously modified and changed without departing from the spirit and scope of the present invention as set forth in the following claims.

[96]

[97]

10: facial expression-based consecutive emotion recognition device of robot; 100: offline unit; 110: database; 130: preprocessing unit; 150: first PCA unit; 170: learning unit; 300: expression recognition unit; 310: face region detection unit; 330: second PCA unit; 350: emotional state output unit; 131: vectorization unit; 133: normalization unit; 135: mean calculation unit; 137: difference vector unit



[1]

A facial expression-based consecutive emotion recognition method of a robot, comprises: a step of constructing a database to store images of each emotion; a step of calculating an independent variable through a principal component analysis (PCA) based on distribution of images stored in the database; a step of learning a linear regression equation using the calculated independent variable; and a step of applying an input image to the linear regression equation to calculate an emotional state value. Accordingly, a robot continuously estimates human emotions through human facial expressions such that emotions are able to be shared between the robot and humans.

[2]

COPYRIGHT KIPO 2016

[3]

[4]

  • (110) Database
  • (130) Preprocessing unit
  • (150) First PCA unit
  • (170) Learning unit
  • (310) Face area detecting unit
  • (330) Second PCA unit
  • (350) Emotional state output unit
  • (A1, A2) Input image
  • (B1,B2) PCA value
  • (CC) Emotional state
  • (DD) Linear regression equation
  • (EE) Value of emotional state



A facial expression-based consecutive emotion recognition method of a robot which discriminates an emotional state based on two emotion axes, an A axis (arousal (Arousal)-relaxation axis) and a V axis (pleasure (Valence)-displeasure axis), the method comprising: a step of constructing a database storing images for each emotion together with an A value corresponding to the A axis and a V value corresponding to the V axis of the emotional state evaluated a priori according to the facial expression of the face region of each image; a step of pre-processing the images for each emotion stored in the database; a step of calculating independent variables through PCA (Principal Component Analysis) based on the distribution of the pre-processed images for each emotion; a step of learning a linear regression equation using the calculated independent variables; and a step of applying an input image to the linear regression equation to calculate an emotional state value, wherein the pre-processing step includes: a step of converting the eye region and the mouth region of the face region of the images for each emotion into gray-scale image vectors; a step of calculating an eye region mean vector and a mouth region mean vector; and a step of calculating a plurality of eye region difference vectors, each being the deviation between the eye region mean vector and an eye region vector, and a plurality of mouth region difference vectors, each being the deviation between the mouth region mean vector and a mouth region vector, wherein the step of calculating the independent variables includes calculating an eye region PCA value by applying PCA to the plurality of eye region difference vectors and a mouth region PCA value by applying PCA to the plurality of mouth region difference vectors, wherein the step of learning the linear regression equation includes: a step of organizing a data set having the eye region PCA value and the mouth region PCA value as independent variables and the A value and the V value as dependent variables; a step of calculating regression coefficients through linear regression analysis of the organized data set; and a step of learning the linear regression equation including the calculated regression coefficients, and wherein the step of applying the input image to the linear regression equation to calculate the emotional state value includes: a step of projecting the input image onto eigenvectors obtained from the database to calculate independent variables; and a step of applying the calculated independent variables to the linear regression equation to estimate the emotional state of the input image.

The facial expression-based consecutive emotion recognition method of a robot according to Claim 1, wherein the linear regression equation is in the form of a two-dimensional arousal (Arousal)-relaxation axis (A axis) and pleasure (Valence)-displeasure axis (V axis).

The facial expression-based consecutive emotion recognition method of a robot according to Claim 1, wherein the step of pre-processing the images stored in the database further includes a step of normalizing the vectorized images.

The facial expression-based consecutive emotion recognition method of a robot according to Claim 1, wherein the step of calculating the independent variables includes: a step of calculating eye region eigenvectors through a covariance matrix of the eye region difference vectors and mouth region eigenvectors through a covariance matrix of the mouth region difference vectors; and a step of calculating the eye region PCA value by projecting the eye region difference vectors onto the eye region eigenvectors to reduce the dimension, and the mouth region PCA value by projecting the mouth region difference vectors onto the mouth region eigenvectors to reduce the dimension.

The facial expression-based consecutive emotion recognition method of a robot according to Claim 3, wherein the normalizing step adjusts the mean/variance of each vector to the overall mean/variance of the vectors.

The facial expression-based consecutive emotion recognition method of a robot according to Claim 1, wherein the step of applying the input image to the linear regression equation to calculate the emotional state value further includes calculating the emotional state value for every frame of the successive input images using a continuous emotion model.

A computer-readable recording medium storing a computer program for performing the facial expression-based consecutive emotion recognition method of a robot according to any one of Claims 1 to 6.

A facial expression-based consecutive emotion recognition device of a robot which discriminates an emotional state based on two emotion axes, an A axis (arousal (Arousal)-relaxation axis) and a V axis (pleasure (Valence)-displeasure axis), the device comprising: an offline unit including a database storing images for each emotion together with an A value corresponding to the A axis and a V value corresponding to the V axis of the emotional state evaluated a priori according to the facial expression of the face region of each image, a first PCA unit calculating independent variables through PCA (Principal Component Analysis) based on the distribution of the images stored in the database, and a learning unit learning a linear regression equation using the calculated independent variables; and an expression recognition unit applying an input image to the linear regression equation to calculate an emotional state value, wherein the offline unit further includes a preprocessing unit pre-processing the images for each emotion stored in the database before the PCA, the preprocessing unit including: a vectorization unit converting the eye region and the mouth region of the face region of the images for each emotion into gray-scale image vectors; a mean calculation unit calculating an eye region mean vector and a mouth region mean vector; and a difference vector unit calculating a plurality of eye region difference vectors, each being the deviation between the eye region mean vector and an eye region vector, and a plurality of mouth region difference vectors, each being the deviation between the mouth region mean vector and a mouth region vector, wherein the first PCA unit calculates an eye region PCA value by applying PCA to the plurality of eye region difference vectors and a mouth region PCA value by applying PCA to the plurality of mouth region difference vectors, wherein the learning unit organizes a data set having the eye region PCA value and the mouth region PCA value as independent variables and the A value and the V value as dependent variables, calculates regression coefficients through linear regression analysis of the organized data set, and learns the linear regression equation including the calculated regression coefficients, and wherein the expression recognition unit includes: a second PCA unit projecting the input image onto eigenvectors obtained from the database to calculate independent variables; and an emotional state output unit applying the calculated independent variables to the linear regression equation to estimate the emotional state of the input image.

The facial expression-based consecutive emotion recognition device of a robot according to Claim 8, wherein the linear regression equation is in the form of a two-dimensional arousal (Arousal)-relaxation axis (A axis) and pleasure (Valence)-displeasure axis (V axis).

The facial expression-based consecutive emotion recognition device of a robot according to Claim 8, wherein the expression recognition unit further includes a face region detection unit detecting the face region of the input image.

The facial expression-based consecutive emotion recognition device of a robot according to Claim 8, wherein the preprocessing unit further includes a normalization unit normalizing the vectorized images.

The facial expression-based consecutive emotion recognition device of a robot according to Claim 8, wherein the expression recognition unit calculates the emotional state value for every frame of the successive input images.