
Search form

Supports entering several search phrases (one per line). The search applies Russian and English morphology.
Total found: 6689. Displayed: 100.
Publication date: 21-03-2013

FUNDUS IMAGE ACQUIRING APPARATUS AND CONTROL METHOD THEREFOR

Number: US20130070988A1
Author: Makihira Tomoyuki
Assignee: CANON KABUSHIKI KAISHA

Provided is a fundus image acquiring apparatus in which eyeball tracking can be performed by template matching even if sufficient luminance of a characteristic image of blood vessels or the like is not secured in a case where eye movement is detected accurately from a fundus image. The fundus image acquiring apparatus includes a fundus imaging unit for obtaining a fundus image, an extraction unit for extracting a characteristic image from an initial fundus image taken by the fundus image acquiring apparatus, an evaluation unit for evaluating luminance information of a characteristic point obtained through the extraction, and a setting unit for setting a frame rate for imaging by the fundus image acquiring apparatus. The frame rate is determined based on a result of the evaluation by the evaluation unit. 1. A fundus image acquiring apparatus , comprising:a first fundus image acquiring unit for acquiring a first fundus image including at least one fundus image of an eye to be inspected;an extraction unit for extracting a characteristic image from the acquired first fundus image;a second fundus image acquiring unit for acquiring a second fundus image including at least two fundus images of the eye to be inspected, of which luminance is different from luminance of the acquired first fundus image, based on luminance information of the extracted characteristic image; anda detection unit for detecting a movement of the eye to be inspected based on the acquired second fundus image.2. A fundus image acquiring apparatus according to claim 1 , further comprising a determination unit for determining a frame rate based on the luminance information of the extracted characteristic image claim 1 ,wherein the second fundus image acquiring unit acquires the second fundus image by using the determined frame rate.3. A fundus image acquiring apparatus according to claim 2 , wherein the frame rate is determined to be lower than the frame rate of acquiring the first fundus image in a case ...
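
As an illustration of the tracking idea above, a minimal Python sketch follows: it picks a high-contrast patch from an initial frame, derives a frame rate from the patch's mean luminance, and relocates the patch in a later frame by normalized cross-correlation. The thresholds, frame rates, and helper names are assumptions for illustration, not values from the patent.

import numpy as np

def extract_patch(image, size=16):
    """Return the patch with the highest local standard deviation (a crude 'characteristic image')."""
    best, best_score = None, -1.0
    h, w = image.shape
    for y in range(0, h - size, size):
        for x in range(0, w - size, size):
            patch = image[y:y + size, x:x + size]
            if patch.std() > best_score:
                best, best_score = patch, patch.std()
    return best

def choose_frame_rate(patch, bright_thresh=60.0):
    """Drop to a lower tracking frame rate when the characteristic patch is dim."""
    return 30 if patch.mean() >= bright_thresh else 10

def locate(patch, frame):
    """Brute-force normalized cross-correlation; returns the best-matching (y, x)."""
    ph, pw = patch.shape
    p = (patch - patch.mean()) / (patch.std() + 1e-6)
    best, best_pos = -np.inf, (0, 0)
    for y in range(frame.shape[0] - ph):
        for x in range(frame.shape[1] - pw):
            w = frame[y:y + ph, x:x + pw]
            score = float(np.sum(p * (w - w.mean()) / (w.std() + 1e-6)))
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos

rng = np.random.default_rng(0)
first = rng.integers(0, 255, (64, 64)).astype(float)
second = np.roll(first, shift=(2, 3), axis=(0, 1))      # simulated eye movement
patch = extract_patch(first)
print("frame rate:", choose_frame_rate(patch), "fps; match found at", locate(patch, second))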

Publication date: 06-06-2013

IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD

Number: US20130142401A1
Assignee: CANON KABUSHIKI KAISHA

The image processing apparatus detects a region of a subject from an input image, and extracts an image feature amount from the region. Also, the apparatus classifies the subject into any one of plural attributes based on the image feature amount, and estimates, based on the image feature amount and an attribute into which it is classified, an attribute value of the subject belonging to the attribute into which it is classified. 1. An image processing apparatus comprising:a detection unit configured to detect a region of a subject from an input image;an extraction unit configured to extract an image feature amount from the region;a classification unit configured to classify the subject into any one of plural attributes based on the image feature amount; andan estimation unit configured to estimate, based on the image feature amount and an attribute into which said classification unit classifies, an attribute value of the subject belonging to the attribute into which said classification unit classifies.2. The apparatus according to claim 1 , wherein said extraction unit extracts the image feature amount for each of the plural attributes claim 1 , andsaid estimation unit estimates the attribute value of the subject based on the image feature amount extracted for the attribute into which said classification unit classifies.3. The apparatus according to claim 1 , further comprising a projection unit configured to project the image feature amount onto a feature space corresponding to each of the plural attributes;said classification unit classifies the subject into any one of the plural attributes based on the image feature amount projected onto the feature space corresponding to each of the plural attributes; andsaid estimation unit estimates, based on the image feature amount projected onto the feature space corresponding to the attribute into which said classification unit classifies, the attribute value of the subject.4. The apparatus according to claim 3 , wherein ...

Publication date: 06-06-2013

Method And System For Attaching A Metatag To A Digital Image

Number: US20130142402A1
Assignee: Facedouble, Inc.

A system and method for tagging an image of an individual in a plurality of photos is disclosed herein. A feature vector of an individual is used to analyze a set of photos on a social networking website such as www.facebook.com to determine if an image of the individual is present in a photo of the set of photos. Photos having an image of the individual are tagged preferably by listing a URL or URI for each of the photos in a database. 1. A method for tagging an image of an individual in a plurality of photos , the method comprising:providing a first plurality of photos, each of the first plurality of photos comprising an identified facial image of the individual;processing the facial image of the individual in each of the first plurality of photos to generate a feature vector for the individual, wherein the feature vector is based on a plurality of factors comprising facial expression, hair style, hair color, facial pose, eye color, texture of the face, color of the face and facial hair;analyzing a second plurality of photos to determine if a facial image of the individual is present in a photo of the second plurality of photos, the analysis comprising determining if a facial image in each of the photos of the second plurality of photos matches the feature vector for the individual;identifying each of the photos of the second plurality of photos having a facial image of the individual to create a third plurality of photos; andtagging each of the photos of the third plurality of photos to identify the facial image of the individual in each of the third plurality of photos, wherein tagging comprises determining the facial image in the photo, storing the facial image in the photo in a database, storing an identifier for the photo in the database, and storing an identifier for the individual in a database.2. The method according to wherein the step of tagging further comprises listing a URL or URI for each of the photos of the third plurality of photos in a database.3 ...
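
A minimal Python sketch of the matching-and-tagging step described above, assuming the per-face feature vectors are already extracted upstream; the similarity measure, threshold, and example URLs are illustrative assumptions.

import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def tag_photos(reference, photos, threshold=0.8):
    """photos: mapping of URL -> list of face feature vectors found in that photo."""
    tagged = {}
    for url, faces in photos.items():
        scores = [cosine(reference, f) for f in faces]
        if scores and max(scores) >= threshold:
            tagged[url] = max(scores)          # stands in for the database row (URL + match)
    return tagged

rng = np.random.default_rng(1)
ref = rng.normal(size=64)
photos = {
    "https://example.org/a.jpg": [ref + rng.normal(scale=0.1, size=64)],   # same person, noisy
    "https://example.org/b.jpg": [rng.normal(size=64)],                    # someone else
}
print(tag_photos(ref, photos))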

Publication date: 06-06-2013

IMAGE RECOGNITION APPARATUS, CONTROL METHOD FOR IMAGE RECOGNITION APPARATUS, AND STORAGE MEDIUM

Number: US20130142426A1
Authors: Kaneda Yuji, Yano Kotaro
Assignee: CANON KABUSHIKI KAISHA

An image recognition apparatus comprises an image obtainment unit configured to obtain an image; a region setting unit configured to set at least one local region in the image; a feature extraction unit configured to extract feature patterns from the local region; a setting unit configured to set, out of a plurality of bins corresponding to a plurality of patterns which can form the feature patterns, bins that have been predetermined in accordance with a type of the local region as histogram bins used in generating a histogram; a generation unit configured to generate a histogram corresponding to the extracted feature patterns using the set histogram bins; and a recognition unit configured to perform image recognition using the generated histogram. 1. An image recognition apparatus comprising:an image obtainment unit configured to obtain an image;a region setting unit configured to set at least one local region in the image;a feature extraction unit configured to extract feature patterns from the local region;a setting unit configured to set, out of a plurality of bins corresponding to a plurality of patterns which can form the feature patterns, bins that have been predetermined in accordance with a type of the local region as histogram bins used in generating a histogram;a generation unit configured to generate a histogram corresponding to the extracted feature patterns using the set histogram bins; anda recognition unit configured to perform image recognition using the generated histogram.2. The image recognition apparatus according to claim 1 , further comprisingan acceptance unit configured to accept a selection of a recognition target on which the image recognition is performed by the recognition unit, whereinthe setting unit sets bins that have been predetermined in accordance with the type of a local region and the recognition target accepted by the acceptance unit as histogram bins used in generating a histogram.3. The image recognition apparatus according ...
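
The following Python sketch illustrates the bin-selection idea on LBP-style patterns: a full pattern histogram is computed for a local region, but only a predetermined subset of bins, chosen by region type, is kept. The region types and bin sets here are invented placeholders, not the patent's.

import numpy as np

def lbp_codes(region):
    """8-neighbour binary patterns for the interior pixels of a 2-D array."""
    c = region[1:-1, 1:-1]
    neighbours = [region[0:-2, 0:-2], region[0:-2, 1:-1], region[0:-2, 2:],
                  region[1:-1, 2:],   region[2:, 2:],     region[2:, 1:-1],
                  region[2:, 0:-2],   region[1:-1, 0:-2]]
    code = np.zeros(c.shape, dtype=int)
    for bit, n in enumerate(neighbours):
        code += (n >= c).astype(int) << bit      # one bit per neighbour comparison
    return code.ravel()

# Bins kept per region type: assumed sets standing in for the "predetermined" bins.
BINS_BY_REGION = {"eye": np.arange(0, 64), "cheek": np.arange(192, 256)}

def region_histogram(region, region_type):
    codes = lbp_codes(region)
    hist, _ = np.histogram(codes, bins=np.arange(257))   # full 256-bin histogram
    return hist[BINS_BY_REGION[region_type]]             # only the predetermined bins survive

rng = np.random.default_rng(2)
patch = rng.integers(0, 255, (16, 16))
print(region_histogram(patch, "eye").shape)   # (64,) -> shorter descriptor for this region type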

Publication date: 13-06-2013

EXTERNAL CONTROLLER AND METHOD FOR DELAYING SCREEN LOCKING OF COMPUTING DEVICE

Number: US20130148867A1
Author: WANG HUA-YONG

An external controller for delaying screen locking of a computing device, the screen of the computing device is automatically locked after a preset period of inactivity of the computing device. The external controller includes a detection unit and a control unit. The detection unit accounts a period of time of how long the computing device is inactive, and detects whether at least one authorized user is in front of the screen when the accounted period of time is greater than a predetermined time period. The control unit generates a control command for interrupting the inactivity mode of the computing device to delay the screen from automatically locking for the preset period, when the at least one authorized user is detected in front of the screen. 1. An external controller of a computing device having a screen , the screen of the computing device being automatically locked after a pre-set period of inactivity of the computing device , the external controller comprising:a detection unit that accounts a period of time of how long the computing device is inactive, and detects whether at least one authorized user is in front of the screen when the accounted period of time is greater than a predetermined time period; anda control unit that generates a control command for interrupting the inactivity mode of the computing device, and sends the control command to the computing device for delaying the screen from automatically locking for the pre-setting period, when the at least one authorized user is detected in front of the screen.2. The external controller according to claim 1 , further comprising:a storage unit that stores a plurality of predetermined commands for interrupting the inactivity mode of the computing device.3. The external controller according to claim 2 , wherein the control command is generated by:acquiring one of the predetermined commands from the storage unit, wherein the acquired command is regarded as the control command.4. The external controller ...

Publication date: 13-06-2013

SYSTEM FOR SECURE IMAGE RECOGNITION

Number: US20130148868A1
Assignee: GRADIANT

Disclosed embodiments include methods, apparatuses, and systems for secured image processing, image recognition, biometric recognition, and face recognition in untrusted environments. The disclosure includes a system for secure image recognition that comprises a secure biometric recognition system configured to work directly with encrypted signals, and the secure biometric recognition system comprises an input quantization system and a homomorphic encryption system configured for noninteractive biometric recognition. 1. A system for secure image recognition comprising a secure biometric recognition system configured to work directly with encrypted signals , said secure biometric recognition system comprising an input quantization system and a homomorphic encryption system configured for noninteractive biometric recognition using a hardware processor.2. The system of claim 1 , wherein said homomorphic encryption system is configured to operate both with binary and nonbinary plaintexts.3. The system of claim 2 , wherein said biometric recognition includes a verification process.4. The system of claim 3 , wherein said input quantization system is configured for nonlinear quantization.5. The system of claim 4 , wherein said homomorphic encryption system is configured to operate on biometric signals representing faces implementing a secure face recognition system.6. The system of claim 5 , wherein said biometric signals comprise vectors of Gabor coefficients.7. The system of claim 6 , wherein said verification process includes a polynomial function. This application claims the benefit of U.S. Provisional Application No. 61/596,151 filed on Feb. 7, 2012 by the present inventors, which is incorporated herein by reference. Furthermore, this application is a Continuation-in-Part of U.S. application Ser. No. 12/876,223 filed on Sep. 6, 2010 which claims the benefit of U.S. Provisional Application No. 61/240,177 filed on Sep. 4, 2009, which are incorporated herein by reference ...
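
A toy, textbook Paillier example in Python showing the additive homomorphism that such noninteractive schemes exploit: ciphertexts can be combined so that the result decrypts to the sum of the plaintexts, without ever decrypting the inputs. Key sizes are deliberately tiny; this is not the GRADIANT system.

from math import gcd

p, q = 293, 433                                 # toy primes; real keys are thousands of bits
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p-1, q-1)
g = n + 1
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)     # modular inverse of L(g^lam mod n^2)

def encrypt(m, r):
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

c1, c2 = encrypt(17, 1234), encrypt(25, 5678)
assert decrypt((c1 * c2) % n2) == 42            # product of ciphertexts = sum of plaintexts
print("homomorphic sum decrypts to", decrypt((c1 * c2) % n2))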

Publication date: 20-06-2013

OPTICAL FLOW ACCELERATOR FOR MOTION RECOGNITION AND METHOD THEREOF

Number: US20130156278A1
Authors: KIM Hyungon, PARK Jun Seok

An optical flow accelerator includes a face recognizing unit to recognize a face from a stereo image provided to the optical flow accelerator. A depth information calculation unit calculates depth information of the recognized face on the basis of the recognized face. A face tracking unit tracks a size and a shape of the face depending on a movement direction when the recognized face moves. A controller controls generation of depth information of the recognized face depending on an optical flow. 1. An optical flow accelerator, comprising: an image input unit configured to input a stereo image; a face recognizing unit configured to recognize a face from the stereo image; a depth information calculation unit configured to calculate depth information of the recognized face on the basis of the recognized face; a face tracking unit configured to track a size and a shape of the face depending on a movement direction when the recognized face moves; and a controller configured to control an operation of the face recognizing unit, the face tracking unit, and the depth information processing unit to generate depth information of the recognized face depending on an optical flow. 2. The optical flow accelerator of claim 1, wherein the controller is configured to erase a background, excluding the recognized face, from the input image, and analyze the face movement information based on a partial image without the background to generate depth information depending on an optical flow. 3. The optical flow accelerator of claim 1, wherein the face tracking unit is configured to track the recognized face by frames to track the movement of the face when the recognized face moves. 4. The optical flow accelerator of claim 1, wherein the face recognizing unit is configured to locate the largest face in the input image to recognize a user for face recognition. 5. The optical flow accelerator of claim 1, wherein the depth information processing unit is configured to obtain depth ...
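
As an algorithmic stand-in for the optical-flow computation (the patent targets a hardware accelerator), here is a single-window Lucas-Kanade estimate in plain NumPy on synthetic data; the window contents and sizes are made up.

import numpy as np

def lucas_kanade(prev, curr):
    """Least-squares solution of Ix*dx + Iy*dy = -It over one tracking window."""
    Iy, Ix = np.gradient(prev)                  # spatial gradients (rows = y, cols = x)
    It = curr - prev                            # temporal difference
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    (dx, dy), *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return dx, dy

# Smooth synthetic "face window" and the same window shifted one pixel to the right.
x = np.linspace(0.0, 4.0 * np.pi, 64)
window = np.sin(x)[None, :] + 0.5 * np.cos(0.5 * x)[:, None]
moved = np.roll(window, shift=1, axis=1)
print("estimated motion (dx, dy):", lucas_kanade(window, moved))   # roughly (1, 0)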

Publication date: 27-06-2013

Image sensing apparatus, information processing apparatus, control method, and storage medium

Number: US20130163814A1
Author: Hideo Takiguchi
Assignee: Canon Inc

Face recognition data to be used in recognizing a person corresponding to a face image is managed upon associating the feature amount of the face image, a first person's name, and a second person's name different from the first person's name with each other for each registered person. A person corresponding to a face image included in a captured image is identified using the feature amount managed in the face recognition data, and the second person's name for the identified person is stored in a storage in association with the captured image. When the image stored in the storage is read out and displayed on a display device, the first person's name which corresponds to the second person's name associated with the readout image is displayed on the display device together with the readout image.

Publication date: 27-06-2013

INFORMATION PROCESSING APPARATUS, CONTROL METHOD THEREFOR, AND STORAGE MEDIUM

Number: US20130163830A1
Author: Matsushita Takahiro
Assignee: CANON KABUSHIKI KAISHA

An information processing apparatus generates, for each person, a first face dictionary storing face data concerning a face of the person included in an image. The apparatus receives, from an imaging apparatus, a second face dictionary that stores face data of the person corresponding to the first face dictionary and that may have been updated by the imaging apparatus using images obtained on it, and stores the received face data. The information processing apparatus transmits the first face dictionary to the imaging apparatus when an update date and time of the second face dictionary is older than a date and time of the first face dictionary. The information processing apparatus does not transmit the first face dictionary to the imaging apparatus when the first face dictionary includes face data of the person included in an image captured outside of a predetermined period. 1. An information processing apparatus comprising: a generation unit configured to generate, for each person, a first face dictionary storing face data concerning a face of the person included in an image; a reception unit configured to receive a second face dictionary storing face data of a person corresponding to the first face dictionary from an imaging apparatus, the second face dictionary being updated by the imaging apparatus based on face data of the person included in an image obtained by the imaging apparatus; a storage unit configured to store the face data stored in the received second face dictionary in a predetermined storage device; and a transmission unit configured to transmit the first face dictionary to the imaging apparatus when an update date and time of the second face dictionary is older than a date and time of the first face dictionary, the second face dictionary being overwritten with the transmitted first face dictionary in the imaging apparatus, wherein said transmission unit does not transmit the first face dictionary to the imaging apparatus if the first face dictionary includes face data of ...

Publication date: 27-06-2013

SECURITY DEVICE WITH SECURITY IMAGE UPDATE CAPABILITY

Number: US20130163833A1
Assignee: UTC FIRE & SECURITY CORPORATION

An exemplary security device includes a controller that determines whether a security credential for an individual corresponds to an authorized credential. The controller also determines whether an acquired image of the individual corresponds to a reference image that is associated with the authorized credential. The controller determines if a correspondence between the acquired image and the reference image satisfies a selected criterion when the security credential corresponds to the authorized credential. The controller updates the reference image responsive to the selected criterion being satisfied. The controller updates the reference image by including the acquired image into the reference image and weighting the acquired image higher than other image information previously incorporated into the reference image. 1. A security device , comprising:a controller that is configured todetermine whether a security credential for an individual corresponds to an authorized credential and an acquired image of the individual corresponds to a reference image that is associated with the authorized credential,determine if a correspondence between the acquired image and the reference image satisfies a selected criterion when the security credential corresponds to the authorized credential, andupdate the reference image responsive to the selected criterion being satisfied, the controller updating the reference image by including the acquired image into the reference image and weighting the acquired image higher than other image information previously incorporated into the reference image.2. The security device of claim 1 , wherein the reference image comprises an image of the individual's face and the acquired image comprises an image of the individual's face.3. The security device of claim 1 , comprisinga credential reader that acquires the security credential from the individual and provides an indication of the acquired security credential to the controller; anda camera ...
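
A short Python sketch of the reference-update rule only: once the credential and image checks pass, the newly acquired image is blended into the stored reference with the higher weight. The blend weight and similarity threshold are assumptions, not the product's values.

import numpy as np

def update_reference(reference, acquired, new_weight=0.7):
    """Weighted blend; the acquired image is weighted higher than the accumulated history."""
    return new_weight * acquired + (1.0 - new_weight) * reference

rng = np.random.default_rng(4)
reference = rng.random((32, 32))        # image information accumulated so far
acquired = rng.random((32, 32))         # face image captured at the door
match_score, threshold = 0.83, 0.8      # invented similarity criterion
if match_score >= threshold:            # credential already verified upstream
    reference = update_reference(reference, acquired)
print("updated reference mean:", reference.mean())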

Publication date: 04-07-2013

COMPUTER-IMPLEMENTED METHOD, A COMPUTER PROGRAM PRODUCT AND A COMPUTER SYSTEM FOR IMAGE PROCESSING

Number: US20130170738A1

The present description refers in particular to a computer-implemented method, a computer program product and a computer system for image processing, the method comprising: receiving at least one user image; identifying a plurality of image classification elements of the user image by: assigning an initial classification to the user image, wherein the initial classification is based on temporal data associated with the user image; determining at least one image label that globally describes content of the user image; calculating a label correctness value for each image label; recognizing at least one image component of the user image; calculating a component correctness value for each image component; correlating the image label and the image component using the label correctness value and the component correctness value, whereby a correlated image label and a correlated image component are identified; applying a rule to determine a category of the user image, wherein the rule is based on at least one of the following: the temporal data, the correlated image label and the correlated image component; and producing a final classification of the user image including the following image classification elements: the initial classification, the correlated image label, the correlated image component, and the category. 1. A computer-implemented method for image processing, the method comprising: receiving (701) at least one user image (506); assigning an initial classification to the user image, wherein the initial classification is based on temporal data associated with the user image; determining at least one image label that globally describes content of the user image; calculating a label correctness value for each image label; recognizing (705) at least one image component (508, 510) of the user image; calculating a component correctness value for each image component (508, 510); ...

Publication date: 11-07-2013

Device and method for internally and externally assessing whitelists

Number: US20130177238A1
Author: Hiroaki Yoshio
Assignee: Panasonic Corp

A white list inside or outside determining apparatus includes: a first feature data extracting unit which extracts first feature data from an image by using a first transformation formula created based on preliminary learning images; a second feature data extracting unit which extracts second feature data from an image by using a second transformation formula created from the preliminary learning images and application learning images; a first matching unit which performs matching between a registration image and a collation image by using the first transformation formula; and a second matching unit which performs matching between a registration image and a collation image by using the second transformation formula. Weights of a matching result of the first matching unit and a matching result of the second matching unit are changed according to the number of preliminary learning images and the number of application learning images.

Publication date: 18-07-2013

INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD

Number: US20130182914A1
Authors: KONDO Masao, SAKAI Yusuke
Assignee: SONY CORPORATION

An information processing apparatus may include a user recognition unit to recognize a user in a captured image, and a behavior recognition unit to recognize a behavior of a user. In addition, the apparatus may include a generation unit to generate user behavior information including information of the recognized user and the recognized behavior of the recognized user. Further, the apparatus may include a communication unit to transmit the user behavior information to an external apparatus. 1. An information processing apparatus comprising:a user recognition unit to recognize a user in a captured image;a behavior recognition unit to recognize a behavior of a user;a generation unit to generate user behavior information including information of the recognized user and the recognized behavior of the recognized user; anda communication unit to transmit the user behavior information to an external apparatus.2. The apparatus of claim 1 , wherein the user recognition unit recognizes the user in the captured image based on face detection.3. The apparatus of claim 1 , wherein the user behavior information indicates at least one of an act of appearing in the captured image claim 1 , a facial behavior claim 1 , speech or an operation action.4. The apparatus of further comprising:an image capture unit to capture the captured image.5. The apparatus of claim 1 , wherein the user behavior information includes a part of the captured image.6. An information processing apparatus comprising:a communication unit to receive from an external apparatus user behavior information including information of a recognized user and a recognized behavior of the recognized user; anda generation unit to control display on a display screen of an image including a user image corresponding to the information of the recognized user and a visual representation corresponding to the information of the recognized behavior.7. The apparatus of claim 6 , wherein the visual representation indicates at least one ...

Publication date: 18-07-2013

IMAGE CAPTURE APPARATUS, CONTROL METHOD OF IMAGE CAPTURE APPARATUS, AND RECORDING MEDIUM

Number: US20130182919A1
Author: Tanaka Shuya
Assignee: CANON KABUSHIKI KAISHA

An image capture apparatus include transmission unit for transmitting first feature data concerning a face region included in an image captured by an image sensor to an external apparatus, reception unit for receiving a matching result between the first feature data and second feature data concerning a sub object from the external apparatus, storage unit for storing third feature data concerning a main object in a predetermined storage area in advance, matching unit for matching the first feature data with the third feature data and display unit for identifiably displaying, on a display device, the face region recognized as the sub object in the matching result received by the reception unit and the face region recognized as the main object in the matching result obtained by the matching unit. 1. An image capture apparatus comprising:transmission unit for transmitting first feature data concerning a face region included in an image captured by an image sensor to an external apparatus;reception unit for receiving a matching result between the first feature data and second feature data concerning a sub object from the external apparatus;storage unit for storing third feature data concerning a main object in a predetermined storage area in advance;matching unit for matching the first feature data with the third feature data; anddisplay unit for identifiably displaying, on a display device, the face region recognized as the sub object in the matching result received by the reception unit and the face region recognized as the main object in the matching result obtained by the matching unit.2. The apparatus according to claim 1 , wherein the transmission unit transmits one of the captured image claim 1 , a face image extracted from the captured image or a feature amount calculated from the face image to the external apparatus as the first feature data.3. The apparatus according to claim 1 , further comprising recording unit for claim 1 , when the face region recognized as ...

Publication date: 25-07-2013

METHOD FOR FINDING AND DIGITALLY EVALUATING ILLEGAL IMAGE MATERIAL

Number: US20130188842A1
Author: HAUKE Rudolf
Assignee: ATG Advanced Swiss Technology Group AG

A method for finding and digitally evaluating illegal image material is provided, wherein a data memory is searched for image material. Image material that is found is classified as potentially illegal image material or as legal image material by means of a classification method on the basis of an image content that is presented. The image material graded as potentially illegal has the age of the persons shown determined, and potentially illegal image material which shows at least one person whose ascertained age is below a prescribed age is graded as illegal image material. Biometric features of the persons shown in the illegal image material are detected and are compared with at least one database which contains biometric features. In the illegal image material, at least one further feature which it contains is detected and is compared with at least one appropriate database. 1. A method for finding and digitally evaluating illegal image material , wherein a data memory is searched for image material , the method comprising:classifying found image material via a classification method based on a depicted image content as potentially illegal image material or as legal image material;performing an age determination of the depicted persons in the image material classified as potentially illegal and potentially illegal image material that shows at least one person whose determined age falls below a predetermined age and is classified as illegal image material;detecting biometric features of the persons shown in the illegal image material and comparing the biometric features with at least one database containing biometric features; anddetecting at least one contained additional feature in the illegal image material and comparing the at least one contained additional feature with at least one relevant database.2. The method according to claim 1 , wherein that the predetermined age is predetermined by a user of the method.3. The method according to claim 1 , wherein body ...

Publication date: 25-07-2013

Automated indexing for distributing event photography

Number: US20130188844A1
Author: David A. Goldberg
Assignee: Hysterical Sunset Ltd

The present method relates to the automated indexing of event images for distribution. The automated indexing can use automated facial recognition to determine which people are in each image. The images indexed in this fashion can be presented in a gallery, ordered by characteristics of the people in the images such as their name or room number, so as to facilitate the selection of the images by the people. The identification of the people in the images can be assisted by security or other information regarding the people that may be available to the event manager. Furthermore, the closeness of the relationships of two people can be inferred from the degree to which the people are in the same images, allowing the people in the images to be placed into groups, which can be hierarchical and/or overlapping, and which can assist in the organization of images being presented to the people, either in a gallery or electronic display format.
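
The grouping-by-co-occurrence idea lends itself to a short sketch: count how often two people appear in the same photo and merge them into one group when the count passes a threshold. The photo lists, names, and threshold below are invented test data, not part of the patent.

from collections import Counter
from itertools import combinations

photos = [{"alice", "bob"}, {"alice", "bob", "carol"}, {"dave"}, {"alice", "bob"}]

pair_counts = Counter()
for people in photos:
    for a, b in combinations(sorted(people), 2):
        pair_counts[(a, b)] += 1

parent = {}
def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]    # path halving
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

for (a, b), shared in pair_counts.items():
    if shared >= 2:                      # assumed closeness threshold
        union(a, b)

groups = {}
for person in {p for people in photos for p in people}:
    groups.setdefault(find(person), set()).add(person)
print(list(groups.values()))             # e.g. [{'alice', 'bob'}, {'carol'}, {'dave'}]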

Publication date: 01-08-2013

Real-Time Face Tracking in a Digital Image Acquisition Device

Number: US20130195318A1

An image processing apparatus for tracking faces in an image stream iteratively receives an acquired image from the image stream including one or more face regions. The acquired image is sub-sampled at a specified resolution to provide a sub-sampled image. An integral image is then calculated for a least a portion of the sub-sampled image. Fixed size face detection is applied to at least a portion of the integral image to provide a set of candidate face regions. Responsive to the set of candidate face regions produced and any previously detected candidate face regions, the resolution is adjusted for sub-sampling a subsequent acquired image. 1. A method of detecting faces in an image stream , the method comprising:receiving an acquired image from said image stream including one or more face regions;sub-sampling said acquired image at a specified resolution to provide a sub-sampled image;identifying one or more regions of said acquired image predominantly including skin tones by applying one of a number of filters to define said one or more regions of said acquired image predominantly including skin tones;calculating a corresponding integral image for at least one of said skin tone regions of said sub-sampled acquired image;applying face detection to at least a portion of said integral image to provide a set of one or more candidate face regions each having a given size and a respective location; andadjusting focus distance incrementally for each of multiple new acquired images at least until at least one candidate face region is detected.2. A method as claimed in claim 1 , wherein the applying face detection comprises applying relaxed face detection parameters.3. A method as claimed in claim 1 , wherein responsive to failing to detect at least one face region for said image claim 1 , the method further comprises enhancing the contrast of the luminance characteristics for at least a region corresponding to one of said skin tone regions in a subsequently acquired image ...
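
A small NumPy sketch of the per-frame preprocessing described above: sub-sample the acquired image, build an integral image, and read any rectangular sum in constant time for the fixed-size detector. Detection and resolution adjustment are omitted; the factor and test data are assumptions.

import numpy as np

def subsample(image, factor=4):
    return image[::factor, ::factor]

def integral_image(image):
    """ii[y, x] = sum of all pixels above and to the left of (y, x), inclusive."""
    return image.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Inclusive rectangle sum read from the integral image with four lookups."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

rng = np.random.default_rng(5)
frame = rng.integers(0, 255, (128, 128)).astype(np.int64)
small = subsample(frame)
ii = integral_image(small)
assert rect_sum(ii, 2, 3, 10, 12) == small[2:11, 3:13].sum()
print("rectangle sum via integral image:", rect_sum(ii, 2, 3, 10, 12))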

Publication date: 08-08-2013

METHOD OF RECONSTRUCTING THREE-DIMENSIONAL FACIAL SHAPE

Number: US20130202162A1

A method of reconstructing a three-dimensional (3D) facial shape with super resolution even from a short moving picture having a front facial image by acquiring a super-resolution facial image by applying, as a weighting factor, a per-unit-patch similarity between a target frame and frames remaining after excluding the target frame from among a plurality of continuous frames including the front facial image, and reconstructing the 3D facial shape based on the acquired super-resolution facial image. 1. A method of reconstructing a three-dimensional (3D) facial shape , the method comprising:designating a target frame to be used for 3D reconstruction from among a plurality of frames including a front facial image;recognizing a facial region in each of the plurality of frames and at least one feature point region in the facial region;warping frames remaining after excluding the target frame from among the plurality of frames, to match a facial region and at least one feature point region in each of the remaining frames with a facial region and at least one feature point region of the target frame, respectively;magnifying the facial regions of the target frame and the warped remaining frames to a size corresponding to a super resolution higher than that of the front facial image;dividing each of the magnified facial regions of the target frame and the warped remaining frames into a plurality of unit patches; andacquiring a super-resolution facial image for 3D facial shape reconstruction by deforming all pixels forming the facial region of the target frame based on similarities between unit patches of the target frame and unit patches of each of the remaining frames.2. The method of claim 1 , wherein the magnifying comprises magnifying the facial regions by using a simple interpolation scheme.3. The method of claim 1 , wherein the warping comprises deforming each of the remaining frames to respectively match relative locations of a facial region and at least one feature ...

Publication date: 15-08-2013

HUMAN FACIAL IDENTIFICATION METHOD AND SYSTEM, AS WELL AS INFRARED BACKLIGHT COMPENSATION METHOD AND SYSTEM

Number: US20130208953A1
Author: Yuan Juntao
Assignee: HANWANG TECHNOLOGY CO LTD

A human facial identification method and system are disclosed, which belong to the mode identification technical field and intend to improve veracity of human facial identification implemented in outdoor environment. The human facial identification method includes: driving a group of infrared lamps by a high frequency pulse signal to generate infrared backlight, collecting identified human facial features in irradiation of the infrared backlight, comparing the collected identified human facial features with a human facial template to complete the human facial identification. A human facial compensation method and system are also disclosed, which are intend to improve luminance of infrared backlight. The technical solutions are mostly applied to the human image identification field. 1. A human facial identification method , characterized in that it comprises:driving a group of infrared lamps with a high-frequency pulse signal to generate infrared backlight;collecting features of a human face to be identified under illumination of the infrared backlight; andcomparing the collected features of the human face to be identified with human facial templates to identify the human face.2. The human facial identification method according to claim 1 , characterized in that it further comprises generating the high-frequency pulse signal before driving the group of infrared lamps with the high-frequency pulse signal to generate the infrared backlight claim 1 , wherein generating the high-frequency pulse signal comprises:generating a clock signal by an active crystal oscillator; andgenerating the high-frequency pulse signal by dividing frequency of the clock signal.3. The human facial identification method according to claim 1 , characterized in that driving the group of infrared lamps by the high-frequency pulse signal to generate the infrared backlight comprises:turning on the group of infrared lamps when the high-frequency pulse signal switches to a high level;resetting and ...

Publication date: 29-08-2013

Apparatus and method for identifying fake face

Number: US20130223681A1
Assignee: SUPREMA INC

An apparatus for identifying a fake face is provided. A first eye image acquirer acquires a first eye image by taking a picture of a subject while radiating a first ray having a first wavelength. A second eye image acquirer acquires a second eye image by taking a picture of the subject while radiating a second ray having a second wavelength that is shorter than the first wavelength. A controller extracts a first area and a second area having brighter lightness than the first area from each of the first and second eye images, calculates a lightness of the first area and a lightness of the second area in the first eye image, and a lightness of the first area and a lightness of the second area in the second eye image, and determines whether the subject uses a fake face based on the calculated lightness.
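
A hedged illustration of one possible decision rule in this spirit: compare the bright-to-dark contrast of the eye regions between the long- and short-wavelength images and flag the subject when the contrast barely changes. The ratio test, threshold, and synthetic data are assumptions, not the claimed method.

import numpy as np

def region_lightness(image, mask):
    return float(image[mask].mean())

def looks_fake(img_long, img_short, mask_dark, mask_bright, max_ratio_gap=0.15):
    """Flag the subject when the bright/dark contrast barely changes across wavelengths."""
    contrast_long = region_lightness(img_long, mask_bright) / region_lightness(img_long, mask_dark)
    contrast_short = region_lightness(img_short, mask_bright) / region_lightness(img_short, mask_dark)
    return abs(contrast_long - contrast_short) < max_ratio_gap

rng = np.random.default_rng(6)
mask_bright = np.zeros((32, 32), dtype=bool)
mask_bright[8:16, 8:16] = True
mask_dark = ~mask_bright
live_long = rng.uniform(80, 100, (32, 32)); live_long[mask_bright] += 60
live_short = rng.uniform(80, 100, (32, 32)); live_short[mask_bright] += 20   # contrast drops at the shorter wavelength
print("live subject flagged as fake?", looks_fake(live_long, live_short, mask_dark, mask_bright))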

Publication date: 29-08-2013

Demographic Analysis of Facial Landmarks

Number: US20130223694A1

A set of training vectors may be identified. Each training vector may be mapped to either a male gender or a female gender, and each training vector may represent facial landmarks derived from a respective facial image. An input vector of facial landmarks may also be identified. The facial landmarks of the input vector may be derived from a particular facial image. A feature vector may containing a subset of the facial landmarks may be selected from the input vector. A weighted comparison may be performed between the feature vector and each of the training vectors. Based on a result of the weighted comparison, the particular facial image may be classified as either the male gender or the female gender. 1. A method comprising:identifying a set of training vectors, wherein each training vector is mapped to either a male gender or a female gender, and wherein each training vector represents facial landmarks derived from a respective facial image;identifying an input vector of facial landmarks, wherein the facial landmarks of the input vector are derived from a particular facial image;selecting, from the input vector, a feature vector containing a subset of the facial landmarks;performing, by a computing device, a weighted comparison between the feature vector and each of the training vectors; andbased on a result of the weighted comparison, classifying the particular facial image as either the male gender or the female gender.2. The method of claim 1 , wherein the feature vector is less than one-half the size of the input vector.3. The method of claim 1 , wherein the feature vector is less than one-quarter the size of the input vector.4. The method of claim 1 , wherein selecting the feature vector comprises using a memetic algorithm to select the feature vector claim 1 , wherein the memetic algorithm determines the fitness of a candidate feature vector based on a classification accuracy of a facial image associated with the candidate feature vector claim 1 , and ...
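
A plain-NumPy sketch of the weighted comparison step: a landmark subset (the feature vector) is compared against gender-labelled training vectors with per-dimension weights, and the input takes the label of the closest sample. The subset indices, weights, and data are invented for illustration.

import numpy as np

def classify(feature, train_vectors, train_labels, weights):
    d = np.sqrt((((train_vectors - feature) ** 2) * weights).sum(axis=1))
    return train_labels[int(np.argmin(d))]

rng = np.random.default_rng(7)
subset = np.arange(0, 20)                          # keep 20 of 80 landmarks (less than one-quarter)
males = rng.normal(loc=0.0, size=(50, 80))
females = rng.normal(loc=1.0, size=(50, 80))
train = np.vstack([males, females])[:, subset]
labels = np.array(["male"] * 50 + ["female"] * 50)
weights = rng.uniform(0.5, 1.5, size=subset.size)  # stands in for learned comparison weights

input_vector = rng.normal(loc=1.0, size=80)        # landmarks from one facial image
print("classified as:", classify(input_vector[subset], train, labels, weights))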

Publication date: 29-08-2013

Face Tracking for Controlling Imaging Parameters

Number: US20130223698A1

A method of tracking faces in an image stream with a digital image acquisition device includes receiving images from an image stream including faces, calculating corresponding integral images, and applying different subsets of face detection rectangles to the integral images to provide sets of candidate regions. The different subsets include candidate face regions of different sizes and/or locations within the images. The different candidate face regions from different images of the image stream are each tracked. 1acquiring an image received from an image stream including one or more face regions;calculating an integral image for at least a portion of the image or a sub-sampled version of at least a portion of the image, or both;applying face detection to at least a portion of the integral image to provide a set of one or more candidate face regions;using a database, including applying face recognition to one or more candidate face regions and providing an identifier for a recognized face;storing the identifier for the recognized face in association with at least one image or portion thereof that include the recognized face from the image stream; andadjusting white balance, color balance, focus, or exposure, or combinations thereof, for the recognized face.. A face detection and recognition method, comprising: This application is a continuation of U.S. patent application Ser. No. 12/814,245, filed Jun. 11, 2010; which is a continuation of U.S. patent application Ser. No. 12/479,593, filed Jun. 5, 2009, now U.S. Pat. No. 7,916,897; which is a continuation-in-part (CIP) of U.S. patent application Ser. No. 12/141,042, filed Jun. 17, 2008, now U.S. Pat. No. 7,620,218; which claims priority to U.S. Ser. No. 60/945,558, filed Jun. 21, 2007; and which is a CIP of U.S. Ser. No. 12/063,089, filed Feb. 6, 2008, now U.S. Pat. No. 8,055,029; which is a CIP of U.S. Ser. No. 11/766,674, filed Jun. 21, 2007, now U.S. Pat. No. 7,460,695; which is a CIP of U.S. Ser. No. 11/753,397, ...

Publication date: 29-08-2013

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER-READABLE MEDIUM

Number: US20130223738A1
Author: Obara Eiki

According to one embodiment, an image processing apparatus includes a detail extraction module, a detail addition control module, and a detail component addition module. The detail extraction module extracts a detail component from an image signal of one frame. The detail addition control module controls an addition quantity of a detail component. The detail component addition module adds a detail component controlled by the detail addition control module to the image signal. 1. An image processing apparatus comprising:a detail extraction module configured to extract a detail component from an image signal of one frame;a detail addition control module configured to control an addition quantity of a detail component; anda detail component addition module configured to add a detail component controlled by the detail addition control module to the image signal.2. The apparatus of claim 1 , whereinthe detail addition control module is configured to decrease an addition quantity of a detail component from a lower end of the frame toward an upper end of the frame.3. The apparatus of claim 1 , whereinthe detail addition control module is configured to decrease an addition quantity of a detail component from a short-distance region toward a long-distance region in the frame.4. The apparatus of claim 1 , further comprising:a detection module configured to detect a face region or a skin color region in the image signal,wherein the detail addition control module is configured fix an addition quantity at a fixed quantity of a detail component for the face region or the skin color region detected by the detection module.5. The apparatus of claim 1 , further comprising:a detection module configured to detect a face region or a skin color region in the image signal,wherein the detail addition control module is configured to fix an addition quantity at a fixed quantity of a detail component for a whole frame, if a rate of the face region or the skin color region detected by the ...
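
A NumPy sketch of the three modules in sequence: extract a detail component as the image minus a blur, scale it with an addition quantity that decreases from the bottom of the frame toward the top, and add it back. The box blur and the linear gain ramp are assumptions, not the claimed circuits.

import numpy as np

def box_blur(img, k=5):
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def enhance(frame):
    detail = frame - box_blur(frame)                          # detail extraction module
    gain = np.linspace(0.2, 1.0, frame.shape[0])[:, None]     # addition control: small at the top row, large at the bottom
    return frame + gain * detail                              # detail component addition module

rng = np.random.default_rng(8)
frame = rng.uniform(0, 255, (48, 64))
out = enhance(frame)
residual = np.abs(out - box_blur(out))
print("added detail, top rows vs bottom rows:", residual[:8].mean(), residual[-8:].mean())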

Publication date: 05-09-2013

Face Recognition Using Face Tracker Classifier Data

Number: US20130229545A1

A method of determining face recognition profiles for a group persons includes determining with a multi-classifier face detector that a face region within a digital image has above a threshold probability of corresponding to a first person of the group, and recording probability scores which are analyzed for each classifier, including determining a mean and variance for each classifier for the first person. The process is repeated for one or more other persons of the group. A sub-set of classifiers is determined which best differentiates between the first person and the one or more other persons. The sub-set of classifiers is stored in association with the first person as a recognition profile. 1. (canceled)2. A method of in-camera face recognition training of a specific face within digital images acquired with a portable camera-enabled device , comprising:using a lens, image sensor and processor of a portable camera-enabled device to acquire digital images;generating in the device, capturing or otherwise obtaining in the device multiple different images that include a face of a specific person;identifying groups of pixels that correspond to the face within the multiple different images;tracking the face within the multiple different images, wherein the tracking is performed in parallel with determining whether the identified face corresponds to the specific person;selecting sets of classifiers as matching the faces identified in the multiple different images;statistically analyzing the sets of classifiers to generate one or more reference classifier profiles of the face associated with the specific person, wherein the statistically analyzing comprises determining variance values for the sets of classifiers;normalizing the reference classifier profiles to determine normalized face classifiers of an average face associated with the specific person;generating a face recognition profile for the specific person based on the normalized face classifiers of the average ...

Publication date: 12-09-2013

Computationally Efficient Feature Extraction and Matching Iris Recognition

Number: US20130236067A1
Assignee: BIOMETRICORE, INC.

A method and system for uniquely identifying a subject based on an iris image. After obtaining the iris image, the method produces a filtered iris image by applying filters to the iris image to enhance discriminative features of the iris image. The method analyzes an intensity value for pixels in the filtered iris image to produce an iris code that uniquely identifies the subject. The method also creates a segmented iris image by detecting an inner and outer boundary for an iris region in the iris image, and remapping pixels in the iris region, represented in a Cartesian coordinate system, to pixels in the segmented iris image, represented in a log-polar coordinate system, by employing a logarithm representation process. The method also creates a one-dimensional iris string from the iris image by unfolding the iris region by employing a spiral sampling method to obtain sample pixels in the iris region, wherein the sample pixels are the one-dimensional iris string. 114-. (canceled)15. A method for creating a segmented iris image from an image of an eye of a subject , comprising:obtaining the image of the eye;detecting an iris region in the image of the eye, the iris region having an inner boundary and an outer boundary; andremapping pixels in the iris region, represented in a Cartesian coordinate system, to pixels in the segmented iris image, represented in a log-polar coordinate system, by employing a logarithm representation process.16. The method of claim 15 , wherein the detecting of the iris region further comprises:detecting a first approximate circle to define the inner boundary of the iris region, wherein the inner boundary is between a pupil of the eye and the iris;detecting a second approximate circle to define the outer boundary of the iris region, wherein the outer boundary is between the iris and a sclera of the eye;applying an iris boundary fitting process to a number of initial points on the first approximate circle to refine the inner boundary of the ...
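
A minimal sketch of the Cartesian-to-log-polar remapping step, assuming the inner and outer iris boundaries are already known; the sampling density and nearest-neighbour interpolation are choices made for brevity, not the patented pipeline.

import numpy as np

def unroll_iris(image, center, r_inner, r_outer, n_radial=32, n_angular=128):
    cy, cx = center
    # log-spaced radii between the two boundaries, uniform angles around the eye
    radii = np.exp(np.linspace(np.log(r_inner), np.log(r_outer), n_radial))
    angles = np.linspace(0.0, 2.0 * np.pi, n_angular, endpoint=False)
    ys = cy + radii[:, None] * np.sin(angles)[None, :]
    xs = cx + radii[:, None] * np.cos(angles)[None, :]
    # nearest-neighbour sampling keeps the sketch dependency-free
    return image[np.clip(ys.round().astype(int), 0, image.shape[0] - 1),
                 np.clip(xs.round().astype(int), 0, image.shape[1] - 1)]

rng = np.random.default_rng(9)
eye = rng.uniform(0, 255, (200, 200))
strip = unroll_iris(eye, center=(100, 100), r_inner=30, r_outer=80)
print("segmented (log-polar) iris image shape:", strip.shape)   # (32, 128)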

Publication date: 12-09-2013

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM

Number: US20130236072A1
Assignee: SONY CORPORATION

An image processing apparatus includes a face detector detecting face images from still-image frames successively extracted from a moving-image stream in accordance with image information items regarding the still-image frames, a face-feature-value calculation unit calculating face feature values of the face images in accordance with image information items regarding the face images, an identity determination unit determining whether a first face image in a current frame and a second face image in a previous frame represent an identical person in accordance with at least face feature values of the first and second face images, and a merging processor which stores one of the first and second face images when the first face image and the second face image represent an identical person, and which stores the first and second face images when the first face image and the second face image do not represent an identical person. 120.-. (canceled)21. An image processing apparatus comprising:a face detector configured to detect face images included in a plurality of still images received by internet communication;an identity determination unit configured to determine whether a detected first face image included in a first image and a detected second face image included in a second image which has been stored represent an identical person; anda display processing unit configured to display one of the first and second face images in a browsing application which displays characters in a plurality of still images when it is determined that the first face image and the second face image represent the identical person.22. The image processing apparatus of claim 21 , comprising a representative-image determination unit configured to determine a representative face image from a plurality of face images that are determined by the identity determination unit to represent an identical person.23. The image processing apparatus of claim 22 , wherein the display processing unit is ...

Publication date: 19-09-2013

SYSTEM AND METHOD FOR DYNAMIC ADAPTION OF MEDIA BASED ON IMPLICIT USER INPUT AND BEHAVIOR

Number: US20130243270A1
Authors: Ferens Ron, Kamhi Gila

A system and method for dynamically adapting media having multiple scenarios presented on a media device to a user based on characteristics of the user captured from at least one sensor. During presentation of the media, the at least one sensor captures user characteristics, including, but not limited to, physical characteristics indicative of user interest and/or attentiveness to subject matter of the media being presented. The system determines the interest level of the user based on the captured user characteristics and manages presentation of the media to the user based on determined user interest levels, selecting scenarios to present to the user on user interest levels. 1. An apparatus for dynamically adapting presentation of media to a user , said apparatus comprising:a face detection module configured to receive an image of a user and detect a facial region in said image and identify one or more user characteristics of said user in said image, said user characteristics being associated with corresponding subject matter of said media; anda scenario selection module configured to receive data related to said one or more user characteristics and select at least one of a plurality of scenarios associated with media for presentation to said user based, at least in part, on said data related to said one or more user characteristics.2. The apparatus of claim 1 , wherein said scenario selection module comprises:an interest level module configured to determine a user's level of interest in said subject matter of said media based on said data related to said one or more user characteristics; anda determination module configured to identify said at least one scenario for presentation to said user based on said data related to said user's level of interest, said at least one identified scenario having subject matter related to subject mater of interest to said user.3. The apparatus of claim 1 , wherein said received image of said user further comprises information ...

Publication date: 19-09-2013

Method and System for Multi-Modal Identity Recognition

Number: US20130246270A1
Assignee: O2 Micro Inc.

A device, a system and a method are provided for multi-modal identity recognition. The device includes a face recognition unit, a voice recognition unit, and a control unit. The face recognition unit is configured for generating a first recognition result by obtaining and processing face recognition information of a customer and by comparing the processed face recognition information with face recognition information stored in a facial feature database. The voice recognition unit is configured for generating a second recognition result by obtaining and processing voice recognition information of a customer and by comparing the processed voice recognition information with voice recognition information stored in an audio signature database. The control unit is configured for confirming an identity of the customer based on the first recognition result and the second recognition result. 1. An identity recognition device , comprising:a face recognition unit, configured for generating a first recognition result by obtaining and processing face recognition information of a customer and by comparing the processed face recognition information with face recognition information stored in a facial feature database;a voice recognition unit, configured for generating a second recognition result by obtaining and processing voice recognition information of a customer and by comparing the processed voice recognition information with voice recognition information stored in an audio signature database; anda control unit, configured for confirming an identity of the customer based on the first recognition result and the second recognition result.2. The identity recognition device of claim 1 , whereinthe processed face recognition information comprises first face recognition information and second face recognition information;the face recognition unit obtains the first face recognition information and the second face recognition information based on a first facial image from a first ...
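
A minimal fusion sketch for the control unit's confirmation step, assuming both matchers return similarity scores in [0, 1]; the weights and threshold are invented, and the per-modality matchers are assumed to exist upstream.

def confirm_identity(face_score, voice_score, w_face=0.6, w_voice=0.4, threshold=0.75):
    """Combine face and voice similarities into one confidence and confirm when it is high enough."""
    fused = w_face * face_score + w_voice * voice_score
    return fused >= threshold, fused

print(confirm_identity(0.9, 0.7))   # (True, 0.82)  -> both modalities agree on the enrolled customer
print(confirm_identity(0.9, 0.3))   # (False, 0.66) -> voice disagrees, identity not confirmed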

Publication date: 26-09-2013

Facial Features Detection

Number: US20130251202A1
Assignee: ST-Ericsson SA

There is described a method for facial features detection in a picture frame containing a skin tone area, comprising dividing the skin tone area into a number of parts; and for each part of the skin tone area, constructing a luminance map, constructing an edge map by extracting edges from the luminance map, defining an edge magnitude threshold, building a binary map from the edge map by keeping only the edges having a magnitude beyond the defined edge magnitude threshold and eliminating the others; and then extracting facial features from the built binary map. An inter-related facial features detector is further described. 1-14. (canceled)15. A method for facial features detection in a picture frame containing a skin tone area , comprising:dividing the skin tone area into a number of parts; and, for each part of the skin tone area: constructing a luminance map; constructing an edge map by extracting edges from the luminance map; defining an edge magnitude threshold; building a binary map from the edge map by keeping only edges having a magnitude beyond the defined edge magnitude threshold and eliminating the others; and extracting facial features from the built binary map.16. The method of claim 15 , wherein the predetermined number of parts is equal to four, with each part representing a respective quarter of the skin tone area.17. The method of claim 15 , wherein, for each part of the skin tone area, the edge magnitude threshold is defined to keep a given percentage of edges based on their ranked edge magnitude values, when building the binary map.18. The method of claim 15 , further comprising:calculating an average luminance within the skin tone area; anddefining an absolute rejection threshold dependent on the calculated average luminance, wherein in each part of the skin tone area, only edges having a magnitude beyond the absolute rejection threshold are kept for building the binary map.19. ...
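
A minimal sketch of the per-part edge-thresholding idea described above, assuming a grayscale crop of the detected skin tone area is available as a NumPy array and that OpenCV is used for the gradients. Function names and the percentage of edges kept are illustrative, not values from the patent.

```python
import numpy as np
import cv2

def binary_edge_maps(skin_area_gray, keep_percent=20.0):
    """Split the skin area into four quarters and build a binary edge map per quarter."""
    h, w = skin_area_gray.shape
    quarters = [skin_area_gray[:h // 2, :w // 2], skin_area_gray[:h // 2, w // 2:],
                skin_area_gray[h // 2:, :w // 2], skin_area_gray[h // 2:, w // 2:]]
    maps = []
    for luma in quarters:
        # Edge map: gradient magnitude of the luminance map.
        gx = cv2.Sobel(luma, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(luma, cv2.CV_32F, 0, 1)
        mag = cv2.magnitude(gx, gy)
        # Edge magnitude threshold chosen so only the strongest edges survive.
        thresh = np.percentile(mag, 100.0 - keep_percent)
        maps.append((mag >= thresh).astype(np.uint8))
    return maps
```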

Publication date: 26-09-2013

Method and Apparatus to Incorporate Automatic Face Recognition in Digital Image Collections

Number: US20130251217A1
Assignee:

A method and apparatus for creating and updating a facial image database from a collection of digital images is disclosed. A set of detected faces from a digital image collection is stored in a facial image database, along with data pertaining to them. At least one facial recognition template for each face in the first set is computed, and the images in the set are grouped according to the facial recognition template into similarity groups. Another embodiment is a naming tool for assigning names to a plurality of faces detected in a digital image collection. A facial image database stores data pertaining to facial images detected in images of a digital image collection. In addition, the naming tool may include a graphical user interface, a face detection module that detects faces in images of the digital image collection and stores data pertaining to the detected faces in the facial image database, a face recognition module that computes at least one facial recognition template for each facial image in the facial image database, and a similarity grouping module that groups facial images in the facial image database according to the respective templates such that similar facial images belong to one similarity group. 1. A method for creating and updating a facial image database from a collection of digital images , comprising:detecting a set of facial images in images from the collection of digital images;detecting a gender of an individual in each image in the set of facial images;grouping the set of facial images into similarity groups based at least in part on the detected genders, wherein facial recognition templates of facial images in each of the similarity groups are within a predetermined range;displaying one or more of the similarity groups in a graphical user interface, wherein each of the similarity groups is substantially separately displayed;receiving user input to confirm or reject individual facial images in one or more of the displayed similarity ...

Publication date: 03-10-2013

System and Method for Matching Faces

Number: US20130259327A1
Assignee:

Disclosed herein are systems, computer-implemented methods, and tangible computer-readable media for matching faces. The method includes receiving an image of a face of a first person from a device of a second person, comparing the image of the face of the first person to a database of known faces in a contacts list of the second person, identifying a group of potential matching faces from the database of known faces, and displaying to the second person the group of potential matching faces. In one variation, the method receives input selecting one face from the group of potential matching faces and displays additional information about the selected one face. In a related variation, the method displays additional information about one or more face in the displayed group of potential matching faces without receiving input. 1. A method comprising:receiving an image of a face of a first person from a device of a second person;identifying a group of potential matching faces from a database of known faces associated with a contacts list of the second person;communicating to the device of the second person images of the group of potential matching faces, wherein the images are associated with the contacts list;receiving from the device of the second person a subcategory in the database of known faces; andremoving faces which are not part of the subcategory from the group of potential matching faces.2. The method of claim 1 , further comprising:receiving an input of a selection of one face from the group of potential matching faces; anddisplaying additional information about the one face.3. The method of claim 2 , the method further comprising adding the image of the face of the first person to the database of known faces with a link to the one face.4. The method of claim 2 , wherein the additional information comprises contact information.5. The method of claim 1 , the method further comprising displaying additional information about a face in the group of potential ...

Publication date: 10-10-2013

MONITORING APPARATUS, METHOD, AND PROGRAM

Number: US20130266196A1
Assignee: Omron Corporation

An entrance and in-store matching unit searches a first mismatched face image that is not matched with a face image captured at an entrance from all face images, which are captured in a store and registered in a biological information DB. An entrance and exit matching unit searches a second mismatched face image that is not matched with a face image captured at an entrance from all the face images, which are captured at an exit and registered in the biological information DB. An exit and in-store matching unit searches a matched face image that is matched with the second mismatched face image among the first mismatched face images. An entrance information registration unit registers the searched matched face image in the biological information DB as the face image captured at the entrance. The present invention can be applied to a monitoring system. 1. A monitoring apparatus comprising:a plurality of image capturers configured to capture a face image of a matching target person in at least first to third areas;storage configured to store the face image of the matching target person captured in each of the first to third areas in a database together with information identifying each of the first to third areas;a first search unit configured to match the face images captured in the first area among the face images registered in the database against the face images captured in the second area, and searching a first mismatched face image that is not matched with the face images captured in the first area for all the face images captured in the second area;a second search unit configured to match the face images captured in the first area among the face images registered in the database against the face images captured in the third area, and searching a second mismatched face image that is not matched with the face images captured in the first area from all the face images captured in the third area;a third search unit configured to search a matched face image, which is ...

Publication date: 17-10-2013

MONITORING APPARATUS, METHOD, AND PROGRAM

Number: US20130272584A1
Assignee: Omron Corporation

A suspicious person determination unit determines whether the face image of a matching target person is registered in a biological information DB by a matching result of a matching unit. When the face image of the matching target person is registered, area storage stores a specified area while correlating the specified area with a personal ID. A provisional registration unit makes a provisional registration of a suspicious person flag while correlating the suspicious person flag with the personal ID when a pattern of the specified area is a behavioral pattern of a suspicious person. A definitive registration unit makes a definitive registration of the suspicious person flag while correlating with the personal ID, when the provisional registration of the suspicious person flag is made for the face image of the matching target person, and when the face image of the matching target person is captured at a premium exchange counter. 1. A monitoring apparatus comprising:a plurality of image capturers configured to capture a face image of a matching target person;an accumulation unit configured to accumulate the face image of an accumulator in an accumulator database;area storage configured to store an area where the face image is captured with respect to each of the plurality of image capturers;an area specifying unit configured to specify the area where the face image of the matching target person is captured by the image capturers based on information stored in the area storage;a matcher configured to perform matching by calculating a degree of similarity between the face image of the matching target person and the face image of the accumulator stored in the accumulator database;a matching determination unit configured to determine whether the face image of the matching target person is matched with the face image registered in the accumulator database by comparing the degree of similarity, which is of a matching result of the matcher, to a predetermined threshold;area ...

Publication date: 14-11-2013

Identifying Facial Expressions in Acquired Digital Images

Number: US20130300891A1
Assignee:

A face is detected and identified within an acquired digital image. One or more features of the face is/are extracted from the digital image, including two independent eyes or subsets of features of each of the two eyes, or lips or partial lips or one or more other mouth features and one or both eyes, or both. A model including multiple shape parameters is applied to the two independent eyes or subsets of features of each of the two eyes, and/or to the lips or partial lips or one or more other mouth features and one or both eyes. One or more similarities between the one or more features of the face and a library of reference feature sets is/are determined. A probable facial expression is identified based on the determining of the one or more similarities. 1. A method of initiating one or more further actions based on recognizing a facial expression and an identity of a face within a digital image , comprising:using a processor;acquiring a digital image;detecting a face within the digital image;applying a facial model to the face to identify the face or a facial expression of the face, or both, within the digital image;separately extracting one or more features of the face within the digital image, including two independent eyes or subsets of features of each of the two eyes, or lips or partial lips or one or more other mouth features and one or both eyes, or both;applying a feature model including multiple shape parameters to said two independent eyes or subsets of features of each of the two eyes, or to said lips or partial lips or one or more other mouth features and one or both eyes, or both;determining one or more similarities between the one or more features of the face and a library of reference feature sets;identifying a probable facial expression based on the determining of the one or more similarities; andinitiating one or more further actions based on the identity of the face and the probable facial expression.2. The method of claim 1 , wherein the one or ...

Publication date: 14-11-2013

Electronic device and photo management method thereof

Number: US20130300896A1
Author: YANG XIN

A photo management method implemented by an electronic device having a first camera and a second camera includes capturing a first photo by the first camera and capturing a simultaneous second photo by the second camera of the face of a user. Facial characteristics are extracted from the second photo and the characteristics are added to attribute information of the first photo. When the user wants to browse photos showing or including himself/herself, a third photo of the user is captured and facial characteristics extracted. One or more first photos are determined according to the facial characteristics of the third photo, and the one or more first photos showing or including the user are collected in one group and displayed.

Publication date: 14-11-2013

VIDEO ANALYSIS

Number: US20130301876A1
Author: HUGOSSON Fredrik
Assignee: AXIS AB

A method and an object analyzer for analyzing objects in images captured by a monitoring camera use a first and a second sequence of image frames, wherein the first sequence of image frames covers a first image area and has a first image resolution, and the second sequence of image frames covers a second image area located within the first image area and has a second image resolution higher than the first image resolution. A common set of object masks is provided wherein object masks of objects that are identified as being present in both image areas are merged. 1. A method of analyzing objects in images captured by a monitoring camera, comprising the steps of: receiving a first sequence of image frames having a first image resolution and covering a first image area, receiving a second sequence of image frames having a second image resolution higher than the first image resolution and covering a second image area being a portion of the first image area, detecting objects present in the first sequence of image frames, detecting objects present in the second sequence of image frames, providing a first set of object masks for objects detected in the first sequence of image frames, providing a second set of object masks for objects detected in the second sequence of image frames, identifying an object present in the first and the second sequence of image frames by detecting a first object mask in the first set of object masks at least partly overlapping a second object mask in the second set of object masks, merging the first and the second object mask into a third object mask by including data from the first object mask for parts present only in the first image area, and data from the second object mask for parts present in the second image area, and providing a third set of object masks comprising the first set of object masks excluding the first object mask, the second set of object masks excluding the second object mask, and the third object mask ...
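
A schematic sketch of the mask-merging step described above, operating on boolean masks in the coordinate system of the low-resolution view. It assumes the high-resolution mask has already been resampled to the size of its region of interest; the overlap test and data layout are simplifying assumptions.

```python
import numpy as np

def masks_overlap(low_mask, high_mask, roi):
    """roi = (x, y, w, h): where the high-resolution view sits inside the low-res frame."""
    x, y, w, h = roi
    return bool(np.logical_and(low_mask[y:y + h, x:x + w], high_mask).any())

def merge_masks(low_mask, high_mask, roi):
    """high_mask is assumed to be already resampled to shape (h, w) of the roi."""
    x, y, w, h = roi
    merged = low_mask.copy()
    # Keep the low-res mask outside the shared area, high-res detail inside it.
    merged[y:y + h, x:x + w] = high_mask
    return merged
```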

Publication date: 14-11-2013

Image processing device, imaging device, image processing method

Number: US20130301885A1
Assignee: Canon Inc

An image including a face is input (S201), a plurality of local features are detected from the input image, a region of a face in the image is specified using the plurality of detected local features (S202), and an expression of the face is determined on the basis of differences between the detection results of the local features in the region of the face and detection results which are calculated in advance as references for respective local features in the region of the face (S204).

Publication date: 14-11-2013

Authentication card, authentication system, guidance method, and program

Number: US20130301886A1
Author: Yoshinori Koda
Assignee: NEC Corp

The present invention is an authentication card having: an imaging means that images the face of a person to be authenticated; and a guidance means that performs guidance that leads the person to be authenticated in a manner so that it is possible to capture an image from which at least the feature quantities of the face of the person to be authenticated necessary for authentication comparison can be extracted.

Publication date: 21-11-2013

Smile Detection Techniques

Number: US20130308855A1
Author: Li Jianguo
Assignee:

Techniques are disclosed that involve the detection of smiles from images. Such techniques may employ local-binary pattern (LBP) features and/or multi-layer perceptrons (MLP) based classifiers. Such techniques can be extensively used on various devices, including (but not limited to) camera phones, digital cameras, gaming devices, personal computing platforms, and other embedded camera devices. 1. A method , comprising:detecting a face in an image;determining one or more local binary pattern (LBP) features from the detected face; andgenerating a smile detection indicator from the one or more LBP features with a multi-layer perceptrons (MLP) based classifier.2. The method of claim 1 , further comprising identifying a plurality of landmark points on the detected face.3. The method of claim 2 , wherein the plurality of landmark positions indicate eye-corners and mouth-corners.4. The method of claim 1 , further comprising aligning a region of the image corresponding to the detected face.5. The method of claim 1 , further comprising normalizing a region of the image corresponding to the detected face.6. The method of claim 1 , wherein said determining the one or more LBP features comprises selecting the one or more LBP features from a plurality of LBP features; wherein said selection is based on a boosting training procedure.7. An apparatus , comprising:an image source to provide an image;a smile detection module to detect a face in an image, determine one or more local binary pattern (LBP) features from the detected face, and generate a smile detection indicator from the one or more LBP features with a multi-layer perceptrons (MLP) based classifier.8. The apparatus of claim 7 , wherein the smile detection module is to identify a plurality of landmark points on the detected face.9. The apparatus of claim 8 , wherein the smile detection module is to align a region of the image corresponding to the detected face.10. The apparatus of claim 7 , wherein ...
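
A rough sketch of the LBP-plus-MLP pipeline the abstract names, assuming a cropped and aligned grayscale face and a previously trained scikit-learn MLPClassifier passed in as `smile_clf`. The uniform-LBP histogram feature and the classifier are stand-ins; training and the boosting-based feature selection are not shown.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(face_gray, radius=1, points=8):
    """Uniform LBP histogram used as the feature vector for the MLP."""
    codes = local_binary_pattern(face_gray, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def detect_smile(face_gray, smile_clf):
    # Returns 1 for "smiling", 0 otherwise, according to the trained classifier.
    features = lbp_histogram(face_gray).reshape(1, -1)
    return int(smile_clf.predict(features)[0])
```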

Publication date: 28-11-2013

Method of Improving Orientation and Color Balance of Digital Images Using Face Detection Information

Number: US20130315488A1

A method of generating one or more new spatial and chromatic variation digital images uses an original digitally-acquired image which includes a face or portions of a face. A group of pixels that correspond to a face within the original digitally-acquired image is identified. A portion of the original image is selected to include the group of pixels. Values of pixels of one or more new images based on the selected portion are automatically generated, or an option to generate them is provided, in a manner which always includes the face within the one or more new images. Such method may be implemented to automatically establish the correct orientation and color balance of an image. Such method can be implemented as an automated method or a semi-automatic method to guide users in viewing, capturing or printing of images. 1. A method of digital image processing using face detection for achieving desired luminance parameters for a face , comprising: using a digital image acquisition device or external image processing device , or a combination thereof , that includes a processor that is programmed to perform the method , wherein the method comprises:identifying a group of pixels that correspond to a face within a digital image;generating in-camera, capturing or otherwise obtaining in-camera a collection of reference images including said face;tracking said face within said collection of reference images;identifying one or more sub-groups of pixels that correspond to one or more facial features of the face including at least one sub-group of pixels that substantially comprise one or two eyes, a mouth, a chin or a nose, or combinations thereof;determining initial luminance values of one or more luminance parameters of said pixels of the one or more sub-groups of pixels that substantially comprise one or two eyes, a mouth, a chin or a nose, or combinations thereof;determining at least one initial luminance parameter based on the initial luminance values; and determining ...

Publication date: 05-12-2013

SECURITY BY Z-FACE DETECTION

Number: US20130322708A1
Author: Heringslack Henrik
Assignee: SONY MOBILE COMMUNICATIONS AB

A method for identifying a person using a mobile communication device, having a camera unit adapted for recording three-dimensional (3D) images, by recording a 3D image of the person's face using the camera unit, performing face recognition on the 2D image data in the recorded 3D image to determine at least two facial points on the 3D image of the person's face, determining a first distance between the at least two facial points in the 2D image data, determining a second distance between the at least two facial points using the depth data of the recorded 3D image, determining a third distance between the at least two facial points using the first distance and the second distance, and identifying the person by comparing the determined third distance to stored distances in a database, wherein each of the stored distances are associated with a person. 1. A method for identifying a person using a mobile communication device having a camera unit adapted for recording a three-dimensional (3D) image , wherein said recorded 3D image comprises two-dimensional (2D) image data and depth data , said method comprises the steps:recording a 3D image of said person's face using said camera unit;performing face recognition on the 2D image data in said recorded 3D image to determine at least two facial points on said 3D image of said person's face;determining a first distance between said at least two facial points in said 2D image data;determining a second distance between said at least two facial points using said depth data of said recorded 3D image;determining a third distance between said at least two facial points using said first distance and said second distance; andidentifying said person by comparing said determined third distance to stored distances in a database, wherein each of said stored distances are associated with a person.2. The method according to claim 1 , wherein said determining of said second distance between said at least two facial points comprises; ...
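
A small worked sketch of the distance combination described above. The abstract does not spell out how the first and second distances are combined; a natural reading is a Euclidean combination of the in-plane (2D) distance and the depth difference, which is what this illustrative helper assumes.

```python
import math

def third_distance(p1_2d, p2_2d, depth1, depth2):
    """p*_2d are (x, y) facial points in image/world units; depth* are their depths."""
    first = math.dist(p1_2d, p2_2d)    # distance in the 2D image data
    second = abs(depth1 - depth2)      # separation along the depth axis
    return math.hypot(first, second)   # combined 3D distance between the facial points

# Identification would then compare this value against per-person distances stored in a database.
```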

Publication date: 05-12-2013

INFORMATION PROCESSING APPARATUS AND CONTROL METHOD THEREFOR

Number: US20130322770A1
Assignee: CANON KABUSHIKI KAISHA

An information processing apparatus includes an image input unit which inputs image data containing a face, a face position detection unit which detects, from the image data, the position of a specific part of the face, and a facial expression recognition unit which detects a feature point of the face from the image data on the basis of the detected position of the specific part and determines facial expression of the face on the basis of the detected feature point. The feature point is detected at a detection accuracy higher than detection of the position of the specific part. Detection of the position of the specific part is robust to a variation in the detection target. 1. An information processing apparatus comprising: an input unit adapted to input image data containing a face; a first detection unit adapted to detect, from the image data, a position of a specific part of the face; a second detection unit adapted to detect a feature point of the face from the image data on the basis of the detected position of the specific part; and a determination unit adapted to determine facial expression of the face on the basis of the detected feature point, wherein said second detection unit has higher detection accuracy than detection accuracy of said first detection unit, and said first detection unit is robust to a variation in a detection target. This application is a continuation of application Ser. No. 11/532,979, filed Sep. 19, 2006, the entire disclosure of which is hereby incorporated by reference. 1. Field of the Invention: The present invention relates to an information processing apparatus and control method therefor, particularly to an image recognition technique. 2. Description of the Related Art: Conventionally, an object recognition (image recognition) technique is known, which causes an image sensing device to sense an object to acquire image data and calculates the position and orientation of the object by analyzing the image data. Japanese Patent Laid-Open No. 09- ...

Publication date: 19-12-2013

DEPTH-PHOTOGRAPHING METHOD OF DETECTING HUMAN FACE OR HEAD

Number: US20130336548A1
Assignee:

A depth-photographing method of detecting human face or head includes the steps of using the specific light to illuminate the target under an environmental light, receiving and detecting the reflected light from the target with the detector, generating first depth detecting information corresponding to the depth of the target, turning off the specific light, detecting another reflected light from the target, generating second depth detecting information corresponding to the depth of the target, performing the detection/calculation process based on the first and second depth detecting information to generate the appearance of the target, determining if the appearance of the target represents a human face or head, and if yes, generating depth-photographing detection information used to cancel a lock state, thereby avoiding unintentionally entering power saving (or standby mode), speeding up entering the desired power saving or dynamically changing/adjusting the display content. 1. A depth-photographing method of detecting human face or head for cancelling a lock state , comprising steps of:using a specific light to illuminate a target under an environmental light for an illuminating period of time;receiving and detecting a reflected light from the target with at least one detector;generating first depth detecting information corresponding to depth of the target based on an intensity of the reflected light;turning off the specific light for a turn-off period of time;using the at least one detector to detect another reflected light from the target illuminated by only the environmental light;generating second depth detecting information corresponding to the depth of the target based on an intensity of the another reflected light;performing a detecting/calculating process to generate an appearance of the target based on the first depth detecting information and/or the second depth detecting information;inspecting the appearance of the target; andgenerating depth- ...

Publication date: 09-01-2014

IMAGE PROCESSING APPARATUS, METHOD THEREOF, AND COMPUTER-READABLE STORAGE MEDIUM

Number: US20140010415A1
Assignee:

An image processing apparatus comprises a management unit configured to classify face feature information of a face region of an object extracted from image data into a predetermined category in accordance with similarity, and manage the face feature information in a dictionary, a condition setting unit configured to set category determination conditions for classifying the face feature information into the category in accordance with individual information representing at least one of an age and sex of the object, and a determination unit configured to determine, based on the category determination conditions set by the condition setting unit, a category to which the face feature information belongs in the dictionary. 1. An image processing apparatus comprising:a management unit configured to classify face feature information of a face region of an object extracted from image data into a predetermined category in accordance with similarity, and manage the face feature information in a dictionary;a condition setting unit configured to set category determination conditions for classifying the face feature information into the category in accordance with individual information representing at least one of an age and sex of the object; anda determination unit configured to determine, based on the category determination conditions set by said condition setting unit, a category to which the face feature information belongs in the dictionary.2. The apparatus according to claim 1 , wherein said condition setting unit changes, out of the determination conditions in accordance with the individual information, a category similarity determination criterion used for category determination.3. The apparatus according to claim 1 , wherein said condition setting unit changes, out of the category determination conditions in accordance with the individual information, a maximum number of registration feature information to be registered in a ...

Publication date: 09-01-2014

COMMAND INPUT METHOD OF TERMINAL AND TERMINAL FOR INPUTTING COMMAND USING MOUTH GESTURE

Number: US20140010417A1
Author: HWANG Sungjae
Assignee:

A command input method of a terminal includes: acquiring an image including a user's face region through a camera; detecting a mouth region from the user's face region; inputting a command to the terminal or to an application being executed in the terminal if a mouth gesture of the mouth region is identical to an unlock gesture stored in the terminal. The user may make the same mouth gesture as a pre-set unlock gesture, or make a mouth gesture corresponding to an authentication message displayed on a display panel of the terminal. The command may be an unlock command for unlocking the terminal or the application or a command for executing a predetermined function while unlocking the terminal or the application. 1. A command input method of a terminal with a camera , comprising:acquiring an image including a user's face region through the camera;detecting a mouth region from the user's face region;inputting a command to the terminal or to an application being executed in the terminal if a mouth gesture of the mouth region is identical to an unlock gesture stored in the terminal.2. The command input method of claim 1 , after acquiring the image including the user's face region, further comprising:detecting the user's face region from the image; anddetermining whether the user's face region is identical to an authorized user's face image stored in the terminal,wherein the detecting of the mouth region from the user's face region comprises detecting the mouth region if the user's face region is identical to the authorized user's face image.3. The command input method of claim 1 , wherein the mouth gesture is at least one gesture among a gesture of pronouncing at least one vowel, a gesture of pronouncing at least one consonant, a gesture of pronouncing a specific syllable, a gesture of pronouncing a specific word, and a gesture of pronouncing a specific sentence.4. The command input method of claim 1 , wherein the unlock ...

Publication date: 09-01-2014

Lip activity detection

Number: US20140010418A1
Assignee: Hewlett Packard Development Co LP

Provided is a method of detecting lip activity. The method determines the magnitude of optical flow in the lip region and in at least one non-lip region of a detected face. The ratio of the magnitudes of optical flow in the lip region and the at least one non-lip region is compared against a threshold. If the ratio is found to be greater than the threshold, lip activity of the detected face is recognized.
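
A minimal sketch of the ratio test described above, assuming grayscale crops of the lip region and of one non-lip reference region (for example a cheek) from two consecutive frames, and using OpenCV's Farneback optical flow. Region selection and the threshold value are placeholders.

```python
import numpy as np
import cv2

def lip_activity(lip_prev, lip_curr, ref_prev, ref_curr, threshold=2.0):
    def mean_flow_magnitude(prev, curr):
        # Dense optical flow between the two crops; returns mean motion magnitude.
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        return float(np.mean(cv2.magnitude(flow[..., 0], flow[..., 1])))

    lip_motion = mean_flow_magnitude(lip_prev, lip_curr)
    ref_motion = mean_flow_magnitude(ref_prev, ref_curr) + 1e-6  # avoid divide-by-zero
    # Lip activity is declared when lip motion dominates motion elsewhere on the face.
    return (lip_motion / ref_motion) > threshold
```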

Publication date: 09-01-2014

IMAGE VERIFICATION DEVICE, IMAGE PROCESSING SYSTEM, IMAGE VERIFICATION PROGRAM, COMPUTER READABLE RECORDING MEDIUM, AND IMAGE VERIFICATION METHOD

Number: US20140010419A1
Author: IRIE Atsushi
Assignee: Omron Corporation

An image verification device that checks an input image obtained by photographing an object to be checked against a registered image database, wherein, in the registered image database, an amount of feature of an image obtained by photographing an object is registered as a registered image, and the registered image includes registered images registered with respect to a plurality of objects, has a verification score calculating unit that calculates a verification score serving as a score representing a degree of approximation between the objects indicated by the registered images and the object of the input image by using the amount of feature of the input image and the amounts of feature of the registered images, and a relative evaluation score calculating unit. 1. An image verification device that checks an input image obtained by photographing an object to be checked against a registered image database, wherein, in the registered image database, an amount of feature of an image obtained by photographing an object is registered as a registered image, and the registered image includes registered images registered with respect to a plurality of objects, a verification score calculating unit that calculates a verification score serving as a score representing a degree of approximation between the objects indicated by the registered images and the object of the input image by using the amount of feature of the input image and the amounts of feature of the registered images; a relative evaluation score calculating unit that calculates a relative evaluation score serving as a score representing a degree of approximation between one object registered in the registered image database and the object of the input image in comparison with the other objects; an integrated score calculating unit that calculates an integrated score obtained by weighting the verification score and the relative evaluation score; and an image verification unit that performs verification ...
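
An illustrative sketch of weighting a verification score against a relative evaluation score, as described above. The negative-distance similarity, the z-score style relative evaluation, and the weight are assumptions, not the device's actual formulas.

```python
import numpy as np

def integrated_scores(input_feat, registered_feats, weight=0.5):
    """registered_feats: dict mapping object id -> registered feature vector."""
    ids = list(registered_feats)
    # Verification score: plain (negative-distance) similarity to each registered object.
    verification = np.array([-np.linalg.norm(input_feat - registered_feats[i]) for i in ids])
    # Relative evaluation score: how much one object stands out against the others.
    relative = (verification - verification.mean()) / (verification.std() + 1e-9)
    integrated = weight * verification + (1.0 - weight) * relative
    best = ids[int(np.argmax(integrated))]
    return best, dict(zip(ids, integrated.tolist()))
```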

Publication date: 16-01-2014

APPARATUS FOR RETRIEVING INFORMATION ABOUT A PERSON AND AN APPARATUS FOR COLLECTING ATTRIBUTES

Number: US20140016831A1
Assignee: KABUSHIKI KAISHA TOSHIBA

A first acquisition unit is configured to acquire the image including a plurality of frames. A first extraction unit is configured to extract a plurality of persons from the frames, and to extract a plurality of first attributes from each of the persons. The first attributes feature each person. A second extraction unit is configured to extract a plurality of second attributes from a first person indicated by a user. The second attributes feature the first person. A retrieval unit is configured to retrieve information about a person similar to the first person from the persons, based on at least one of the second attributes as a retrieval condition. An addition unit is configured to, when at least one of the first attributes of a person retrieved by the retrieval unit is different from the second attributes, add the at least one of the first attributes to the retrieval condition. 1. An apparatus for retrieving information about an indicated person from an image , comprising:a first acquisition unit configured to acquire the image including a plurality of frames;a first extraction unit configured to extract a plurality of persons from the frames, and to extract a plurality of first attributes from each of the persons, the first attributes featuring each person;a second extraction unit configured to extract a plurality of second attributes from a first person indicated by a user, the second attributes featuring the first person;a retrieval unit configured to retrieve information about a person similar to the first person from the persons, based on at least one of the second attributes as a retrieval condition; andan addition unit configured to, when at least one of the first attributes of a person retrieved by the retrieval unit is different from the second attributes, add the at least one of the first attributes to the retrieval condition.2. The apparatus according to claim 1 , wherein the first attributes and the second attributes respectively include at least one of ...

Publication date: 16-01-2014

Face recognition system and method

Number: US20140016836A1
Author: Avihu Meir Gamliel
Assignee: C True Ltd

Apparatus for face recognition, the apparatus comprising: a face symmetry verifier, configured to verify symmetry of a face in at least one image, according to a predefined symmetry criterion, and a face identifier, associated with the face symmetry verifier, and configured to identify the face, provided the symmetry of the face is successfully verified.
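
A minimal sketch of a symmetry check in the spirit of the apparatus above: compare a grayscale face crop with its horizontal mirror and verify the result against a predefined criterion. The normalized mean-difference measure and the threshold are assumptions.

```python
import numpy as np

def is_symmetric(face_gray, max_asymmetry=0.15):
    """face_gray: 2D uint8 array of the detected face region."""
    mirrored = face_gray[:, ::-1]
    diff = np.abs(face_gray.astype(np.float32) - mirrored.astype(np.float32))
    asymmetry = float(diff.mean()) / 255.0   # 0.0 means perfectly symmetric
    # Identification would only proceed when the symmetry criterion is satisfied.
    return asymmetry <= max_asymmetry
```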

Publication date: 23-01-2014

Digitally-Generated Lighting for Video Conferencing Applications

Number: US20140023237A1
Assignee: AT&T INTELLECTUAL PROPERTY II, L.P.

A method of improving the lighting conditions of a real scene or video sequence. Digitally generated light is added to a scene for video conferencing over telecommunication networks. A virtual illumination equation takes into account light attenuation, Lambertian and specular reflection. An image of an object is captured, and a virtual light source illuminates the object within the image. In addition, the object can be the head of the user. The position of the head of the user is dynamically tracked so that a three-dimensional model is generated which is representative of the head of the user. Synthetic light is applied to a position on the model to form an illuminated model. 1. A method comprising:detecting, via a processor, a head within an image;determining an orientation of the head;establishing a position of a virtual light source in the image based on the orientation of the head; andmodifying the image to have an illumination of the head using the virtual light source.2. The method of claim 1 , the method further comprising communicating the illumination of the head to a video conference system.3. The method of claim 1 , further comprising determining a characteristic of the head, the characteristic being one of a facial feature and an ellipsoid representing the head.4. The method of claim 3 , wherein the facial feature comprises one of an eye, a lip, an ear, a nose, a cheek, a chin, and an eyebrow.5. The method of claim 1 , further comprising tracking movement of the head, wherein establishing the position of the virtual light source comprises establishing the position based on the movement of the head.6. The method of claim 5 , wherein tracking the movement of the head yields position information, the method further comprising creating a three-dimensional model of the head based on the position information.7. The method of claim 1 , wherein the illumination of the head is by a ...
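
A compact sketch of a virtual illumination equation of the kind referred to above, with attenuation, a Lambertian term and a specular term. The coefficients and the quadratic attenuation model are generic Phong-style assumptions, not the patent's formula.

```python
import numpy as np

def virtual_light(point, normal, view_dir, light_pos, kd=0.7, ks=0.3, shininess=16.0):
    """Scalar intensity contribution of one virtual light at a surface point on the head model."""
    to_light = light_pos - point
    dist = float(np.linalg.norm(to_light))
    l = to_light / dist
    n = normal / np.linalg.norm(normal)
    v = view_dir / np.linalg.norm(view_dir)
    attenuation = 1.0 / (1.0 + 0.1 * dist + 0.01 * dist ** 2)    # distance fall-off
    diffuse = kd * max(float(np.dot(n, l)), 0.0)                 # Lambertian reflection
    reflect = 2.0 * float(np.dot(n, l)) * n - l                  # mirror direction of the light
    specular = ks * max(float(np.dot(reflect, v)), 0.0) ** shininess
    return attenuation * (diffuse + specular)
```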

Publication date: 23-01-2014

APPARATUS AND METHOD FOR PROTECTING PRIVACY INFORMATION BASED ON FACE RECOGNITION

Number: US20140023248A1

An apparatus and method protect against leakage of privacy information by detecting a specific person, using face recognition technology, in a video image stored in a video surveillance system and performing privacy masking or mosaic processing on the face of the specific person or the faces of other people. 1. An apparatus for protecting privacy information , comprising:an image frame division unit configured to divide a search target video sequence into a plurality of image frames;a face detection unit configured to detect a face region from each of the image frames;a face recognition unit configured to perform face recognition by comparing face information of a face extracted from the face region with face information of a search target face and determining whether the extracted face is substantially the same as the search target face; anda privacy processing unit configured to distinguish between a first face determined to be substantially the same as the search target face and a second face determined not to be substantially the same as the search target face, and perform image processing to selectively mask a face region of the first face or that of the second face,wherein the second face is one of two or more faces except for the first face.2. The apparatus of claim 1 , further comprising:a person detection unit configured to detect a person region from each of the image frames.3. The apparatus of claim 2 , wherein the face detection unit is configured to detect the face region from the person region.4. The apparatus of claim 1 , wherein the privacy processing unit is configured to distinguish between the first face and the second face and selectively perform privacy masking or mosaic processing on the first or second face, or a person region of the first or second face.5. The apparatus of claim 1 , wherein the privacy processing unit is configured to perform privacy masking or mosaic processing on the second face or a person region of the second face.6. The ...
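
A simplified sketch of selective masking as described above, using OpenCV's stock Haar face detector and Gaussian blur in place of the apparatus's own detection and recognition units. The `is_target(face_img)` callback stands in for the face recognition step and is assumed to be provided elsewhere.

```python
import cv2

def mask_faces(frame, is_target, mask_target=True):
    """Blur either the matched (target) face or all other faces, depending on mask_target."""
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        roi = frame[y:y + h, x:x + w]
        matches = is_target(roi)          # stand-in for the face recognition unit
        if matches == mask_target:
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame
```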

Publication date: 06-02-2014

OBJECT SELECTION IN AN IMAGE

Number: US20140036110A1
Assignee: Intel Corporation

Illustrative embodiments of methods, machine-readable media, and computing devices allowing object selection in an image are disclosed. In some embodiments, a method may include detecting one or more features of an object in a machine-readable image file, simulating an electrostatic charge distribution by assigning a virtual point charge to each of the one or more detected features, determining a virtual electric potential field resulting from the simulated electrostatic charge distribution, and selecting a portion of the object in the machine-readable image file that is bounded by an equipotential line in the virtual electric potential field. 1-25. (canceled)26. A computing device comprising:a camera; andan image co-processor to (i) detect one or more features of an object in an image captured by the camera, (ii) simulate an electrostatic charge distribution by assigning a virtual point charge to each of the one or more detected features, (iii) determine a virtual electric potential field resulting from the simulated electrostatic charge distribution, and (iv) select a portion of the object in the captured image that is bounded by an equipotential line in the virtual electric potential field.27. The computing device of claim 26 , wherein to detect the one or more features of the object comprises to apply a cascade of classifiers based on Haar-like features to the captured image.28. The computing device of claim 26 , wherein to simulate the electrostatic charge distribution comprises to weight the virtual point charge assigned to each of the one or more detected features in response to pixel colors near the virtual point charge.29. The computing device of claim 26 , wherein the image co-processor is further to apply an alpha mask to the selected portion of the object.30. The computing device of claim 26 , wherein to determine the virtual electric potential field resulting from the simulated electrostatic charge distribution comprises to apply a Poisson solver to ...
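
A small sketch of the virtual-potential idea described above: each detected feature contributes a point charge, and pixels whose potential lies inside a chosen equipotential line are selected. The charge value, the softening term and the cut level are illustrative assumptions (a direct 1/r sum is used here rather than a Poisson solver).

```python
import numpy as np

def potential_field(shape, feature_points, charge=1.0, eps=1.0):
    """Sum of 1/r contributions from one point charge per detected feature."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    field = np.zeros((h, w), dtype=np.float64)
    for (fx, fy) in feature_points:
        r = np.sqrt((xs - fx) ** 2 + (ys - fy) ** 2) + eps   # softened distance
        field += charge / r
    return field

def select_region(shape, feature_points, level=0.3):
    field = potential_field(shape, feature_points)
    # Everything inside the chosen equipotential line is selected.
    return field >= level * field.max()
```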

Publication date: 06-02-2014

EYELID-DETECTION DEVICE, EYELID-DETECTION METHOD, AND RECORDING MEDIUM

Number: US20140037144A1
Author: Hiramaki Takashi
Assignee: AISIN SEIKI KABUSHIKI KAISHA

A lower eyelid search window (W) matching the pixels constituting the edge of a lower eyelid is transformed so that the lower eyelid search window (W) fits the pixels constituting the edge of the lower eyelid. Then, the position of the centroid of the transformed lower eyelid search window (W) is set as the lower eyelid reference position. Consequently, the lower eyelid reference position can be accurately set even if the lower eyelid search window (W) is different in shape from the edge of the lower eyelid. Then, it is possible to accurately detect the degree of opening of the eyes of a driver and thus accurately determine the degree of wakefulness of the driver. 1. An eyelid detection device , comprising:an extractor extracting the pixels of which the edge values satisfy given conditions from the pixels constituting an image of the eyes of a driver;a calculator calculating the evaluation value of the pixels overlapping with a window having a shape corresponding to the eyelids of the driver while scanning the image using the window;a transformer transforming the window at the position where the evaluation value is maximized to increase the evaluation value of the window; anda setter setting the reference positions of the eyelids according to the transformed window.2. The eyelid detection device according to claim 1 , wherein the transformer detects the pixels situated near the ends of a group of pixels overlapping with the window and transforms the window so that the window overlaps with the detected pixels.3. The eyelid detection device according to claim 1 , wherein the transformer scans a pixel search window in a first direction in which a group of pixels overlapping with the window is arranged to extract the pixels overlapping with the pixel search window, the pixel search window leading to a higher evaluation value in the second direction perpendicular to the first direction than the evaluation value in the second direction of the pixel, and transforms the window ...

Publication date: 06-02-2014

IDENTITY RECOGNITION BASED ON MULTIPLE FEATURE FUSION FOR AN EYE IMAGE

Number: US20140037152A1

A method for identity recognition based on multiple feature fusion for an eye image, which comprises steps of registering and recognizing, wherein the step of registering comprises: obtaining a normalized eye image and a normalized iris image, for a given registered eye image, and extracting a multimode feature of an eye image of a user to be registered, and storing the obtained multimode feature of the eye image as registration information in a registration database; and the step of recognizing comprises: obtaining a normalized eye image and a normalized iris image, for a given recognized eye image, extracting a multimode feature of an eye image of a user to be recognized, comparing the extracted multimode feature with the multimode feature stored in the database to obtain a matching score, and obtaining a fusion score by fusing matching scores at score level, and performing the multiple feature fusion identity recognition on the eye image by a classifier. The present invention recognizes identity by fusing multiple features of eye regions on a human face, and thus achieves high recognition accuracy and is suitable for applications of high security level. 1. A method for identity recognition based on multiple feature fusion for an eye image , which comprises steps of registering and recognizing , wherein the step of registering comprises: obtaining a normalized eye image and a normalized iris image, for a given registered eye image, and extracting a multimode feature of an eye image of a user to be registered, and storing the obtained multimode feature of the eye image as registration information in a registration database; and the step of recognizing comprises: obtaining a normalized eye image and a normalized iris image, for a given recognized eye image, extracting a multimode feature of an eye image of a user to be recognized, comparing the extracted multimode feature with the multimode feature stored in the database to obtain a matching score, and obtaining a fusion ...

Publication date: 20-02-2014

USING RELEVANCE FEEDBACK IN FACE RECOGNITION

Number: US20140050374A1
Assignee: AOL INC.

Images are searched to locate faces that are the same as a query face. Images that include a face that is the same as the query face may be presented to a user as search result images. Images also may be sorted by the faces included in the images and presented to the user as sorted search result images. The user may provide explicit or implicit feedback regarding the search result images. Additional feedback may be inferred regarding the search result images based on the user-provided feedback, and the results may be updated based on the user-provided and inferred feedback. 1-25. (canceled)26. A method for recognizing faces within images , the method comprising:determining, by at least one processor, a set of search result images based on at least one query face, the set of search result images including a search result image that includes a representation of a face that is determined to match the query face;providing an indication of the set of search result images to a user;receiving an implicit indication of the accuracy of search result images based on the user's interaction with at least one of the provided search result images;determining, by the at least one processor, that a face in the at least one search result image matches the query face based on at least the implicit indication; andupdating the set of search result images based on the determination that the face in the at least one search result image matches the query face.27. The method of wherein determining a set of search result images comprises:creating a query feature vector for the query face; forming a set of search feature vectors by creating one or more search feature vectors for faces within a set of search faces; determining distances between the query feature vector and the search feature vectors; selecting one or more search feature vectors that are within a selected distance of the query face feature vector, and designating images that include faces that correspond to the selected search ...
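
A minimal sketch of the distance-based selection step in the method above, together with a simple relevance-feedback update, assuming face feature vectors are already available as NumPy arrays. The feedback weighting scheme is an illustrative guess, not the patented algorithm.

```python
import numpy as np

def select_matches(query_vec, search_vecs, max_distance):
    """search_vecs: dict mapping image id -> feature vector of a detected face."""
    dists = {img: float(np.linalg.norm(query_vec - vec)) for img, vec in search_vecs.items()}
    # Keep only faces within the selected distance of the query feature vector.
    return [img for img, d in sorted(dists.items(), key=lambda kv: kv[1]) if d <= max_distance]

def refine_query(query_vec, confirmed_vecs, alpha=0.5):
    # Fold user-confirmed (explicit or implicit) matches back into the query vector.
    return (1 - alpha) * query_vec + alpha * np.mean(confirmed_vecs, axis=0)
```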

Publication date: 20-02-2014

METHOD AND APPARATUS FOR DETECTING AND TRACKING LIPS

Number: US20140050392A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

Provided is a method of detecting and tracking lips accurately despite a change in a head pose. A plurality of lips rough models and a plurality of lips precision models may be provided, among which a lips rough model corresponding to a head pose may be selected, such that lips may be detected by the selected lips rough model, a lips precision model having a lip shape most similar to the detected lips may be selected, and the lips may be detected accurately using the lips precision model. 1. A lips detecting method comprising:estimating, by way of a processor, a head pose in an input image;selecting a lips rough model corresponding to the estimated head pose from among a plurality of lips rough models;executing an initial detection of lips using the selected lips rough model;selecting a lips precision model having a lip shape most similar to a shape of the initially detected lips from among a plurality of lips precision models; anddetecting the lips using the selected lips precision model.2. The method of claim 1 , wherein the plurality of lips rough models are obtained by training lip images of a first multi group as a training sample, and lip images of a respective group of the first multi group are used as a training sample set and are used to train a corresponding lips rough model.3. The method of claim 2 , wherein the plurality of lips precision models are obtained by training lip images of a second multi group as a training sample, and lip images of a respective group of the second multi group are used as a training sample set and are used to train a corresponding lips precision model.4. The method of claim 3 , wherein the lip images of the respective group of the second multi group are divided into a plurality of subsets based on a lip shape, the lips precision model is trained using the subsets, and a respective subset, of the plurality of subsets, is used as a training sample set and is used to train a corresponding lips precision ...

Publication date: 20-02-2014

METHOD FOR ON-THE-FLY LEARNING OF FACIAL ARTIFACTS FOR FACIAL EMOTION RECOGNITION

Number: US20140050408A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

A method for determining a facial emotion of a user in the presence of a facial artifact includes detecting Action Units (AUs) for a first set of frames with the facial artifact; analyzing the AUs with the facial artifact after the detection; registering the analyzed AUs for a neutral facial expression with the facial artifact in the first set of frames; predicting the AUs in a second set of frames; and determining the facial emotion by comparing the registered neutral facial expression with the predicted AUs in the second set of frames. 1. A method for determining a facial emotion of a user in the presence of a facial artifact , the method comprising:detecting Action Units (AUs) for a first set of frames with the facial artifact;analyzing the AUs with the facial artifact after the detection;registering the analyzed AUs for a neutral facial expression with the facial artifact in the first set of frames;predicting the AUs in a second set of frames; anddetermining the facial emotion by comparing the registered neutral facial expression with the predicted AUs in the second set of frames.2. The method of claim 1 , wherein the analyzing AUs further comprises analyzing frequently occurring AUs over a period of time in a sequence of frames to register the analyzed AUs as the neutral facial expression.3. The method of claim 1 , wherein registering the analyzed AUs for a neutral facial expression comprises assuming the user is showing the neutral facial expression in the first set of frames.4. The method of claim 1 , wherein detecting the AUs further comprises localizing the face of the user and extracting features of the localized face.5. The method of claim 1 , wherein a weight of an Action Unit with a facial artifact is reduced if variations in AU of the second set of frames and the Action Unit with the facial artifact is detected.6. A non-transitory computer-readable recording medium storing a program to implement the method of .7. A system for determining a facial ...

Publication date: 27-02-2014

Connecting to an Onscreen Entity

Number: US20140055553A1
Assignee: QUALCOMM INCORPORATED

A method for identifying unknown third parties appearing within video call data based on generated image characteristics data. A user's computing device and a participant computing device may exchange and render video call data in which the user's computing device may display the unknown third-party. The user's computing device may generate image characteristics data based on selected imagery. The user's computing device may compare the image characteristics data to stored contact information on the user's computing device to find a match, and may transmit the image characteristics data to the participant computing device for comparison with locally stored information. The participant computing device may transmit a report message to the user's computing device indicating whether a match is found. In an embodiment, a server may transmit the facial data to other devices for comparison. In another embodiment, the user's computing device may request contact information from participant computing devices. 1. A method for a user computing device participating in a video call to identify an unknown third-party imaged within the video call , comprising:applying image recognition techniques to an image of the unknown third-party within the video call to generate image characteristics data within the user computing device;determining whether the generated image characteristics data matches data within a first contacts database within the user computing device, wherein the data within the first contacts database includes at least a first image;transmitting a message that requests information regarding the unknown third-party in response to determining that the generated image characteristics data does not match the data within the first contacts database of the user computing device;receiving a response message corresponding to data within a second contacts database of another device, wherein the data within the second contacts database includes at least a second image; ...

Publication date: 27-02-2014

Video Infrared Retinal Image Scanner

Number: US20140055567A1
Author: Dyer David S.
Assignee: Dyer Holdings, LLC

A method of scanning a retinal image includes providing a light source, emitting radiation from the light source toward a beam splitter, focusing the radiation with a focusing lens on a retina, collecting radiation reflected by the retina with a camera, producing an image signal representative of a plurality of images of the retina based on the collected radiation, selecting one of the plurality of images of the retina for display from the image signal, displaying the selected image of the retina on a display, comparing the selected image of the retina to at least one of a plurality of images of retinas stored in the database that matches the selected image of the retina, and displaying the one of the matching image of the retina on the display along with the selected image of the retina. 1. A retinal image scanner , comprising:an infrared light source;a beam splitter reflecting infrared radiation from the light source through one of a plurality of focusing lenses to a retina;a camera collecting radiation reflected by the retina through the beam splitter;an analog to digital convertor receiving a raw signal from the camera based on the collected radiation;the analog to digital convertor converting the raw signal to a digital signal;a streaming video converter processing the digital signal into a video signal; anda video monitor displaying an image of the retina based on the video signal;the retinal image scanner further comprising a video transmitter, the video transmitter transmitting the video signal to a computer over a network, the computer extracting a plurality of images from the video signal;a comparator comparing at least one of the plurality of images with at least one of a plurality of stored images; anda selector selecting one of the plurality of stored images that matches the one of the plurality of images.2. The retinal image scanner of claim 1 , wherein the video monitor comprises a high-resolution liquid crystal display screen.3. The retinal image ...

Publication date: 27-02-2014

Image recognition apparatus, an image recognition method, and a non-transitory computer readable medium thereof

Number: US20140056490A1
Author: Tomokazu Kawahara
Assignee: Toshiba Corp

According to one embodiment, an image recognition apparatus includes an acquisition unit, a detection unit, an extraction unit, a calculation unit, and a matching unit. The acquisition unit is configured to acquire an image. The detection unit is configured to detect a face region of a target person to be recognized from the image. The extraction unit is configured to extract feature data of the face region. The calculation unit is configured to calculate a confidence degree of the feature data, based on a size of the face region. The matching unit is configured to calculate a similarity between the target person and each of a plurality of persons by matching the feature data with respective feature data of the plurality of persons previously stored in a database, and to recognize the target person from the plurality of persons, based on the similarities and the confidence degree.
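The confidence weighting described above can be pictured with a small sketch: the similarity of the best-matching enrolled person is gated by a confidence degree derived from the size of the detected face region. The linear size-to-confidence mapping, the `full_conf_size` constant and the acceptance threshold are assumptions for illustration only.

```python
def confidence_from_size(face_width_px, face_height_px, full_conf_size=100):
    """Map face-region size to a confidence degree in [0, 1].

    Faces at or above `full_conf_size` pixels on their shorter side get full
    confidence; smaller faces get proportionally less (an assumed mapping).
    """
    short_side = min(face_width_px, face_height_px)
    return min(1.0, short_side / float(full_conf_size))

def recognize(similarities, confidence, threshold=0.6):
    """Pick the best-matching enrolled person, gated by confidence-weighted similarity."""
    name, sim = max(similarities.items(), key=lambda kv: kv[1])
    return name if sim * confidence >= threshold else None

conf = confidence_from_size(48, 52)          # small face -> lower confidence
print(recognize({"alice": 0.9, "bob": 0.7}, conf))   # None: match rejected as unreliable
```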

Publication date: 27-02-2014

SYSTEMS AND METHODS FOR ONLINE IDENTITY VERIFICATION

Number: US20140056492A1
Author: Geosimonian Armen
Assignee:

A system controlling online access to a study course verifies the identity of an individual taking a study course over a global computer network from a first computer at a node of the network. The first computer has a biometric identification program and communicates over the network with a second computer that is at a network node other than a node of the first computer. The second computer includes study program material. The first computer operates a biometric reader, which obtains a first set of biometric data from the individual and a second set of biometric data from the individual while access is granted to course material. The biometric identification program compares the first set of data with the second set of data to make a verification of the identity of the individual and communicates the verification to the second computer. 1. A computerized method for administering a program to an individual over a computer network , the method comprising:receiving a request from a web browser or computer program launched by a computer used by the individual for access to a web page comprising program material;obtaining a first image of the individual's biometric data using a biometric reader;providing the individual with access to program material over the network;imaging the individual's biometric data with the biometric reader while the individual is accessing the program and correlating the first image with the images obtained by imaging the individual's biometric data;inserting a value into a unique data field embedded in a web page wherein the value is an access value if the biometric reader is activated and wherein the value changes to a decline value if one or more of the images does not match the first image; andterminating access to the program material as soon as one or more of the images obtained by imaging the individual's biometric data does not match the first image.2. The method of claim 1 , further comprising generating a certificate of program ...
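A rough sketch of the verification loop this abstract describes: a reference biometric image is captured when access is granted, further images are captured while the material is being accessed, and access is terminated as soon as one of them fails to match. The callback names (`capture_biometric`, `images_match`, `grant`, `terminate`) and the timing constants are placeholders, not the system's actual interfaces.

```python
import time

def administer_session(capture_biometric, images_match, grant, terminate,
                       check_interval_s=30, session_s=300):
    """Grant access after an initial capture, then re-verify periodically."""
    reference = capture_biometric()          # first set of biometric data
    grant()                                  # access to the course material
    elapsed = 0
    while elapsed < session_s:
        time.sleep(check_interval_s)
        elapsed += check_interval_s
        current = capture_biometric()        # data taken while access is granted
        if not images_match(reference, current):
            terminate()                      # revoke access as soon as a check fails
            return False
    return True

# Toy run with stubbed callbacks (always matching) and a zero-length session.
ok = administer_session(lambda: "biometric-template",
                        lambda a, b: a == b,
                        lambda: print("access granted"),
                        lambda: print("access terminated"),
                        check_interval_s=0, session_s=0)
print(ok)
```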

Publication date: 06-03-2014

Automatic Media Distribution

Number: US20140064576A1
Assignee:

In accordance with some embodiments, wireless devices may automatically form ad hoc networks to enable more efficient sharing of media between the devices and in some cases more efficient facial recognition of captured media. In some embodiments, automatic story development may be implemented at the local level without involving backend servers. 1. A method comprising:using facial recognition technology to identify an individual depicted in a digital picture; andautomatically sending the picture to the identified individual.2. The method of including using light sensor information to compare said picture to a stored picture for facial recognition.3. The method of including using the distance of the individual from an imaging device to reduce a number of stored pictures that are analyzed for facial recognition.4. The method of including using the direction to the individual from the imaging device to reduce a number of stored pictures that are analyzed for facial recognition.5. The method of including automatically composing a textural story using information about the identity of an individual depicted in said picture.6. The method of including using sentence templates with instructions for how to fill in at least one of the subject or object of the sentence.7. The method of including developing a clickable list of candidate individuals who may be depicted in the picture claim 1 , and when an entry is clicked on claim 1 , tagging the picture with the selected identity.8. The method of further including establishing an ad hoc network of wireless devices.9. The method of including establishing identities of ad hoc network users including obtaining avatars for the users.10. The method of including using the identities of users in the group to define a search field for facial recognition.11. One or more computer readable media storing instructions to enable a computer to perform a method including:using facial recognition technology to identify an individual depicted in ...

Publication date: 06-03-2014

Image processing device and recording medium storing program

Number: US20140064577A1
Assignee: BUFFALO INC

Image data in which persons are captured is accumulated in association with information indicating dates of taking the image data, the image data is subjected to person recognition processing for recognizing the captured persons, and image data in which a person of interest is captured is extracted. Actual age information of the person of interest for each piece of the extracted image data is obtained, and estimated age information of the person of interest which estimated age information is obtained by estimating the age of the captured person of interest from the image data is obtained. Age correcting information is generated on a basis of a result of statistical arithmetic operation on the actual age information calculated for each piece of the extracted image data and the estimated age information corresponding to each piece of the actual age information and estimated from the image data.
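The statistical step can be illustrated as follows: given the actual age and the estimated age for each extracted photo of the person of interest, an additive correction is derived and applied to new estimates. Using the mean signed error as the "statistical arithmetic operation" is an assumption made only for this sketch.

```python
from statistics import mean

def build_age_correction(actual_ages, estimated_ages):
    """Derive a simple additive correction from paired actual/estimated ages.

    The correction is the mean signed error of the estimator over the
    extracted photos of the person of interest (an illustrative choice).
    """
    errors = [a - e for a, e in zip(actual_ages, estimated_ages)]
    return mean(errors)

def corrected_age(estimated_age, correction):
    return estimated_age + correction

corr = build_age_correction(actual_ages=[30, 31, 33], estimated_ages=[26, 28, 29])
print(round(corrected_age(27, corr), 1))   # the estimator tends to guess several years low
```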

Publication date: 13-03-2014

Biometric Identification Systems and Methods

Number: US20140072185A1
Author: Dunlap David D., Hu Yulun
Assignee:

An exemplary embodiment of the present invention provides a method of verifying an identity of a person-to-be-identified using biometric signature data. The method comprises creating a sample database based on biometric signature data from a plurality of individuals, calculating a feature database by extracting selected features from entries in the sample database, calculating positive samples and negative samples based on entries in the feature database, calculating a key bin feature using an adaptive boosting learning algorithm, the key bin feature distinguishing each of the positive samples and negative samples, and calculating a classifier from the key bin feature for use in identifying and authenticating a person-to-be-identified. 1. A method of identity verification using biometric signature data , comprising:creating a face sample database based on a plurality of acquired face samples, each of the plurality of acquired face samples including parameters for defining different postures and expressions;calculating a feature database by extracting selected features of entries in the face sample database;calculating positive samples and negative samples based on entries in the feature database;calculating a key bin feature using a learning algorithm, the key bin feature distinguishing each of the positive samples and negative samples; andcalculating a classifier from the key bin feature for use in identifying and authenticating an acquired face image of a person-to-be-identified.2. The method of claim 1 , wherein calculating a feature database comprises calculating at least one of LBP features and LTP features from entries in the face sample database.3. The method of claim 1 , wherein calculating positive samples comprises calculating a feature absolute value distance for a same position of any two different images from one person.4. The method of claim 1 , wherein calculating negative samples comprises calculating a feature absolute value distance for a same ...
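Claims 3 and 4 suggest that training samples are absolute feature distances taken at the same positions of image pairs; the sketch below builds such positive (same person) and negative (different persons) samples. The pairing scheme and data layout are assumptions, and the boosting stage itself is omitted.

```python
def abs_feature_distance(feat_a, feat_b):
    """Element-wise absolute distance between feature vectors taken at the
    same positions of two face images."""
    return [abs(a - b) for a, b in zip(feat_a, feat_b)]

def build_training_samples(person_to_features):
    """Positive samples: distances between two images of the same person.
    Negative samples: distances between images of different persons."""
    positives, negatives = [], []
    people = list(person_to_features.items())
    for i, (name_a, feats_a) in enumerate(people):
        # same-person pairs
        for x in range(len(feats_a)):
            for y in range(x + 1, len(feats_a)):
                positives.append(abs_feature_distance(feats_a[x], feats_a[y]))
        # different-person pairs
        for name_b, feats_b in people[i + 1:]:
            for fa in feats_a:
                for fb in feats_b:
                    negatives.append(abs_feature_distance(fa, fb))
    return positives, negatives

pos, neg = build_training_samples({
    "p1": [[0.2, 0.8], [0.3, 0.7]],
    "p2": [[0.9, 0.1]],
})
print(len(pos), len(neg))   # 1 positive pair, 2 negative pairs
```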

Publication date: 20-03-2014

Application of Z-Webs and Z-factors to Analytics, Search Engine, Learning, Recognition, Natural Language, and Other Utilities

Number: US20140079297A1
Assignee:

Here, we introduce Z-webs, including Z-factors and Z-nodes, for the understanding of relationships between objects, subjects, abstract ideas, concepts, or the like, including face, car, images, people, emotions, mood, text, natural language, voice, music, video, locations, formulas, facts, historical data, landmarks, personalities, ownership, family, friends, love, happiness, social behavior, voting behavior, and the like, to be used for many applications in our life, including on the search engine, analytics, Big Data processing, natural language processing, economy forecasting, face recognition, dealing with reliability and certainty, medical diagnosis, pattern recognition, object recognition, biometrics, security analysis, risk analysis, fraud detection, satellite image analysis, machine generated data analysis, machine learning, training samples, extracting data or patterns (from the video, images, and the like), editing video or images, and the like. Z-factors include reliability factor, confidence factor, expertise factor, bias factor, and the like, which is associated with each Z-node in the Z-web. 1. A method for recognition of faces from a still image or video frame , said method comprising:receiving a still image or video frame through an input interface;preprocessing said still image or video frame;recognizing a first class of image for said still image or video frame;if said first class of image for said still image or video frame comprises face or human, then sending said still image or video frame to a face recognizer module;said face recognizer module accessing a first basis function from a first library of basis functions, stored in a first basis function storage, corresponding to a first component of face;said face recognizer module accessing a second basis function from a second library of basis functions, stored in a second basis function storage, corresponding to a second component of face;a computing processor applying said first basis function ...

Publication date: 20-03-2014

Digital Image Search System And Method

Number: US20140079298A1
Assignee: Facedouble, Inc.

A method and system for identifying an unknown individual from a digital image is disclosed herein. In one embodiment, the present invention allows an individual to photograph a facial image of an unknown individual, transfer that facial image to a server for processing into a feature vector, and then search social networking Web sites to obtain information on the unknown individual. The Web sites comprise myspace.com, facebook.com, linkedin.com, www.hi5.com, www.bebo.com, www.friendster.com, www.igoogle.com, netlog.com, and orkut.com. A method of networking is also disclosed. A method for determining unwanted individuals on a social networking website is also disclosed. 1. A system for utilizing facial recognition technology for identifying an unknown individual from a digital image , the system comprising:a server configured to acquire an unknown facial image of an individual transmitted from a sender over a network to the server, the server configured to analyze the facial image to determine if the unknown facial image is acceptable, the server configured to process the unknown facial image to create a processed image, the server configured to compare the processed image to a plurality of database processed images, the server configured to match the processed image to a database processed image of the plurality of database processed images to create matched images, wherein the database processed image is a facial image of the individual from the individual's Web page of a Web site, the Web page containing personal information of the individual and a uniform resource locator for the Web page, and the server configured to transmit the database processed image, the personal information of the individual and the uniform resource locator for the Web page to the sender over the network.2. The system according to wherein the Web site is a publicly available Web site.3. The system according to wherein the personal information of the individual comprises the individual's ...

Publication date: 20-03-2014

PERSON RECOGNITION APPARATUS AND METHOD THEREOF

Number: US20140079299A1
Assignee: KABUSHIKI KAISHA TOSHIBA

According to one embodiment, an apparatus includes input unit, detecting unit, extraction unit, storage, selection unit, determination unit, output unit, and setting unit. The selection unit selects operation or setting modes. In operation mode, it is determined whether captured person is preregistered person. In setting mode, threshold for the determination is set. The determination unit determines, as registered person and when operation mode is selected, person with degree of similarity between extracted facial feature information and stored facial feature information of greater than or equal to threshold. The setting unit sets, when setting mode is selected, threshold based on first and second degrees of similarity. First degree of similarity is degree of similarity between facial feature information of the registered person and the stored facial feature information. Second degree of similarity is degree of similarity between facial feature information of person other than registered person and stored facial feature information. 1. A person recognition apparatus comprising:an image input unit that receives image data captured by a camera;a face detecting unit that detects a face region representing a face of a person from the received image data;a facial feature information extraction unit that extracts facial feature information indicating a feature of the face of the person from the detected face region;a facial feature information storage unit that stores therein facial feature information of a person by each person; in the operation mode, it is determined whether the person captured by the camera is a registered person who has been preregistered, and', 'in the setting mode, a threshold value to be used upon the determination is set;, 'a mode selection unit that selects one of an operation mode and a setting mode, wherein'}a person determination unit that determines, as the registered person and when the operation mode is selected, a person with a degree of ...
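The setting mode described above can be sketched as choosing a threshold that separates the registered person's own similarities (first degrees) from other persons' similarities (second degrees). Taking the midpoint between the lowest genuine and the highest impostor similarity is an illustrative rule, not the publication's method.

```python
def set_threshold(genuine_sims, impostor_sims):
    """Choose a decision threshold between the registered person's own
    similarities and other persons' similarities (assumed midpoint rule)."""
    lowest_genuine = min(genuine_sims)
    highest_impostor = max(impostor_sims)
    return (lowest_genuine + highest_impostor) / 2.0

def is_registered(similarity, threshold):
    """Operation mode: accept when similarity is at or above the threshold."""
    return similarity >= threshold

thr = set_threshold(genuine_sims=[0.82, 0.88, 0.91], impostor_sims=[0.35, 0.41, 0.57])
print(round(thr, 3), is_registered(0.75, thr), is_registered(0.5, thr))
```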

Publication date: 27-03-2014

Method And System For Attaching A Metatag To A Digital Image

Number: US20140086457A1
Assignee: Facedouble, Inc.

A system and method for tagging an image of an individual in a plurality of photos is disclosed herein. A feature vector of an individual is used to analyze a set of photos on a social networking website such as www.facebook.com to determine if an image of the individual is present in a photo of the set of photos. Photos having an image of the individual are tagged preferably by listing a URL or URI for each of the photos in a database. 1. A system for organizing a plurality of digital photos , the system comprising:a network;a database comprising a first plurality of photos of an image of an individual;a server engine for processing the first plurality of photos to generate a feature vector for the image of the individual;a second plurality of photos located on a social networking web site;wherein the server engine is configured to analyze the second plurality of photos to determine if an image of the individual is present in a photo of the second plurality of photos, the analysis comprising determining if an image in each of the photos of the second plurality of photos matches the feature vector for the individual;wherein the server engine is configured to identify each of the photos of the second plurality of photos having an image of the individual to create a third plurality of photos; andwherein the server engine is configured to tag each of the photos of the third plurality of photos to identify the image of the individual in each of the third plurality of photos.2. The system according to wherein the step of tagging comprises listing a URL or URI for each of the photos of the third plurality of photos in a database.3. The system according to wherein the step of tagging comprises inserting a tag code on each of the photos of the third plurality of photos.4. The system according to wherein the image of the individual is a facial image of the individual.5. The system according to wherein the social networking web site is facebook.com and the tag code is a ...

Publication date: 03-04-2014

Method And System For Attaching A Metatag To A Digital Image

Number: US20140093141A1
Assignee: Facedouble, Inc.

A system and method for tagging an image of an individual in a plurality of photos is disclosed herein. A feature vector of an individual is used to analyze a set of photos on a social networking website such as www.facebook.com to determine if an image of the individual is present in a photo of the set of photos. Photos having an image of the individual are tagged preferably by listing a URL or URI for each of the photos in a database. 1. A method , for tagging an image of an individual , comprising:storing on a server supporting a service usable by multiple users through respective remote user computing devices accessing the service over a distributed network a plurality of reference photos comprising identified facial images of individual users of the service and, for each of a plurality of identified facial images, an identity of the individual and at least one reference feature vector generated from the facial images to provide a plurality of stored reference feature vectors, the server comprising at least one processor that accesses at least one storage media and being programmed with executable instructions;processing, by the server, an unknown facial image of an individual in a subject photo to generate a subject feature vector for the individual in the subject photo;determining, by the server, coordinates defining a position of the unknown facial image in the subject photo;determining, by the server, an identity of the unknown facial image, wherein the determining comprises comparing the subject feature vector to one or more stored reference feature vectors using a matching algorithm; andupon determining an identity of the unknown facial image, tagging, by the server, the subject photo to identify the facial image of the individual,wherein the tagging comprises storing in a storage media the coordinates defining a position of the facial image in the subject photo and an identifier for the individual, the coordinates and identifier for the individual being ...
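A compact sketch of the matching-and-tagging flow: the subject feature vector is compared against stored reference vectors, and on a match the photo URL, the face coordinates and the matched identifier are persisted. Euclidean distance, the `max_distance` cut-off and the `tags_store` list are assumptions standing in for the server's actual matching algorithm and storage media.

```python
import math

def feature_distance(v1, v2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

def tag_photo(subject_vector, face_coords, reference_vectors, tags_store,
              photo_url, max_distance=0.5):
    """Match the subject feature vector against stored reference vectors and,
    on a match, persist the face position and the matched identity."""
    best_id, best_d = None, float("inf")
    for person_id, ref in reference_vectors.items():
        d = feature_distance(subject_vector, ref)
        if d < best_d:
            best_id, best_d = person_id, d
    if best_id is not None and best_d <= max_distance:
        tags_store.append({"photo": photo_url, "person": best_id,
                           "coords": face_coords})
        return best_id
    return None

tags = []
refs = {"user42": [0.1, 0.9, 0.3], "user7": [0.8, 0.2, 0.5]}
print(tag_photo([0.12, 0.88, 0.31], (40, 60, 120, 140), refs, tags,
                "https://example.com/photo.jpg"))
print(tags)
```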

Publication date: 03-04-2014

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM

Number: US20140093142A1
Assignee: NEC Corporation

Disclosed is an information processing apparatus capable of performing face recognition with high accuracy. An information processing apparatus, comprising: generation means for generating, based on an original facial image, a plurality of facial images, each of the facial images corresponds to the original facial image facing to a different direction each other; feature value extraction means for extracting feature values of the facial images based on the plurality of facial images generated by the generation means; feature value synthesis means for generating a synthesized feature value by synthesizing the feature values which are extracted by the feature value extraction means; and recognition means for performing face recognition based on the synthesized feature value. 1. An information processing apparatus , comprising:a generation unit which generates, based on an original facial image, a plurality of facial images, each of the facial images corresponds to the original facial image is facing to a different direction each other;a feature amount extraction unit which extracts feature amounts of the facial images based on the plurality of facial images generated by the generation unit;a feature amount synthesis unit which generates a synthesized feature amount by synthesizing the feature amounts which are extracted by the feature amount extraction unit; anda recognition unit which performs facial recognition based on the synthesized feature amount.2. The information processing apparatus according to claim 1 , further comprising:a feature amount projection unit which reduces an amount of information of the synthesized feature amount by performing a projection conversion to the synthesized feature amount.3. The information processing apparatus according to claim 1 , further comprising:a feature amount correction unit which performs correction so as to decrease a feature amount of a face area, which has low accuracy in the plurality of generation, in the plurality ...

Publication date: 03-04-2014

PHOTO SHARING SYSTEM WITH FACE RECOGNITION FUNCTION

Number: US20140095626A1
Author: Hsi Chen-Ning
Assignee: PRIMAX ELECTRONICS LTD.

A photo sharing system with a face recognition function includes a photo browser and a photo sharing platform. The photo browser allows a user to browse plural photos, and includes a user interface that shows the plural photos; a photo classification module that is activated to automatically classify the plural photos into groups according to face images contained in the plural photos and identified by the face recognition function while showing the groups of photos on the user interface for selection; and a photo transmission module that is activated to automatically pack and transmit a selected one of the groups of photos to a photo sharing platform in the cloud. The photo sharing platform includes a photo file management module that manages at least a photo folder that stores the selected group of photos. 1. A photo sharing system with a face recognition function , the photo sharing system comprising: a user interface that shows the plural photos;', 'a photo classification module that is activated to automatically classify the plural photos into groups according to face images contained in the plural photos and identified by the face recognition function while showing the groups of photos on the user interface for selection; and', 'a photo transmission module that is activated to automatically pack and transmit a selected one of the groups of photos to a photo sharing platform; and, 'a photo browser included in an electronic device that allows a user to browse plural photos by operating the electronic device, wherein the photo browser comprises 'a photo file management module that manages at least a photo folder that stores the selected group of photos received from the electronic device to be accessible by a specified receiver.', 'the photo sharing platform included in a cloud server, wherein the cloud server is in communication with the electronic device through an internet, and the photo sharing platform comprising2. The photo sharing system according to ...

Publication date: 10-04-2014

AUTHENTICATION APPARATUS, AUTHENTICATION METHOD, AND PROGRAM

Number: US20140099005A1
Author: Mogi Hideaki
Assignee: SONY CORPORATION

A face authentication procedure is performed on a face detected in a visible light image of a scene, and correctness of an authentication determination of the face authentication procedure is verified by comparing the visible light image to an infrared light image of the same scene. The verification may be performed by comparing the luminance and/or the size of an eye region in the visible light image to the luminance and/or the size of the eye region in the infrared light image. 1. An electronic device , comprising:an imaging section comprising at least one image pickup unit, the imaging section being configured to image a scene and to generate a visible light image of the scene and an infrared light image of the scene;a face authentication unit configured to perform a face authentication procedure on a face detected in the visible light image;a verification unit configured to check an authentication determination of the face authentication unit by comparing the visible light image and the infrared light image.2. The electronic device of claim 1 ,wherein the verification unit is configured to check the authentication determination of the face authentication unit by comparing luminance data corresponding to an eye region in the visible light image with luminance data corresponding to an eye region in the infrared light image.3. The electronic device of claim 2 ,wherein the verification unit is configured to indicate that an authentication determination of the face authentication unit is suspect when a difference between the luminance data corresponding to the eye region in the visible light image and the luminance data corresponding to the eye region in the infrared light image is small.4. The electronic device of claim 3 ,wherein the luminance data corresponding to the eye region in the visible light image and the luminance data corresponding to the eye region in the infrared light image comprise binarized data.5. The electronic device of claim 2 , wherein the ...
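The verification idea, comparing the eye region under visible and infrared light, can be sketched as follows; the authentication result is flagged as suspect when the two luminance values are too close. Plain nested-list "images", mean luminance and the `min_diff` constant are simplifications for illustration only.

```python
from statistics import mean

def eye_region_luminance(image, eye_box):
    """Mean luminance inside a rectangular eye region of a 2-D luminance image."""
    x0, y0, x1, y1 = eye_box
    return mean(image[y][x] for y in range(y0, y1) for x in range(x0, x1))

def authentication_is_suspect(visible_img, infrared_img, eye_box, min_diff=30):
    """Flag the face-authentication result as suspect when the eye region looks
    too similar under visible and infrared light; `min_diff` is an assumed
    tuning value."""
    diff = abs(eye_region_luminance(visible_img, eye_box)
               - eye_region_luminance(infrared_img, eye_box))
    return diff < min_diff

# Toy 4x4 "images": the IR eye region is much darker than the visible one.
visible = [[200] * 4 for _ in range(4)]
infrared = [[40] * 4 for _ in range(4)]
print(authentication_is_suspect(visible, infrared, (1, 1, 3, 3)))   # False -> looks live
```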

Publication date: 04-01-2018

Biological information detection device using second light from target onto which dots formed by first light are projected

Number: US20180000359A1
Author: Hisashi Watanabe

A biological information detection device includes a light source, an image capturing device, and one or more arithmetic circuits. The light source projects dots formed by light onto a target including a living body. The image capturing device includes photodetector cells and generates an image signal representing an image of the target onto which the dots are projected. The one or more arithmetic circuits detect a portion corresponding to at least a part of the living body in the image by using the image signal and calculate biological information of the living body by using image signal of the portion.

Publication date: 02-01-2020

RESPIRATOR FITTING DEVICE AND METHOD

Number: US20200001124A1
Author: GUGINO Michael
Assignee:

A system and method for automated respirator fit testing by comparing three-dimensional (3D) images are disclosed. An example embodiment is configured to: obtain at least one three-dimensional facial image of an individual at an initial visit (Visit X); capture at least one current 3D facial image of the individual at a subsequent visit (Visit X+n); convert the Visit X image and the Visit X+n image to numerical data for computation and analysis; identify reference points in the Visit X data and the Visit X+n data; determine if the Visit X data and the Visit X+n data is sufficiently aligned; determine if any differences between the VISIT X data and the VISIT X+n data are greater than a pre-defined set of Allowable Deltas (ADs); and record a pass status if the differences between the VISIT X data and the VISIT X+n data are not greater than the pre-defined ADs. 1. A method for performing automated respirator mask fit testing , the method comprising:obtaining, with one or more processors, at least one initial two-dimensional (2D) or three-dimensional (3D) facial image of an individual from an initial respirator mask fitting visit;obtaining, with the one or more processors, at least one current 2D or 3D facial image of the individual from a subsequent respirator mask fitting visit;converting, with the one or more processors, the initial facial image and the current facial image to numerical initial visit data and subsequent visit data for analysis, the initial visit data and the subsequent visit data representative of facial features, facial dimensions, and/or facial locations on the face of the individual;identifying, with the one or more processors, facial reference points in the initial visit data and the subsequent visit data;determining, with the one or more processors, whether the facial reference points in the initial visit data and the subsequent visit data meet alignment criteria; and generating, with the one or more processors, a mask fit pass indication ...
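The pass/fail decision can be pictured as a per-reference-point comparison against pre-defined Allowable Deltas (ADs), assuming the two point sets have already been aligned to a common frame; the point names and AD values below are made up for the example.

```python
def fit_test_passes(visit_x_points, visit_xn_points, allowable_deltas):
    """Compare corresponding facial reference points from the initial visit
    and the current visit against per-point allowable deltas (ADs).

    Points are (x, y, z) tuples assumed to be already aligned; alignment
    itself is outside this sketch.
    """
    for name, initial in visit_x_points.items():
        current = visit_xn_points[name]
        delta = max(abs(i - c) for i, c in zip(initial, current))
        if delta > allowable_deltas[name]:
            return False          # difference exceeds the pre-defined AD
    return True                   # record a pass status

initial = {"nose_bridge": (0.0, 1.0, 2.0), "chin": (0.0, -3.0, 1.5)}
current = {"nose_bridge": (0.1, 1.1, 2.0), "chin": (0.0, -3.4, 1.6)}
print(fit_test_passes(initial, current, {"nose_bridge": 0.2, "chin": 0.3}))
```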

Publication date: 04-01-2018

ROBOT CONTROL USING GESTURES

Number: US20180001480A1
Author: Liu Xinmin, MAO Yinian
Assignee:

A method and a device for operating a robot are provided. According to an example of the method, information of a first gesture is acquired from a group of gestures of an operator, each gesture from the group of gestures corresponding to an operation instruction from a group of operation instructions. A first operation instruction from the group of operation instructions is obtained based on the acquired information of the first gesture, the first operation corresponding to the first gesture. The first operation instruction is executed. 1. A method of operating a robot , comprising:acquiring information of a first gesture from a group of gestures of an operator, each gesture from the group of gestures corresponding to an operation instruction from a group of operation instructions;obtaining, based on the acquired information of the first gesture, a first operation instruction from the group of operation instructions, the first operation instruction corresponding to the first gesture; andexecuting the first operation instruction.2. The method according to claim 1 , wherein acquiring the information of the first gesture of the operator comprises:capturing an image using a camera;identifying a Region Of Interest (ROI) from the captured image;determining whether the ROI indicates the operator; andacquiring, based on the captured image, the information of the first gesture if the ROI indicates the operator.3. The method according to claim 2 , wherein determining whether the ROI indicates the operator comprises:calculating a similarity between face feature information of the ROI and pre-configured face feature information of the operator; anddetermining that the ROI indicates the operator if the calculated similarity is greater than a predetermined threshold.4. The method according to claim 2 , wherein the image is a first image claim 2 , the method further comprises capturing a second image using the camera claim 2 , the determining whether the ROI indicates the operator ...

Publication date: 07-01-2021

VEHICULAR IN-CABIN FACIAL TRACKING USING MACHINE LEARNING

Number: US20210001862A1
Assignee: AFFECTIVA, INC.

Vehicular in-cabin facial tracking is performed using machine learning. In-cabin sensor data of a vehicle interior is collected. The in-cabin sensor data includes images of the vehicle interior. A set of seating locations for the vehicle interior is determined. The set is based on the images. The set of seating locations is scanned for performing facial detection for each of the seating locations using a facial detection model. A view of a detected face is manipulated. The manipulation is based on a geometry of the vehicle interior. Cognitive state data of the detected face is analyzed. The cognitive state data analysis is based on additional images of the detected face. The cognitive state data analysis uses the view that was manipulated. The cognitive state data analysis is promoted to a using application. The using application provides vehicle manipulation information to the vehicle. The manipulation information is for an autonomous vehicle. 1. A computer-implemented method for facial analysis comprising:collecting in-cabin sensor data of a vehicle interior, wherein the in-cabin sensor data includes images of the vehicle interior;determining a set of seating locations for the vehicle interior, based on the images;scanning the set of seating locations for performing facial detection for each of the seating locations using a facial detection model;manipulating a view of a detected face, based on a geometry of the vehicle interior; andanalyzing cognitive state data of the detected face, based on additional images of the detected face, using the view that was manipulated.2. The method of further comprising promoting the cognitive state data to a using application.3. The method of further comprising providing vehicle manipulation information to the vehicle from the using application.4. The method of wherein the using application uses network connectivity remote from the vehicle to provide the manipulation information.5. The method of wherein the manipulation ...

Publication date: 06-01-2022

BIOMETRIC BASED SELF-SOVEREIGN INFORMATION MANAGEMENT

Number: US20220004610A1
Assignee:

The present teaching relates to method, system, medium, and implementations for authenticating a user. A first request is received to set up authentication information with respect to a user, wherein the first request specifies a type of information to be used for future authentication of the user. It is determined whether the type of information related to the user poses risks based on a reverse information search result. The type of information for being used for future authentication of the user is rejected when the type of information is determined to pose risks. 1. A method , implemented on a machine having at least one processor , storage , and a communication platform for authenticating a user , comprising:receiving a first request to set up authentication information with respect to a user, wherein the authentication information specifies a type of information to be used for future authentication of the user; searching information related to the user from at least one accessible source, and', 'determining that the searched information comprises the type of information to be used for future authentication of the user; and, 'performing, in response to receiving the first request, a reverse information search byadjusting, based on the searched information comprising the type of information to be used for future authentication of the user, the authentication information with respect to the user.2. The method of claim 1 , further comprising:determining, based on the reverse information search, whether the type of information related to the user poses risks.3. The method of claim 2 , wherein the step of adjusting comprises:if the type of information related to the user poses risks, rejecting the type of information for being used for future authentication of the user.4. The method of claim 2 , wherein the step of adjusting comprises:if the type of information related to the user poses risks, replacing the type of information with another type of information ...

Publication date: 06-01-2022

AUTHENTICATION SYSTEM AND AUTHENTICATION METHOD

Number: US20220004612A1
Assignee: Glory Ltd.

An authentication system includes user information acquisition circuitry configured to acquire user information of a user, the user information including image information of the user or voice information of the user; authentication information extraction circuitry configured to extract, from the user information, authentication information corresponding to a plurality of types of authentication; and authentication circuitry configured to perform an authentication procedure, using the authentication information, to authenticate the user. 1. An authentication system , comprising: user information acquisition circuitry configured to acquire user information of a user , the user information including image information of the user or voice information of the user;authentication information extraction circuitry configured to extract, from the user information, authentication information corresponding to a plurality of types of authentication; andauthentication circuitry configured to perform an authentication procedure, using the authentication information, to authenticate the user.2. The authentication system according to claim 1 , wherein the authentication information extraction circuitry extracts claim 1 , as the authentication information claim 1 , information including a face image of the user claim 1 , a voice of the user claim 1 , a password that the user has uttered claim 1 , and/or a degree of matching between the face and the voice of the user.3. The authentication system according to claim 1 , further comprising processing circuitry configured to control a process related to acquisition of the authentication information claim 1 , based on the user information.4. The authentication system according to claim 3 , wherein the processing circuitry is further configured to control a display to display a password in a case that a face image is acquired as the user information.5. The authentication system according to claim 4 , wherein the processing circuitry is ...

Publication date: 06-01-2022

CARDIAC MONITORING SYSTEM

Number: US20220004658A1
Assignee:

An identification system including a first biometric identifier, a second biometric identifier, a first cardiac identifier logically related to the first biometric identifier, a second cardiac identifier logically related to the second biometric identifier, where the identity of a user is verified using the biometric identifiers and the cardiac identifiers. 1. An identification system including:a first biometric identifier;a second biometric identifier;a first cardiac identifier logically related to the first biometric identifier;a second cardiac identifier logically related to the second biometric identifier,wherein,the identity of a user is verified using the biometric identifiers and the cardiac identifiers.2. The system of claim 1 , wherein the first biometric identifier is gathered simultaneously with the first cardiac identifier.3. The system of claim 1 , wherein the second biometric identifier is gathered simultaneously with the second cardiac identifier.4. The system of claim 1 , wherein the first biometric identifier is one of a fingerprint , a facial feature or an iris pattern.5. The system of claim 1 , wherein the second biometric identifier is one of a fingerprint , a facial feature or an iris pattern.6. The system of claim 1 , wherein the first cardiac identifier is at least one point on an electrocardiogram.7. The system of claim 6 , wherein the second cardiac identifier is at least one point on an electrocardiogram.8. The system of claim 1 , wherein the first and second cardiac identifiers are normalized.9. The system of claim 8 , wherein the normalized cardiac identifiers are logically related to the first biometric identifier and the second biometric identifier.10. The system of claim 1 , wherein the biometric and cardiac identifiers are gathered from a mobile communication device.11. A method of identifying a user of a device , the method including the steps of:gathering a first biometric identifier;gathering a second biometric identifier; ...

Publication date: 06-01-2022

PARTICIPANT IDENTIFICATION FOR BILL SPLITTING

Number: US20220005045A1
Assignee: Capital One Services, LLC

Disclosed herein are system, method, and computer program product embodiments for providing recommendations for splitting bills. The approaches disclosed include the ability to obtain information about a bill to be split (such as a photo of the bill), and then use several machine learning models to determine the ‘who,’ ‘what,’ and ‘where’ of the underlying transaction. In particular, machine learning models described herein are used to perform facial recognition of a ‘selfie’ taken when a transaction was made against social media accounts to determine participants of the transaction. The machine learning models may also identify expected pricing from data about a merchant associated with the transaction, and expected amounts for each participant based on the expected pricing. 1. A computer implemented method , comprising:receiving, by one or more computing devices, transaction data corresponding to a transaction;retrieving, by the one or more computing devices, a photograph associated with the transaction, the photograph including an image of participants in the transaction;executing, by the one or more computing devices, a plurality of machine learning models to identify the participants in the transaction using facial recognition based on the image, and an expected individual allocation associated with the transaction based on a location associated with the transaction and the transaction data;calculating, by the one or more computing devices, transaction split information for the transaction comprising an individual allocation for the participants in the transaction based on the expected individual allocation; andproviding, by the one or more computing devices, the transaction split information for confirmation and assessment of the individual allocation to the participants in the transaction.2. The computer implemented method of claim 1 , further comprising:retrieving, by the one or more computing devices, historical transaction preferences associated with the ...

Publication date: 04-01-2018

SHELF SPACE ALLOCATION MANAGEMENT DEVICE AND SHELF SPACE ALLOCATION MANAGEMENT METHOD

Number: US20180002109A1
Author: YAMASHITA Nobuyuki
Assignee:

A shelf space allocation management device manages products allocated on shelves in a store by use of an imaging device. The shelf space allocation management device acquires an image including a position assumed to be changed in allocation status of each product on each shelf; it determines whether each product reflected in the image matches one of pre-recorded images, thus executing a product allocation inspection. Herein, the shelf space allocation management device specifies a position at which a person causes any change in the allocation status of each product on each shelf, and therefore it may control the imaging device to capture an image including the position. It is possible to carry out a product allocation inspection for each period determined in advance depending on the type of each product, or it is possible to carry out a product allocation inspection being triggered by a customer purchasing each product. 1. A shelf space allocation management device for managing products allocated on a shelf , comprising:an image acquisition part configured to acquire an image including a position assumed to be changed in an allocation status of each product on the shelf;an allocation status determination part configured to determine whether a type and an allocation status of each product reflected in the image match a predetermined type and a predetermined allocation status of each product; andan execution determination part configured to execute a product allocation inspection based on a determination result of the allocation status determination part.2. The shelf space allocation management device according to claim 1 , further comprising a position specifying part configured to specify the position assumed to be changed in the allocation status of each product on the shelf based on the image , wherein the image acquisition part captures the image covering a predetermined scope based on the position specified by the position specifying part.3. The shelf ...

Publication date: 06-01-2022

SYSTEM AND METHOD OF GENERATING FACIAL EXPRESSION OF A USER FOR VIRTUAL ENVIRONMENT

Number: US20220005246A1
Assignee:

The present invention relates to a method of generating a facial expression of a user for a virtual environment. The method comprises obtaining a video and an associated speech of the user. Further, extracting in real-time at least one of one or more voice features and one or more text features based on the speech. Furthermore, identifying one or more phonemes in the speech. Thereafter, determining one or more facial features relating to the speech of the user using a pre-trained second learning model based on the one or more voice features, the one or more phonemes, the video and one or more previously generated facial features of the user. Finally, generating the facial expression of the user corresponding to the speech for an avatar representing the user in the virtual environment. 1. A method of generating a facial expression of a user for a virtual environment , the method comprises:obtaining, by a computing system, a video and an associated speech of the user;extracting in real-time, by the computing system, at least one of one or more voice features and one or more text features based on the speech of the user;identifying in real-time, by the computing system, one or more phonemes in the speech using a pre-trained first learning model based on at least one of the one or more voice features and the one or more text features;determining in real-time, by the computing system, one or more facial features relating to the speech of the user using a pre-trained second learning model based on the one or more voice features, the one or more phonemes, the video and one or more previously generated facial features of the user; andgenerating in real-time, by the computing system, the facial expression of the user corresponding to the speech for an avatar representing the user in the virtual environment based on the one or more facial features.2. The method as claimed in claim 1 , wherein obtaining the video and the associated speech comprises one of:receiving the video ...

Publication date: 03-01-2019

SHELF SPACE ALLOCATION MANAGEMENT DEVICE AND SHELF SPACE ALLOCATION MANAGEMENT METHOD

Number: US20190002201A1
Author: YAMASHITA Nobuyuki
Assignee:

A shelf space allocation management device manages products allocated on shelves aligned in a store by use of an image captured by an imaging device. The shelf space allocation management device acquires an image including a position assumed to be changed in allocation status of each product on each shelf, it determines whether the type and the allocation status of each product reflected in the image match the predetermined type and the predetermined allocation status; then, it determines whether to execute a product allocation inspection based on the determination result. Herein, the shelf space allocation management device specifies a position at which a person conducts a behavior to cause any change in the allocation status of each product on each shelf, and therefore it may control the imaging device to capture an image including the position. It is possible to carry out a product allocation inspection for each period determined in advance depending on the type of each product, or it is possible to carry out a product allocation inspection being triggered by a customer purchasing each product. 111-. (canceled)12. A shelf space allocation management device for managing products allocated on a shelf , the shelf space allocation management device comprising:a memory storing instructions; andone or more processors coupled to the memory, wherein the one or more processors are configured to execute the instructions to:acquire a first image taken by a mobile imaging device that moves behind a person under an automatic tracking mode;determine whether a person captured in the first image has performed a predetermined action on the shelf;specify position information of the shelf for a product allocation inspection based on a determination result of an action of the person;change the mode from the automatic tracking mode to a position control mode after specifying the position information of the shelf;move the mobile imaging device to an image-capture position based on ...

Publication date: 01-01-2015

DEFORMABLE EXPRESSION DETECTOR

Number: US20150003672A1
Assignee: QUALCOMM INCORPORATED

A method for deformable expression detection is disclosed. For each pixel in a preprocessed image, a sign of a first directional gradient component and a sign of a second directional gradient component are combined to produce a combined sign. Each combined sign is coded into a coded value. An expression in an input image is detected based on the coded values. 1. A method for deformable expression detection , comprising:combining, for each pixel in a preprocessed image, a sign of a first directional gradient component and a sign of a second directional gradient component to produce a combined sign;coding each combined sign into a coded value; anddetecting an expression in an input image based on the coded values.2. The method of claim 1 , further comprising preprocessing the input image to produce the preprocessed image claim 1 , comprising:aligning an input image based on a region of interest (ROI);cropping the ROI in the input image;scaling the ROI; andequalizing a histogram of the ROI.3. The method of claim 1 , wherein the directional gradient components are orthonormal.4. The method of claim 3 , wherein the directional gradient components are vertical and horizontal directional gradient components or 45-degree and 135-degree directional gradient components.5. The method of claim 1 , wherein the coding comprises coding each combined sign into a coded value based on the signs of the directional gradient components without determining the value of the magnitude of the directional gradient components.6. The method of claim 1 , wherein the expression comprises smiling claim 1 , blinking or anger.7. The method of claim 1 , wherein the detecting an expression comprises classifying a feature vector using a machine learning algorithm.8. The method of claim 7 , wherein the machine learning algorithm is a Support Vector Machines (SVM) algorithm claim 7 , a boosting algorithm or a K-Nearest Neighbors (KNN) algorithm.9. The method of claim 1 , further comprising updating a ...
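The sign-only coding of the two directional gradient components can be sketched directly: forward differences give the components, only their signs are combined into a 2-bit code, and a histogram of the codes forms a feature vector that a classifier such as SVM or KNN could consume. The use of forward differences and of a plain histogram is an assumption of this sketch, not the publication's exact pipeline.

```python
def sign(v):
    return 1 if v >= 0 else 0

def code_gradient_signs(image):
    """For each interior pixel, combine the signs of the horizontal and
    vertical gradient components into a 2-bit coded value (0..3).

    Only the signs are used; the gradient magnitudes are never computed,
    matching the idea of sign-only coding.
    """
    h, w = len(image), len(image[0])
    codes = []
    for y in range(h - 1):
        for x in range(w - 1):
            gx = image[y][x + 1] - image[y][x]      # first directional component
            gy = image[y + 1][x] - image[y][x]      # second directional component
            codes.append((sign(gx) << 1) | sign(gy))
    return codes

def code_histogram(codes):
    """Feature vector of coded-value frequencies, ready for an SVM/KNN classifier."""
    hist = [0, 0, 0, 0]
    for c in codes:
        hist[c] += 1
    total = float(len(codes))
    return [v / total for v in hist]

img = [[10, 12, 11],
       [ 9, 13, 14],
       [ 8,  7, 15]]
print(code_histogram(code_gradient_signs(img)))
```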

Publication date: 01-01-2015

LIVENESS DETECTION

Number: US20150003692A1
Author: CAVALLINI Alessio
Assignee:

The present disclosure concerns a method of verifying the presence of a living face in front of a camera (), the method including: capturing by said camera a sequence of images of a face; detecting a plurality of features of said face in each of said images; measuring parameters associated with said detected features to determine whether each of a plurality of liveness indicators is present in said images; determining whether or not said face is a living face based on the presence in said images of a combination of at least two of said liveness indicators. 1. (canceled)2. A computer-implemented method comprising:receiving a sequence of images of a face;for each facial feature of a plurality of facial features of the face, obtaining a feature score that reflects an amount of motion exhibited by the facial feature in the sequence of images;for each subset of multiple different subsets of the plurality of facial features:aggregating the respective scores associated with the features of the subset, anddetermining whether the aggregated score for the subset satisfies a liveness criteria associated with the subset; andclassifying the face as a reproduction of a face based at least on determining that none of multiple different subsets has an aggregated score that satisfies the liveness criteria associated with the respective subset.3. The method of claim 2 , wherein the feature score is a binary indicator that indicates whether or not the associated facial feature indicates liveness.4. The method of claim 3 , wherein the binary indicator that indicates whether or not the associated facial feature indicates liveness is equal to one if a parameter score for the facial feature exceeds a threshold associated with the facial feature claim 3 , and is equal to zero if the parameter score does not exceed the threshold associated with the facial feature.5. The method of claim 4 , wherein the parameter score for the facial feature is equal to a total number of images in the ...
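The aggregation logic in the claims can be illustrated as follows: each facial feature contributes a binary liveness indicator, each subset of features has its own criterion on the aggregated score, and the face is classified as a reproduction only if no subset meets its criterion. The particular features, subsets and criteria below are invented for the example.

```python
def classify_face(feature_scores, subsets, liveness_criteria):
    """Classify a face as live or as a reproduction.

    `feature_scores` maps each facial feature to a binary liveness indicator
    (1 = the feature moved enough across the image sequence, 0 = it did not).
    """
    for subset in subsets:
        aggregated = sum(feature_scores[f] for f in subset)
        if aggregated >= liveness_criteria[subset]:
            return "live"
    return "reproduction"

scores = {"left_eye": 1, "right_eye": 1, "mouth": 0, "head_pose": 0}
subsets = [("left_eye", "right_eye"), ("mouth", "head_pose")]
criteria = {("left_eye", "right_eye"): 2, ("mouth", "head_pose"): 1}
print(classify_face(scores, subsets, criteria))   # both eyes blinked -> "live"
```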

Publication date: 06-01-2022

METHOD AND APPARATUS FOR SETTING GEOFENCE

Number: US20220007130A1
Assignee:

A method and apparatus for creating a geofence are provided herein. Unique geofences are created on a per-person basis, and based on a prediction of how likely a person is to wander. In one embodiment, the geofence for each individual is centered on a supervisor. For each individual being monitored, the geofence has an area that is inversely proportional to how likely a person is to wander. In this way, individuals that are more likely to wander will have a geofence that covers a smaller area than those who are not as likely to wander. 1. An apparatus comprising:a network interface configured to receive a video feed of a plurality of individuals; analyze the video feed to determine an activity level for each of the plurality of individuals; and', 'determine a geofence for each of the plurality of individuals, wherein the geofence for an individual encompasses an area inversely proportional to the activity level;, 'logic circuitry configured to;'}wherein the logic circuitry determines the geofence for each of the plurality of individuals by determining a first geofence encompassing a first area for a first individual, and determining a second geofence encompassing a second area for a second individual, and centering the first and the second geofence on a same individual.2. The apparatus of wherein the activity level for an individual comprises a distance traveled in a predetermined time period.3. The apparatus of wherein the geofence for each of the plurality of individuals is centered on a same person.4. The apparatus of wherein the logic circuitry is configured to send an alert to the same person if any individual strays outside their geofence area.5. The apparatus of wherein the logic circuitry determines the geofence for each of the plurality of individuals by determining a first geofence encompassing a first area for a first individual claim 1 , and determining a second geofence encompassing a second area for a second individual.6. (canceled)7. The apparatus of ...
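A small sketch of the per-person geofence sizing: the radius is inversely proportional to the individual's activity level and every fence is centred on the same supervisor. The scale constant, the radius cap and the coordinate values are assumptions made for illustration.

```python
def geofence_radius(activity_level, scale=500.0, max_radius=400.0):
    """Radius (in meters) inversely proportional to the wander likelihood.

    `activity_level` could be, e.g., the distance an individual travelled in
    the monitoring window; the scale constant and cap are assumed tunings.
    """
    if activity_level <= 0:
        return max_radius
    return min(max_radius, scale / activity_level)

def build_geofences(activity_by_person, supervisor_position):
    """Centre every individual geofence on the same supervisor."""
    return {person: {"center": supervisor_position,
                     "radius_m": geofence_radius(level)}
            for person, level in activity_by_person.items()}

fences = build_geofences({"resident_a": 10.0, "resident_b": 2.0}, (47.6097, -122.3331))
print(fences["resident_a"]["radius_m"], fences["resident_b"]["radius_m"])
```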

Publication date: 13-01-2022

SYSTEM AND METHOD FOR NAVIGATING USER INTERFACES USING A HYBRID TOUCHLESS CONTROL MECHANISM

Number: US20220007816A1
Author: CHENG Chou, Wu Chieh-Chung
Assignee:

A computing device captures a live video of a user, determines a location of a facial region of the user by a facial region analyzer, and determines a finger vector type by a finger vector detector based on a direction in which at least one finger is pointing relative to the facial region of the user. Responsive to detecting a first finger vector type within the facial region involving a single finger, a makeup effects toolbar is displayed in the user interface. Responsive to detecting a second finger vector type involving the single finger, a selection tool for selecting a makeup effect in the makeup effects toolbar is displayed. The computing device obtains a makeup effect based on manipulation by the user of the selection tool. Responsive to detecting a target user action, virtual application of the selected makeup effect is performed on the facial region of the user. 1. A method implemented in a computing device for navigating a user interface using a hybrid touchless control mechanism , comprising:capturing, by a camera, a live video of a user;determining a location of a facial region of the user;determining a location of the user's hand and determining a finger vector type based on a direction in which at least one finger is pointing relative to the facial region of the user;responsive to detecting a first finger vector type within the facial region involving a single finger, displaying a makeup effects toolbar in the user interface;responsive to detecting a second finger vector type involving the single finger, displaying a selection tool for selecting a makeup effect in the makeup effects toolbar;obtaining a selected makeup effect based on manipulation by the user of the selection tool; andresponsive to detecting a target user action, performing virtual application of the selected makeup effect on the facial region of the user.2. The method of claim 1 , wherein the first finger vector type within the facial region involving the single finger comprises an ...
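A rough sketch of how a finger vector type might be classified relative to a detected facial region follows. The inputs (face box, finger base and tip points) and the angular tolerance are assumptions, and the mapping of the two vector types to "toolbar" and "selection" follows the claim only loosely.

```python
import math

# Sketch of finger-vector classification relative to a detected facial region.
# Face and hand landmarks are assumed to come from separate detectors not shown here.

def point_in_box(point, box):
    x, y = point
    left, top, right, bottom = box
    return left <= x <= right and top <= y <= bottom

def finger_vector_type(finger_base, finger_tip, face_box):
    """'toolbar' if the single finger points inside the facial region,
    'selection' if it points toward the region from outside, else None."""
    if point_in_box(finger_tip, face_box):
        return "toolbar"        # first finger vector type: within the facial region
    # direction from base to tip, compared with the direction toward the face center
    cx = (face_box[0] + face_box[2]) / 2
    cy = (face_box[1] + face_box[3]) / 2
    v_finger = (finger_tip[0] - finger_base[0], finger_tip[1] - finger_base[1])
    v_face = (cx - finger_base[0], cy - finger_base[1])
    dot = v_finger[0] * v_face[0] + v_finger[1] * v_face[1]
    norm = math.hypot(*v_finger) * math.hypot(*v_face)
    if norm > 0 and dot / norm > 0.8:   # assumed angular tolerance (about 36 degrees)
        return "selection"      # second finger vector type
    return None

print(finger_vector_type((400, 300), (320, 260), (200, 150, 360, 330)))  # toolbar
```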

05-01-2017 publication date

APPARATUS, SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR RECOGNIZING FACE

Number: US20170004355A1
Author: FAN Haoqiang
Assignee:

There is disclosed an apparatus, system, method and computer program product for recognizing a face, the method comprising: emitting at least one group of structured light to a face to be recognized, successively; capturing a set of light-source-illuminated images of the face when the face is illuminated successively by each group of light of the at least one group of structured light; extracting a first set of features including a feature of each detection point in a set of detection points of the face based on the set of light-source-illuminated images; acquiring a second set of features including a feature of each detected point in a set of detected points of a face template; computing a similarity between the face and the face template based on the first set of features and the second set of features; and recognizing the face as being consistent with the face template if the similarity is larger than a threshold. 1. A system for recognizing a face, comprising: a light source generation module operative to emit at least one group of structured light to a face to be recognized, successively; an image capture module operative to capture a set of light-source-illuminated images of the face when the face is illuminated successively by each group of light of the at least one group of structured light; a processor; a memory; and extracting a first set of features including a feature of each detection point in a set of detection points of the face based on the set of light-source-illuminated images, the feature of each detection point comprising at least one of position information indicating three dimensional relative coordinates of the detection point, surface information indicating a relative surface normal of the detection point, and material information indicating a light absorption characteristic of the detection point; acquiring a second set of features including a feature of each detected point in a set of detected points of a face template; computing a ...
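A compact sketch of the per-point comparison and threshold decision is given below. The 7-dimensional feature layout (3-D position, surface normal, material), the distance-to-similarity mapping and the 0.7 threshold are assumptions used only to make the example runnable.

```python
import numpy as np

# Sketch of comparing per-point features of a probe face against a face template.
# Each detection point carries position (3), surface-normal (3) and material (1)
# components, as in the claims; the weighting below is an assumption.

rng = np.random.default_rng(0)
probe_features = rng.normal(size=(68, 7))      # 68 detection points, 7-dim features
template_features = probe_features + rng.normal(scale=0.05, size=(68, 7))

def face_similarity(probe, template, weights=(1.0, 1.0, 1.0)):
    """Average similarity over detection points, combining the three sub-features."""
    pos = np.linalg.norm(probe[:, 0:3] - template[:, 0:3], axis=1)
    nrm = np.linalg.norm(probe[:, 3:6] - template[:, 3:6], axis=1)
    mat = np.abs(probe[:, 6] - template[:, 6])
    distance = weights[0] * pos + weights[1] * nrm + weights[2] * mat
    return float(np.mean(1.0 / (1.0 + distance)))   # map distance into (0, 1]

THRESHOLD = 0.7  # assumed decision threshold
similarity = face_similarity(probe_features, template_features)
print(similarity, "match" if similarity > THRESHOLD else "no match")
```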

05-01-2017 publication date

OBJECT RECOGNITION APPARATUS AND CONTROL METHOD THEREFOR

Number: US20170004369A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

An object recognition apparatus is disclosed. The present apparatus includes a storage unit for obtaining an initial image of a preset object and storing the initial image as a reference image; and a control unit for obtaining a first additional image of the preset object, determining whether the size of the first additional image relative to the initial image meets a first preset condition and additionally storing the first additional image as a reference image if the first additional image meets the preset condition. 1. A control method of an object recognition apparatus , comprising:obtaining an initial image regarding a predetermined object and storing as a reference image;obtaining a first additional image regarding the predetermined object, and determining whether a size of an object included in the first additional image relative to an object included in the initial image satisfies a first predetermined condition or not; andin response to the first additional image satisfying the predetermined condition, adding and storing the first additional image as the reference image.2. The control method of claim 1 , wherein the storing as the reference image comprises storing at least two images which are generated by applying different histogram stretching to the initial image as the reference image with the initial image.3. The control method of claim 1 , wherein the adding and storing as the reference image comprises claim 1 , in response to the first additional image satisfying the predetermined condition claim 1 , storing at least two images which are generated by applying different histogram stretching to the first additional image as the reference image with the first additional image.4. The control method of claim 1 , wherein the determining whether the first predetermined condition is satisfied or not comprises claim 1 , in response to the size of the object included in the first additional image relative to the object included in the initial image falling ...
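A sketch of the reference-image update logic described here, under stated assumptions: the size-ratio band (0.5x to 2x) is invented, and simple percentile-based histogram stretching stands in for whatever stretching the publication applies.

```python
import numpy as np

# Sketch: keep an additional image as a reference only if the object it contains
# differs enough in size from the object in the initial image, and store two
# differently histogram-stretched variants alongside each kept image.

def histogram_stretch(image, low_pct, high_pct):
    """Linearly stretch intensities between two percentiles to the full 0..255 range."""
    lo, hi = np.percentile(image, [low_pct, high_pct])
    stretched = (image.astype(np.float32) - lo) / max(hi - lo, 1e-6) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

def maybe_add_reference(reference_store, initial_size, additional_image, additional_size):
    """Add the image (plus two stretched variants) if its object-size ratio
    relative to the initial image leaves an assumed band."""
    ratio = additional_size / initial_size
    if 0.5 <= ratio <= 2.0:          # too similar in scale: not stored
        return False
    reference_store.extend([
        additional_image,
        histogram_stretch(additional_image, 2, 98),
        histogram_stretch(additional_image, 10, 90),
    ])
    return True

store = []
img = (np.random.default_rng(1).random((64, 64)) * 255).astype(np.uint8)
print(maybe_add_reference(store, initial_size=1000.0, additional_image=img, additional_size=2600.0))
print(len(store))  # 3 images stored: the original plus two stretched variants
```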

05-01-2017 publication date

METHOD AND SYSTEM FOR RECOGNIZING FACES

Number: US20170004387A1
Assignee:

A method and a system for recognizing faces have been disclosed. The method may comprise: retrieving a pair of face images; segmenting each of the retrieved face images into a plurality of image patches, wherein each patch in one image and a corresponding one in the other image form a pair of patches; determining a first similarity of each pair of patches; determining, from all pairs of patches, a second similarity of the pair of face images; and fusing the first similarity determined for each pair of patches and the second similarity determined for the pair of face images. 1. A method for recognizing faces, comprising: retrieving a pair of face images; segmenting each of the retrieved face images into a plurality of image patches, wherein each patch in one image and a corresponding one in the other image form a pair of patches; determining a first similarity of each pair of patches; determining, from all pairs of patches, a second similarity of the pair of face images; and fusing the first similarity determined for each pair of patches and the second similarity determined for the pair of face images. 2. The method according to claim 1, wherein the step of determining a first similarity of each pair of patches comprises: obtaining each of the pair of patches and K adjacent patches surrounding the obtained patches, where K is an integer greater than 1; forming a first KNN from the obtained patches; and determining the first similarity of each pair of patches in the formed first KNN. 3. The method according to claim 1, wherein the first similarity is determined by performing random walks in the formed first KNN. 4. The method according to claim 1, wherein the step of determining, from all pairs of patches, a second similarity comprises: obtaining each pair of patches from the plurality of image patches; obtaining a plurality of adjacent patches surrounding the obtained patches in a second KNN; retrieving, from the second KNN, sub networks for the ...
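The patch-wise first similarity, image-level second similarity and fusion step can be sketched as follows. Plain cosine similarity is substituted for the KNN-graph random walks of claims 2 and 3, so this is a deliberate simplification, and the fusion weight is arbitrary.

```python
import numpy as np

# Sketch of patch-based face comparison: a first similarity per pair of patches,
# a second similarity for the whole image pair, and a fused score.

def segment_into_patches(image, patch_size):
    h, w = image.shape
    return [image[r:r + patch_size, c:c + patch_size].ravel()
            for r in range(0, h - patch_size + 1, patch_size)
            for c in range(0, w - patch_size + 1, patch_size)]

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def compare_faces(img_a, img_b, patch_size=16, alpha=0.5):
    patches_a = segment_into_patches(img_a.astype(np.float32), patch_size)
    patches_b = segment_into_patches(img_b.astype(np.float32), patch_size)
    first = [cosine(pa, pb) for pa, pb in zip(patches_a, patches_b)]   # per patch pair
    second = cosine(img_a.astype(np.float32).ravel(), img_b.astype(np.float32).ravel())
    return alpha * float(np.mean(first)) + (1.0 - alpha) * second     # simple fusion

rng = np.random.default_rng(2)
face = rng.random((64, 64))
print(compare_faces(face, face + rng.normal(scale=0.05, size=(64, 64))))
```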

04-01-2018 publication date

METHOD AND APPARATUS FOR RECOMMENDING AN INTERFACE THEME

Number: US20180004365A1
Author: Ding Meng, FU Haojing, Zhou Lei
Assignee:

A method, and an apparatus for recommending an interface theme are provided. An exemplary embodiment of the method includes: obtaining a target image which includes an image of a target person; obtaining characteristic information of the target person based on the target image; obtaining a selection list of recommended themes, wherein the recommended themes are interface themes that match the characteristic information of the target person; and outputting the selection list of recommended themes. 1. A method for recommending an interface theme , the method comprising:obtaining a target image which comprises an image of a target person;obtaining characteristic information of the target person based on the target image;obtaining a selection list of recommended themes, wherein the recommended themes are interface themes that match the characteristic information of the target person; andoutputting the selection list of recommended themes.2. The method of claim 1 , wherein obtaining a target image comprises:providing one or more image input interfaces for a user to input an image via one of the image input interfaces, wherein each of the image input interfaces corresponds to an input mode; andobtaining the image inputted by the user as the target image.3. The method of claim 2 , wherein the one or more image input interfaces comprise a first image input interface corresponding to a first input mode claim 2 , and wherein the first input mode is a mode to input a currently taken image.4. The method of claim 2 , wherein the one or more image input interfaces comprise a second image input interface corresponding to a second input mode claim 2 , and wherein the second input mode is a mode to input an image selected from a local album.5. The method of claim 4 , wherein obtaining the image inputted by the user as the target image comprises:obtaining, from the local album, images comprising an image of a person as candidate images, when the user selects the second image input ...
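A minimal sketch of the matching step between extracted characteristics and a theme catalog. The characteristic keys and the catalog entries are invented purely for illustration; upstream image analysis is assumed to have produced the characteristics.

```python
# Sketch of building a recommended-theme selection list from characteristics
# extracted from the target person's image (hypothetical keys and catalog).

THEME_CATALOG = [
    {"name": "Pastel Bloom", "tags": {"age": "child", "style": "playful"}},
    {"name": "Night Carbon", "tags": {"age": "adult", "style": "minimal"}},
    {"name": "Retro Film", "tags": {"age": "adult", "style": "vintage"}},
]

def recommend_themes(characteristics):
    """Return names of themes whose tags all match the extracted characteristics."""
    matches = []
    for theme in THEME_CATALOG:
        if all(characteristics.get(k) == v for k, v in theme["tags"].items()):
            matches.append(theme["name"])
    return matches

# characteristics would come from an upstream image-analysis step
print(recommend_themes({"age": "adult", "style": "minimal"}))  # ['Night Carbon']
```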

07-01-2016 publication date

Visual Search Engine

Number: US20160004789A1
Author: Algreatly Cherif
Assignee:

A method for sorting and searching images is disclosed. The method is utilized in various augmented reality applications to retrieve information related to the objects which appear in a picture taken by a camera. The objects can be human faces, text, 3D models or the like. The method can be used with mobile phones, tablets, or optical head mounted displays to serve numerous educational, gaming and commercial purposes. 1. A visual search method of a text image comprising: marking the text image with successive strips each of which starts and ends at the start and end of a text line of the text image; creating a set of numerals representing the lengths of the successive strips; and comparing the set of numerals against a database that associates each unique set of numerals with related information and an identifier representing the text source. 2. The visual search method of wherein each strip of the successive strips starts and ends at the start and end of a text word of the text image. 3. The visual search method of wherein each strip of the successive strips is a polygon that covers the boundary lines of a paragraph of the text image. 4. The visual search method of wherein the text image further includes pictures and a plurality of the successive strips start and end at the sides of the pictures. 5. The visual search method of claim 1, wherein the related information is digital data such as text, pictures, videos, or documents. 6. The visual search method of claim 1, wherein the text source is a book, magazine, newspaper, or Web page. 7. The visual search method of wherein the text source is a box of a product and the additional information is related to the product. 8. The visual search method of claim 1, wherein the text source is a street advertisement and the additional information is related to content, product or service of the street advertisement. 9. The visual search method of wherein the text source is a computer application. 10. The ...
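The strip-length signature of claim 1 is easy to sketch: one strip per text line, its length recorded as a numeral, then matched against a database. The binarized input, the row-scanning segmentation and the matching tolerance below are illustrative assumptions.

```python
import numpy as np

# Sketch of the strip-length signature: one strip per text line, measured from
# the start to the end of the line, then matched against stored signatures.

def line_length_signature(binary_text_image):
    """binary_text_image: 2D array, nonzero where ink is present."""
    lengths = []
    row, h = 0, binary_text_image.shape[0]
    while row < h:
        while row < h and not binary_text_image[row].any():   # skip blank rows
            row += 1
        if row >= h:
            break
        line_rows = []
        while row < h and binary_text_image[row].any():        # collect one text line
            line_rows.append(binary_text_image[row])
            row += 1
        cols = np.nonzero(np.vstack(line_rows).any(axis=0))[0]
        lengths.append(int(cols[-1] - cols[0] + 1))             # strip length (numeral)
    return lengths

def match_signature(signature, database, tolerance=3):
    """Return identifiers of sources whose stored signatures match within a tolerance."""
    return [source_id for source_id, stored in database.items()
            if len(stored) == len(signature)
            and all(abs(a - b) <= tolerance for a, b in zip(signature, stored))]

page = np.zeros((12, 100), dtype=np.uint8)
page[1, 5:80] = 1    # first text line
page[4, 5:60] = 1    # second text line
sig = line_length_signature(page)
print(sig, match_signature(sig, {"book_42_p17": [75, 55]}))
```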

05-01-2017 publication date

SYSTEMS AND METHODS FOR MEDIA PRIVACY

Number: US20170004602A1
Author: Le Jouan Hervé
Assignee: Privowny, Inc.

A system comprises a picture and metadata captured by a content capture system; a recognizable characteristic datastore configured to store recognizable characteristics of different users; a module configured to identify a time and a location associated with the picture based on the metadata, and to identify one or more potential target systems within a predetermined range of the location at the time; a characteristic recognition module configured to retrieve the recognizable characteristics of one or more potential users associated with the potential target systems, and evaluate whether the picture includes one or more representations of at least one actual target user from the potential users based on the recognizable characteristics of the potential users; a distortion module configured to distort a feature of the representations of the least one actual target user in response to the determination; a communication module configured to communicate the distorted picture to a computer network. 1. A system , comprising:a picture and associated metadata captured by a content capture system;a recognizable characteristic datastore configured to store recognizable characteristics of different users;a module configured to identify a time and a location associated with the picture based on the associated metadata, and to identify a set of one or more potential target systems within a predetermined range of the location at the time; retrieve the recognizable characteristics of a set of one or more potential users associated with the set of one or more potential target systems, and', 'evaluate whether the picture includes one or more representations of at least one actual target user from the set of one or more potential users based on the recognizable characteristics of the set of one or more potential users;, 'a characteristic recognition module configured toa distortion module configured to distort a feature of each of the one or more representations of the least one ...
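A sketch of the first stage only: narrowing the potential target systems to those within a predetermined range of the picture's location at the picture's time. The 100 m / 5 minute window and the equirectangular distance approximation are assumptions; characteristic recognition and feature distortion are not shown.

```python
import math
from datetime import datetime, timedelta

# Sketch of filtering potential target systems by the picture's time and location
# metadata, before any face matching or distortion is attempted.

def distance_m(lat1, lon1, lat2, lon2):
    """Rough equirectangular distance in metres, adequate for short ranges."""
    k = 111_320.0
    dx = (lon2 - lon1) * k * math.cos(math.radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * k
    return math.hypot(dx, dy)

def potential_target_systems(picture_meta, known_systems,
                             max_distance_m=100.0, max_dt=timedelta(minutes=5)):
    """Return owners of systems that were within range of the picture's location at its time."""
    hits = []
    for system in known_systems:
        close = distance_m(picture_meta["lat"], picture_meta["lon"],
                           system["lat"], system["lon"]) <= max_distance_m
        recent = abs(picture_meta["time"] - system["time"]) <= max_dt
        if close and recent:
            hits.append(system["owner"])
    return hits

meta = {"lat": 40.7128, "lon": -74.0060, "time": datetime(2016, 5, 1, 12, 0)}
systems = [
    {"owner": "user_a", "lat": 40.7129, "lon": -74.0061, "time": datetime(2016, 5, 1, 12, 2)},
    {"owner": "user_b", "lat": 40.7306, "lon": -73.9352, "time": datetime(2016, 5, 1, 12, 0)},
]
print(potential_target_systems(meta, systems))  # ['user_a']
```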

07-01-2016 publication date

METHOD AND SYSTEM FOR AUTHENTICATING USER IDENTITY AND DETECTING FRAUDULENT CONTENT ASSOCIATED WITH ONLINE ACTIVITIES

Number: US20160005050A1
Author: Teman Ari
Assignee:

A method and system for authenticating a user's identity, studying user state and reaction, and detecting fraudulent user content associated with online activities. The method and system receives user content which may include video images, and processes the user content using facial recognition algorithms and analyzing various parameters to uniquely identify a user and a potentially fraudulent online posting, activity or profile. The method and system initiates a number of actions based on a determination that the user or posting, activity or profile is potentially fraudulent. 1. A method for determining fraudulent content online , the method comprising:receiving, by a computer system, user content;processing, by a processing device, the user content to determine a likelihood that the user content is presented fraudulently; andinitiating one or more actions based on a determination the user content is relatively likely to be presented fraudulently.2. The method of claim 1 , wherein the user content is a referenced image.3. The method of claim 2 , wherein the step of processing user content to determine a likelihood that the user content is presented fraudulently includes the steps of:searching an image database to identify incidences of a referenced image; andmatching incidences of a referenced image with identical or similar images within said image database.4. The method of wherein searching the image database includes searching embedded metadata associated with particular images stored within said image database.5. The method of claim 1 , further comprising the steps of:identifying one or more fields within said user content;employing the processing device to analyze and assign a first fraud score for each identified field within said user content;initiating one or more actions based on a determination that one or more first fraud scores exceeds a maximum allowable first fraud score;employing the processing device to determine an aggregate fraud score of the ...
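A sketch of field-level first fraud scores and an aggregate score compared against maximum allowable values. The scoring functions and thresholds are placeholders; the publication does not prescribe particular formulas.

```python
# Sketch of field-level fraud scoring and aggregation (illustrative scorers only).

FIELD_SCORERS = {
    "profile_photo": lambda v: 0.9 if v.get("reverse_image_hits", 0) > 0 else 0.1,
    "age":           lambda v: 0.8 if not (18 <= v.get("value", 0) <= 99) else 0.1,
    "location":      lambda v: 0.7 if v.get("mismatches_ip", False) else 0.1,
}

MAX_FIELD_SCORE = 0.85       # maximum allowable first fraud score (assumed)
MAX_AGGREGATE_SCORE = 0.5    # maximum allowable aggregate score (assumed)

def evaluate_content(user_content):
    field_scores = {name: FIELD_SCORERS[name](value)
                    for name, value in user_content.items() if name in FIELD_SCORERS}
    flagged_fields = [n for n, s in field_scores.items() if s > MAX_FIELD_SCORE]
    aggregate = sum(field_scores.values()) / max(len(field_scores), 1)
    return {
        "field_scores": field_scores,
        "flagged_fields": flagged_fields,
        "likely_fraudulent": bool(flagged_fields) or aggregate > MAX_AGGREGATE_SCORE,
    }

print(evaluate_content({
    "profile_photo": {"reverse_image_hits": 3},
    "age": {"value": 34},
    "location": {"mismatches_ip": False},
}))
```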

07-01-2016 publication date

SENSITIVITY EVALUATION SYSTEM

Number: US20160005058A1
Assignee: Hitachi, Ltd.

Since questionnaires are used according to the sensitivity marketing method in the related art, it is not possible to obtain answers from many people at the same time, and also, analysis of the answers requires a long time, which does not match life cycles of product development. Furthermore, there is a problem that credibility of the aggregate result is low without the use of biomarkers. A sensitivity evaluation system for evaluating distributed video information has a client system that transmits an evaluation request related to the video information to an information analysis system and transmits the video information, which is to be viewed by an examinee at a terminal. The terminal transmits a biosignal, which is measured by an apparatus that the examinee wears to the information analysis system which analyzes the biosignal, creates a report related to the video information, and transmits the created report to the client system. 1. A sensitivity evaluation system comprising:a client system;an information analysis system;an information collecting apparatus;a measurement apparatus; anda terminal,wherein the client system transmits an evaluation request related to video information to the information analysis system and transmits the video information, which is to be viewed by an examinee, to a receiver or the terminal by which each examinee views the video information,wherein the terminal transmits a biosignal, which is measured by the measurement apparatus that the examinee wears while viewing the video information, to the information analysis system via the information collecting apparatus, andwherein the information analysis system analyzes the biosignal, creates a report related to the video information based on a result of the analysis, and transmits the created report to the client system.2. The sensitivity evaluation system according to claim 1 ,wherein the terminal displays an image of a face of the examinee and a wearing position, at which the measurement ...
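The analysis step (biosignal in, report out) can be illustrated with a small aggregation sketch. The choice of heart rate as the biosignal, the scene boundaries and the summary statistic are assumptions.

```python
import statistics

# Sketch of the analysis step only: biosignal samples collected while examinees
# watched a clip are aggregated into a per-scene report.

def build_report(biosignal_samples, scene_boundaries_s):
    """biosignal_samples: list of (timestamp_s, heart_rate) tuples from the wearable."""
    report = []
    for start, end in scene_boundaries_s:
        values = [hr for t, hr in biosignal_samples if start <= t < end]
        report.append({
            "scene": (start, end),
            "samples": len(values),
            "mean_heart_rate": round(statistics.mean(values), 1) if values else None,
        })
    return report

samples = [(t, 70 + (5 if 10 <= t < 20 else 0)) for t in range(0, 30)]
print(build_report(samples, [(0, 10), (10, 20), (20, 30)]))
```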

07-01-2016 publication date

METHODS OF NON-TOUCH OPTICAL DETECTION OF VITAL SIGNS FROM MULTIPLE FILTERS

Number: US20160005165A1
Assignee: Arc Devices, LTD

A microprocessor is operably coupled to a camera from which patient vital signs are determined. A temporal variation of images from the camera is generated from multiple filters and then amplified from which the patient vital sign, such as heart rate or respiratory rate, can be determined and then displayed or stored. 1. A method of displaying a biological vital sign , the method comprising:identifying pixel values of at least two images in a first location in a memory that are representative of skin of an animal, resulting in identified pixel values that are stored in a second location the memory;applying a first frequency filter to the identified pixel values that are stored in the second location of the memory, generating frequency filtered identified pixel values of the skin that are stored in a third location of the memory;applying spatial clustering to the frequency filtered identified pixel values of the skin that are stored in the third location of the memory, yielding spatial clustered frequency filtered identified pixel values of the skin that are stored in a fourth location of the memory;applying a second frequency filter to the spatial clustered frequency filtered identified pixel values of skin that are stored in the fourth location of the memory, yielding a temporal variation that is stored in a fifth location in the memory;generating the biological vital sign in a sixth location of the memory from the temporal variation; anddisplaying the biological vital sign from sixth location of the memory.2. The method of claim 1 , wherein the first frequency filter further comprises: a high pass filter.3. The method of claim 1 , wherein the biological vital sign further comprises: a pattern of blood flow.4. The method of claim 3 , wherein generating the pattern of blood flow from the temporal variation further comprises:generating the pattern flow of blood from motion changes in the pixels and color changes of the temporal variation in the skin.5. The method of ...
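A simplified sketch of recovering a heart-rate estimate from the temporal variation of skin pixels. The publication chains two frequency filters around a spatial clustering step; here a DC-removal step plus an FFT band selection stand in for that pipeline, and the per-frame skin signal is synthetic.

```python
import numpy as np

# Simplified stand-in for the two-filter pipeline: estimate heart rate from the
# temporal variation of a per-frame mean skin intensity (synthetic here).

FPS = 30.0
t = np.arange(0, 10, 1.0 / FPS)                       # 10 s of video frames
skin_signal = 0.02 * np.sin(2 * np.pi * 1.2 * t)      # 1.2 Hz pulse (72 bpm)
skin_signal += 0.5 + 0.01 * np.random.default_rng(3).normal(size=t.size)

def estimate_heart_rate_bpm(signal, fps, band=(0.7, 4.0)):
    """Pick the dominant spectral peak inside a plausible heart-rate band."""
    signal = signal - signal.mean()                   # crude high-pass (removes DC)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1]) # band-limiting second filter
    peak = freqs[in_band][np.argmax(spectrum[in_band])]
    return 60.0 * peak

print(round(estimate_heart_rate_bpm(skin_signal, FPS)))  # ~72
```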

07-01-2016 publication date

Image Analysis Device, Image Analysis System, and Image Analysis Method

Number: US20160005171A1
Assignee: Hitachi, Ltd.

An image analysis device according to the present invention includes a storage unit storing an image and information of a detected object included in the image, an input unit receiving a target image serving as a target in which an object is detected, a similar image search unit searching for a similar image having a feature quantity similar to a feature quantity extracted from the target image and the information of the object included in the similar image from the storage unit, a parameter deciding unit deciding a parameter used in a detection process performed on the target image based on the information of the object included in the similar image, a detecting unit detecting an object from the target image according to the decided parameter, a registering unit accumulating the target image in the storage unit, and a data output unit outputting the information of the detected object. 1. An image analysis device , comprising:an image storage unit that stores an image and information of a detected object included in the image;an image input unit that receives a target image serving as a target in which an object is detected;a similar image search unit that searches for a similar image having a feature quantity similar to a feature quantity extracted from the target image and the information of the detected object included in the similar image from the image storage unit;a parameter deciding unit that decides a parameter used in a detection process performed on the target image based on the information of the detected object included in the similar image;a detecting unit that detects an object from the target image according to the decided parameter;an image registering unit that accumulates the detected object and the target image in the image storage unit; anda data output unit that outputs the information of the detected object.2. The image analysis device according to claim 1 ,wherein the information stored in the image storage unit includes a feature quantity ...
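A sketch of the parameter-deciding step: find the stored image whose feature quantity is closest to the target image's, then reuse the detection parameters recorded with it. The histogram feature and the parameter fields are invented for illustration.

```python
import numpy as np

# Sketch of choosing detection parameters from the most similar stored image.

def feature_quantity(image):
    hist, _ = np.histogram(image, bins=16, range=(0, 256))
    return hist / max(hist.sum(), 1)

def decide_parameters(target_image, image_store):
    """image_store: list of dicts with 'feature', 'detected_objects', 'parameters'."""
    target_feature = feature_quantity(target_image)
    best = min(image_store, key=lambda e: np.linalg.norm(e["feature"] - target_feature))
    # reuse the parameters that worked for the most similar accumulated image
    return best["parameters"], best["detected_objects"]

rng = np.random.default_rng(4)
stored_img = (rng.random((32, 32)) * 255).astype(np.uint8)
store = [{
    "feature": feature_quantity(stored_img),
    "detected_objects": ["person"],
    "parameters": {"threshold": 0.6, "min_size_px": 24},
}]
params, objects_hint = decide_parameters(stored_img, store)
print(params, objects_hint)
```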

04-01-2018 publication date

SYSTEM, APPARATUS, METHOD, PROGRAM AND RECORDING MEDIUM FOR PROCESSING IMAGE

Number: US20180004773A1
Assignee: SONY CORPORATION

An image processing system may include an imaging device for capturing an image and an image processing apparatus for processing the image. The imaging device may include an imaging unit for capturing the image, a first recording unit for recording information relating to the image, the information being associated with the image, and a first transmission control unit for controlling transmission of the image to the image processing apparatus. The image processing apparatus may include a reception control unit for controlling reception of the image transmitted from the imaging device, a feature extracting unit for extracting a feature of the received image, a second recording unit for recording the feature, extracted from the image, the feature being associated with the image, and a second transmission control unit for controlling transmission of the feature to the imaging device. 1. (canceled) 2. An information processing system comprising: a first information processing apparatus and a second information processing apparatus; wherein the first information processing apparatus includes at least one first processor configured to control capturing an image by an imaging device, and transmitting the image to the second information processing apparatus; and wherein the second information processing apparatus includes at least one second processor configured to control extracting a feature of the image by image analysis; generating metadata including feature information based on the feature extracted from the image; associating the metadata with the image; and transmitting, to a device different from the first and second information processing apparatuses, information related to the metadata; wherein the transmitting is for controlling searching images, on the device, based on the information related to the metadata and displaying a result of the searching. 3. The information processing system of claim 2, wherein the at least one first processor or the at ...
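The metadata round trip (extract a feature, wrap it into metadata, associate it with the image, search on it later) can be sketched with a toy in-memory store. The label-list "features" and the class interface are inventions for illustration only.

```python
# Toy in-memory sketch of the metadata round trip: register an image with
# extracted features as metadata, then search stored images by those features.

class ImageStore:
    def __init__(self):
        self._records = []

    def register(self, image_id, extracted_features):
        metadata = {"image_id": image_id, "features": set(extracted_features)}
        self._records.append(metadata)          # metadata associated with the image
        return metadata

    def search(self, wanted_features):
        wanted = set(wanted_features)
        return [r["image_id"] for r in self._records if wanted <= r["features"]]

store = ImageStore()
store.register("IMG_0001", ["beach", "sunset"])
store.register("IMG_0002", ["beach", "dog"])
print(store.search(["beach"]))           # ['IMG_0001', 'IMG_0002']
print(store.search(["beach", "dog"]))    # ['IMG_0002']
```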

07-01-2021 publication date

SYSTEM AND METHOD FOR SECURE FIVE-DIMENSIONAL USER IDENTIFICATION

Number: US20210004446A1
Author: Kikinis Dan
Assignee:

A method for secure user identification is disclosed, comprising the steps of: creating a first user identification; uniquely associating the user identification with the user; recording, using the identification device, an unknown user's head from a range of positions and using illumination in different wavelengths; retrieving a second user identification; and comparing, using the identification device, the second user identification against the recording of the unknown user's head and a plurality of measured movements of the unknown user's head and hand to identify the unknown user. 1. A method for secure user identification, comprising: creating a first user identification, using an identification device comprising at least a processor, a memory, and a plurality of programming instructions stored in the memory and operating on the processor, the identification comprising: a video recording of a user's head, recorded from a range of positions and using illumination in different wavelengths; a point cloud model of the user's head, based on at least a portion of the video recording; a three-dimensional mesh model of the user's head, based on at least a portion of the video recording; a first motion signature comprising a plurality of head movements measured during the creation of the video recording, the first motion signature being uniquely identifiable to the user; and a second motion signature comprising a plurality of hand movements measured during the creation of the video recording, the second motion signature being uniquely identifiable to the user; uniquely associating the user identification with the user; recording, using the identification device, an unknown user's head from a range of positions and using illumination in different wavelengths; retrieving a second user identification; and comparing, using the identification device, the second user identification against the recording of the unknown user's head and a plurality of measured ...
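Only the final comparison step is sketched here: a stored identification's head and hand motion signatures are checked against a new capture. Normalized correlation and the 0.9 acceptance threshold are stand-ins; the publication does not specify the matching function.

```python
import numpy as np

# Sketch of matching head and hand motion signatures against a stored identification.

def normalized_correlation(a, b):
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

def verify_user(stored_id, captured, threshold=0.9):
    head_ok = normalized_correlation(stored_id["head_motion"], captured["head_motion"]) >= threshold
    hand_ok = normalized_correlation(stored_id["hand_motion"], captured["hand_motion"]) >= threshold
    return head_ok and hand_ok      # both signatures must be consistent with the user

rng = np.random.default_rng(5)
enrolled = {"head_motion": rng.normal(size=200), "hand_motion": rng.normal(size=200)}
genuine = {k: v + rng.normal(scale=0.1, size=200) for k, v in enrolled.items()}
impostor = {k: rng.normal(size=200) for k in enrolled}
print(verify_user(enrolled, genuine), verify_user(enrolled, impostor))  # True False
```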

02-01-2020 publication date

HEALTH STATISTICS AND COMMUNICATIONS OF ASSOCIATED VEHICLE USERS

Number: US20200004791A1
Author: Ricci Christopher P.
Assignee:

Methods and systems for a complete vehicle ecosystem are provided. Specifically, systems that when taken alone, or together, provide an individual or group of individuals with an intuitive and comfortable vehicular environment. The present disclosure includes a system that provides various outputs based on a user profile and determined context. An output provided by the present disclosure can change a configuration of a vehicle, device, building, and/or a system associated with the user profile. The configurations can include comfort and interface settings that can be adjusted based on the user profile information. Further, the user profiles can track health data related to the user and make adjustments to the configuration to assist the health of the user. 1. A method , comprising:detecting a presence of at least one user in a vehicle;determining an identity of the at least one user;receiving data associated with the at least one user, wherein the data includes biometric information;detecting a deviation between the received data and an established baseline biometric profile associated with the at least one user; anddetermining, based at least partially on the detected deviation, to provide an output configured to address the deviation.2. The method of claim 1 , wherein prior to receiving data associated with the at least one user the method further comprises:determining the baseline biometric profile associated with the at least one user; andstoring the determined baseline biometric profile in a user profile memory associated with the at least one user.3. The method of claim 1 , wherein determining the presence of the at least one user inside the vehicle further comprises:detecting a person via at least one image sensor associated with the vehicle.4. The method of claim 3 , wherein determining the identity of the at least one user further comprises:identifying facial features associated with the person detected via the at least one image sensor; anddetermining ...
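A sketch of the deviation check between incoming biometric data and a stored baseline profile, plus a possible output keyed to the deviation. The z-score rule, the 2.5 threshold and the example responses are assumptions.

```python
import statistics

# Sketch of detecting a deviation from a baseline biometric profile and
# choosing an output that addresses it (illustrative rule and responses).

def detect_deviation(baseline_samples, new_value, z_threshold=2.5):
    mean = statistics.mean(baseline_samples)
    stdev = statistics.pstdev(baseline_samples) or 1e-9
    z = (new_value - mean) / stdev
    return abs(z) > z_threshold, z

def respond_to_reading(profile, reading):
    deviated, z = detect_deviation(profile["baseline_heart_rate"], reading)
    if not deviated:
        return None
    # example outputs keyed to the direction of the deviation
    return ("suggest a rest stop and lower cabin temperature" if z > 0
            else "increase cabin alertness cues")

profile = {"baseline_heart_rate": [62, 64, 63, 61, 65, 63, 62, 64]}
print(respond_to_reading(profile, 63))   # None
print(respond_to_reading(profile, 95))   # suggest a rest stop...
```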
