Total found: 6897. Displayed: 199.
13-09-2018 дата публикации

METHOD AND DEVICE FOR VIDEO CATEGORIZATION

Номер: RU2667027C2
Принадлежит: СЯОМИ ИНК. (CN)

The invention relates to means for categorizing video. The technical result is improved accuracy of video categorization. A key frame containing a face in the video is obtained. Facial features in the key frame are obtained. One or more facial features corresponding to one or more image categories are obtained. Based on the facial feature in the key frame and the one or more facial features corresponding to the one or more image categories, the image category to which the video belongs is determined, and the video is assigned to that image category. 3 independent and 8 dependent claims, 9 drawings.
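
A minimal sketch (not taken from the patent) of the final matching step: a face feature vector from the key frame is compared with reference face features stored per image category, and the video is assigned to the best-scoring category. The feature extractor and the category feature store are assumed to exist elsewhere.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def categorize_video(key_frame_feature: np.ndarray,
                     category_features: dict[str, list[np.ndarray]]) -> str:
    # For each category, take the best similarity over its reference features,
    # then pick the category with the highest score.
    best_category, best_score = None, -1.0
    for category, refs in category_features.items():
        score = max(cosine_similarity(key_frame_feature, r) for r in refs)
        if score > best_score:
            best_category, best_score = category, score
    return best_category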

Подробнее
07-06-2018 дата публикации

METHOD AND DEVICE FOR CLOUD CARD RECOMMENDATION

Номер: RU2656978C2
Принадлежит: Сяоми Инк. (CN)

The invention relates to the field of computer technology. The technical result is improved accuracy when recommending a cloud card to a contact. The method for recommending a cloud card comprises the steps of: obtaining a cloud card of a first contact and contact information stored in a terminal of a second contact, the cloud card containing a photograph and the contact information containing a contact photograph; and comparing the photograph from the cloud card with the contact photograph, wherein comparing the photograph from the cloud card with the contact photograph comprises the steps of: computing a first similarity between the photographs; determining that the first similarity reaches a predefined similarity; if the first similarity reaches the predefined similarity, determining the number of occurrences of the photograph from the cloud card or of the contact photograph among the cloud card photographs and contact photographs stored on a server; and determining that the photograph ...

Подробнее
26-07-2018 дата публикации

Номер: RU2017102520A3
Автор:
Принадлежит:

Подробнее
02-10-2019 дата публикации

SYSTEM AND METHOD FOR SEARCHING FOR OBJECTS BY MOVEMENT TRAJECTORIES ON A SITE PLAN

Номер: RU2701985C1

The invention relates to the field of search and surveillance systems. The technical result is an expanded arsenal of technical means. The computer system for searching for objects by their movement trajectories on a site plan comprises a data processing device, a plurality of different data capture devices, a memory, and a graphical user interface that includes data input and output means, which in turn include a graphical primitive definition unit, a search criteria definition unit, and a search unit. Said data output means comprise a display unit configured to display the search results. The method for searching for objects by their movement trajectories on a site plan comprises the steps of collecting and providing data, defining a graphical primitive on the site plan by selecting several points in the coordinate system of the site plan, defining search criteria, searching for objects by their movement trajectories, and displaying the search results on a device ...
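
A minimal sketch (not from the patent) of the search step: the graphical primitive is a polygon given by several points in plan coordinates, and an object matches if any point of its recorded trajectory falls inside that polygon. Trajectory and polygon formats are illustrative assumptions.

def point_in_polygon(pt, polygon):
    # Ray-casting test: count crossings of a horizontal ray cast from pt.
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def objects_crossing_region(trajectories, polygon):
    # trajectories: {object_id: [(x, y), ...]} in site-plan coordinates.
    return [obj for obj, pts in trajectories.items()
            if any(point_in_polygon(p, polygon) for p in pts)]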

Подробнее
20-05-2016 дата публикации

DISPLAY CONTROL DEVICE AND DISPLAY CONTROL METHOD

Номер: RU2014143020A
Принадлежит:

... 1. A display device controller comprising: circuitry configured to cause a display device to display self-portrait photographing information in response to receiving an indication that the display device and an imaging unit have a predetermined positional relationship. 2. The display device controller according to claim 1, wherein said predetermined positional relationship corresponds to an imaging element of the imaging unit facing substantially the same direction as the display device, such that an operator of the imaging unit is positioned directly in front of the display device while being a photographic subject of the imaging element. 3. The display device controller according to claim 1, wherein the self-portrait photographing information includes information relating to automatic recording. 4. The display device controller according to claim 3, wherein the self-portrait photographing information includes ...

Подробнее
31-01-2002 дата публикации

Feature localisation in an image

Номер: DE0069612700T2

Подробнее
28-06-1989 дата публикации

METHODS AND APPARATUS FOR OBTAINING INFORMATION FOR CHARACTERISING A PERSON OR ANIMAL

Номер: GB0008910749D0
Автор:
Принадлежит:

Подробнее
01-10-2008 дата публикации

A method and apparatus for extracting face images from video data and performing recognition matching for identification of people.

Номер: GB0002448050A
Принадлежит:

A face region detecting apparatus and method comprising a face detection and recognition unit that (a) extracts a face image from an inputted image, (b) encodes the face image into a "feature code" and (c) stores the feature code in a database; and another unit/piece of apparatus (a face detection server in claim 1 or a face recognition module in claim 9) which (d) compares the face feature code with previously stored face feature codes, (e) determines whether the features of the codes match and (f) transmits a recognition result to the face detection and recognition unit. The inputted image may be from a digital video recorder, particularly one which records in infrared (IR) and has an infrared iris filter. The apparatus may comprise a light emitting diode (LED) emitting infrared light to illuminate subject people. The extraction of the face image may comprise searching and measuring information on a front face in real time, for example by: extracting a face screen from the video data ...

Подробнее
07-04-2021 дата публикации

Facial localisation in images

Номер: GB2582833B

Подробнее
16-08-2017 дата публикации

Detection of manipulated images

Номер: GB0201710560D0
Автор:
Принадлежит:

Подробнее
27-06-1990 дата публикации

METHODS AND APPARATUS FOR OBTAINING INFORMATION FOR CHARACTERISING A PERSON OR ANIMAL

Номер: GB0009010358D0
Автор:
Принадлежит:

Подробнее
26-08-2020 дата публикации

Human hair style generation method based on multi-feature search and deformation

Номер: GB0002581758A
Принадлежит:

A human hairstyle generation method based on multi-feature retrieval and deformation, comprising: obtaining a hairstyle mask; recognizing feature points of a face and matching same with feature points in a hairstyle database; aligning an image with a standard face to obtain a corresponding hair area; calculating a minkowski distance of hair masks of all frontal faces in the hair region and hairstyle database; providing corresponding weight after sorting from small to large; training a deep learning network to detect the hairstyles of hair basic blocks at different scales; and taking out the most similar picture of hair. The present invention utilizes a single frontal face photo to retrieve a three-dimensional hair model which is most similar to the photo in a large three-dimensional hairstyle database by means of a retrieval database, thereby avoiding manual modeling, improving efficiency and ensuring high fidelity.
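
A minimal sketch (not from the patent) of the retrieval step: database hair masks are ranked by Minkowski distance to the query mask, sorted from small to large, and given rank-based weights. Mask format, the order p, and the weighting scheme are illustrative assumptions.

import numpy as np

def minkowski_distance(a: np.ndarray, b: np.ndarray, p: float = 2.0) -> float:
    return float(np.sum(np.abs(a.astype(float) - b.astype(float)) ** p) ** (1.0 / p))

def rank_hairstyles(query_mask: np.ndarray, db_masks: dict[str, np.ndarray], p: float = 2.0):
    distances = {name: minkowski_distance(query_mask, m, p) for name, m in db_masks.items()}
    ranked = sorted(distances.items(), key=lambda kv: kv[1])  # small to large
    # Rank-based weights: 1, 1/2, 1/3, ... normalised to sum to 1.
    raw = [1.0 / (i + 1) for i in range(len(ranked))]
    total = sum(raw)
    return [(name, dist, w / total) for (name, dist), w in zip(ranked, raw)]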

Подробнее
01-08-2018 дата публикации

Method of and system for recognising a human face

Номер: GB0201809857D0
Автор:
Принадлежит:

Подробнее
28-09-2016 дата публикации

Methods of generating personalized 3d head models or 3d body models

Номер: GB0201613959D0
Автор:
Принадлежит:

Подробнее
12-08-2021 дата публикации

Method of Host-Directed Illumination and System for Conducting Host-Directed Illumination

Номер: AU2021206815A1
Принадлежит:

A method for detecting user liveness comprising: transmitting, using an authentication computer system, via a network an illumination instruction to a computing device, the illumination instruction comprising a plurality of illumination instructions that each cause the applied illumination to be for a different wavelength; capturing, by the computing device, facial image data of a user as a sequence of discrete images, each discrete image being captured while illumination is applied to the face of the user in accordance with the illumination instruction; transmitting over the network, by the computing device, the sequence of discrete images to the authentication computer system; recognizing, by the authentication computer system, reflections in a plurality of the images, the reflections resulting from the applied illumination; and determining, by the authentication computer system, the facial image data was taken of a live person based on the reflections.
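
A rough sketch (not from the patent) of the final liveness decision: it approximates the per-wavelength illumination sequence by a commanded brightness pattern and checks that the brightness of the captured frames follows it. The frame-to-instruction pairing and the correlation threshold are assumptions.

import numpy as np

def liveness_from_reflections(frames: list[np.ndarray],
                              commanded_intensity: list[float],
                              min_correlation: float = 0.7) -> bool:
    # Mean brightness of each discrete image captured under a different
    # illumination instruction.
    measured = np.array([float(f.mean()) for f in frames])
    commanded = np.array(commanded_intensity, dtype=float)
    correlation = float(np.corrcoef(measured, commanded)[0, 1])
    # A live face reflects the applied illumination, so measured brightness
    # should track the commanded pattern.
    return correlation >= min_correlation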

Подробнее
13-08-2013 дата публикации

INTERACTIVE SYSTEM FOR RECOGNITION ANALYSIS OF MULTIPLE STREAMS OF VIDEO

Номер: CA0002559381C
Принадлежит: 3VR SECURITY, INC.

A method of identifying an object captured in a video image in a multi-camera video surveillance system is disclosed. Sets of identifying information are stored in profiles, each profile being associated with one object. The disclosed method of identifying an object includes comparing identifying information extracted from images captured by the video surveillance system to one or more stored profiles. A confidence score is calculated for each comparison and used to determine a best match between the extracted set of identifying information and an object. In one embodiment, the method is used as part of a facial recognition system incorporated into a video surveillance system.
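
A minimal sketch (not from the patent) of the profile matching step: an extracted feature set is compared against stored per-object profiles, a confidence score is computed for each comparison, and the best match is returned only if it clears a minimum confidence. The distance-to-confidence mapping and the threshold are assumptions.

import numpy as np

def match_profile(extracted: np.ndarray,
                  profiles: dict[str, np.ndarray],
                  min_confidence: float = 0.8):
    best_id, best_conf = None, 0.0
    for object_id, profile in profiles.items():
        # Convert a Euclidean distance into a (0, 1] confidence score.
        distance = float(np.linalg.norm(extracted - profile))
        confidence = 1.0 / (1.0 + distance)
        if confidence > best_conf:
            best_id, best_conf = object_id, confidence
    return (best_id, best_conf) if best_conf >= min_confidence else (None, best_conf)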

Подробнее
14-12-1976 дата публикации

METHOD FOR IDENTIFYING INDIVIDUALS USING SELECTED CHARACTERISTIC BODY CURVES

Номер: CA1001761A
Автор:
Принадлежит:

Подробнее
28-09-2017 дата публикации

SYSTEMS AND METHODS FOR PROVIDING CUSTOMIZED PRODUCT RECOMMENDATIONS

Номер: CA0003015492A1
Принадлежит:

Systems and methods for providing customized skin care product recommendations. The system utilizes an image capture device and a computing device coupled to the image capture device. The computing device causes the system to analyze a captured image of a user by processing the image through a convolutional neural network to determine a skin age of the user. Determining the skin age may include identifying at least one pixel that is indicative of the skin age and utilizing the at least one pixel to create a heat map that identifies a region of the image that contributes to the skin age. The system may be used to determine a target skin age of the user, determine a skin care product for achieving the target skin age, and provide an option for the user to purchase the product.

Подробнее
02-07-2020 дата публикации

AN OPERATION DETERMINATION METHOD BASED ON EXPRESSION GROUPS, APPARATUS AND ELECTRONIC DEVICE THEREFOR

Номер: CA3125055A1
Принадлежит:

A method and device for determining an operation based on facial expression groups, and an electronic device, related to the technical field of image processing. The method is executed by the electronic device. The method comprises: acquiring a current facial image of a target subject (S102); performing live body facial recognition with respect to the target subject on the basis of the current facial image, determining whether the identity of the target subject is valid on the basis of the recognition result (S104), the live body facial recognition comprising live body recognition and facial recognition; if valid, acquiring a current facial expression group of the current facial image (S106); determining an instruction to be executed corresponding to the current facial expression group (S108); and executing an operation corresponding to said instruction (S110). With the employment of a facial recognition technique, at the same time as an identity authentication function of facial recognition ...

Подробнее
13-03-2019 дата публикации

SYSTEMS AND METHODS FOR IDENTIFYING DRUNK REQUESTERS IN AN ONLINE TO OFFLINE SERVICE PLATFORM

Номер: CA0003028639A1
Принадлежит: BORDEN LADNER GERVAIS LLP

A method for detecting drunk requesters in an O2O service platform is provided. The method may include obtaining information related to a request of an O2O service initiated by a requester. The method may also include determining a probability that the requester has consumed alcohol using an alcohol consumption prediction model based on the information related to the request, and determining whether the probability is greater than a threshold. In response to a determination that the probability is greater than the threshold, the method may further include obtaining information related to the requester, and determining whether the requester has consumed alcohol based on the information related to the requester. In response to a determination that the requester has consumed alcohol, the method may further include transmitting a notification that the requester has consumed alcohol to a provider terminal corresponding to the request of the O2O service.
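
A minimal sketch (not from the patent) of the prediction and threshold steps, with the alcohol consumption prediction model stood in by a plain logistic model. Feature names, weights, and the threshold are illustrative assumptions.

import math

def alcohol_probability(features: dict[str, float], weights: dict[str, float], bias: float) -> float:
    # Logistic stand-in for the alcohol consumption prediction model.
    z = bias + sum(weights.get(name, 0.0) * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def flag_request(features, weights, bias, threshold: float = 0.5) -> bool:
    # True means: obtain further requester information / notify the provider terminal.
    return alcohol_probability(features, weights, bias) > threshold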

Подробнее
07-05-2015 дата публикации

SYSTEMS AND METHODS FOR FACIAL REPRESENTATION

Номер: CA0002928932A1
Принадлежит:

Systems, methods, and non-transitory computer readable media can align face images, classify face images, and verify face images by employing a deep neural network (DNN). A 3D-aligned face image can be generated from a 2D face image. An identity of the 2D face image can be classified based on provision of the 3D-aligned face image to the DNN. The identity of the 2D face image can comprise a feature vector.

Подробнее
08-10-2015 дата публикации

AUTOMATED SELECTIVE UPLOAD OF IMAGES

Номер: CA0002943237A1
Автор: SPAITH, JOHN
Принадлежит:

Methods, systems, and computer program products are provided that determine the merit of a given captured image, and apply an intelligent policy to the uploading of the image. An image may be captured by an image capturing device of a user. A merit score is determined for the captured image. The merit score indicates a predicted value of the captured image to the user. An access policy is assigned to the captured image based on the determined merit score. Access to the captured image is enabled based on the assigned access policy. For instance, the captured image may be deleted, may be automatically uploaded to a server over a fee-free network connection only, may be uploaded to the server over any available network connection, may be uploaded at a reduced image resolution, and/or may be uploaded at full image resolution, depending on the access policy.
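
A minimal sketch (not from the patent) of mapping a merit score to one of the access policies listed in the abstract. The score bands are illustrative assumptions.

from enum import Enum

class Policy(Enum):
    DELETE = "delete"
    UPLOAD_FREE_NETWORK_ONLY = "upload over fee-free network only"
    UPLOAD_REDUCED_RESOLUTION = "upload at reduced resolution"
    UPLOAD_FULL_RESOLUTION = "upload at full resolution over any network"

def assign_policy(merit_score: float) -> Policy:
    # Higher predicted value of the captured image => more permissive policy.
    if merit_score < 0.2:
        return Policy.DELETE
    if merit_score < 0.5:
        return Policy.UPLOAD_FREE_NETWORK_ONLY
    if merit_score < 0.8:
        return Policy.UPLOAD_REDUCED_RESOLUTION
    return Policy.UPLOAD_FULL_RESOLUTION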

Подробнее
27-12-1996 дата публикации

Locating Features in an Image

Номер: CA0002177639A1
Принадлежит:

Подробнее
25-11-1999 дата публикации

IMAGE PROCESSING APPARATUS AND METHOD, AND PROVIDING MEDIUM

Номер: CA0002295606A1
Автор: OHBA, AKIO
Принадлежит:

A whole image (51) of a low resolution is captured from an inputted image so as to accurately track a predetermined portion, an image (52) of an intermediate resolution mainly containing the head is extracted from the whole image (51), and further, after the extraction an image (53) of a high resolution mainly containing the eyes is extracted. A predetermined image different from the inputted image is changed and displayed according to the extracted image (53).

Подробнее
15-04-1975 дата публикации

Номер: CH0000560537A5
Автор:
Принадлежит: ROTHFJELL, ROLF ERIC

Подробнее
15-05-2020 дата публикации

The invention relates to a method for facial authentication of a wearer of a watch.

Номер: CH0000715529A2
Принадлежит:

The invention relates to a method for facial authentication of a wearer of a watch, comprising the following steps: initiating (10) the authentication process, including a sub-step of detecting (11) at least one triggering movement/gesture performed by the wearer; capturing (13) at least one sequence of images of the wearer's face as it turns from one direction to another in front of the optical sensor; acquiring (17) surface geometry data of the face associated with each image of said at least one sequence; generating (18) a three-dimensional model of the wearer's face from said at least one captured image sequence and said acquired geometry data; determining (19) an identification index generated from identification data relating to a plurality of characteristic facial features of the watch wearer detected from the three-dimensional model; and identifying (21) the wearer if the identification index is greater than ...

Подробнее
01-01-2019 дата публикации

Method and apparatus for identifying object to be detected offline

Номер: CN0109117741A
Принадлежит:

Подробнее
06-11-2018 дата публикации

Shared automobile unlocking method and system based on face recognition technology

Номер: CN0108764179A
Автор: WANG XIANG, HAN TING
Принадлежит:

Подробнее
26-04-2019 дата публикации

A shoulder feature and sitting posture behavior identification method

Номер: CN0109685025A
Автор: LIU MIN, ZHU ZEDE, XU XIANG
Принадлежит:

Подробнее
26-04-2019 дата публикации

Payment method and device, terminal and storage medium

Номер: CN0109685962A
Автор: YANG ZHIGANG, LI SHUNBO
Принадлежит:

Подробнее
15-02-2019 дата публикации

Witness verification equipment and witness verification method

Номер: CN0109345677A
Принадлежит:

Подробнее
31-07-2018 дата публикации

Photo album and address book mutual information correlation method and terminal

Номер: CN0108345680A
Автор: XIANG YUEYUN
Принадлежит:

Подробнее
09-04-2019 дата публикации

A method and device for obtaining a target person based on a video

Номер: CN0109598223A
Принадлежит:

Подробнее
22-01-2019 дата публикации

Method, device, storage medium and terminal device for determining head posture

Номер: CN0109255329A
Принадлежит:

Подробнее
19-02-2019 дата публикации

Class sign-in method with multi-modal authentication

Номер: CN0109360283A
Принадлежит:

Подробнее
29-05-2018 дата публикации

Human face detection method and device

Номер: CN0108090464A
Автор: CHENG FUYUN
Принадлежит:

Подробнее
04-01-2019 дата публикации

Human-computer interaction method and device for multi-person gesture based on kinect

Номер: CN0109145802A
Принадлежит:

Подробнее
20-07-2016 дата публикации

Method and device for sending image

Номер: CN0105791325A
Принадлежит:

Подробнее
25-01-2019 дата публикации

Face eigenvalue extraction method, device, computer device and storage medium

Номер: CN0109271869A
Автор: CHEN LIN
Принадлежит:

Подробнее
16-04-2019 дата публикации

Driver anti-interference system

Номер: CN0109635771A
Принадлежит:

Подробнее
16-03-2018 дата публикации

Image segmentation method, device and apparatus

Номер: CN0104156947B
Автор:
Принадлежит:

Подробнее
29-06-2018 дата публикации

Image processing method, device and computer-readable storage medium

Номер: CN0108229389A
Автор:
Принадлежит:

Подробнее
01-12-2017 дата публикации

Image correlation method and device and electronic equipment

Номер: CN0107423441A
Автор: LING YINGYING
Принадлежит:

The invention relates to the technical field of terminals, in particular to an image correlation method and device and electronic equipment. The image correlation method is applied to the electronic equipment and includes: according to a facial expression recognition algorithm, determining emotional categories of people in images, wherein each emotional category corresponds to a label; according to the determined emotional categories of the people in the images, traversing a label library to find the default labels corresponding to the emotional categories; and establishing a correlation between the images and the found default labels. Therefore, the emotional categories of the people in the images can be analyzed in a more fine-grained manner, and the corresponding labels can be configured for the emotional categories.

Подробнее
27-02-2015 дата публикации

DECISION DEVICE FOR DECIDING WHETHER AN EYE IS TRUE OR FALSE

Номер: FR0003009878A1
Принадлежит:

The invention relates to a decision device (100) intended to decide whether an eye (50) having a macula (53) and an optical axis (51) is true or false, said decision device (100) comprising: illumination means (102) emitting an infrared flux towards the eye (50) along an input axis (106); an infrared capture means (104) intended to capture an image of the eye (50) along an output axis (110), the input axis (106) and the output axis (110) being aligned with the optical axis (51) such that the macula (53), illuminated by the illumination means (102) at the incidence of the input axis (106), is seen by the capture means (104); and processing means (108) intended to detect, in an image captured by the capture means (104), whether a peripheral zone representative of the iris (54) exists and whether a central zone inside the peripheral zone, whose colour is representative of the existence of the macula (53), exists, and to deliver information representative ...

Подробнее
25-12-2015 дата публикации

METHOD FOR ASSISTING IN DETERMINING PARAMETERS OF SIGHT OF A SUBJECT

Номер: FR0002996014B1
Принадлежит: INTERACTIF VISUEL SYSTEME (I V S)

Подробнее
28-02-2018 дата публикации

Method for deriving a plastic surgery result image using depth information of an image

Номер: KR0101818992B1
Автор: 권순각, 이동석
Принадлежит: 동의대학교 산학협력단

... The present invention relates to a method for deriving a plastic surgery result image using depth information of an image, in which a plastic surgery medical system based on depth face recognition comprises: a face storage unit for storing facial feature depth information; a depth image capturing unit for capturing facial depth images; a depth image correction unit for correcting depth value errors; a face detection unit for extracting the facial portion of the depth image; a depth image transformation unit for rotating and scaling the image and for scaling and aligning the facial image according to the shooting distance; a facial feature extraction unit for extracting facial features from the depth image; a facial feature comparison unit for comparing against the data stored in the face storage unit; a colour image capturing unit for capturing a colour image of the person; a surgery result calculation unit for calculating post-surgery depth information; a face rendering unit for rendering the calculation result as a 3D image; and a before/after image display unit for comparing and displaying the image rendered by the face rendering unit against the original facial image.

Подробнее
22-03-2019 дата публикации

Номер: KR0101961462B1
Автор:
Принадлежит:

Подробнее
26-06-2018 дата публикации

Image processing apparatus and image processing method

Номер: KR0101870902B1
Принадлежит: 삼성전자주식회사

... Proposed are an image processing apparatus and method that combine face detection with depth-image-based human segmentation so that only people are accurately separated from among moving objects, and even a motionless person (for example, a person present in the first frame) can be accurately separated. To this end, the invention comprises: a face detection unit that detects a human face in an input colour image; a background model generation/update unit that generates a background model using the depth image of the first input frame and the detection result of the face detection unit; a candidate region extraction unit that compares the depth images of the second and subsequent frames with the background model to generate candidate regions that may be human regions, removes from the candidates any region judged to be a moving non-human object using the detection result of the face detection unit, and thereby extracts the final candidate regions; and a human region extraction unit that extracts the human region from the candidate regions of the current frame's depth image using the human region extracted from the previous frame's depth image.

Подробнее
02-08-2016 дата публикации

Номер: KR0101644586B1
Автор:
Принадлежит:

Подробнее
01-08-2017 дата публикации

Detection of a human head region in depth images

Номер: KR0101763778B1
Принадлежит: 인텔 코포레이션

... Systems, devices, and methods are described that include receiving a depth image and applying a template to the pixels of the depth image to determine the position of a person's head region in the depth image. The template includes a circular region and a first annular region surrounding the circular region. The circular region specifies a first range of depth values. The first annular region specifies a second range of depth values greater than the depth values of the first range.
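
A minimal sketch (not from the patent) of scoring one candidate head position in a depth image with a template made of a circular region and a surrounding annular region, where the ring is expected to lie farther away than the circle. Radii, depth ranges, and the scoring rule are illustrative assumptions; the template is assumed to fit inside the image.

import numpy as np

def head_score(depth: np.ndarray, cy: int, cx: int,
               r_inner: int = 10, r_outer: int = 16,
               head_range=(500, 1500), ring_margin=150) -> float:
    ys, xs = np.ogrid[:depth.shape[0], :depth.shape[1]]
    dist2 = (ys - cy) ** 2 + (xs - cx) ** 2
    circle = dist2 <= r_inner ** 2
    ring = (dist2 > r_inner ** 2) & (dist2 <= r_outer ** 2)
    head_depth = depth[circle]
    ring_depth = depth[ring]
    # Fraction of circle pixels inside the expected head depth range, and
    # fraction of ring pixels lying behind the head by at least ring_margin.
    in_range = np.mean((head_depth >= head_range[0]) & (head_depth <= head_range[1]))
    behind = np.mean(ring_depth >= np.median(head_depth) + ring_margin)
    return float(in_range * behind)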

Подробнее
31-10-2018 дата публикации

METHOD, PROGRAM AND APPARATUS FOR ESTIMATING LOCATION AND MANAGING IDENTITY USING FACE INFORMATION

Номер: KR1020180118447A
Принадлежит:

According to an embodiment of the present specification, a method for estimating a location and managing an identity using face information comprises the steps of: photographing an image of an object using a PTZ camera; acquiring face information including a size and a coordinate of a region of interest from the image; measuring a depth value of the image using the size of the region of interest and a zoom value of the PTZ camera; estimating a location of an object using the coordinate of the region of interest and the depth value; comparing the face information with pre-stored personal information to acquire current human information including a current name, a current recognition score, and current location information associated with the estimated location of the object; loading accumulated human information including a previous name, a previous recognition score, and previous location information of the object; updating the current human information to the accumulated human information ...

Подробнее
03-09-2018 дата публикации

Method and apparatus for live facial verification

Номер: KR1020180098367A
Автор: 바오 관보, 리 지린
Принадлежит:

... This application discloses a live human face verification method and apparatus. The apparatus obtains face images captured by at least two cameras and performs feature point registration on the face images according to preset facial feature points to obtain corresponding feature point combinations of the face images. After a homography transformation matrix is established between the feature point combinations, the apparatus uses the homography transformation matrix to compute the transformation error of the feature point combinations to obtain an error calculation result, and performs live human face verification of the face images according to the error calculation result. The embodiments of this application do not require camera calibration, so the computational load of the liveness determination algorithm can be reduced; furthermore, the cameras can be placed freely, which increases the flexibility and convenience of liveness detection.
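
A minimal sketch (not from the patent) of the homography error test: a homography is fitted between matched landmarks from two cameras and the reprojection error serves as the liveness cue, since a flat photograph fits a homography well while a real 3D face does not. Landmark format and the error threshold are assumptions; OpenCV is used for the homography fit.

import numpy as np
import cv2

def homography_liveness(pts_cam1: np.ndarray, pts_cam2: np.ndarray,
                        error_threshold: float = 3.0) -> bool:
    # pts_cam1, pts_cam2: (N, 2) float32 arrays of corresponding landmarks, N >= 4.
    H, _ = cv2.findHomography(pts_cam1, pts_cam2, method=0)
    projected = cv2.perspectiveTransform(pts_cam1.reshape(-1, 1, 2), H).reshape(-1, 2)
    mean_error = float(np.mean(np.linalg.norm(projected - pts_cam2, axis=1)))
    # Large error => the landmarks are not coplanar => likely a live 3D face.
    return mean_error > error_threshold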

Подробнее
27-09-2017 дата публикации

Face liveness detection method and apparatus

Номер: KR1020170109007A
Автор: 리, 펭
Принадлежит:

... The face detection method includes: acquiring a video image sequence; performing a video shot boundary detection process on the video image sequence to determine whether a shot change exists in the video image sequence and obtain a first determination result; and determining that face detection has failed when the first determination result indicates that a shot change exists in the video image sequence. The present disclosure also provides a face detection apparatus including an acquisition unit configured to acquire a video image sequence; a first detection unit configured to perform a video shot boundary detection process on the video image sequence to determine whether a shot change exists in the video image sequence and obtain a first determination result; and a determination unit configured to determine that face detection has failed when the first determination result indicates that a shot change exists in the video image sequence.

Подробнее
21-05-2020 дата публикации

Spoof detection using proximity sensors

Номер: TWI694347B
Принадлежит: EYEVERIFY INC.

Подробнее
28-03-2019 дата публикации

IDENTITY AUTHENTICATION METHOD AND APPARATUS, TERMINAL AND SERVER

Номер: SG10201901818UA
Автор: DU, Zhijun
Принадлежит:

IDENTITY AUTHENTICATION METHOD AND APPARATUS, TERMINAL AND SERVER Disclosed are an identity authentication method and apparatus, a terminal and a server. The method comprises: when a user performs identity authentication, receiving dynamic facial authentication prompt information sent by a server; obtaining pose identification information of the dynamic facial authentication prompt information by identifying a facial pose presented by the user; and sending the pose identification information to the server so that the server determines that the user passes the identity authentication when the server verifies the consistency between the pose identification information and the dynamic facial authentication prompt information. By means of embodiments of the present application, the identity of a user can be authenticated with high security by means of dynamic facial authentication; compared with the existing authentication mode of using an authentication password, authentication information ...

Подробнее
23-10-2014 дата публикации

USER GESTURE CONTROL OF VEHICLE FEATURES

Номер: WO2014172334A1
Принадлежит:

Methods and systems are presented for accepting inputs into a vehicle or other conveyance to control functions of the conveyance. A vehicle control system can receive gestures and other inputs. The vehicle control system can also obtain information about the user of the vehicle control system and information about the environment in which the conveyance is operating. Based on the input and the other information, the vehicle control system can modify or improve the performance or execution of user interface and functions of the conveyance. The changes make the user interfaces and/or functions user-friendly and intuitive.

Подробнее
23-10-2014 дата публикации

INTELLIGENT VEHICLE FOR ASSISTING VEHICLE OCCUPANTS AND INCORPORATING VEHICLE CRATE FOR BLADE PROCESSORS

Номер: WO2014172369A2
Принадлежит:

Methods, systems, and a computer readable medium are provided for maintaining a persona of a vehicle occupant and, based on the persona of the vehicle occupant and vehicle-related information, performing an action assisting the vehicle occupant. Methods, systems, and a computer readable medium are also provided for a vehicle containing multiple blade processors for performing vehicle and/or infotainment tasks, functions, and operations. The blade processors can be included in a crate having a first communication zone defining a trusted network within the vehicle to connect with trusted computational devices and/or modules provided or certified by the vehicle manufacturer but not untrusted computational devices and/or modules provided by vehicle occupants, a second communication zone defining an untrusted network to connect with the untrusted computational devices and/or modules, and a third communication zone providing power and data transmission to the blade processors. A master blade ...

Подробнее
14-12-2017 дата публикации

SYSTEMS AND METHODS FOR IMAGE GENERATION AND MODELING OF COMPLEX THREE-DIMENSIONAL OBJECTS

Номер: US20170358134A1
Принадлежит:

Exemplary embodiments described herein relate to systems and methods for generating an image comprising a three-dimensional (“3D”) model or replica of a subject. Such images may include the face of a human subject as well as views of the subject from various angles.

Подробнее
14-11-2019 дата публикации

RADIO FREQUENCY (RF) OBJECT DETECTION USING RADAR AND MACHINE LEARNING

Номер: US20190349365A1
Принадлежит:

Embodiments described herein can address these and other issues by using radar machine learning to address the radio frequency (RF) to perform object identification, including facial recognition. In particular, embodiments may obtain IQ samples by transmitting and receiving a plurality of data packets with a respective plurality of transmitter antenna elements and receiver antenna elements. I/Q samples indicative of a channel impulse responses of an identification region obtained from the transmission and reception of the plurality of data packets may then be used to identify, with an autoencoder, a physical object in the identification region.

Подробнее
05-06-2018 дата публикации

Creation and geospatial placement of avatars based on real-world interactions

Номер: US0009990373B2
Принадлежит: Fortkort, John A.

A method is provided for populating a map with a set of avatars through the use of a mobile technology platform associated with a user. The method (201) includes developing a set of facial characteristics (205), wherein each facial characteristic in the set is associated with one of a plurality of individuals that the user has encountered over a period of time while using the mobile technology platform; recording the locations (207) and times at which each of the plurality of individuals was encountered; forming a first database by associating the recorded times and locations at which each of the plurality of individuals was encountered with the individual's facial characteristics in the set; generating a set of avatars (309) from the set of facial characteristics; and using the first database to populate (319) a map (307) with the set of avatars.

Подробнее
23-06-2020 дата публикации

Method and apparatus for detecting blink

Номер: US0010691940B2

A method and apparatus for detecting a blink. An embodiment includes: extracting two frames of face images from a video recording a face; extracting a first to-be-processed eye image and a second to-be-processed eye image respectively from the two frames of face images, and aligning the first to-be-processed eye image with the second to-be-processed eye image through a set marking point, the marking point being used to mark a set position of an eye image; acquiring a difference image between the aligned first to-be-processed eye image and the second to-be-processed eye image, the difference image being used to represent a pixel difference between the first to-be-processed eye image and the second to-be-processed eye image; and importing the difference image into a pre-trained blink detection model to obtain a blink detection label, the blink detection model being used to match the blink detection label corresponding to the difference image.
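
A minimal sketch (not from the patent) of building the aligned difference image of the two eye crops; the trained blink detection model is represented by a placeholder callable, and the fallback threshold is only a stand-in assumption. The alignment here is a simple whole-pixel circular shift.

import numpy as np

def eye_difference_image(eye_a: np.ndarray, eye_b: np.ndarray,
                         marker_a: tuple[int, int], marker_b: tuple[int, int]) -> np.ndarray:
    # Align the second crop to the first by shifting it so the marking points
    # (e.g. an eye corner) coincide, then take the pixel-wise absolute difference.
    dy, dx = marker_a[0] - marker_b[0], marker_a[1] - marker_b[1]
    aligned_b = np.roll(np.roll(eye_b, dy, axis=0), dx, axis=1)
    return np.abs(eye_a.astype(int) - aligned_b.astype(int)).astype(np.uint8)

def detect_blink(diff: np.ndarray, model=None) -> bool:
    if model is not None:
        return bool(model(diff))          # placeholder for the trained blink detection model
    return float(diff.mean()) > 12.0      # crude stand-in threshold (assumption)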

Подробнее
14-07-2020 дата публикации

Multi-factor location-based and voice-based user location authentication

Номер: US0010715528B1
Принадлежит: Amazon Technologies, Inc.

A system is provided that determines a location of a user based on various criteria. The system may detect the location of a user based on the location of the user's voice and the location of the user's device, as determined using a beacon signal. The system may process data representing the user's voice and device locations using a model to determine a confidence that a user is at a particular location. Based on the determined location, the system may perform various actions.

Подробнее
10-06-2021 дата публикации

LIVE FACIAL RECOGNITION SYSTEM AND METHOD

Номер: US20210174067A1
Принадлежит:

A live facial recognition method includes capturing a zoom-out image of a face of a subject under recognition; and detecting a frame outside the face of the subject under recognition on the zoom-out image. The subject under recognition is determined to be a living subject when the zoom-out image includes no frame outside the face.

Подробнее
14-06-2005 дата публикации

Image recognition/reproduction method and apparatus

Номер: US0006907140B2

An image recognition/reproduction method includes an extraction step of extracting local feature elements of an image, and a selection step of selecting a pair composed of a prescribed local feature element and position information indicative thereof, this pair being such that the distance between a pair composed of a prescribed local feature element and position information indicative thereof and a pair composed of a local feature element extracted at the extraction step and position information indicative thereof is less than a prescribed distance.

Подробнее
29-03-2011 дата публикации

Image processing method and apparatus

Номер: US0007916971B2

An image processing technique includes acquiring a main image of a scene and determining one or more facial regions in the main image. The facial regions are analysed to determine if any of the facial regions includes a defect. A sequence of relatively low resolution images nominally of the same scene is also acquired. One or more sets of low resolution facial regions in the sequence of low resolution images are determined and analysed for defects. Defect free facial regions of a set are combined to provide a high quality defect free facial region. At least a portion of any defective facial regions of the main image are corrected with image information from a corresponding high quality defect free facial region.

Подробнее
13-05-2008 дата публикации

Statistical facial feature extraction method

Номер: US0007372981B2

A statistical facial feature extraction method is disclosed. In a training phase, N training face images are respectively labeled n feature points located in n different blocks to form N feature vectors. Next, a principal component analysis (PCA) technique is used to obtain a statistical face shape model after aligning each shape vector with a reference shape vector. In an executing phase, initial positions for desired facial features are firstly guessed according to the coordinates of the mean shape for aligned training face images obtained in the training phase, and k candidates are respectively labeled in n search ranges corresponding to above-mentioned initial positions to obtain kn different combinations of test shape vectors. Finally, coordinates of the test shape vector having the best similarity with the mean shape for aligned training face image and the statistical face shape model are assigned as facial features of the test face image.
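
A minimal sketch (not from the patent) of the training-phase shape model: PCA over aligned training shape vectors (2n landmark coordinates per face), keeping enough principal modes to explain a chosen fraction of variance. Alignment to the reference shape is assumed to have been done already.

import numpy as np

def build_shape_model(aligned_shapes: np.ndarray, variance_to_keep: float = 0.95):
    # aligned_shapes: (N, 2n) array, one row of landmark coordinates per training face.
    mean_shape = aligned_shapes.mean(axis=0)
    centered = aligned_shapes - mean_shape
    # PCA via SVD of the centred data matrix.
    _, singular_values, components = np.linalg.svd(centered, full_matrices=False)
    explained = singular_values ** 2
    ratio = np.cumsum(explained) / explained.sum()
    k = int(np.searchsorted(ratio, variance_to_keep)) + 1
    return mean_shape, components[:k]     # mean shape plus the principal shape modes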

Подробнее
03-10-2019 дата публикации

Method And Apparatus Of Adaptive Infrared Projection Control

Номер: US20190306441A1
Принадлежит:

Various examples with respect to adaptive infrared (IR) projection control for depth estimation in computer vision are described. A processor or control circuit of an apparatus receives data of an image based on sensing by one or more image sensors. The processor or control circuit also detects a region of interest (ROI) in the image. The processor or control circuit then adaptively controls a light projector with respect to projecting light toward the ROI.

Подробнее
01-09-2016 дата публикации

DYNAMIC VISOR

Номер: US20160253971A1
Принадлежит:

The present invention is a dynamically adjusting visor that adjusts to block bright light sources without blocking other areas of the user's view. The present invention is a transparent display and an image sensor whereby the sensor detects one or more bright light sources and darkens one or more areas of the transparent display corresponding to those bright light sources. Inputs enable a user to adjust the location and size of the dark areas on the display to align the dark areas with the light sources and the user's eyes.

Подробнее
06-02-2020 дата публикации

SUSPICIOUSNESS DEGREE ESTIMATION MODEL GENERATION DEVICE

Номер: US20200042774A1
Принадлежит: NEC Corporation

A suspiciousness degree estimation model generation device includes: a clustering unit that performs clustering on an input face image based on the feature extracted from the face image; and a suspiciousness degree estimation model generation unit that generates a suspiciousness degree estimation model used for estimating the suspiciousness degree of an estimation target person, based on the result of clustering by the clustering unit and suspiciousness degree information that is previously associated with a face image included by the clustering result and that shows the suspiciousness degree of a person shown by the face image. The suspiciousness degree estimation device includes: a feature extraction unit that extracts a feature from a face area of an estimation target person; and a suspiciousness degree estimation unit estimates the suspiciousness degree of the estimation target person, based on the feature extracted by the feature extraction unit and the suspiciousness degree estimation ...

Подробнее
06-02-2020 дата публикации

SYSTEM FOR VERIFYING THE IDENTITY OF A USER

Номер: US20200042773A1
Принадлежит:

A system receives an image including a live facial image of the user and an identity document including a photograph of the user. Moreover, the system calculates a facial match score by comparing facial features in the live facial image to facial features in the photograph. The system recognizes data objects and characters in the identity document using optical character recognition (OCR) and computer vision, and then identifies, based on the recognized data objects and characters, a type of the identity document. Further, the system calculates a document validity score by comparing the recognized characters and data objects to character strings and data objects known to be present in the identified type of the identity document. Additionally, the system determines and outputs the user's identity verification status based on comparing the facial match score to a facial match threshold and comparing the document validity score to a document validity threshold.
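
A minimal sketch (not from the patent) of combining the facial match score and the document validity score into a verification status. The thresholds and the treatment of the mixed case are illustrative assumptions.

def verification_status(facial_match_score: float, document_validity_score: float,
                        face_threshold: float = 0.85, document_threshold: float = 0.90) -> str:
    face_ok = facial_match_score >= face_threshold
    document_ok = document_validity_score >= document_threshold
    if face_ok and document_ok:
        return "verified"
    if not face_ok and not document_ok:
        return "rejected"
    return "manual review"   # one check passed, the other failed (assumption)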

Подробнее
18-08-2020 дата публикации

Authentication and authentication mode determination method, apparatus, and electronic device

Номер: US0010747867B2

An authentication method includes: acquiring a front face feature and a side face feature of a first user in response to a face authentication request of the first user; searching, based on the front face feature and the side face feature of the first user, a first list of users of multiple births corresponding to the first user for a candidate user matching both the front face feature and the side face feature of the first user, wherein the first list of users of multiple births corresponding to the first user is a list of users of multiple births with similar front face features and non-similar side face features; and determining, based on consistency between the candidate user and the first user in the front face feature and the side face feature, whether the first user succeeds in authentication.

Подробнее
24-10-2017 дата публикации

Image classification and information retrieval over wireless digital networks and the internet

Номер: US0009798922B2

A method and system for matching an unknown facial image of an individual with an image of a celebrity using facial recognition techniques and human perception is disclosed herein. The invention provides an internet-hosted system to find, compare, contrast and identify similar characteristics among two or more individuals using a digital camera, cellular telephone camera, or wireless device for the purpose of returning information regarding similar faces to the user. The system features classification of unknown facial images from a variety of internet accessible sources, including mobile phones, wireless camera-enabled devices, images obtained from digital cameras or scanners that are uploaded from PCs, third-party applications and databases. Once classified, the matching person's name, image and associated meta-data is sent back to the user. The method and system uses human perception techniques to weight the feature vectors.

Подробнее
05-03-2019 дата публикации

Thumbnail generation for digital images

Номер: US10222858B2

Detecting a first facial region in a first image. Extracting the detected first facial region. Generating a first facial thumbnail based on the extracted first facial region for use in representing the first image.

Подробнее
05-01-2021 дата публикации

Apparatuses and methods for recognizing object and facial expression robust against change in facial expression, and apparatuses and methods for training

Номер: US0010885317B2

A facial expression recognition apparatus and method and a facial expression training apparatus and method are provided. The facial expression recognition apparatus generates a speech map indicating a correlation between a speech and each portion of an object based on a speech model, extracts a facial expression feature associated with a facial expression based on a facial expression model, and recognizes a facial expression of the object based on the speech map and the facial expression feature. The facial expression training apparatus trains the speech model and the facial expression model.

Подробнее
02-05-2019 дата публикации

MONITORING DEVICE AND MONITORING SYSTEM

Номер: US20190132556A1
Принадлежит:

A monitoring device (2) identifies an object from videos made by a plurality of cameras (1) including a first camera and a second camera and having a predetermined positional relationship. The monitoring device has a receiving unit (21) configured to receive the videos from the plurality of cameras, a storage unit (22b, 22c) configured to store feature information indicating a feature of the object and camera placement information indicating placement positions of the cameras, and a controller (23) configured to identify the object from the videos based on the feature information. If an object has been identifiable from the video made by the first camera but has been unidentifiable from the video made by the second camera, the controller (23) specifies, based on the camera placement information, the object in the video made by the second camera.

Подробнее
09-02-2021 дата публикации

Classroom teaching cognitive load measurement system

Номер: US0010916158B2

The invention provides a classroom cognitive load detection system belonging to the field of education informationization, which includes the following. A task completion feature collecting module records an answer response time and a correct answer rate of a student when completing a task. A cognitive load self-assessment collecting module quantifies and analyzes a mental effort and a task subjective difficulty by a rating scale. An expression and attention feature collecting module collects a student classroom performance video to obtain a face region through a face detection and counting a smiley face duration and a watching duration of the student according to a video analysis result. A feature fusion module fuses aforesaid six indexes into a characteristic vector. A cognitive load determining module inputs the characteristic vector to a classifier to identify a classroom cognitive load level of the student.

Подробнее
21-10-2021 дата публикации

BUILDING SYSTEM WITH SENSOR-BASED AUTOMATED CHECKOUT SYSTEM

Номер: US20210327234A1
Принадлежит:

Example aspects include a method, a system, and a non-transitory computer-readable medium for operating an automated checkout system to be performed by a processing circuit, comprising determining a user account associated with a shopper. The aspects further include receiving, from a sensor, a first indication that an object passed through a location of a building. The first indication having been generated based on a tag coupled with the object. The sensor being configured to detect characteristics of objects. The sensor being located at the location of the building. The aspects further include receiving, from the sensor, a second indication that the shopper associated with the user account passed through the location. Additionally, the aspects further include associating the object with the user account based on the first indication and the second indication.

Подробнее
21-10-2021 дата публикации

TUNABLE MODELS FOR CHANGING FACES IN IMAGES

Номер: US20210327038A1
Принадлежит:

Techniques are disclosed for changing the identities of faces in images. In embodiments, a tunable model for changing facial identities in images includes an encoder, a decoder, and dense layers that generate either adaptive instance normalization (AdaIN) coefficients that control the operation of convolution layers in the decoder or the values of weights within such convolution layers, allowing the model to change the identity of a face in an image based on a user selection. A separate set of dense layers may be trained to generate AdaIN coefficients for each of a number of facial identities, and the AdaIN coefficients output by different sets of dense layers can be combined to interpolate between facial identities. Alternatively, a single set of dense layers may be trained to take as input an identity vector and output AdaIN coefficients or values of weighs within convolution layers of the decoder.
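
A minimal sketch (not from the patent) of the two ideas named in the abstract: applying AdaIN coefficients to a convolutional feature map, and blending the coefficients produced for two facial identities to interpolate between them. Shapes and the linear blending scheme are illustrative assumptions.

import numpy as np

def adain(features: np.ndarray, gamma: np.ndarray, beta: np.ndarray, eps: float = 1e-5):
    # features: (C, H, W); gamma, beta: (C,) coefficients produced by the dense layers.
    mean = features.mean(axis=(1, 2), keepdims=True)
    std = features.std(axis=(1, 2), keepdims=True)
    normalized = (features - mean) / (std + eps)
    return gamma[:, None, None] * normalized + beta[:, None, None]

def blend_identities(coeffs_a, coeffs_b, alpha: float):
    # coeffs_*: (gamma, beta) pairs for two identities; alpha in [0, 1].
    gamma = (1 - alpha) * coeffs_a[0] + alpha * coeffs_b[0]
    beta = (1 - alpha) * coeffs_a[1] + alpha * coeffs_b[1]
    return gamma, beta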

Подробнее
19-04-2018 дата публикации

SYSTEM AND METHOD FOR IMAGE CAPTURE AND MODELING

Номер: US20180107864A1
Принадлежит: Take-Two Interactive Software, Inc.

A system and method for capturing a player's likeness on an in game model at runtime including geometry and texture.

Подробнее
16-10-2014 дата публикации

SHARED NAVIGATIONAL INFORMATION BETWEEN VEHICLES

Номер: US20140309838A1
Принадлежит: Flextronics AP, LLC

A system for vehicle to another party communications that includes a vehicle personality module adapted to create a vehicle personality and a communications system that utilizes the created vehicle personality for one or more communications instead of a user's profile. The one or more communications are associated with one or more of an identifier and an icon representing the vehicle personality, with this identifier and/or icon sent with at least one communication and displayable to the recipient of the communication.

Подробнее
29-12-2016 дата публикации

Surveillance Data Based Resource Allocation Analysis

Номер: US20160379145A1
Принадлежит:

Technologies and implementations for facilitating human resource allocation based, at least in part, on analysis of surveillance data are generally disclosed.

Подробнее
26-01-2017 дата публикации

SYSTEM AND METHOD FOR VIRTUAL TREATMENTS BASED ON AESTHETIC PROCEDURES

Номер: US20170020610A1
Принадлежит:

Systems and methods provide visualization of the projected results of an aesthetic treatment, such as facial skin therapy, using an image display device and a plurality of stored transformation instructions. The system receives an input image of a subject, such as a recent portrait photograph. The system determines the aesthetic treatment to apply, retrieves the associated transformation instructions, and transforms the input image with the transformation instructions to produce a modified image that represents changes to the subject's face that are expected to occur after the selected treatment. The system may include or access a virtual treatment visualization engine that stores transformation parameters describing changes to make to the input image based on the selected treatment and other input parameters. The transformation parameters may be obtained from a model that received the selected treatment. The system may determine similarities of the subject to the model. 1. A computing device for electronically generating virtual treatment results of a subject, the computing device comprising: memory storing program instructions and one or more sets of treatment parameters, each of the one or more sets being associated with a corresponding aesthetic procedure of a plurality of aesthetic procedures; and receive a baseline image depicting a first treatment area of a subject; identify, based on the corresponding treatment parameters of a first set of the one or more sets of treatment parameters, one or more zones of the baseline image, the first set being associated with a first aesthetic procedure of the plurality of aesthetic procedures; transform the baseline image within the one or more zones using the first set of treatment parameters to produce a first post-treatment image that depicts an estimated appearance of the first treatment area after the first treatment area is treated with the first aesthetic procedure; and display the baseline image and the ...

Подробнее
06-01-2022 дата публикации

METHOD FOR FACE RECOGNITION, ELECTRONIC EQUIPMENT, AND STORAGE MEDIUM

Номер: US20220004742A1
Принадлежит:

An ambient light parameter is acquired by detecting ambient light of a surveillance area. When there is a target object in the surveillance area, a movement distance of the target object is detected. When the ambient light parameter meets an ambient light condition and the movement distance of the target object is no less than a distance threshold, the ambient light parameter is changed by adjusting screen brightness of a display device according to the ambient light parameter. A face image of the target object after the ambient light parameter has been changed is acquired. A comparison result of comparing the face image to a preset image is acquired. A face recognition result is acquired according to the comparison result.

Подробнее
05-12-2019 дата публикации

BIOMETRIC FUSION ELECTRONIC LOCK SYSTEM

Номер: US20190371099A1
Автор: Jeff Chen
Принадлежит:

A biometric fusion electronic lock system contains a central processing module including an image processing unit, a voice processing unit, a digital signal processing unit, a logic processing unit, and an interface control unit. An image capturing module is electrically connected with the central processing module. A voice capturing module is electrically connected with the central processing module. A locking/unlocking module is configured to drive a locking latchbolt to lock or unlock the electronic lock system, and the locking/unlocking module is electrically connected with the central processing module. A storage module is set in a storage media of the electronic lock system so as to store facial features and voiceprint data captured by the image capturing module and the voice capturing module respectively. A liquid-crystal display (LCD) module is electrically connected with an interface control unit of the central processing module.

Подробнее
09-11-2017 дата публикации

METHOD AND DEVICE FOR DISPLAYING MULTI-CHANNEL VIDEO

Номер: US20170324921A1
Принадлежит:

A multi-channel video display method and a multi-channel video display device thereof are provided. The multi-channel video display method includes receiving at least two video signals; obtaining position information for at least two users who are in front of a display screen of the electronic apparatus and who are browsing the video signals respectively, wherein each of the users corresponds to a respective video signal; adjusting, according to the position information of each of the users, the direction of a filter of the display screen of the electronic apparatus so that the display direction of the video signal only points toward the corresponding user; and simultaneously displaying the video signals according to the display directions of the video signals.

Подробнее
04-02-2021 дата публикации

METHOD, ELECTRONIC DEVICE, AND COMPUTER READABLE MEDIUM FOR IMAGE IDENTIFICATION

Номер: US20210034842A1
Принадлежит:

Embodiments of the present disclosure disclose a method, electronic device, and computer readable medium for image identification. The method comprises: acquiring an image comprising a person object for use as an input image; performing feature extraction on the input image using a feature extracting module of a trained human body identification model; and matching an extracted human body feature of the inputted image with a preset human body feature database, to identify the person object in the inputted image, wherein the human body identification model extracting features of human body images captured by cameras of different categories respectively using the feature extracting module, and the human body identification model identifying whether the human body images captured by the cameras of different categories are human body images of a given person based on the extracted features of the human body images. This method improves the accuracy of multi-camera human body re-identification ...

Подробнее
04-02-2021 дата публикации

SPOOFING DETECTION APPARATUS, SPOOFING DETECTION METHOD, AND COMPUTER-READABLE RECORDING MEDIUM

Номер: US20210034893A1
Принадлежит: NEC Corporation

A spoofing detection apparatus comprises obtaining, from an image capture apparatus, a first image frame that includes the face of a subject person obtained when a light-emitting apparatus is emitting light and a second image frame that includes the face of the subject person obtained when the light-emitting apparatus is turned off, extracting, from the first image frame, information specifying a face portion of the subject person, and extract, from the second image frame, information specifying a face portion of the subject person, extracting a portion that includes a bright point formed by reflection in an iris region of an eye of the subject person, from the first image frame, extracts a portion corresponding to the portion that includes the bright point, from the second image frame, and calculates a feature that is independent of the position of the bright point, and determining authenticity of subject person based on the feature.
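
A simplified sketch (not from the patent): it only checks that the brightest point found in the iris region of the lit frame is much darker at the same location in the unlit frame, rather than computing the position-independent feature described above. The iris mask and the intensity margin are assumptions.

import numpy as np

def has_flash_reflection(lit_frame: np.ndarray, unlit_frame: np.ndarray,
                         iris_mask: np.ndarray, margin: float = 40.0) -> bool:
    # Brightest iris pixel in the frame captured while the light was emitting.
    iris_values = np.where(iris_mask, lit_frame.astype(float), -np.inf)
    y, x = np.unravel_index(np.argmax(iris_values), iris_values.shape)
    # A real eye shows a corneal glint only while the light is on, so the same
    # spot should be much darker in the frame captured with the light off.
    return float(lit_frame[y, x]) - float(unlit_frame[y, x]) >= margin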

Подробнее
08-08-2019 дата публикации

PHOTO PROCESSING METHOD AND APPARATUS

Номер: US20190244027A1
Принадлежит:

The present disclosure discloses a photo processing method and an apparatus for grouping photos into photo albums based on facial recognition results. The method includes: performing face detection on multiple photos, to obtain a face image feature set, each face image feature in the face image feature set corresponding to one of the multiple photos; determining a face-level similarity for each pair of face image features in the face image feature set; determining a photo-level similarity between each pair of photos in the multiple photos in accordance with their associated face-level similarities; generating a photo set for each target photo in the multiple photos, wherein any photo-level similarity between the target photo and another photo in the photo set exceeds a predefined photo-level threshold; and generating a label for each photo set using photographing location and photographing time information associated with the photos in the photo set.
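
A minimal sketch (not from the patent) of the grouping step: face-level similarities are reduced to a photo-level similarity (best matching face pair), and the photo set for a target photo collects every photo whose similarity exceeds the photo-level threshold. Embedding format and the threshold are assumptions.

import numpy as np

def photo_similarity(faces_a: list[np.ndarray], faces_b: list[np.ndarray]) -> float:
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    # Photo-level similarity = best face-level similarity across the two photos.
    return max((cos(fa, fb) for fa in faces_a for fb in faces_b), default=0.0)

def photo_set_for(target: str, photo_faces: dict[str, list[np.ndarray]],
                  threshold: float = 0.75) -> list[str]:
    return [name for name, faces in photo_faces.items()
            if name != target and photo_similarity(photo_faces[target], faces) > threshold]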

Подробнее
22-08-2019 дата публикации

ELECTRONIC DEVICE AND CONTROL METHOD THEREOF

Номер: US20190258788A1
Принадлежит: SAMSUNG ELECTRONICS CO., LTD.

An electronic apparatus for authenticating a user thereof based on a face angle, a rotational angle of the electronic apparatus, a difference value between the face angle and the rotational angle and a modified face image.

Подробнее
27-05-2021 дата публикации

METHOD FOR PROCESSING IMAGES, ELECTRONIC DEVICE, AND STORAGE MEDIUM

Номер: US20210153629A1
Принадлежит:

A method for processing images includes: detecting a plurality of human face key points of a three-dimensional human face in a target image; acquiring a virtual makeup image, wherein the virtual makeup image includes a plurality of reference key points, the reference key points indicating human face key points of a two-dimensional human face; and acquiring a target image fused with the virtual makeup image by fusing the virtual makeup image and the target image with each of the reference key points in the virtual makeup image aligned with a corresponding human face key point.
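The alignment step can be pictured as estimating a 2-D transform that carries the reference key points of the flat makeup image onto the detected face key points. The least-squares affine fit below is a simplified sketch of that idea (the disclosure itself does not specify an affine model), with made-up coordinates.

import numpy as np

def estimate_affine(ref_pts, face_pts):
    # ref_pts, face_pts: (N, 2) arrays of corresponding key points, N >= 3.
    ones = np.ones((len(ref_pts), 1))
    A = np.hstack([ref_pts, ones])                 # N x 3
    # Solve A @ M ~= face_pts for the 3 x 2 affine matrix M in the least-squares sense.
    M, *_ = np.linalg.lstsq(A, face_pts, rcond=None)
    return M

def apply_affine(M, pts):
    ones = np.ones((len(pts), 1))
    return np.hstack([pts, ones]) @ M

ref = np.array([[0, 0], [100, 0], [0, 100]], dtype=float)    # makeup-image key points
face = np.array([[10, 20], [110, 25], [5, 118]], dtype=float)  # detected face key points
M = estimate_affine(ref, face)
print(np.round(apply_affine(M, ref)))              # maps closely onto the face key points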

Подробнее
12-01-2016 дата публикации

Simultaneous video streaming across multiple channels

Номер: US0009235941B2

Methods and systems for a media controller subsystem that can provide video streaming using a distributed network control server, media server, and virtual network console on a common processing or circuit board and filter and apply restrictions to media content based on one or more of the identity of the vehicle occupant requesting media content, the identity of a portable computational device associated with the vehicle occupant, and the spatial location of the vehicle occupant and/or remote computational device.

Подробнее
18-04-2013 дата публикации

Apparatus and method for detecting specific object pattern from image

Номер: US20130094709A1
Принадлежит: Canon Inc

A face area is detected from an image captured by an image pickup device, pixel values of the image are adjusted based on information concerning the detected face area, a person area is detected from the adjusted image, and the detected face area is integrated with the detected person area. With this configuration, an object can be detected accurately even when, for example, the brightness varies.
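A compact sketch of the pixel-value adjustment is given below: the whole image is rescaled so that the detected face area reaches a target mean brightness before the person detector runs. The face box format and the target value of 128 are assumptions for illustration; the detectors themselves are out of scope here.

import numpy as np

def adjust_by_face(image, face_box, target_mean=128.0):
    # image: H x W grayscale array; face_box: (x, y, w, h) from a face detector.
    x, y, w, h = face_box
    face_mean = image[y:y + h, x:x + w].mean()
    gain = target_mean / max(face_mean, 1.0)
    return np.clip(image * gain, 0, 255).astype(np.uint8)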

Подробнее
02-05-2013 дата публикации

FACE RECOGNITION APPARATUS AND METHOD FOR CONTROLLING THE SAME

Номер: US20130108123A1
Принадлежит: SAMSUNG ELECTRONICS CO., LTD.

A face recognition apparatus and face recognition method perform face recognition of a face by comparing an image of the face to be identified with target images for identification. The face recognition apparatus includes an image input unit to receive an image of a face to be identified, a sub-image production unit to produce a plurality of sub-images of the input face image using a plurality of different face models, a storage unit to store a plurality of target images, and a face recognition unit to set the sub-images to observed nodes of a Markov network, to set the target images to hidden nodes of the Markov network, and to recognize the presence of a target image corresponding to the face images to be identified using a first relationship between the observed nodes and the hidden nodes and a second relationship between the hidden nodes. 1. A face recognition apparatus comprising:an image input unit to receive an image of a face to be identified;a sub-image production unit to produce a plurality of sub-images of the input face image using a plurality of different face models; anda face recognition unit to set the sub-images to observed nodes of a Markov network, to set target images to hidden nodes of the Markov network using the Markov network, and to recognize the presence of a target image corresponding to the face image to be identified using a first relationship between the observed nodes and the hidden nodes and a second relationship between the hidden nodes.2. The face recognition apparatus according to claim 1 , further comprising a storage unit to store the plurality of target images.3. The face recognition apparatus according to claim 2 , wherein the first relationship between the observed node and the hidden node is based on a similarity between the sub-image and the target image claim 2 , andthe second relationship between the hidden nodes is based on a similarity between the target images.4. The face recognition apparatus according to claim 3 , ...
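As a very simplified stand-in for inference on the Markov network described above (not the actual belief-propagation scheme), the sketch below combines the observed-to-hidden relationship (sub-image vs. target similarity) with the hidden-to-hidden relationship (target vs. target similarity) by a few rounds of coordinate ascent over the hidden labels.

import numpy as np

def recognize(unary, pairwise, rounds=3, weight=0.5):
    # unary: (K, M) similarities between K sub-images and M target images
    #        (the first relationship, observed node vs. hidden node).
    # pairwise: (M, M) similarities between target images
    #        (the second relationship, hidden node vs. hidden node).
    K, M = unary.shape
    labels = unary.argmax(axis=1)              # initialise from the unary term alone
    for _ in range(rounds):
        for k in range(K):
            others = [int(labels[j]) for j in range(K) if j != k]
            consistency = pairwise[:, others].mean(axis=1) if others else np.zeros(M)
            labels[k] = int(np.argmax(unary[k] + weight * consistency))
    return labels                              # chosen target image per sub-image

# Tiny hypothetical example: 3 sub-images, 4 target images.
unary = np.array([[0.9, 0.2, 0.1, 0.3],
                  [0.4, 0.5, 0.1, 0.2],
                  [0.8, 0.3, 0.2, 0.1]])
pairwise = np.eye(4)
print(recognize(unary, pairwise))              # [0 0 0]: the weak match is pulled to agree with the rest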

Подробнее
23-05-2013 дата публикации

FACE IMAGE REGISTRATION DEVICE AND METHOD

Номер: US20130129160A1
Принадлежит: Panasonic Corporation

A face image registration device includes an image input unit that inputs face images of a subject person, and an others face retention unit that retains a plurality of others faces. The device further includes: a false alarm characteristic calculation unit that collates the face images of the subject person with the retained others faces, and calculates a false alarm characteristic of the face images of the subject person; a correct alarm characteristic calculation unit that collates the face images of the subject person with each other to calculate a correct alarm characteristic of the face images of the subject person; and a registration face image selection unit that selects a registration face image from the face images of the subject person by using the false alarm characteristic of the face images of the subject person and the correct alarm characteristic of the face images of the subject person. 1. A face image registration device comprising:an image input unit that inputs a plurality of face images of a subject person;an others face retention unit that retains a plurality of others faces;a false alarm characteristic calculation unit that collates the face images of the subject person with the others faces retained in the others face retention unit, and calculates a false alarm characteristic of the face images of the subject person;a correct alarm characteristic calculation unit that collates the plurality of face images of the subject person with each other to calculate a correct alarm characteristic of the face images of the subject person; anda registration face image selection unit that selects a registration face image from the plurality of face images of the subject person by using the false alarm characteristic of the face images of the subject person and the correct alarm characteristic of the face images of the subject person.2. The face image registration device according to claim 1 ,wherein the false alarm characteristic calculation unit ...
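The selection logic can be sketched as scoring each candidate face image by how well it matches the subject's other images (the correct alarm characteristic) versus how strongly it matches other people's faces (the false alarm characteristic). The cosine similarity and the simple difference score below are illustrative assumptions, not the claimed calculation.

import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def select_registration_image(subject_feats, others_feats):
    # subject_feats: feature vectors of the subject person's face images (two or more).
    # others_feats: feature vectors of other persons' faces.
    best_idx, best_score = 0, float("-inf")
    for i, f in enumerate(subject_feats):
        false_alarm = np.mean([cosine(f, o) for o in others_feats])
        correct_alarm = np.mean([cosine(f, s) for j, s in enumerate(subject_feats) if j != i])
        score = correct_alarm - false_alarm    # favour images that match the subject, not others
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx                            # index of the face image to register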

Подробнее
23-05-2013 дата публикации

RECOMMENDATION SYSTEM BASED ON THE RECOGNITION OF A FACE AND STYLE, AND METHOD THEREOF

Номер: US20130129210A1
Автор: Na Seung Won
Принадлежит: SK PLANET CO., LTD.

The present disclosure relates to a recommendation system based on the recognition of a face and style, and method thereof. More particularly, face and style feature information is extracted from a user image, face and style characteristics are recognized from the extracted face and style feature information, and then recommendation style information (for example, a hair style, a make-up style, product information, or the like) matched with the recognized face and style characteristics is searched in a recommendation style table templated in advance according to characteristics to thereby be recommend, such that recommendation style information most appropriately matched with user's face and style may be rapidly and easily recommended. 1. A recommendation system based on the recognition of face and style , comprising:a user terminal transmitting a user image through a communication network or extracting face and style feature information from the user image to transmit the extracted face and style feature information through the communication network; anda recommendation device templating recommendation style information matched with face and style characteristics to generate a recommendation style table, recognizing the face and style characteristics from the user image transmitted from the user terminal or the face and style feature information transmitted from the user terminal, and searching recommendation style information matched with the recognized face and style characteristics in the generated recommendation style table to transmit the searched recommendation style information to the user terminal.2. A recommendation device based on the recognition of face and style , comprising:a face recognition unit configured to extract face feature information from a user image transmitted from a user terminal and recognize face characteristics using the extracted face feature information, or recognize the face characteristics using face feature information transmitted ...

Подробнее
13-06-2013 дата публикации

IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD

Номер: US20130148853A1
Принадлежит: SAMSUNG ELECTRONICS CO., LTD.

An image processing apparatus and method may accurately separate only humans among moving objects, and also accurately separate even humans who have no motion via human segmentation using a depth data and face detection technology. The apparatus includes a face detecting unit to detect a human face in an input color image, a background model producing/updating unit to produce a background model using a depth data of an input first frame and face detection results, a candidate region extracting unit to produce a candidate region as a human body region by comparing the background model with a depth data of an input second or subsequent frame, and to extract a final candidate region by removing a region containing a moving object other than a human from the candidate region, and a human body region extracting unit to extract the human body region from the candidate region. 1. An image processing apparatus comprising:a face detecting unit to detect a human face in an input image;a background model producing/updating unit to produce a background model using a depth data of an input first frame and detection results of the face detecting unit;a candidate region extracting unit to produce a candidate region that will serve as a human body region by comparing the background model with a depth data of an input second or subsequent frame, and to extract a final candidate region by removing a region, which is judged using the detection results of the face detecting unit as containing a moving object other than a human, from the candidate region; anda human body region extracting unit to extract the human body region from the candidate region that is extracted from the depth data of the current frame using a human body region extracted from the depth data of the previous frame.2. The apparatus according to claim 1 , wherein the human body region extracting unit includes:a first calculator to search the human body region via implementation of tracking using hard constraint ...
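The candidate-region step can be approximated by simple depth background subtraction, as sketched below: the first frame's depth map acts as the background model, and pixels of a later frame that come significantly closer than the background form the candidate human region. The 150 mm margin is an arbitrary assumption.

import numpy as np

def candidate_region(background_depth, current_depth, min_diff=150):
    # Depth values in millimetres; smaller means closer to the camera.
    closer = (background_depth - current_depth) > min_diff
    valid = current_depth > 0                  # depth sensors report 0 for unknown pixels
    return closer & valid                      # boolean mask of candidate human pixels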

Подробнее
27-06-2013 дата публикации

METHOD AND APPARATUS FOR INFORMATION PROCESSING

Номер: US20130163832A1
Автор: MURAKAMI Masatoshi
Принадлежит: KABUSHIKI KAISHA TOSHIBA

According to one embodiment, facial image data for a user is taken in, feature points of the user's face are extracted from the facial image data and then coded, and the facial image data is registered on a destination management table so that it is associated with a special identifier and/or the coded feature points. In particular, the face of the sender is incorporated into a video mail, so that the exchange of addresses (faces) is facilitated and the reliability of the e-mail itself can be enhanced. Further, a face can be automatically registered in an address book at the receiving end, and a similar face can be updated with the latest face. 1. An information processing method comprising: obtaining image data on a face of a user; and generating address information by extracting feature points of the face of the user from the image data and using the feature points. 2. The information processing method of claim 1, further comprising: generating a code or a character string by using the feature points; and including the code or character string in the address information. 3. The information processing method of claim 1, wherein the address information comprises an email address. 4. The information processing method of claim 1, further comprising: updating registered image data corresponding to the address information based on obtained image data, if the obtained image data differs from the registered image data and if an updating request is made. 5. The information processing method of claim 1, further comprising: updating registered image data corresponding to the address information based on obtained image data, if the obtained image data differs from the registered image data or if an updating request is made. 6. The information processing method of claim 1, further comprising: generating second address information, with obtained image data corresponding to the second address information being regarded as new image data, ...
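Purely as an illustration of turning coded feature points into something that can ride along with address information, the sketch below quantises the points and hashes them into a short identifier string; the abstract does not specify the actual coding scheme, so the hash is a stand-in.

import hashlib
import numpy as np

def feature_points_to_code(points, digits=12):
    # points: (N, 2) facial feature point coordinates from face detection.
    quantised = np.round(np.asarray(points, dtype=float), 1)
    return hashlib.sha1(quantised.tobytes()).hexdigest()[:digits]

# Hypothetical feature points (eye corners, nose tip) of one face.
print(feature_points_to_code([[30.1, 40.2], [70.4, 40.0], [50.3, 80.9]]))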

Подробнее
27-06-2013 дата публикации

MIRROR CONTROLLER AND COMPUTER READABLE STORAGE MEDIUM

Номер: US20130163877A1
Автор: Morishita Youji
Принадлежит: Denso Corporation

A mirror controller includes: a face position detector for, analyzing a face image of a driver, and for detecting face position information; an angle calculator for calculating an optimum angle of a mirror of a vehicle according to the face position information so that the driver looks at a predetermined region via the mirror; and a controller for controlling an angle adjuster to adjust the angle of the mirror to be the optimum angle when a driving speed detected by a speed sensor is equal to or smaller than a predetermined threshold value, or when a face direction detected by a face direction detector is different from a direction toward the mirror. 1. A mirror controller comprising:a face position detector for analyzing an image of a face of a driver in a vehicle, and for detecting face position information, which is indicative of at least one of a position of the face and a position of a part of the face;an angle calculator for calculating an optimum angle of a mirror of the vehicle according to the face position information so that the driver looks at a predetermined region via the mirror with the optimum angle; anda controller for controlling an angle adjuster to adjust the angle of the mirror to be the optimum angle when a driving speed detected by a speed sensor is equal to or smaller than a predetermined threshold value, or when a face direction detected by a face direction detector is different from a direction toward the mirror.2. The mirror controller according to claim 1 , further comprising:a position calculator for calculating an average of the face position information according to a record of the face position information, which is obtained in a predetermined time interval,wherein the angle calculator calculates the optimum angel of the mirror according to the average of the face position information.3. The mirror controller according to claim 1 ,wherein, after the angle adjuster adjusts the angle of the mirror to be the optimum angle, the controller ...
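The geometric core of the optimum-angle calculation can be sketched in 2-D: by the law of reflection, the mirror normal must bisect the directions from the mirror to the driver's eyes and from the mirror to the region to be seen, so the detected face (eye) position fixes the angle. The coordinates below are invented for illustration.

import numpy as np

def optimum_mirror_angle(eye_pos, mirror_pos, target_pos):
    # All positions are 2-D (x, y) coordinates in the vehicle frame.
    to_eye = np.asarray(eye_pos, float) - np.asarray(mirror_pos, float)
    to_target = np.asarray(target_pos, float) - np.asarray(mirror_pos, float)
    bisector = to_eye / np.linalg.norm(to_eye) + to_target / np.linalg.norm(to_target)
    # Direction of the mirror normal, in degrees, that lets the eyes see the target region.
    return float(np.degrees(np.arctan2(bisector[1], bisector[0])))

print(round(optimum_mirror_angle((1.0, 0.5), (0.0, 0.0), (-5.0, -1.0)), 1))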

Подробнее
11-07-2013 дата публикации

Face Data Acquirer, End User Video Conference Device, Server, Method, Computer Program And Computer Program Product For Extracting Face Data

Номер: US20130177219A1
Автор: Fröjdh Per, Ström Jacob
Принадлежит: TELEFONAKTIEBOLAGET L M ERICSSON (PUBL)

A face data acquirer includes an image capture module arranged to capture an image from a video stream of a video conference. A face detection module is arranged to determine a subset of the image, the subset representing a face. An identity acquisition module is arranged to acquire an identity of a video conference participant coupled to the face represented by the subset of the image. A face extraction module is arranged to extract face data from the subset of the image and to determine whether to store the extracted face data for subsequent face recognition. A corresponding end user video conference device, server, method, computer program and computer program product are also provided. 140. A face data acquirer () comprising:{'b': 43', '76, 'an image capture module () arranged to capture an image () from a video stream of a video conference;'}{'b': 44', '75', '76, 'a face detection module () arranged to determine a subset () of the image (), the subset representing a face;'}{'b': 45', '75', '76, 'an identity acquisition module () arranged to acquire an identity of a video conference participant coupled to the face represented by the subset () of the image (); and'}{'b': 46', '75', '76, 'a face extraction module () arranged to extract face data from the subset () of the image () and to determine whether to store the extracted face data for subsequent face recognition.'}24046. The face data acquirer () according to claim 1 , wherein the image analysis module () is arranged to determine to store the extracted face when an accuracy of face detection is determined to be improved when the extracted face data is stored compared to refraining from storing the extracted face data.34046. The face data acquirer () according to or claim 1 , wherein the image analysis module () is arranged to determine to store the extracted face data when a measurement of difference of the extracted face data compared to a set of previously stored extracted face data for the same identity ...

Подробнее
18-07-2013 дата публикации

Separating Directional Lighting Variability in Statistical Face Modelling Based on Texture Space Decomposition

Номер: US20130182920A1

A technique for determining a characteristic of a face or certain other object within a scene captured in a digital image including acquiring an image and applying a linear texture model that is constructed based on a training data set and that includes a class of objects including a first subset of model components that exhibit a dependency on directional lighting variations and a second subset of model components which are independent of directional lighting variations. A fit of the model to the face or certain other object is obtained including adjusting one or more individual values of one or more of the model components of the linear texture model. Based on the obtained fit of the model to the face or certain other object in the scene, a characteristic of the face or certain other object is determined.

Подробнее
25-07-2013 дата публикации

FEELING-EXPRESSING-WORD PROCESSING DEVICE, FEELING-EXPRESSING-WORD PROCESSING METHOD, AND FEELING-EXPRESSING-WORD PROCESSING PROGRAM

Номер: US20130188835A1
Принадлежит: NEC Corporation

The present approach enables an impression of the atmosphere of a scene or an object present in the scene at the time of photography to be pictured in a person's mind as though the person were actually at the photographed scene. A feeling-expressing-word processing device has: a feeling information calculating unit for analyzing a photographed image, and calculating feeling information which indicates a situation of a scene portrayed in the photographed image or a condition of an object present in the scene; and a feeling-expressing-word extracting unit for extracting, from among feeling-expressing words which express feelings and are stored in a feeling-expressing-word database in association with the feeling information, a feeling-expressing word which corresponds to the feeling information calculated by the feeling information calculating unit 1. A feeling-expressing-word processing device comprising:a feeling information calculating unit for analyzing a photographed image, detecting a face appearing in the image and a finger appearing in the image, and calculating feeling information which indicates a situation of a scene shown in a photographed image or a condition of an object present in the scene on the basis of the face and the finger in the image; anda feeling-expressing-word extracting unit for extracting, from among feeling-expressing words which express feelings and are stored in advance in association with the feeling information, a feeling-expressing word which corresponds to the feeling information calculated by the feeling information calculating unit.2. The feeling-expressing-word processing device according to claim 1 , whereinthe feeling information calculating unit calculates the feeling information which at least includes any of the number of faces, a tilt of the face, a degree of a smile, and the number of fingers.3. The feeling-expressing-word processing device according to claim 2 , whereinwhen the feeling information includes the number of ...

Подробнее
08-08-2013 дата публикации

APPARATUS FOR REAL-TIME FACE RECOGNITION

Номер: US20130202159A1

Disclosed herein is a real-time face recognition apparatus and method. A real-time face recognition apparatus includes a face detection unit for detecting a face image by obtaining image coordinates of a face from an input image. An eye detection unit obtains image coordinates of both eyes in the face image. A facial feature extraction unit generates feature histogram data based on parallel processing from the face image. A DB unit stores predetermined comparative feature histograms. A histogram matching unit compares the histogram data generated by the facial feature extraction unit with the comparative feature histograms, and then outputting similarities of face images. The face recognition apparatus may be implemented as internal hardware in which a VGA camera and an exclusive chip interface with each other, thus remarkably reducing a system size and installation cost, and performing face recognition in real time without requiring additional equipment. 1. A real-time face recognition apparatus comprising:a face detection unit for detecting a face image by obtaining image coordinates of a face from an input image;an eye detection unit for obtaining image coordinates of both eyes in the face image detected by the face detection unit;a facial feature extraction unit for generating feature histogram data based on parallel processing from the face image detected by the face detection unit;a database (DB) unit for storing predetermined comparative feature histograms; anda histogram matching unit for comparing the histogram data generated by the facial feature extraction unit with the comparative feature histograms stored in the DB unit, and then outputting similarities of the face image.2. The real-time face recognition apparatus of claim 1 , wherein the facial feature extraction unit comprises:a face normalization unit for downscaling the face image based on the coordinates of both eyes obtained by the eye detection unit;a convolution filtering operation unit for ...
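The histogram-matching unit can be sketched with the classic histogram-intersection similarity, assuming the facial feature extraction unit already outputs a normalised feature histogram per face; the database here is just a list of named histograms.

import numpy as np

def histogram_intersection(h1, h2):
    # Similarity of two normalised histograms: 1.0 means identical.
    return float(np.minimum(h1, h2).sum())

def match(query_hist, db_hists, db_names):
    sims = [histogram_intersection(query_hist, h) for h in db_hists]
    order = np.argsort(sims)[::-1]
    return [(db_names[i], sims[i]) for i in order]     # most similar first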

Подробнее
08-08-2013 дата публикации

SUBJECT DETERMINATION APPARATUS THAT DETERMINES WHETHER OR NOT SUBJECT IS SPECIFIC SUBJECT

Номер: US20130202163A1
Принадлежит: CASIO COMPUTER CO., LTD.

A subject determination apparatus includes: an image obtaining unit, first and second similarity degree determination units, an information obtaining unit, and a subject determination unit. The second similarity degree determination unit determines whether a similarity degree between a reference image and an image of a candidate region of a specific subject image in one of frame images sequentially obtained by the image obtaining unit is equal to or more than a second threshold value smaller than a first threshold value if the similarity degree is determined by the first similarity degree determination unit to be less than the first threshold value. The information obtaining unit obtains information indicating a similarity degree between the reference image and an image of a region corresponding to the candidate region in another frame image obtained a predetermined number of frames before the one frame image. 1. A subject determination apparatus comprising:an image obtaining unit that sequentially obtains frame images;a first similarity degree determination unit that determines whether or not a similarity degree between a predetermined reference image and an image of a candidate region of a specific subject image in one of the frame images obtained by the image obtaining unit is equal to or more than a first threshold value;a second similarity degree determination unit that determines whether or not the similarity degree is equal to or more than a second threshold value smaller than the first threshold value in a case where it is determined by the first similarity degree determination unit that the similarity degree is not equal to or more than the first threshold value;an information obtaining unit that obtains information related to a similarity degree between the predetermined reference image and an image of a region, the region corresponding to the candidate region in another frame image obtained a predetermined number of frames before from the one frame image, ...

Подробнее
22-08-2013 дата публикации

INFORMATION PROCESSING APPARATUS, METHOD, AND PROGRAM

Номер: US20130216109A1
Принадлежит: Omron Corporation

A migratory ratio of a customer is obtained to support a marketing strategy related to attracting customers. A population extraction unit extracts the number of persons, in which a game of one of the models of currently-installed amusement machines is recorded, as the number of persons of a population from pieces of information included in a biological information database. A migratory ratio calculation result output unit calculates a ratio of the number of persons, who use a model except the models of the amusement machines in which the population is obtained in the currently-installed amusement machines in the pieces of information included in the biological information database, to the population as the migratory ratio. The present invention can be applied to an apparatus that analyzes a trend of customers. 1. An information processing apparatus comprising:storage for storing a face image as a face image of an accumulator in an accumulator database;an obtaining unit for obtaining a face image of a matching target person who uses or purchases one of a plurality of articles together with identification information identifying the article that is used or purchased by the matching target person;a matching unit for performing matching by calculating a degree of similarity between the face image of the matching target person, which is obtained by the obtaining unit, and the face image of the accumulator, which is stored in the storage;similarity determination unit for determining whether the face image of the matching target person is the face image of the accumulator by comparing the degree of similarity, which is a matching result of the matching unit, to a predetermined threshold;a recorder for recording detection of the accumulator, which is of the matching target person, in the accumulator database together with the identification information while correlating the detection of the accumulator with the face image of the accumulator, when the similarity ...

Подробнее
05-09-2013 дата публикации

ATTRIBUTE DETERMINING METHOD, ATTRIBUTE DETERMINING APPARATUS, PROGRAM, RECORDING MEDIUM, AND ATTRIBUTE DETERMINING SYSTEM

Номер: US20130230217A1
Автор: Ueki Kazuya
Принадлежит: NEC SOFT, LTD.

The present invention provides an attribute determining method, an attribute determining apparatus, a program, a recording medium, and an attribute determining system with high detection accuracy, with which an attribute of a person can be determined even when, for example, characteristic parts of the face are hidden. 1. An attribute determining method comprising: an image acquiring step of acquiring an image of a person to be determined; an attribute determination region detecting step of detecting, from the image of the person to be determined, at least two attribute determination regions selected from the group consisting of a head region, a facial region, and other regions; and an attribute determining step of determining an attribute based on images of the at least two attribute determination regions. 2. The method according to claim 1, wherein the other regions include a whole body and a part of the whole body. 3. The method according to claim 1, wherein, in the attribute determining step, an attribute is determined by combining the images of the at least two attribute determination regions. 4. The method according to claim 3, wherein, in the combination of the images, whether or not the at least two attribute determination regions belong to the same person is determined using an overlap degree represented by the following equation (3): overlap degree = (2 × X × Y) / (X + Y)   (3), where X is the ratio of the area of the region in which one of the attribute determination regions and a region obtained by deforming another attribute determination region at a predetermined ratio overlap to the area of the one attribute determination region, and Y is the ratio of the area of the overlapped region to the area of the region obtained by deforming at the predetermined ratio. 5. The method according to claim 4, wherein in the case where the overlap ...
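A worked example of equation (3): the overlap degree is the harmonic-mean-style combination 2·X·Y/(X + Y) of the two area ratios defined above.

def overlap_degree(x, y):
    # x, y: the two area ratios (0..1) defined for equation (3).
    return 2.0 * x * y / (x + y) if (x + y) > 0 else 0.0

print(round(overlap_degree(0.8, 0.6), 3))   # 0.686: high overlap, likely the same person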

Подробнее
05-09-2013 дата публикации

METHOD OF FACIAL IMAGE REPRODUCTION AND RELATED DEVICE

Номер: US20130230252A1
Принадлежит: CYBERLINK CORP.

To modify a facial feature region in a video bitstream, the video bitstream is received and a feature region is extracted from the video bitstream. An audio characteristic, such as frequency, rhythm, or tempo is retrieved from an audio bitstream, and the feature region is modified according to the audio characteristic to generate a modified image. The modified image is outputted. 1 a system I/O interface configured to receive an audio signal and a video signal;', 'a processor configured to encode the audio signal and the video signal; and', 'a network interface configured to transmit the encoded signal; and, 'a transmitting computing device comprising a network interface configured to receive the encoded signal from the transmitting computing device;', 'a processor configured to decode the encoded signal to retrieve the audio signal and the video signal, determine an audio characteristic of the audio signal, extract a facial feature region from an image of the video signal, and modify the extracted facial feature region of the image of the video signal to express human emotion changes indicated by the audio characteristic to generate a modified image; and', 'a display interface configured to output the modified image., 'a receiving computing device comprising. A communication system comprising: This application is a division of U.S. patent application Ser. No. 12/211,807, filed Sep. 16, 2008, and included herein by reference in its entirety for all intents and purposes.1. Field of the InventionThe present invention relates to video processing, and more particularly, to a method of modifying a feature region of an image according to an audio signal.2. Description of the Prior ArtWeb cameras are devices that typically include an image capturing device with a good refresh rate, and optionally a microphone for recording sound in the form of voice or ambient noise. The web camera is usually connected to a computing device, such as a personal computer or notebook computer ...

Подробнее
12-09-2013 дата публикации

Method of facial image reproduction and related device

Номер: US20130236102A1
Принадлежит: CyberLink Corp

To modify a facial feature region in a video bitstream, the video bitstream is received and a feature region is extracted from the video bitstream. An audio characteristic, such as frequency, rhythm, or tempo is retrieved from an audio bitstream, and the feature region is modified according to the audio characteristic to generate a modified image. The modified image is outputted.

Подробнее
17-10-2013 дата публикации

HUMAN HEAD DETECTION IN DEPTH IMAGES

Номер: US20130272576A1
Автор: He Zhixiang, Hu Wei
Принадлежит: Intel Corporation

Systems, devices and methods are described including receiving a depth image and applying a template to pixels of the depth image to determine a location of a human head in the depth image. The template includes a circular shaped region and a first annular shaped region surrounding the circular shaped region. The circular shaped region specifies a first range of depth values. The first annular shaped region specifies a second range of depth values that are larger than depth values of the first range of depth values. 130.-. (canceled)31. A computer-implemented method for detecting a human head in an image , comprising:receiving a depth image; andapplying a template to pixels of the depth image to determine a location of a human head in the depth image, wherein the template includes a circular shaped region and a first annular shaped region surrounding the circular shaped region, the circular shaped region specifying a first plurality of depth values, the first annular shaped region specifying a second plurality of depth values, the second plurality of depth values including depth values larger than the first plurality of depth values.32. The method of claim 31 , wherein the template specifies a second annular shaped region surrounding the first annular shaped region claim 31 , the second annular shaped region specifying a third plurality of depth values claim 31 , and the third plurality of depth values including depth values larger than the second plurality of depth values.33. The method of claim 32 , wherein the first plurality of depth values are associated only with points in the template lying within the circular shaped region claim 32 , wherein the second plurality of depth values are associated only with points in the template lying within the first annular shaped region claim 32 , and wherein the third plurality of depth values are associated only with points in the template lying within the second annular shaped region.34. The method of claim 32 , wherein a ...
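The template idea can be sketched as below: a circular inner region expected to contain head-like depth values, surrounded by annular rings whose depths should be progressively larger (farther away). The concrete radii, ring width and 200 mm depth step are invented for illustration, and the patch is assumed large enough to contain both rings.

import numpy as np

def head_template_score(depth_patch, radius, ring_width, head_depth, step=200):
    # depth_patch: square depth array centred on the candidate head position.
    h, w = depth_patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2.0, xx - w / 2.0)
    circle = r <= radius
    ring1 = (r > radius) & (r <= radius + ring_width)
    ring2 = (r > radius + ring_width) & (r <= radius + 2 * ring_width)
    score = 0.0
    score += np.mean(np.abs(depth_patch[circle] - head_depth) < step)   # head itself
    score += np.mean(depth_patch[ring1] > head_depth + step)            # first ring is farther
    score += np.mean(depth_patch[ring2] > head_depth + 2 * step)        # outer ring is farther still
    return score / 3.0                                                  # 1.0 = perfect template fit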

Подробнее
07-11-2013 дата публикации

Face Detection Method

Номер: US20130294688A1
Принадлежит: ST-Ericsson SA

A method for detecting faces in an image having a plurality of picture elements, each having a plurality of colour components in a predetermined colour space, includes determining an extended range for colour component values, in the colour space, in which a skin tone area is likely to be detected, defining intervals for the colour component values, in the colour space, covering at least part of the extended range, and scanning each of the intervals to detect a skin tone area. If a skin tone area is detected, the method includes selecting the intervals in which a skin tone area is detected, defining candidate limited ranges for colour component values, in the colour space, from the selected intervals, performing face detection on a skin tone area in at least some of the candidate limited ranges, and selecting a chosen candidate limited range based on the number of faces detected.

Подробнее
12-12-2013 дата публикации

DIGITAL CAMERA SYSTEM

Номер: US20130329029A1
Принадлежит: NIKON CORPORATION

A digital camera system capable of operating by detecting a feature point, which has not been accomplished, in addition to ordinary functions of a conventional camera is provided. According to an aspect of the present invention, a digital camera system includes a detecting means that detects a given feature point from an image data, a receiving means that receives an order from a user, a selecting means that selects each feature point in accordance with a given order instructed by the receiving means when a plurality of feature points are detected, and a display that displays feature point information identifying the feature point selected by the selecting means. 1. A digital camera system comprising:a detecting unit that detects a given feature point from image data;a receiving unit that receives an order regarding a selection order of a plurality of feature points from a user;a selecting unit that selects and switches each feature point in accordance with a given order instructed by the receiving unit when a plurality of feature points are detected; anda display that displays feature point information identifying the feature point selected by the selecting unit.2. The digital camera system according to claim 1 , wherein the display displays information regarding the feature point overlaid with the image data.3. The digital camera system according to further comprising:a face detection unit that detects a size of a face from the feature point detected by the detecting unit, whereinthe selecting unit selects the face in descending order of the face size detected by the face detection unit.4. The digital camera system according to further comprising:a distance detection unit that detects a distance to the feature point detected by the detecting unit, whereinthe selecting unit selects the feature point in ascending order of the distance detected by the distance detection unit.5. The digital camera system according to further comprising:a focus-area-setting unit that ...

Подробнее
26-12-2013 дата публикации

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND CONTROL PROGRAM

Номер: US20130343647A1
Автор: Aoki Hiromatsu
Принадлежит: Omron Corporation

An image processing device that identifies a characteristic of a lip from a face image including a mouth of a person has a representative skin color determination unit that determines a representative color of a skin in the face image, a candidate color determination unit that sets a plurality of regions in the face image such that at least one of the regions contains a part of the lip, and determines representative colors of the regions as candidate colors, and a representative lip color determination unit that determines a representative color of the lip from the plurality of candidate colors, in accordance with a difference in hue and saturation between the representative color of the skin and each candidate color. 1. An image processing device that identifies a characteristic of a lip from a face image including a mouth of a person , the image processing device comprising:a representative skin color determination unit that determines a representative color of a skin in the face image;a candidate color determination unit that sets a plurality of regions in the face image such that at least one of the regions contains a part of the lip, and determines representative colors of the regions as candidate colors; anda representative lip color determination unit that determines a representative color of the lip from the plurality of candidate colors, in accordance with a difference in hue and saturation between the representative color of the skin and each candidate color.2. The image processing device according to claim 1 , wherein the representative lip color determination unit determines the representative color of the lip claim 1 , in accordance with a difference in hue and saturation claim 1 , other than brightness or lightness claim 1 , between the representative color of the skin and each candidate color.3. The image processing device according to claim 1 , wherein the representative lip color determination unit determines the representative color of the lip ...
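The candidate-selection rule can be sketched as choosing, among the candidate colours, the one whose hue and saturation differ most from the representative skin colour, with lightness deliberately ignored. The HLS conversion, the hue weighting and the sample colours below are assumptions for illustration.

import colorsys

def hue_sat(rgb):
    h, _, s = colorsys.rgb_to_hls(*(c / 255.0 for c in rgb))
    return h, s

def pick_lip_color(skin_rgb, candidate_rgbs, hue_weight=4.0):
    # Choose the candidate whose hue/saturation differ most from the skin colour;
    # the hue weight is an arbitrary choice that emphasises hue over saturation.
    sh, ss = hue_sat(skin_rgb)
    def distance(rgb):
        h, s = hue_sat(rgb)
        dh = min(abs(h - sh), 1.0 - abs(h - sh))    # hue is circular
        return hue_weight * dh + abs(s - ss)
    return max(candidate_rgbs, key=distance)

# Two skin-like candidates and one reddish (lip-like) candidate.
print(pick_lip_color((220, 180, 160), [(222, 184, 166), (190, 90, 100), (210, 170, 148)]))
# -> (190, 90, 100)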

Подробнее
26-12-2013 дата публикации

IMAGE CREATING DEVICE, IMAGE CREATING METHOD AND RECORDING MEDIUM

Номер: US20130343656A1
Принадлежит:

Disclosed is an image creating device including a first obtaining unit which obtains an image including a face, a first extraction unit which extracts a face component image relating to main components of the face in the image and a direction of the face, a second obtaining unit which obtains a face contour image associated to the face in the image and a second extraction unit which extracts a direction of a face contour in the face contour image. The image creating device further includes a converting unit which converts at least one of the face component image and the face contour image based on the both directions of the face and the face contour and a creating unit which creates a portrait image by using at least one of the face component image and the face contour image being converted by the converting unit. 1. An image creating device , comprisinga first obtaining unit which obtains an image including a face;a first extraction unit which extracts a face component image relating to main components of the face in the image obtained by the first obtaining unit and a direction of the face;a second obtaining unit which obtains a face contour image associated to the face in the image obtained by the first obtaining unit;a second extraction unit which extracts a direction of a face contour in the face contour image obtained by the second obtaining unit;a converting unit which converts at least one of the face component image and the face contour image based on the both directions of the face and the face contour; anda creating unit which creates a portrait image by using at least one of the face component image and the face contour image being converted by the converting unit.2. The image creating device as claimed in claim 1 , whereinthe converting unit converts the face contour image as if the face contour image is rotated centering around an axis, the axis extending in a predetermined direction according to the both directions of the face and of face contour.3. ...

Подробнее
02-01-2014 дата публикации

REDUCED IMAGE QUALITY FOR VIDEO DATA BACKGROUND REGIONS

Номер: US20140003662A1
Принадлежит:

Systems, apparatus, articles, and methods are described including operations to detect a face based at least in part on video data. A region of interest and a background region may be determined based at least in part on the detected face. The background region may be modified to have a reduced image quality. 1. A computer-implemented method , comprising:detecting a face based at least in part on video data;determining a region of interest and a background region based at least in part on the detected face; andmodifying the background region to have a reduced image quality.2. The method of claim 1 , further comprising capturincapturing the video data in real-time.3. The method of claim 1 , wherein the detection of the face comprises detecting two or more faces.4. The method of claim 1 , wherein the detection of the face comprises detecting the face based at least in part on a Viola-Jones-type framework.5. The method of claim 1 , wherein the reducing of the image quality associated with the background region comprises applying a blurring effect to the background region.6. The method of claim 1 , wherein the reducing of the image quality associated with the background region comprises applying a blurring effect to the background region based at least in part on a Point Spread Function and noise model.7. The method of claim 1 , further comprising applying a blending effect to a transition area claim 1 , wherein the transition area is located at a border between the region of interest and the background region.8. The method of claim 1 , further comprising applying a blending effect to a transition area claim 1 , wherein the transition area is located at a border between the region of interest and the background region claim 1 , and wherein the blending effect comprises an alpha-type blending effect claim 1 , feathering-type blending effect claim 1 , and/or a pyramid-type blending effect.9. The method of claim 1 , further comprising encoding the video data including the ...

Подробнее
09-01-2014 дата публикации

INFORMATION PROCESSING APPARATUS AND CONTROL METHOD THEREOF

Номер: US20140010450A1
Принадлежит:

This invention provides a technique which can enhance personal recognition precision in personal recognition processing of a face in an image. To this end, a management unit classifies feature patterns each including feature information of a plurality of parts of a face region of an object extracted from image data, and manages the feature patterns using a dictionary. A segmenting unit determines whether or not feature information of each part of the face region of the object is segmented, and segments the feature information of the part of interest into a plurality of feature information as new feature information. A registration unit registers a feature pattern as a combination of the new feature information of the part of interest and feature information of parts other than the part of interest in the dictionary as a new feature pattern of the object. 1. An apparatus comprising:a management unit configured to classify feature patterns each including feature information of a plurality of parts of a face region of an object extracted from image data for respective objects, and to manage the feature patterns using a dictionary;a segmenting unit configured to determine whether or not a feature information of each part of the face region of the object is configured to be segmented, and to segment, when said segmenting unit determines that the feature information is configured to be segmented, the feature information of the part of interest into a plurality of feature information as new feature information; anda registration unit configured to register, when said segmenting unit segments the feature information, a feature pattern as a combination of the new feature information of the part of interest and feature information of parts other than the part of interest, which are managed by said management unit, in the dictionary as a new feature pattern of the object.2. The apparatus according to claim 1 , wherein said management unit includes not less than two feature ...

Подробнее
20-02-2014 дата публикации

AUTHENTICATION APPARATUS THAT AUTHENTICATES OBJECT, AUTHENTICATION METHOD, AND STORAGE MEDIUM

Номер: US20140050373A1
Автор: Kiyosawa Kazuyoshi
Принадлежит: CANON KABUSHIKI KAISHA

An authentication apparatus capable of reducing erroneous authentication. A face detection section detects a face area of an object from an image. A feature information extraction processor extracts feature information (image data) indicative of a feature of the object. An authentication determination section performs authentication by comparing registered image data and feature information of a specific object. A registration information processor determines, when one of objects associated with registered image data items is selected as an object to which the feature information of the specific object is to be added, whether or not to additionally register the feature information of the specific object as image data for the selected object, according to a degree of similarity between image data of the selected object and the feature information of the specific object. 1. An authentication apparatus that includes a storage section for storing image data items each indicative of a feature of a specific area of each of a plurality of objects , and authenticates an object in an image as a specific object , using the image data item , comprising:a specific area detection unit configured to detect a specific area of the specific object from the image;a feature information extraction unit configured to extract a feature information item indicative of a feature of the specific area from the specific area of the specific object;an authentication unit configured to compare the image data item and the feature information item, to authenticate the specific object according to a result of the comparison; anda registration determination unit configured, before the feature information item of the specific object is additionally registered in the storage section as the image data item, if one of the plurality of objects is selected as a selected object to which the feature information item of the specific object is to be added, to determine whether or not to additionally register ...

Подробнее
27-02-2014 дата публикации

IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD

Номер: US20140056529A1
Принадлежит: Omron Corporation

An image processing device for identifying a characteristic of an eye from a face image comprising: a first differentiation unit configured to differentiate an eye region in at least a vertical direction of the eye to obtain a first luminance gradient; a first edge extraction unit configured to extract a first edge point according to the first luminance gradient; and a curve identification unit configured to identify a curve, which is a B-spline curve or a Bezier curve expressed by a control point and both end points and fits to the first edge point, as a curve expressing an upper-eyelid or lower-eyelid outline, the end points being an inner corner point of eye and a tail point of eye, by voting for the control point that is a voting target with respect to the first edge point using the Hough transform. 1. An image processing device for identifying a characteristic of an eye from a face image of a person , the image processing device comprising:a first differentiation unit configured to differentiate an eye region where the eye of the face image exists in at least a vertical direction of the eye to obtain a first luminance gradient;a first edge extraction unit configured to extract a first edge point in the eye region according to the first luminance gradient; anda curve identification unit configured to identify a curve, which is expressed by a control point and both end points and fits to the first edge point, as a curve expressing an upper-eyelid or lower-eyelid outline, the end points being an inner corner point of eye and a tail point of eye,wherein the curve expressed by both the end points and the control point is a B-spline curve or a Bezier curve,wherein the curve identification unit is configured to vote for the control point that is a voting target with respect to the first edge point using the Hough transform, and identify the B-spline curve or the Bezier curve expressed by both the end points and at least one of the control point, which fits to the ...
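The voting step can be sketched as below for a quadratic Bezier curve whose end points are the fixed eye corners: every candidate control point whose curve passes near an edge point collects a vote, and the most-voted control point defines the eyelid curve. Sampling 50 curve points and the 1.5-pixel tolerance are assumptions for illustration (the patent also allows B-spline curves).

import numpy as np

def bezier(p0, p1, p2, t):
    # Quadratic Bezier points for an array of parameters t in [0, 1].
    return ((1 - t) ** 2)[:, None] * p0 + (2 * (1 - t) * t)[:, None] * p1 + (t ** 2)[:, None] * p2

def fit_eyelid(inner_corner, outer_corner, edge_points, candidate_controls, tol=1.5):
    p0, p2 = np.asarray(inner_corner, float), np.asarray(outer_corner, float)
    t = np.linspace(0.0, 1.0, 50)
    votes = np.zeros(len(candidate_controls))
    for k, p1 in enumerate(candidate_controls):
        curve = bezier(p0, np.asarray(p1, float), p2, t)        # 50 x 2 sample points
        for e in np.asarray(edge_points, float):
            if np.min(np.linalg.norm(curve - e, axis=1)) <= tol:
                votes[k] += 1                                    # this edge point votes for p1
    return candidate_controls[int(np.argmax(votes))]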

Подробнее
03-04-2014 дата публикации

Method And System For Attaching A Metatag To A Digital Image

Номер: US20140093141A1
Принадлежит: Facedouble, Inc.

A system and method for tagging an image of an individual in a plurality of photos is disclosed herein. A feature vector of an individual is used to analyze a set of photos on a social networking website such as www.facebook.com to determine if an image of the individual is present in a photo of the set of photos. Photos having an image of the individual are tagged preferably by listing a URL or URI for each of the photos in a database. 1. A method , for tagging an image of an individual , comprising:storing on a server supporting a service usable by multiple users through respective remote user computing devices accessing the service over a distributed network a plurality of reference photos comprising identified facial images of individual users of the service and, for each of a plurality of identified facial images, an identity of the individual and at least one reference feature vector generated from the facial images to provide a plurality of stored reference feature vectors, the server comprising at least one processor that accesses at least one storage media and being programmed with executable instructions;processing, by the server, an unknown facial image of an individual in a subject photo to generate a subject feature vector for the individual in the subject photo;determining, by the server, coordinates defining a position of the unknown facial image in the subject photo;determining, by the server, an identity of the unknown facial image, wherein the determining comprises comparing the subject feature vector to one or more stored reference feature vectors using a matching algorithm; andupon determining an identity of the unknown facial image, tagging, by the server, the subject photo to identify the facial image of the individual,wherein the tagging comprises storing in a storage media the coordinates defining a position of the facial image in the subject photo and an identifier for the individual, the coordinates and identifier for the individual being ...

Подробнее
01-01-2015 дата публикации

Touch Free User Recognition Assembly For Activating A User's Smart Toilet's Devices

Номер: US20150000026A1
Принадлежит:

The system may use a touch free device to identify a user. The system allows the user to operate a smart toilet's devices with the passive input of their presence, in the toilet area. The user may be identified using, facial recognition, eye recognition, or proximity card devices. The computer associates the identified user with the user's pre stored identity and preprogrammed user profile. The user's profile may contain the user's desired smart toilet device settings. Toilet devices settings and sequence of operation may include a bidet's water temperature, a toilet seat position, an internet connection to the computer, etc. Operating the toilet touch freely, reduces the user's physical contact with the toilet control panel and toilet seat. This may reduce the transference of bacteria from the toilet to the user. This may reduce the possibly of illness caused by bacteria. Other embodiments are described and shown. 1. A smart toilet operated by identifying a user , comprising ,(a) a computer(b) the computer operably coupled to a touch free user information collecting device, for touch free identifying of a user, for sending the user identity information to the computer,(c) a software programmed into the computer for associating the received user identity information with a user's stored identity information,for associating the users identity with operating instructions for the toilet devices,for activating the user's stored toilet device's operating instructions,(d) the computer connected to a toilet device, for sending the user's device operating instructions to the toilet device from the computer,(e) a toilet connected to the toilet device,whereby the user may touch freely use their identity to operate the toilet devices.2. The smart toilet of wherein the touch free user information collecting device is a touch free facial recognition device comprising a camera and a facial recognition software programed into the computer claim 1 ,and wherein the toilet device is ...

Подробнее
03-01-2019 дата публикации

DEVICE, METHOD AND COMPUTER PROGRAM PRODUCT FOR CONTINUOUS MONITORING OF VITAL SIGNS

Номер: US20190000330A1
Принадлежит:

A wearable device for continuous health monitoring, the device comprising: a band for conforming to a first body part of a subject; an imaging unit, the imaging unit being connected in the band, wherein the imaging unit is configured to acquire a sequence of images from the subject's body, wherein the device is operable in a contact mode and in a non-contact mode, i. wherein in the contact mode the imaging unit is in substantial close proximity to the first body part of the subject so as to acquire the sequence of images of an area of the first body part; wherein in the non-contact mode the imaging unit is in a remote position to acquire the sequence of images of a second body part of the subject; a controller unit configured to derive a PPG signal from the acquired sequence of images according to a first process when the device is in the contact mode and according to a second process when the device is in the non-contact mode, the PPG signal being indicative of the health of the subject; wherein the controller unit, during operation of the wearable device, is configured to check at least one pre-determined condition in order to determine if the PPG signal is to be derived according to the first process or the second process. 1. A wearable device for continuous health monitoring , the device comprising:a. a band for conforming to a first body part of a subject; i. wherein in the contact mode the imaging unit is in substantial close proximity to the first body part of the subject so as to acquire the sequence of images of an area of the first body part;', 'ii. wherein in the non-contact mode the imaging unit is in a remote position to acquire the sequence of images of a second body part of the subject;, "b. an imaging unit, the imaging unit being connected in the band, wherein the imaging unit is configured to acquire a sequence of images from the subject's body, wherein the device is operable in a contact mode and in a non-contact mode,"}c. a controller unit ...

Подробнее
02-01-2020 дата публикации

DRIVER STATE ESTIMATION DEVICE AND DRIVER STATE ESTIMATION METHOD

Номер: US20200001880A1
Принадлежит: Omron Corporation

A driver state estimation device which can estimate a distance to a driver's head position without detecting a center position of the driver's face area in an image, includes a camera, a lighting part, and a CPU including a face detecting section for detecting the driver's face in a first image picked up at the time of light irradiation from the lighting part and in a second image picked up at the time of no light irradiation from the lighting part, a face brightness ratio calculating section for calculating a brightness ratio between the driver's face in the first image and that in the second image, and a distance estimating section for estimating a distance from the driver's head to the camera using the calculated face brightness ratio. 1. A driver state estimation device for estimating a state of a driver using picked-up images , comprising:an imaging section for imaging a driver sitting in a driver's seat;a lighting part for irradiating a face of the driver with light;a table information storing part for storing one or more tables for distance estimation showing a correlation of a brightness ratio between the face of the driver in an image picked up by the imaging section at the time of light irradiation from the lighting part and the face of the driver in an image picked up by the imaging section at the time of no light irradiation from the lighting part with a distance from a head of the driver sitting in the driver's seat to the imaging section; andat least one hardware processor, a face detecting section for detecting the face of the driver in a first image picked up by the imaging section at the time of light irradiation from the lighting part and in a second image picked up by the imaging section at the time of no light irradiation from the lighting part,', 'a face brightness ratio calculating section for calculating a brightness ratio between the face of the driver in the first image and the face of the driver in the second image, detected by the face ...
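The table look-up can be sketched as below: the measured face brightness ratio between the lit and unlit frames is interpolated in a pre-stored correlation table to give the camera-to-head distance. All table values here are invented for illustration.

import numpy as np

ratio_table = np.array([3.0, 2.2, 1.8, 1.5, 1.3])      # brightness ratios (decreasing with distance)
distance_table = np.array([40, 50, 60, 70, 80])         # corresponding distances in cm

def estimate_distance(face_lit_mean, face_unlit_mean):
    ratio = face_lit_mean / max(face_unlit_mean, 1e-6)
    # np.interp needs an increasing x-axis, so reverse the decreasing ratio axis.
    return float(np.interp(ratio, ratio_table[::-1], distance_table[::-1]))

print(estimate_distance(180.0, 100.0))   # ratio 1.8 -> 60 cm in this made-up table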

Publication date: 06-01-2022

BUILDING MANAGEMENT ROBOT AND METHOD OF PROVIDING SERVICE USING THE SAME

Number: US20220005303A1
Assignee: LG ELECTRONICS INC.

A building management robot includes a communication unit configured to recognize an identification device corresponding to a first divided space among at least one divided space in a building and acquire first identification information of the first divided space from the identification device, a camera configured to acquire image data including a position where the identification device is recognized, and a processor configured to recognize a user from the image data, confirm an authentication level of the first divided space of the recognized user from a database, and provide the user with a service based on the confirmed authentication level. 1. A building management robot comprising:a communication unit configured to recognize an identification device corresponding to a first divided space among at least one divided space in a building and acquire first identification information of the first divided space from the identification device;a camera configured to acquire image data including a position where the identification device is recognized; anda processor configured to recognize a user from the image data, confirm an authentication level of the first divided space of the recognized user from a database, and provide the user with a service based on the confirmed authentication level.2. The building management robot according to claim 1 ,wherein the communication unit includes at least two wireless communication modules spaced apart from in the building management robot, andwherein the processor:detects a position or direction of the identification device based on a difference in intensity or time between signals respectively received from the identification device through the at least two wireless communication modules,controls a traveling unit or a camera direction adjustment mechanism to face the detected position or direction, andcontrols the camera to acquire the image data.3. The building management robot according to claim 1 , wherein the processor: ...

Publication date: 01-01-2015

CONTENT RECEIVING DEVICE, CONTENT RECEIVING METHOD AND DIGITAL BROADCAST TRANSMITTING AND RECEIVING SYSTEM

Number: US20150003809A1
Assignee:

Provided is a technology related to digital broadcast that effectively uses metadata. 1. A content receiving device for receiving broadcast waves , comprising:a receiving unit which receives the broadcast waves;an input unit which inputs a user input;a recording unit which records content included in the broadcast waves on a recording medium; anda control unit,wherein character facial data to be information enabling identification of faces of characters of the content is included in the broadcast waves, andthe control unit generates searching facial data to be information enabling identification of faces of persons, on the basis of information input to the input unit, and controls recording of the content on the recording medium, on the basis of the searching facial data and the character facial data.2. The content receiving device according to claim 1 , wherein the control unit extracts at least feature data showing features for parts of the faces of the persons from the searching facial data and the character facial data and performs control to select the content recorded on the recording medium claim 1 , on the basis of the feature data extracted from the searching facial data and the feature data extracted from the character facial data.3. The content receiving device according to claim 1 , further comprising:an output unit which outputs the recorded content,wherein the control unit performs control to select the content output from the output unit, on the basis of the searching facial data and the character facial data, when a plurality of contents is recorded on the recording medium.4. The content receiving device according to claim 1 , wherein the character facial data and the searching facial data are still image data enabling recognition of the faces of the persons.5. A content receiving method in a content receiving device for receiving broadcast waves claim 1 , comprising:a reception step of receiving the broadcast waves;an input step of inputting a user ...

Publication date: 07-01-2016

Chemical Compositon And Its Devlivery For Lowering The Risks Of Alzheimer's Cardiovascular And Type -2 Diabetes Diseases

Number: US20160004298A1
Assignee:

Chemical compositions of bioactive compounds and/or bioactive molecules for lowering the risks of Alzheimer's, Cardiovascular and Diabetes diseases are described. Targeted, passive and programmable/active deliveries of the bioactive compounds and/or bioactive molecules are described. Many embodiments of various subsystems for detection of disease specific biomarkers/an array of disease specific biomarkers and programmable/active delivery of the bioactive compounds and/or bioactive molecules in near real-time/real-time are also described. A portable internet appliance, a portable internet cloud appliance and an augmented reality personal assistant subsystem are also described along with various applications. 1. A subsystem , as an augmented reality personal assistant comprising: a camera , a display , a projector , a first sensor and a decoder , wherein the first sensor is configured to read an item or a person in a user's field of view and wherein the decoder is configured to convert the said reading of the item or the person into a text or an image.2. The subsystem claim 1 , as an augmented reality personal assistant according to claim 1 , further comprises: a multi-spectral band camera claim 1 , wherein the multi-spectral band encompasses visible claim 1 , near-infrared and infrared part of the optical spectrum.3. The subsystem claim 1 , as an augmented reality personal assistant according to claim 1 , further comprises: a second sensor selected from the group consisting of: an eye motion sensor claim 1 , a gesture sensor and a touch sensor.4. The subsystem claim 1 , as an augmented reality personal assistant according to claim 1 , further comprises: a component selected from the group consisting of: a contact lens and a contact lens integrated with a nanostructure.5. The subsystem claim 1 , as an augmented reality personal assistant according to claim 1 , further comprises: a component selected from the group consisting of: a microphone and an audio recording ...

Publication date: 05-01-2017

METHOD AND SYSTEM FOR EXACTING FACE FEATURES FROM DATA OF FACE IMAGES

Number: US20170004353A1

A method and a system for exacting face features from data of face images have been disclosed. The system may comprise: A first feature extraction unit configured to filter the data of face images into a first plurality of channels of feature maps with a first dimension and down-sample the feature maps into a second dimension of feature maps; a second feature extraction unit configured to filter the second dimension of feature maps into a second plurality of channels of feature maps with a second dimension, and to down-sample the second plurality of channels feature maps into a third dimension of feature maps; and a third feature extraction unit configured to filter the third dimension of feature maps so as to further reduce high responses outside the face region such that reduce intra-identity variances of face images, while maintain discrimination between identities of the face images. 1. A method for exacting face features from data of face images , comprising:1) filtering the data of face images into a first plurality of channels of feature maps with a first dimension;2) computing each of the maps by rule of σ(x)=max(0,x), where x represents feature maps with the first dimension;3) down-sampling the computed feature maps into a second dimension of feature map;4) filtering the down-sampled maps into a second plurality of channels of feature maps with a second dimension;5) computing each of the maps with the second dimension by rule of σ(x)=max(0,x), where x represents the second plurality of channels of feature maps;6) down-sampling the computed second plurality of channels feature maps into a third dimension of feature maps; and7) filtering each map of the third dimension of feature maps so as to reduce high responses outside the face region,whereby, intra-identity variances of the face images are reduced and discrimination between identities of the face images are maintained.2. A method according to claim 1 , wherein the step 1) further comprises:filtering the ...
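The filtering stages described above can be illustrated with a short NumPy sketch (filter, rectify with σ(x)=max(0,x), down-sample); the 3x3 kernels, pooling factor and image size are placeholders rather than the trained filters of the disclosed system.

import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(0.0, x)                      # sigma(x) = max(0, x)

def downsample(x, factor=2):
    """Max-pooling by a given factor (edges that do not fit are truncated)."""
    h, w = (x.shape[0] // factor) * factor, (x.shape[1] // factor) * factor
    x = x[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return x.max(axis=(1, 3))

def extract_features(face, kernels):
    """One extraction stage: several filters -> ReLU -> pooled feature maps."""
    return [downsample(relu(conv2d(face, k))) for k in kernels]

face = np.random.rand(32, 32)                        # stand-in grayscale face crop
kernels = [np.random.randn(3, 3) for _ in range(4)]  # placeholder filter bank
maps = extract_features(face, kernels)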

Publication date: 07-01-2016

FACIAL TRACKING WITH CLASSIFIERS

Number: US20160004904A1
Assignee:

Concepts for facial tracking with classifiers is disclosed. One or more faces are detected and tracked in a series of video frames that include at least one face. Video is captured and partitioned into the series of frames. A first video frame is analyzed using classifiers trained to detect the presence of at least one face in the frame. The classifiers are used to initialize locations for a first set of facial landmarks for the first face. The locations of the facial landmarks are refined using localized information around the landmarks, and a rough bounding box that contains the facial landmarks is estimated. The future locations for the facial landmarks detected in the first video frame are estimated for a future video frame. The detection of the facial landmarks and estimation of future locations of the landmarks are insensitive to rotation, orientation, scaling, or mirroring of the face. 1. A computer-implemented method for facial detection comprising:obtaining a video that includes a face; performing facial landmark detection within the first frame from the video; and', 'estimating a rough bounding box for the face based on the facial landmark detection;, 'performing face detection to initialize locations for a first set of facial landmarks within a first frame from the video wherein the face detection comprisesrefining the locations for the first set of facial landmarks based on localized information around the first set of facial landmarks; andestimating future locations for landmarks within the first set of facial landmarks for a future frame from the first frame.2. The method of wherein the estimating of the future locations for the landmarks is based on a velocity for one or more of the locations.3. The method of wherein the estimating of the future locations for the landmarks is based on an angular velocity for one or more of the locations.4. The method of further comprising providing an output for a facial detector based on the estimating of the future ...
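A small sketch, assuming landmark coordinates are available as (N, 2) NumPy arrays, of the velocity-based estimation of future landmark locations and of a rough bounding box around them; the margin value is illustrative.

import numpy as np

def predict_landmarks(prev_pts, curr_pts, frames_ahead=1):
    """prev_pts, curr_pts: (N, 2) arrays of landmark (x, y) positions in consecutive frames."""
    velocity = curr_pts - prev_pts               # displacement per frame
    return curr_pts + frames_ahead * velocity    # linear extrapolation to a future frame

def rough_bounding_box(points, margin=0.1):
    """Axis-aligned box around the landmarks, padded by a relative margin."""
    x_min, y_min = points.min(axis=0)
    x_max, y_max = points.max(axis=0)
    pad_x, pad_y = margin * (x_max - x_min), margin * (y_max - y_min)
    return (x_min - pad_x, y_min - pad_y, x_max + pad_x, y_max + pad_y)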

Publication date: 07-01-2016

Image Analysis Device, Image Analysis System, and Image Analysis Method

Number: US20160005171A1
Assignee: Hitachi, Ltd.

An image analysis device according to the present invention includes a storage unit storing an image and information of a detected object included in the image, an input unit receiving a target image serving as a target in which an object is detected, a similar image search unit searching for a similar image having a feature quantity similar to a feature quantity extracted from the target image and the information of the object included in the similar image from the storage unit, a parameter deciding unit deciding a parameter used in a detection process performed on the target image based on the information of the object included in the similar image, a detecting unit detecting an object from the target image according to the decided parameter, a registering unit accumulating the target image in the storage unit, and a data output unit outputting the information of the detected object. 1. An image analysis device , comprising:an image storage unit that stores an image and information of a detected object included in the image;an image input unit that receives a target image serving as a target in which an object is detected;a similar image search unit that searches for a similar image having a feature quantity similar to a feature quantity extracted from the target image and the information of the detected object included in the similar image from the image storage unit;a parameter deciding unit that decides a parameter used in a detection process performed on the target image based on the information of the detected object included in the similar image;a detecting unit that detects an object from the target image according to the decided parameter;an image registering unit that accumulates the detected object and the target image in the image storage unit; anda data output unit that outputs the information of the detected object.2. The image analysis device according to claim 1 ,wherein the information stored in the image storage unit includes a feature quantity ...
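One plausible reading of the parameter-deciding step, sketched in Python: the stored images most similar to the target (by cosine similarity of global feature vectors) vote, through their recorded object sizes, for a minimum detection size. The feature representation, the value of k and the fallback size are assumptions, not the disclosed implementation.

import numpy as np

def most_similar(target_vec, stored_vecs, k=5):
    """Indices of the k stored feature vectors closest to the target (cosine similarity)."""
    stored = np.asarray(stored_vecs, dtype=np.float32)
    target = np.asarray(target_vec, dtype=np.float32)
    sims = stored @ target / (np.linalg.norm(stored, axis=1) * np.linalg.norm(target) + 1e-9)
    return np.argsort(sims)[::-1][:k]

def decide_min_object_size(target_vec, stored_vecs, stored_object_sizes):
    """Use the median object size recorded for the most similar stored images as the detector parameter."""
    idx = most_similar(target_vec, stored_vecs)
    sizes = [s for i in idx for s in stored_object_sizes[i]]
    return int(np.median(sizes)) if sizes else 32    # fallback default when no objects were recorded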

Publication date: 03-01-2019

DISPLAY VIEWING POSITION SETTINGS BASED ON USER RECOGNITIONS

Number: US20190004570A1

In one example, an electronic device is described, which includes a position activator, a database including display positions associated with a plurality of users, and a processor coupled to the position activator and the database. The processor may retrieve a display position corresponding to a user operating the electronic device from the database and trigger the position activator to set a viewing position of a display of the electronic device based on the retrieved display position. 1. An electronic device comprising:a position activator;a database comprising display positions associated with a plurality of users; and retrieve a display position corresponding to a user operating the electronic device from the database; and', 'trigger the position activator to set a viewing position of a display of the electronic device based on the retrieved display position., 'a processor coupled to the position activator and the database, wherein the processor is to2. The electronic device of claim 1 , wherein the position activator is to adjust a height of the display claim 1 , a viewing angle of the display claim 1 , or a combination thereof.3. The electronic device of claim 2 , wherein the position activator is to adjust a horizontal viewing angle of the display claim 2 , adjust a vertical viewing angle of the display claim 2 , rotate the display in clockwise or counter clockwise direction along an X-Y plane claim 2 , or a combination thereof.4. The electronic device claim 1 , further comprising a user recognition engine to recognize the user operating the electronic device using a facial recognition process claim 1 , a gesture recognition process claim 1 , a speech recognition process claim 1 , or a voiceprint analysis process.5. The electronic device of claim 1 , further comprising:a supporting platform connected to the position activator and the display of the electronic device, and wherein the position activator is to set the viewing position of the display based on ...

Publication date: 04-01-2018

SYSTEM, APPARATUS, METHOD, PROGRAM AND RECORDING MEDIUM FOR PROCESSING IMAGE

Number: US20180004773A1
Assignee: SONY CORPORATION

An image processing system may include an imaging device for capturing an image and an image processing apparatus for processing the image. The imaging device may include an imaging unit for capturing the image, a first recording unit for recording information relating to the image, the information being associated with the image, and a first transmission control unit for controlling transmission of the image to the image processing apparatus. The image processing apparatus may include a reception control unit for controlling reception of the image transmitted from the imaging device, a feature extracting unit for extracting a feature of the received image, a second recording unit for recording the feature, extracted from the image, the feature being associated with the image, and a second transmission control unit for controlling transmission of the feature to the imaging device. 1. (canceled)2. An information processing system comprising:a first information processing apparatus and a second information processing apparatus; capturing an image by an imaging device, and', 'transmitting the image to the second information processing apparatus; and, 'wherein the first information processing apparatus includes at least one first processor configured to control extracting a feature of the image by image analysis;', 'generating metadata including feature information based on the feature extracted from the image,, 'wherein the second information processing apparatus includes at least one second processor configured to control 'transmitting, to a device different from the first and second information processing apparatuses, information related to the metadata,', 'associating the metadata with the image; and'}wherein the transmitting is for controlling searching images, on the device, based on the information related to the metadata and displaying a result of the searching.3. The information processing system of claim 2 ,wherein the at least one first processor or the at ...

Publication date: 02-01-2020

HEALTH STATISTICS AND COMMUNICATIONS OF ASSOCIATED VEHICLE USERS

Number: US20200004791A1
Author: Ricci Christopher P.
Assignee:

Methods and systems for a complete vehicle ecosystem are provided. Specifically, systems that when taken alone, or together, provide an individual or group of individuals with an intuitive and comfortable vehicular environment. The present disclosure includes a system that provides various outputs based on a user profile and determined context. An output provided by the present disclosure can change a configuration of a vehicle, device, building, and/or a system associated with the user profile. The configurations can include comfort and interface settings that can be adjusted based on the user profile information. Further, the user profiles can track health data related to the user and make adjustments to the configuration to assist the health of the user. 1. A method , comprising:detecting a presence of at least one user in a vehicle;determining an identity of the at least one user;receiving data associated with the at least one user, wherein the data includes biometric information;detecting a deviation between the received data and an established baseline biometric profile associated with the at least one user; anddetermining, based at least partially on the detected deviation, to provide an output configured to address the deviation.2. The method of claim 1 , wherein prior to receiving data associated with the at least one user the method further comprises:determining the baseline biometric profile associated with the at least one user; andstoring the determined baseline biometric profile in a user profile memory associated with the at least one user.3. The method of claim 1 , wherein determining the presence of the at least one user inside the vehicle further comprises:detecting a person via at least one image sensor associated with the vehicle.4. The method of claim 3 , wherein determining the identity of the at least one user further comprises:identifying facial features associated with the person detected via the at least one image sensor; anddetermining ...
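A minimal sketch of the baseline-deviation check, assuming the stored biometric profile keeps a mean and standard deviation per metric; the metric names and the z-score threshold are illustrative only.

baseline = {"heart_rate": (72.0, 6.0), "respiration": (14.0, 2.0)}   # hypothetical stored profile

def detect_deviation(sample, baseline, z_threshold=2.5):
    """Return the metrics whose z-score against the stored baseline exceeds the threshold."""
    deviations = {}
    for metric, value in sample.items():
        mean, std = baseline[metric]
        z = abs(value - mean) / std if std else 0.0
        if z > z_threshold:
            deviations[metric] = z
    return deviations

print(detect_deviation({"heart_rate": 95.0, "respiration": 15.0}, baseline))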

Publication date: 07-01-2016

AVATAR ANIMATION, SOCIAL NETWORKING AND TOUCH SCREEN APPLICATIONS

Number: US20160005206A1
Assignee:

Systems and methods may provide for detecting a condition with respect to one or more frames of a video signal associated with a set of facial motion data and modifying, in response to the condition, the set of facial motion data to indicate that the one or more frames lack facial motion data. Additionally, an avatar animation may be initiated based on the modified set of facial motion data. In one example, the condition is one or more of a buffer overflow condition and a tracking failure condition. 123-. (canceled)24. An apparatus to animate avatars , comprising:a frame monitor to detect a condition with respect to one or more frames of a video signal associated with a set of facial motion data;a motion module to modify, in response to the condition, the set of facial motion data to indicate that the one or more frames lack facial motion data; andan avatar module to initiate an avatar animation based on the modified set of facial motion data.25. The apparatus of claim 24 , wherein the condition is to be one or more of a buffer overflow condition and a tracking failure condition.26. The apparatus of claim 24 , wherein the avatar module further includes:a smoothing module to apply a smoothing process to the one or more frames to obtain replacement facial motion data for the one or more frames;a snapshot module to identify a plurality of avatar images based on the modified set of facial motion data and the replacement facial motion data; andan animation module to generate the avatar animation based on the plurality of avatar images and an audio signal associated with the video signal.27. The apparatus of claim 24 , wherein the avatar module is to send the modified set of facial motion data and an audio signal associated with the video signal to a remote server.28. The apparatus of claim 27 , further including a tone module to identify a voice tone setting based on user input claim 27 , wherein the avatar module is to send the voice tone setting to the remote server.29 ...

Publication date: 07-01-2021

Methods And Systems For Recognizing And Reading A Coded Identification Tag From Video Imagery

Number: US20210004552A1
Author: Son Dihn Tien
Assignee:

Methods and systems for quickly and accurately identifying a coded identification tag imaged by conventional CCTV video monitoring equipment are presented herein. In one aspect, a coded identification tag includes a plurality of dark-colored polygons arranged around a light-colored central background area to maximize contrast between the polygons and the central background area. An array of dark-colored dots is arranged over the light-colored central background area. A light-colored border is located around the plurality of dark-colored polygons. A Coded Identification Tag Monitoring (CITM) system estimates the position and orientation of the coded identification tag with respect to the collected image based on the unique orientation of the coded identification tag with respect to an image frame. In some examples, the CITM system decodes the coded identification tag when the tag occupies less than 10% of the area of the image collected by the video imaging system. 1. A Coded Identification Tag Monitoring (CITM) system comprising: an image sensor including a plurality of pixels, the image sensor generating electrical signals associated with a sequence of images, each image indicative of an amount of light incident on the plurality of pixels;', 'imaging optics that image light over a field of view of the video imaging device onto the image sensor; and', receive the electrical signals from the video imaging device associated with the sequence of images; and', 'identify a coded identification tag within a first image of the sequence of images, wherein the coded identification tag occupies less than ten percent of an area of the image;', 'estimate an orientation of the coded identification tag with respect to the first image; and', 'identify a coded number associated with the coded identification tag., 'a computing system configured to], 'a video imaging device comprising2. The CITM system of claim 1 , the coded identification tag comprising:a plurality of dark-colored ...

Publication date: 07-01-2021

METHOD AND APPARATUS FOR PREDICTING FACE BEAUTY GRADE, AND STORAGE MEDIUM

Number: US20210004570A1
Assignee: WUYI UNIVERSITY

A method for predicting a face beauty grade includes the following steps of: acquiring a beautiful face image of a face beauty database, preprocessing the beautiful face image, and extracting a beauty feature vector of the beautiful face image, the preprocessing unifying data of the beautiful face image; recognizing continuous features of samples of the same type in a feature space by using a bionic pattern recognition model, and classifying the beauty feature vector to obtain a face beauty grade prediction model; and collecting a face image to be recognized, and inputting the face image to be recognized into the face beauty grade prediction model to predict a face beauty grade and obtain the beauty grade of the face image to be recognized. 1. A method for predicting a face beauty grade , comprising following steps of:acquiring a beautiful face image from a face beauty database, preprocessing the beautiful face image, and extracting a beauty feature vector of the beautiful face image;classifying the beauty feature vector by using a bionic pattern recognition model to obtain a face beauty grade prediction model trained; andcollecting a face image to be recognized, inputting the face image to be recognized into the face beauty grade prediction model to predict a face beauty grade and obtain the beauty grade of the face image to be recognized.2. The method of claim 1 , wherein the step of acquiring the beautiful face image of the face beauty database claim 1 , preprocessing the beautiful face image claim 1 , and extracting the beauty feature vector of the beautiful face image further comprises steps of:acquiring the beautiful face image of the face beauty database, and extracting a beautiful face key point of the beautiful face image by using a neural network;preprocessing the beautiful face image according to the beautiful face key point to obtain a normalized standard beautiful face image; andprocessing the standard beautiful face image by using a width learning ...

Publication date: 04-01-2018

SYSTEM AND METHOD FOR FACE RECOGNITION USING THREE DIMENSIONS

Number: US20180005018A1

A system for facial recognition comprising at least one processor; at least one input operatively connected to the at least one processor; a database configured to store three-dimensional facial image data comprising facial feature coordinates in a predetermined common plane; the at least one processor configured to locate three-dimensional facial features in the image of the subject, estimate three-dimensional facial feature location coordinates in the image of the subject, obtain the three-dimensional facial feature location coordinates and orientation parameters in a coordinate system in which the facial features are located in the predetermined common plane; and compare the location of the facial feature coordinates of the subject to images of people in the database; whereby recognition, comparison and/or likeness of the facial images is determined by comparing the predetermined common plane facial feature coordinates of the subject to images in the database. A method is also disclosed. 1. A method of facial recognition comprising:inputting image data representing a plurality of images from a database; the database comprising images of people wherein the location of the three dimensional facial features is defined relative to a predetermined common plane;inputting an image of a subject to be identified;locating predetermined three-dimensional facial features in the image of the subject for comparison to the image data from the database;estimating three-dimensional facial feature location coordinates of the subject head in the image of the subject;obtaining the three-dimensional facial feature location coordinates and orientation parameters in a coordinate system in which the facial features are located in the predetermined common plane;comparing the location of the coordinates of the subject to the locations of the coordinates of the images of people in the database relative to the predetermined common plane; anddetermining the identity of the subject.2. The ...
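An illustrative sketch of expressing 3-D landmarks in a coordinate system whose x-y plane passes through three reference features (assumed here to be the two eye centres and the nose tip) and comparing two faces in that normalised frame; the reference indices and the mean-distance score are assumptions, not the patented matcher.

import numpy as np

def to_common_plane(landmarks, ref_idx=(0, 1, 2)):
    """landmarks: (N, 3) array; ref_idx: indices of the three plane-defining features."""
    a, b, c = (landmarks[i] for i in ref_idx)
    origin = (a + b + c) / 3.0
    x_axis = (b - a) / np.linalg.norm(b - a)
    normal = np.cross(b - a, c - a)
    z_axis = normal / np.linalg.norm(normal)
    y_axis = np.cross(z_axis, x_axis)
    rotation = np.stack([x_axis, y_axis, z_axis])    # rows are the new basis vectors
    return (landmarks - origin) @ rotation.T         # coordinates in the common-plane frame

def face_distance(subject, candidate):
    """Mean Euclidean distance between corresponding normalised landmarks (same ordering assumed)."""
    return float(np.linalg.norm(to_common_plane(subject) - to_common_plane(candidate), axis=1).mean())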

Publication date: 02-01-2020

METHOD, TERMINAL, AND STORAGE MEDIUM FOR TRACKING FACIAL CRITICAL AREA

Number: US20200005022A1
Author: WANG Chengjie
Assignee:

Method, terminal, and storage medium for tracking facial critical area are provided. The method includes accessing a frame of image in a video file; obtaining coordinate frame data of a facial part in the image; determining initial coordinate frame data of a critical area in the facial part according to the coordinate frame data of the facial part; obtaining coordinate frame data of the critical area according to the initial coordinate frame data of the critical area in the facial part; accessing an adjacent next frame of image in the video file; obtaining initial coordinate frame data of the critical area in the facial part for the adjacent next frame of image by using the coordinate frame data of the critical area in the frame; and obtaining coordinate frame data of the critical area for the adjacent next frame of image according to the initial coordinate frame data thereof. 119.-. (canceled)20. A facial critical area tracking method , comprising:accessing a frame of image in a video file;obtaining a first coordinate frame data of a facial part in the image by detecting a position of the facial part in the frame of the image; aligning a center of a pre-stored critical area with a center of a coordinate frame of the facial part by shifting the pre-stored critical area with respect to the coordinate frame of the facial part; and', 'zooming a size of the pre-stored critical area to match a size of the coordinate frame of the facial part;, 'determining a first initial coordinate frame data of a critical area in the facial part according to the first coordinate frame data of the facial part, comprisingobtaining a second coordinate frame data of the critical area according to the first initial coordinate frame data of the critical area in the facial part;accessing an adjacent next frame of image in the video file; andobtaining a second initial coordinate frame data of the critical area in the facial part for the adjacent next frame of image by using the second ...
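A short sketch of the initialisation step (shift the pre-stored critical area so its centre matches the face coordinate frame, then zoom it to the face size); the (x, y, w, h) box format and the proportional scaling are assumptions.

def init_critical_area(face_box, stored_area, stored_face_size):
    """face_box, stored_area: (x, y, w, h); stored_face_size: (w, h) of the face the area was stored for."""
    fx, fy, fw, fh = face_box
    _, _, sw, sh = stored_area
    scale_x, scale_y = fw / stored_face_size[0], fh / stored_face_size[1]
    new_w, new_h = sw * scale_x, sh * scale_y        # zoom the stored area to the detected face size
    cx, cy = fx + fw / 2.0, fy + fh / 2.0            # centre of the face coordinate frame
    return (cx - new_w / 2.0, cy - new_h / 2.0, new_w, new_h)

# Example: an eye-region prior stored for a 100x100 face, re-centred on a new detection.
print(init_critical_area(face_box=(40, 30, 120, 120), stored_area=(20, 25, 60, 20),
                         stored_face_size=(100, 100)))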

Publication date: 02-01-2020

Human facial detection and recognition system

Number: US20200005024A1
Author: Marcos Silva
Assignee: BLUE LINE SECURITY SOLUTIONS LLC

Aspects of the present disclosure provide an image-based face detection and recognition system that processes and/or analyzes portions of an image using "image strips" and cascading classifiers to detect faces and/or various facial features, such as an eye, nose, mouth, cheekbone, jaw line, etc.

Publication date: 02-01-2020

Systems and Methods of Person Recognition in Video Streams

Number: US20200005079A1
Assignee:

The various implementations described herein include systems and methods for recognizing persons in video streams. In one aspect, a method includes: (1) obtaining a live video stream; (2) detecting person(s) in the stream; and (3) determining, from analysis of the live video stream, first information of the detected person(s); (4) determining, based on the first information, that the first person is not known to the computing system; (5) in accordance with the determination that the first person is not known: (a) storing the first information; and (b) requesting a user to classify the first person; and (6) in accordance with a determination that a response was received classifying the first person as a stranger, deleting the stored first information. 1. A method comprising: obtaining a live video stream;', 'detecting a first person in the live video stream;', 'determining, from analysis of the live video stream, first information that identifies an attribute of the first person;', 'determining, based on at least some of the first information, that the first person is not a known person to the computing system;', storing at least some of the first information; and', 'requesting a user to classify the first person; and, 'in accordance with the determination that the first person is not a known person, 'in accordance with (i) a determination that a response was not received from the user, or (ii) a determination that a response was received from the user classifying the first person as a stranger, deleting the stored first information., 'at a computing system having one or more processors and memory2. The method of claim 1 , wherein determining the first information comprises:selecting one or more images of the first person from the live video stream; andcharacterizing a plurality of features of the first person based on the one or more images.3. The method of claim 2 , further comprising:identifying a pose of the first person in each of the one or more images; andfor ...

Publication date: 03-01-2019

METHOD AND SYSTEM FOR PREDICTING PERSONALITY TRAITS, CAPABILITIES AND SUGGESTED INTERACTIONS FROM IMAGES OF A PERSON

Number: US20190005359A1
Assignee: Faception Ltd.

The invention relates to a method of predicting personality characteristic from images of a subject person's face, comprising: a) collecting training images of multiple persons for training propose, the images associated with metadata characteristics of human personality; b) grouping the collected training images into training groups; c) training at least one image-based classifier to predict at least one characteristics of human personality from at least one image of a second person; and d) applying the at least one image-based classifier to at least one image of the subject person for outputting a prediction of at least one human personality characteristic of the subject person. 1. A method of adapting computerized interactions according to an analysis of at least one facial image , comprising:providing at least one facial image imaging a face of an individual;applying at least one image-based classifier on the at least one facial image for identifying at least one trait value of at least one human personality trait of said individual, the at least one image-based classifier is generated by applying at last one machine learning algorithm on a plurality of training images imaging faces of multiple individuals;adapting a computerized interaction with the individual according to a combination of the at least one trait value and demographic data relating to the individual.2. The method of claim 1 , wherein the at least one trait valuecomprises a at least one trait value of a plurality of human personality traits.3. The method of claim 1 , wherein adapting comprising modifying an avatar according to the combination the at least one trait value and demographic data relating to the individual.4. The method of claim 1 , wherein adapting comprising adapting a member of a group consisting of: a video chat claim 1 , a conference call claim 1 , a wearable computer action claim 1 , a client relationship management (CRM) action claim 1 , a set-top box action claim 1 , and a ...

Publication date: 02-01-2020

GUIDE ROBOT AND METHOD FOR OPERATING THE SAME

Number: US20200005787A1
Author: MAENG Jichan, SHIN Wonho
Assignee: LG ELECTRONICS INC.

The present disclosure relates to a guide robot and a method of operating the same. A guide robot according to the present disclosure includes a voice receiving unit to receive a voice, a controller to determine whether the received voice includes a preset wake-up word, and a wireless communication unit to perform communication with an artificial intelligence (AI) server set to be activated by the preset wake-up word. At this time, the control unit transmits the received voice to the artificial intelligence server, receives result information from the artificial intelligence server, and outputs the received result information, when the received voice includes the preset wake-up word. And, the control unit outputs a response voice selected according to a predetermined reference when the received voice does not include the preset wake-up word. 1. A guide robot , comprising:a voice receiving unit configured to receive a voice;a control unit configured to determine whether the received voice includes a preset wake-up word; anda wireless communication unit configured to perform communication with an artificial intelligence (AI) server set to be activated by the preset wake-up word,wherein the control unit transmits the received voice to the artificial intelligence server, receives result information from the artificial intelligence server, and output the received result information, when the received voice includes the preset wake-up word, andoutputs a response voice selected according to a predetermined reference when the received voice does not include the preset wake-up word.2. The guide robot of claim 1 , wherein the control unit performs a greeting recognition operation when the received voice does not include the preset wake-up word claim 1 , and determines whether the received voice is recognized as a greeting based on a sensing signal received from at least one sensor in the greeting recognition operation.3. The guide robot of claim 2 , wherein the control unit ...

Publication date: 02-01-2020

COMMUNICATION ROBOT AND METHOD FOR OPERATING THE SAME

Number: US20200005794A1
Author: KIM Gyeong Hun
Assignee:

A communication robot capable of communicating with other electronic devices and an external server in a 5G communication environment by performing artificial intelligence (AI) algorithms and/or machine learning algorithms to be loaded and performing a speech recognition, and a driving method thereof are disclosed. The method for driving a communication robot according to an exemplary embodiment of the present disclosure may include receiving an utterance speech uttered by a user who has approached within a predetermined distance from the communication robot, and selecting any one ASR module capable of processing the uttered speech among plural ASR modules as an optimized ASR module. According to the present disclosure, it is possible to improve user's satisfaction with the use of the communication robot by reducing the inconvenience that the user has to manually set a first language in the preprocessing operation in order to receive a service from the communication robot. 1. A method for driving a communication robot disposed at an arbitrary place comprising:receiving an utterance speech uttered by a user who has approached within a predetermined distance from the communication robot; andselecting any one ASR module capable of processing the uttered speech among plural ASR (auto speech recognition) modules as an optimized ASR module.2. The method according to claim 1 ,wherein the receiving includes receiving the whole of the utterance speech uttered by the user.3. The method according to claim 1 ,wherein the receiving includes receiving a part of the whole of the utterance speech uttered by the user.4. The method according to claim 1 ,wherein the receiving includes receiving a first language uttering speech uttered by the user.5. The method according to claim 4 ,wherein the selecting step includes selecting a first language ASR module corresponding to a first language uttered by the user as the optimized ASR module among the plural ASR modules6. The method ...

Publication date: 03-01-2019

METHODS, APPARATUS AND ARTICLES OF MANUFACTURE TO USE BIOMETRIC SENSORS TO CONTROL AN ORIENTATION OF A DISPLAY

Number: US20190005620A1
Author: Tripp Jeffrey M.
Assignee:

Methods, systems and articles of manufacture for a portable electronic device to change an orientation in which content is displayed on a display device of the portable electronic device based on a facial image. Example portable electronic devices include a display device, an image sensor to capture a facial image of a user of the portable electronic device, an orientation determination tool to determine a device orientation relative to the user based on the facial image of the user, and an orientation adjustment tool. The orientation adjustment tool changes a content orientation in which the display device of the portable electronic device presents content based on the determination of the device orientation. 1. A portable electronic device comprising:a display device;an image sensor to capture a facial image of a user of the portable electronic device;an orientation determination tool to determine a device orientation relative to the user based on the facial image of the user; andan orientation adjustment tool to change a content orientation in which the display device of the portable electronic device presents content based on the determination of the device orientation.2. The portable electronic device of claim 1 , wherein the image sensor is to capture the facial image of the user when the portable electronic device is in a locked mode and the content includes a request for entry of user authentication information.3. The portable electronic device of claim 2 , further including a motion sensor claim 2 , the motion sensor to send a notification to the image sensor when motion is sensed claim 2 , the image sensor to capture the facial image in response to the notification.4. The portable electronic device of claim 1 , wherein the image sensor is to capture the facial image of the user when the portable electronic device is in a locked mode claim 1 , and the orientation adjustment tool is to use the facial image to determine whether to unlock the portable ...
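A hedged sketch of deriving the content orientation from the facial image: the roll implied by the eye positions is snapped to the nearest quarter turn. The eye coordinates are assumed to come from an upstream face detector, and the quarter-turn snapping policy is illustrative rather than the disclosed orientation adjustment tool.

import math

def device_orientation(left_eye, right_eye):
    """Return the content rotation (degrees) that keeps the display upright for the user."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    roll = math.degrees(math.atan2(dy, dx))          # angle of the inter-ocular line (image coordinates assumed)
    return int(round(roll / 90.0)) % 4 * 90          # snap to 0 / 90 / 180 / 270 degrees

print(device_orientation((100, 120), (160, 118)))    # roughly level eyes -> 0, keep current orientation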

Publication date: 03-01-2019

APPARATUS HAVING A DIGITAL INFRARED SENSOR

Number: US20190005642A1
Assignee: Arc Devices, LTD

An apparatus that senses temperature from a digital infrared sensor is described. A digital signal representing a temperature without conversion from analog is transmitted from the digital infrared sensor received by a microprocessor and converted to body core temperature by the microprocessor. 1. A device comprising: a microprocessor;', 'a battery that is operably coupled to the microprocessor;', 'a display device that is operably coupled to the microprocessor;', 'a first digital interface that is operably coupled to the microprocessor;, 'a first circuit board including a second digital interface, the second digital interface being that is operably coupled to the first digital interface; and', 'a digital infrared sensor that is operable to receive an infrared signal, the digital infrared sensor also being operably coupled to the second digital interface, the digital infrared sensor having ports that provide digital readout signals that are representative of the infrared signal that is received by the digital infrared sensor,, 'a second circuit board includingwherein the microprocessor is operable to receive from the ports of the digital infrared sensor the digital readout signals that are representative of the infrared signal and the microprocessor is operable to determine a temperature from the digital readout signals that are representative of the infrared signal, andwherein no analog-to-digital converter is operably coupled between the digital infrared sensor and the microprocessor.2. The device of wherein the display device further comprises:a green traffic light operable to indicate that the temperature is good;an amber traffic light operable to indicate that the temperature is low; anda red traffic light operable to indicate that the temperature is high.3. The device of further comprising:the digital infrared sensor having no analog sensor readout ports.4. The device of further comprising:a camera that is operably coupled to the microprocessor and providing ...

Publication date: 05-01-2017

METHOD AND APPARATUS FOR AUTOFOCUS AREA SELECTION BY DETECTION OF MOVING OBJECTS

Number: US20170006211A1
Author: Gurbuz Sabri
Assignee: SONY CORPORATION

An improved mechanism for image area selection upon which autofocusing is directed during image capture on a digital image capture device, such as a camera or cellular phone. This image area selection provides accurate selection of moving object even when the objects being focused upon are subject to intense or unpredictable motion. The image area selection is performed based on alignment of consecutive frames (images) for which a rough foreground mask, and moving object mask have been determined. Differences between these frames are utilized to determine a moving object contour, which also provides feedback through a delay to the moving object detection step. 1. An apparatus for performing autofocus selection for image capture , comprising:(a) a processor within an image capture device having a lens that can be focused to a selected depth in an autofocusing process; and(b) memory storing instructions executable by the processor; (i) detecting a rough estimate of foreground image area in a previous frame and a current frame captured by the image capture device;', '(ii) masking of said foreground image area as a foreground mask in both said previous frame and a current frame for global motion determination;', '(iii) combining said foreground mask for each of said previous frame and said current frame with a previous moving object mask in a linking step which performs a union of the masks;', '(iv) masking is performed separately, using the union of the masks, for the previous frame and the current frame;', '(v) aligning background images in the previous frame and the current frame;', '(vi) determining frame differences between the previous frame and the current frame;', '(vii) generating a moving object mask in response to moving object area detection;', '(viii) utilizing the moving object mask with frame delay on both the current frame and the previous frame;', '(ix) segmenting foreground and background based on generated moving object mask; and', '(x) performing ...
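A simplified sketch of the moving-object masking step, assuming the consecutive grayscale frames have already been aligned (the global-motion compensation and foreground masking of the actual method are omitted); the difference threshold is illustrative.

import numpy as np

def moving_object_mask(prev_frame, curr_frame, threshold=25, prev_mask=None):
    """Binary mask of pixels that changed between two aligned grayscale frames."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold
    if prev_mask is not None:
        mask |= prev_mask                       # feedback from the delayed previous mask
    return mask

def autofocus_window(mask):
    """Bounding box of the moving region used for focus selection, or None if nothing moved."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return (xs.min(), ys.min(), xs.max(), ys.max())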

Publication date: 05-01-2017

Image stitching in a multi-camera array

Number: US20170006220A1
Assignee: GoPro Inc

Images captured by multi-camera arrays with overlap regions can be stitched together using image stitching operations. An image stitching operation can be selected for use in stitching images based on a number of factors. An image stitching operation can be selected based on a view window location of a user viewing the images to be stitched together. An image stitching operation can also be selected based on a type, priority, or depth of image features located within an overlap region. Finally, an image stitching operation can be selected based on a likelihood that a particular image stitching operation will produce visible artifacts. Once a stitching operation is selected, the images corresponding to the overlap region can be stitched using the stitching operation, and the stitched image can be stored for subsequent access.

Publication date: 07-01-2016

Techniques for image encoding based on region of interest

Number: US20160007026A1
Author: CHEN Weian, Dong Jie
Assignee:

Various embodiments are generally directed to the use of a region of interest (ROI) determined during capture of an image to enhance compression of the image for storage and/or transmission. An apparatus includes an image sensor to capture an image as captured data; and logic to determine first boundaries of a region of interest within the image, compress a first portion of the captured data representing a first portion of the image within the region of interest with a first parameter, and compress a second portion of the captured data representing a second portion of the image outside the region of interest with a second parameter corresponding to the first parameter, the first and second parameters selected to differ to compress the second portion of the captured data to a greater degree than the first portion of the captured data. Other embodiments are described and claimed. 125.-. (canceled)26. An apparatus comprising:an image sensor to capture an image as captured data; andlogic to:determine first boundaries of a region of interest within the image;compress a first portion of the captured data that represents a first portion of the image within the region of interest with a first parameter; andcompress a second portion of the captured data that represents a second portion of the image outside the region of interest with a second parameter, the first and second parameters selected to compress the second portion of the captured data to a greater degree than the first portion of the captured data.27. The apparatus of claim 26 , the logic to:analyze a field of view of the image sensor to identify an object; anddetermine the first boundaries to encompass the object within the region of interest.28. The apparatus of claim 27 , comprising:a distance sensor to determine a distance to the object; andoptics interposed between the image sensor and the object, the logic to operate the optics to adjust a focus in response to the distance.29. The apparatus of claim 26 , ...
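An illustrative sketch of compressing the region of interest less aggressively than the rest of the frame, with two quantisation step sizes standing in for the first and second encoder parameters; the step values are arbitrary.

import numpy as np

def quantise(block, step):
    return (block // step) * step               # coarser step = stronger compression

def compress_with_roi(image, roi, fine_step=4, coarse_step=32):
    """image: 2-D array; roi: (x, y, w, h) boundaries of the region of interest."""
    x, y, w, h = roi
    out = quantise(image, coarse_step)          # outside the ROI: heavy quantisation
    out[y:y + h, x:x + w] = quantise(image[y:y + h, x:x + w], fine_step)
    return out

frame = (np.random.rand(240, 320) * 255).astype(np.uint8)
encoded = compress_with_roi(frame, roi=(140, 90, 60, 60))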

Publication date: 08-01-2015

VEHICLE VISION SYSTEM WITH DRIVER DETECTION

Number: US20150009010A1
Author: Biemer Michael
Assignee:

A vision system of a vehicle includes an interior camera disposed in an interior cabin of a vehicle and having a field of view interior of the vehicle that encompasses an area typically occupied by a head of a driver of the vehicle. An image processor is operable to process image data captured by the camera. The image processor is operable to determine the presence of a person's head in the field of view of the camera and to compare features of the person's face to features of an authorized driver. Responsive at least in part to the comparison of features, operation of the vehicle is allowed only to an authorized driver. The system may store features of one or more authorized driver and may allow operation of the vehicle only when the person occupying the driver seat is recognized or identified as an authorized driver.

Publication date: 20-01-2022

Image privacy protection method, apparatus and device

Number: US20220019690A1

The implementations of the present specification provide an image privacy protection method, apparatus, and device. The method includes: performing privacy content recognition on an original image; in response to a privacy content being recognized, determining a local region including the privacy content from the original image; performing privacy protection processing on image data for the determined local region to generate data of a privacy-protected original image, the privacy protection processing including at least one of image scrambling processing or image obfuscation processing; and performing image compression processing on the data of the privacy-protected original image to generate data of a compressed image, and using the compressed image as image data to be transmitted or stored. The image privacy protection method, apparatus, and device can solve problems in the existing technologies that cause privacy-protected images to be vulnerable to brute force cracking and their original images to be difficult to restore.
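An assumption-heavy sketch of a scramble-then-restore step for the detected local region, using a key-seeded pixel permutation on a grayscale image; it presumes the subsequent compression of that region is lossless so the permutation can be inverted exactly, and it is not the method claimed above.

import numpy as np

def scramble_region(image, box, key):
    """Permute the pixels of the private region in place with a key-seeded permutation (grayscale assumed)."""
    x, y, w, h = box
    flat = image[y:y + h, x:x + w].reshape(-1).copy()
    perm = np.random.default_rng(key).permutation(flat.size)
    image[y:y + h, x:x + w] = flat[perm].reshape(h, w)

def unscramble_region(image, box, key):
    """Invert the permutation using the same key, restoring the original region."""
    x, y, w, h = box
    flat = image[y:y + h, x:x + w].reshape(-1)
    perm = np.random.default_rng(key).permutation(flat.size)
    restored = np.empty_like(flat)
    restored[perm] = flat
    image[y:y + h, x:x + w] = restored.reshape(h, w)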

Publication date: 20-01-2022

FACE AUTHENTICATION APPARATUS

Number: US20220019763A1
Author: KOCHI Taketo, Saito Kenji
Assignee: NEC Corporation

A face authentication apparatus includes a face image acquisition unit that acquires a face image of an authentication target, a collation unit that performs face authentication by calculating similarity between face information of the face image of the authentication target and reference face image of each registered user and comparing the similarity with a threshold, a prediction unit that predicts a change in the similarity on the basis of similarity history on authentication success, and a threshold change unit that changes the threshold on the basis of the prediction result. 1. A face authentication apparatus comprising:at least one memory configured to store instructions; and acquire a face image of an authentication target;', 'perform face authentication by calculating similarity between face information based on the face image of the authentication target and reference face information of each of a plurality of registered users;', 'compare the similarity with a threshold;', 'calculate an average similarity value on a success result of the face authentication of the plurality of registered users;', 'generate an approximation function representing a change in the similarity as a function of time;', 'predict a change in the similarity based on the generated approximation function;', 'change the threshold based on a result of the prediction., 'at least one processor configured to execute the instructions to2. The face authentication apparatus according to claim 1 , wherein 'change the threshold based on a prediction result of the change in the average similarity value.', 'the at least one processor configured to execute to3. The face authentication apparatus according to claim 1 , whereinthe threshold is a common threshold set to the plurality of the registered users.4. The face authentication apparatus according to claim 1 , whereinthe threshold is a threshold set to each of the plurality of registered users.5. The face authentication apparatus according to ...
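A hedged sketch of the threshold-adaptation idea: a linear approximation function is fitted to the similarity history recorded on successful authentications, the similarity is predicted a horizon ahead, and the threshold is lowered toward it while keeping a safety floor. All numeric values are illustrative.

import numpy as np

def adjust_threshold(history, current_threshold, horizon=30, floor=0.55):
    """history: list of (day_index, similarity) pairs from successful matches."""
    if len(history) < 2:
        return current_threshold
    days, sims = map(np.array, zip(*history))
    slope, intercept = np.polyfit(days, sims, 1)     # linear approximation of the similarity trend
    predicted = slope * (days[-1] + horizon) + intercept
    margin = 0.05                                    # keep the threshold below the predicted similarity
    return max(floor, min(current_threshold, predicted - margin))

print(adjust_threshold([(0, 0.82), (30, 0.80), (60, 0.77)], current_threshold=0.75))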

Publication date: 20-01-2022

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM

Number: US20220019771A1
Author: Matsunami Tomoaki
Assignee: FUJITSU LIMITED

An image processing device includes one or more memories; and one or more processors coupled to the one or more memories and the one or more processors configured to store, in the one or more memories, a plurality of time-series images that has captured an object when the object is instructed to change an orientation of a face, extract a face region from each of the plurality of time-series images, obtain characteristic of changes of pixel values of a plurality of pixels arranged in a certain direction in the face region, specify an action of the object based on a time-series change in the characteristic of changes obtained from each of the plurality of time-series images, and determine authenticity of the object based on the action of the object. 1. An image processing device comprising:one or more memories; and store, in the one or more memories, a plurality of time-series images that has captured an object when the object is instructed to change an orientation of a face,', 'extract a face region from each of the plurality of time-series images,', 'obtain characteristic of changes of pixel values of a plurality of pixels arranged in a certain direction in the face region,', 'specify an action of the object based on a time-series change in the characteristic of changes obtained from each of the plurality of time-series images, and', 'determine authenticity of the object based on the action of the object., 'one or more processors coupled to the one or more memories and the one or more processors configured to2. The image processing device according to claim 1 , whereinthe change of the orientation of the face instructed to the object represents an action of turning the face to the right or left,the certain direction is a right-left direction of the face,the characteristic of changes of pixel values of the plurality of pixels arranged in the certain direction represents right-to-left symmetry of the pixel values of the plurality of pixels, andthe action of the object ...
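A minimal sketch of the right-left symmetry cue: the difference between the left half and the mirrored right half of the face region should rise while the subject turns the face. The threshold and the decision rule are illustrative, not the claimed authenticity determination.

import numpy as np

def asymmetry(face_region):
    """Mean absolute difference between the left half and the mirrored right half of a grayscale face crop."""
    h, w = face_region.shape
    half = w // 2
    left = face_region[:, :half].astype(np.float32)
    right = np.fliplr(face_region[:, w - half:]).astype(np.float32)
    return float(np.abs(left - right).mean())

def looks_like_real_turn(face_regions, rise=10.0):
    """True if the asymmetry increases noticeably over the time-series of face crops."""
    values = [asymmetry(f) for f in face_regions]
    return (max(values) - values[0]) > rise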

Publication date: 20-01-2022

IMAGE PROCESSING METHOD AND DEVICE, CLASSIFIER TRAINING METHOD, AND READABLE STORAGE MEDIUM

Number: US20220019775A1
Author: Chen Guannan
Assignee: BOE Technology Group Co., Ltd.

An image processing method, an image processing device, a training method and a computer-readable storage medium. The image processing method includes: extracting a characteristic vector in an image to be recognized; based on the characteristic vector of the image to be recognized, acquiring a predicted score value of the image to be recognized; and based on the predicted score value, determining a category of an image information of the image to be recognized; wherein the image to be recognized is a face image, and the image information is a facial expression. 1. An image processing method , comprising:extracting a characteristic vector in an image to be recognized;based on the characteristic vector of the image to be recognized, acquiring a predicted score value of the image to be recognized; andbased on the predicted score value, determining a category of an image information of the image to be recognized;wherein the image to be recognized is a face image, and the image information is a facial expression.2. The image processing method according to claim 1 , wherein the step of extracting the characteristic vector in the image to be recognized comprises:by using a Garbor filter, acquiring an image-feature response diagram of the image to be recognized; andextracting the characteristic vector of the image to be recognized from the image-feature response diagram;wherein the Garbor filter comprises a first quantity of dimensions and a second quantity of directions;the image-feature response diagram comprises features of the image information of the image to be recognized; andthe first quantity of dimensions are less than 4 dimensions.3. The image processing method according to claim 2 , wherein the method further comprises claim 2 , according to an accuracy rate of recognition of the image information by the Garbor filter with a third quantity of dimensions and a fourth quantity of directions claim 2 , selecting the first quantity of dimensions and the second ...
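A sketch of a small Gabor filter bank feature extractor in the spirit of the description above; the kernel size, the scales ("dimensions"), the number of directions and the response statistics are placeholders rather than the disclosed configuration.

import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size, sigma, theta, wavelength, gamma=0.5):
    """Real-valued Gabor kernel: Gaussian envelope times an oriented cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * x_t / wavelength)

def gabor_features(face, scales=(4.0, 8.0), n_directions=4):
    """Concatenate mean/std of each filter response into one feature vector for a downstream classifier."""
    feats = []
    for sigma in scales:                                   # the scales ("dimensions")
        for k in range(n_directions):                      # the directions
            kern = gabor_kernel(15, sigma, np.pi * k / n_directions, wavelength=2 * sigma)
            response = convolve2d(face, kern, mode="valid")
            feats += [response.mean(), response.std()]
    return np.asarray(feats)

vec = gabor_features(np.random.rand(64, 64))               # stand-in face crop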

Подробнее
08-01-2015 дата публикации

Display apparatus and control method for adjusting the eyes of a photographed user

Номер: US20150009123A1
Принадлежит: SAMSUNG ELECTRONICS CO LTD

A display apparatus is provided. The display apparatus includes a photographing unit configured to photograph the shape of a face; a detector configured to detect a direction and angle of the face shape; a transformer configured to mix the photographed face shape and a 3D face model and to transform the mixed face shape by using the detected direction and angle of the face shape; and an output interface configured to output the transformed face shape.

Подробнее
08-01-2015 дата публикации

INFORMATION PROCESSING DEVICE, COMMUNICATION COUNTERPART DECISION METHOD AND STORAGE MEDIUM

Номер: US20150010214A1
Принадлежит:

An information processing device includes an imaging unit, a storage unit that stores face images of at least two persons, including an owner of the information processing device, in association with a communication device owned by each of the at least two persons, an identification unit that identifies, based on a first group of face images and a second group of face images, a person associated with a face image detected from an image including face images of a plurality of persons imaged by the imaging unit, the first group of face images includes the face image of each person detected from the image imaged by the imaging unit, and the second group of face images includes the faces stored in the storage unit, and a decision unit that decides a person as a receiver from the identified persons excluding the owner. 1. An information processing device comprising:an imaging unit;a storage unit configured to store a registered face image of an owner of the information processing device in association with the information processing device, and to store a face image of each of at least one person excluding the owner in association with the communication device owned by each of the at least one person;an identification unit configured to detect face images of a plurality of persons from an image imaged by the imaging unit and to identify, with reference to the registered face images stored in the storage unit, the person corresponding to the detected face image; anda decision unit configured to decide, if the owner is an identified person, a person owning a communication device that is to be a communication counterpart as a receiver from identified persons identified by the identification unit, wherein the owner is excluded from being determined a receiver.2. The information processing device according to claim 1 , further comprising a distance data acquisition unit configured to acquire a distance from the imaging unit to the plurality of persons included in the image ...

Подробнее
10-01-2019 дата публикации

WEARABLE DIGITAL DEVICE FOR PERSONAL HEALTH USE FOR SALIVA, URINE, AND BLOOD TESTING AND MOBILE WRIST WATCH POWERED BY USER BODY

Номер: US20190008463A1
Принадлежит:

Provided are a wearable personal digital device and related methods. The wearable personal digital device may comprise a processor, a display, biometric sensors, activity tracking sensors, a memory unit, a communication circuit, a housing, an input unit, a projector, a timepiece unit, a haptic touch control actuator, and a band. The processor may be operable to receive data from an external device, provide a notification to a user based on the data, receive a user input, and perform a command selected based on the user input. The communication circuit may be communicatively coupled to the processor and operable to connect to a wireless network and communicate with the external device. The housing may be adapted to enclose the components of the wearable personal digital device. The band may be adapted to attach to the housing and secure the wearable personal digital device on a user body. 1. An Artificial Intelligence (AI) wearable digital device for personal health use for saliva , urine , and blood testing , the device comprising: receive data from an external device;', 'based on the data, provide a notification to a user;', 'receive a user input;', 'perform a command, the command being selected based on the user input;', 'provide a natural language user interface to communicate with the user, the natural language user interface being operable to sense a user voice and provide a response in a natural language to the user;, 'a processor being operable toa near field communication (NFC) unit communicatively coupled to the processor;a display communicatively coupled to the processor, the display including a touchscreen, wherein the display includes a force sensor, wherein the force sensor is operable to sense a touch force applied by the user to the display and calculate coordinates of a touch by the user, and further operable to analyze the touch force, and based on the touch force, select a tap command or a press command based on a predetermined criteria;a memory ...

Подробнее
14-01-2021 дата публикации

VEHICLE DOOR UNLOCKING METHOD, ELECTRONIC DEVICE AND STORAGE MEDIUM

Номер: US20210009080A1
Автор: HU Xin, Huang Cheng
Принадлежит:

The present disclosure relates to a vehicle door unlocking method and apparatus, a system, a vehicle, an electronic device and a storage medium. The method includes: obtaining a distance between a target object outside a vehicle and the vehicle by means of at least one distance sensor provided in the vehicle; in response to the distance satisfying a predetermined condition, waking up and controlling an image collection module provided in the vehicle to collect a first image of the target object; performing face recognition based on the first image; and in response to successful face recognition, sending a vehicle door unlocking instruction to at least one vehicle door lock of the vehicle. 1. A vehicle door unlocking method , comprising:obtaining a distance between a target object outside a vehicle and the vehicle by means of at least one distance sensor provided in the vehicle;in response to the distance satisfying a predetermined condition, waking up and controlling an image collection module provided in the vehicle to collect a first image of the target object;performing face recognition based on the first image; andin response to successful face recognition, sending a vehicle door unlocking instruction to at least one vehicle door lock of the vehicle.2. The method according to claim 1 , wherein the predetermined condition comprises at least one of the following:the distance is less than a predetermined distance threshold;a duration in which the distance is less than the predetermined distance threshold reaches a predetermined time threshold; orthe distance obtained in the duration indicates that the target object is proximate to the vehicle.3. The method according to claim 1 , wherein the at least one distance sensor comprises a Bluetooth distance sensor claim 1 ,obtaining the distance between the target object outside the vehicle and the vehicle by means of the at least one distance sensor provided in the vehicle comprises:establishing a Bluetooth pairing ...
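A much-simplified control loop for the distance-gated unlocking flow. The four callables stand in for hardware and models the application only names (distance sensor, image collection module, face recognition, door locks), and the threshold values are placeholders.

import time

PRED_DISTANCE_M = 1.5    # predetermined distance threshold (placeholder value)
PRED_DURATION_S = 2.0    # predetermined time threshold (placeholder value)

def monitor_and_unlock(read_distance, capture_image, recognize_face, unlock_doors):
    below_since = None
    while True:
        if read_distance() < PRED_DISTANCE_M:
            below_since = below_since or time.monotonic()
            # Wake the camera only after the distance stays below the threshold long enough.
            if time.monotonic() - below_since >= PRED_DURATION_S:
                first_image = capture_image()
                if recognize_face(first_image):
                    unlock_doors()        # send the vehicle door unlocking instruction
                    return
                below_since = None        # recognition failed; keep monitoring
        else:
            below_since = None
        time.sleep(0.1)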

Подробнее
27-01-2022 дата публикации

IMAGE PROCESSING METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM

Номер: US20220028031A1
Принадлежит:

An image processing method is provided. The method includes: encoding an input image based on an attention mechanism to obtain an encoding tensor set and an attention map set of the input image; obtaining an encoding result of the input image according to the encoding tensor set and the attention map set, the encoding result of the input image recording an identity feature of a human face in the input image; encoding an expression image to obtain an encoding result of the expression image, the encoding result of the expression image recording an expression feature of a human face in the expression image; and generating an output image according to the encoding result of the input image and the encoding result of the expression image, the output image having the identity feature of the input image and the expression feature of the expression image. 1. An image processing method , applied to a computer device , the method comprising:encoding an input image based on an attention mechanism to obtain an encoding tensor set and an attention map set of the input image, the encoding tensor set including n encoding tensors, the attention map set including n attention maps, and n being an integer greater than 1;obtaining an encoding result of the input image according to the encoding tensor set and the attention map set, the encoding result of the input image recording an identity feature of a human face in the input image;encoding an expression image to obtain an encoding result of the expression image, the encoding result of the expression image recording an expression feature of a human face in the expression image; andgenerating an output image according to the encoding result of the input image and the encoding result of the expression image, the output image having the identity feature of the input image and the expression feature of the expression image.2. The method according to claim 1 , wherein obtaining the encoding result of the input image comprises:multiplying, ...
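One straightforward reading of how the encoding result could be obtained from the encoding tensor set and the attention map set (the claims only hint at a multiplication) is an element-wise weighting followed by a sum over the n pairs; the shapes and the softmax normalization below are assumptions made for the example.

import numpy as np

def attention_encoding(encoding_tensors, attention_maps):
    # Each encoding tensor (C, H, W) is weighted by its attention map (H, W); the n results are summed.
    result = np.zeros_like(encoding_tensors[0], dtype=np.float32)
    for enc, att in zip(encoding_tensors, attention_maps):
        result += enc.astype(np.float32) * att[np.newaxis, :, :]
    return result

# Toy example with n = 3 tensors and attention maps that compete per spatial location.
n, C, H, W = 3, 8, 16, 16
tensors = [np.random.rand(C, H, W) for _ in range(n)]
raw = np.random.rand(n, H, W)
attention = np.exp(raw) / np.exp(raw).sum(axis=0)
print(attention_encoding(tensors, list(attention)).shape)   # (8, 16, 16)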

Подробнее
27-01-2022 дата публикации

Method and system for encrypting and decrypting a facial segment in an image

Номер: US20220029789A1
Принадлежит: HCL Technologies Italy SpA

This disclosure relates to method and system for encrypting and decrypting a facial segment in an image with a unique server key. The method includes receiving an image from one of a plurality of users. The image includes a plurality of facial segments. The method further includes, for each facial segment from the plurality of facial segments, identifying a unique user associated with the facial segment using a facial recognition algorithm, encrypting the facial segment with a unique server key, generating a protection frame, unlockable with the unique server key, to cover the facial segment, and decrypting the facial segment while rendering the image for at least one of the plurality of users upon receiving the unique server key from the at least one of the plurality of users.

Подробнее
10-01-2019 дата публикации

FIREARM SAFETY SYSTEM

Номер: US20190011206A1
Принадлежит:

A gun lock device, which may be configured to be disposed adjacent the trigger of a gun to alternatively prevent or enable firing, includes a data receiver, a data memory and a logic device for determining whether security data received by the receiver is the same, or substantially the same, as security data stored in the data memory. One or more separate smartphones are provided to transmit the security data and a gun lock/unlock signal to the gun lock device. The smartphone and the gun lock device operate together to automatically lock the gun when the gun is aimed in the direction of the authorized gun user (e.g. the gun owner) or the direction of any of the gun user's friends who have accepted the user's friend request from the app "Find Friends."
1. A firearm, comprising: a first camera for capturing a first image of a target; a second camera for capturing a second image of a shooter; a non-transitory memory for storing one or more images including the first image and the second image; the non-transitory memory for storing instructions; and one or more processors in communication with the non-transitory memory, wherein the one or more processors execute the instructions to: process the first image and the second image to determine whether the first image does not match with any of the one or more images, and whether the second image matches with any of the one or more images; and, if the determination is valid, generate a signal to unlock the firearm.
2. The firearm as recited in claim 1, wherein at least one of the plurality of cameras is a digital video camera.
3. The firearm as recited in claim 1, wherein the one or more processors execute the instructions to generate a first value or a second value based on the determination, the first value indicating that the firearm is to be locked and the second value indicating that the firearm is to be unlocked.
4. The firearm as recited in claim 3, wherein the one or more processors execute the instructions to ...

Подробнее
14-01-2016 дата публикации

PERSON SEARCH METHOD AND DEVICE FOR SEARCHING PERSON STAYING ON PLATFORM

Номер: US20160012280A1
Автор: AKIMOTO Yohei, ITO Wataru
Принадлежит:

Provided is a suspicious person detection method. First, a normal similar facial image search is carried out. Next, facial images, which are detected automatically from the input images and specified manually, are specified to be determined. Next, similar faces are searched for limited time on a time axis on the database. Next, the number of search results that distance between the features is lower than predetermined value is calculated and it is determined that the number of appearances is large and a possibility of a prowling person is high if the number of cases is large, and otherwise a possibility of prowling person is low. Last, a similarity between a facial image of a pre-registered residents and a facial image of a person whose number is large is calculated, and it is re-determined that the person is residents if the similarity is high, regardless of the determination. 1. A person search method comprising the steps of:creating a database by detecting facial images in input images, extracting features from the facial images, and registering the features in the database together with time information;specifying facial images which are detected automatically or specified manually from the input images as facial images to be determined;searching the database created in the creating step for similar faces for limited time on a time axis;calculating the number of cases with higher similarity than a predetermined value among search results of the searching step, and determining that the number of appearances is large and a possibility of a prowling person is high if the number of cases is large, and that the number of appearances is small and a possibility of prowling person is low if the number of cases is small; andcalculating similarity between a facial image of a pre-registered non-suspicious person and a facial image of a person whose appearance number is large, and re-determining that the person is not a suspicious person if the similarity is high, ...
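The appearance-counting logic lends itself to a compact sketch. Everything below is assumed rather than taken from the application: faces are fixed-length embedding vectors, distances are Euclidean, and the window, distance and count thresholds are placeholders.

import numpy as np

def appearance_count(query_feat, db_feats, db_times, t_now, window_s, dist_thresh):
    # Count database entries within the time window whose feature distance is below the threshold.
    hits = 0
    for feat, t in zip(db_feats, db_times):
        if t_now - t <= window_s and np.linalg.norm(query_feat - feat) < dist_thresh:
            hits += 1
    return hits

def classify(query_feat, db_feats, db_times, resident_feats, t_now,
             window_s=3600.0, dist_thresh=0.8, count_thresh=5):
    # Few recent appearances: low probability of a prowling person.
    if appearance_count(query_feat, db_feats, db_times, t_now, window_s, dist_thresh) < count_thresh:
        return "low risk"
    # Many appearances: re-check against pre-registered residents before flagging.
    if any(np.linalg.norm(query_feat - r) < dist_thresh for r in resident_feats):
        return "resident"
    return "possible prowler"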

Подробнее
14-01-2016 дата публикации

ROOM INFORMATION INFERRING APPARATUS, ROOM INFORMATION INFERRING METHOD, AND AIR CONDITIONING APPARATUS

Номер: US20160012309A1
Принадлежит: Omron Corporation

A room information inferring apparatus that infers information regarding a room has an imaging unit that captures an image of a room that is to be subjected to inferring, a person detector that detects a person in an image captured by the imaging unit, and acquires a position of the person in the room, a presence map generator that generates a presence map indicating a distribution of detection points corresponding to persons detected in a plurality of images captured at different times, and an inferring unit that infers information regarding the room based on the presence map. 1. A room information inferring apparatus that infers information regarding a room , comprising:an imaging unit that captures an image of a room that is to be subjected to inferring;a person detector that detects a person in an image captured by the imaging unit, and acquires a position of the person in the room;a presence map generator that generates a presence map indicating a distribution of detection points corresponding to persons detected in a plurality of images captured at different times; andan inferring unit that infers information regarding the room based on the presence map.2. The room information inferring apparatus according to claim 1 , wherein the person detector detects a face claim 1 , a head claim 1 , or an upper body of the person in the image claim 1 , and acquires the position of the person in the room based on a position and a size of the face claim 1 , the head claim 1 , or the upper body in the image.3. The room information inferring apparatus according to claim 1 , wherein the inferring unit infers a shape of the room based on the presence map.4. The room information inferring apparatus according to claim 3 , wherein the inferring unit infers that a polygon circumscribed around the distribution of detection points in the presence map is the shape of the room.5. The room information inferring apparatus according to claim 4 , wherein the inferring unit infers the shape ...
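Claim 4 reads naturally as a convex hull of the accumulated detection points; a small sketch of that interpretation, assuming SciPy and 2D floor-plane coordinates, follows.

import numpy as np
from scipy.spatial import ConvexHull

def infer_room_polygon(detection_points):
    # detection_points: (N, 2) positions from the presence map; return hull vertices (CCW order).
    pts = np.asarray(detection_points, dtype=float)
    hull = ConvexHull(pts)
    return pts[hull.vertices]

# Toy presence map: detections scattered inside a 4 m x 3 m room.
rng = np.random.default_rng(0)
print(infer_room_polygon(rng.uniform([0.0, 0.0], [4.0, 3.0], size=(200, 2))))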

Подробнее
03-02-2022 дата публикации

STROKE DETECTION AND MITIGATION

Номер: US20220031162A1
Принадлежит:

A method and system for detecting a possible stroke in a person through the analysis of voice data and image data regarding the gait of the user, facial features and routines, and corroborating any anomalies in one set of data against anomalies in another set of data for a related time frame.
1. A system for detecting a stroke in a user and mitigating the impact of a stroke, comprising: a video camera for monitoring any two or more of a user's gait, a user's facial features, and a user's routines (collectively referred to herein as image parameters); a microphone for monitoring a user's voice parameters, wherein the video camera and microphone are collectively referred to as sensors; a processor; and a memory configured with machine-readable code defining an algorithm for analyzing image data from the video camera and voice data from the microphone to identify anomalies in any of the image parameters or voice parameters indicative of a possible stroke, and for validating any anomaly in the data from the video camera or microphone by comparing said anomaly with any anomaly detected in any of the other parameters, to define a stroke event.
2. The system of claim 1, wherein the anomalies in the user's gait include one or more of: tumbling, instability, wobbling, and problems with coordination.
3. The system of claim 1, wherein anomalies in the user's facial features include facial muscle weakness or partial paralysis/drooping of parts of the user's face.
4. The system of claim 1, wherein anomalies in the voice data of the user include one or more of: difficulty speaking, slurred speech, speech loss, and the absence of a response or a non-sensical response when prompted via the speaker.
5. The system of claim 1, further comprising a communications system for notifying one or more predefined persons in the event of a stroke event.
6. The system of claim 1, further comprising a storage medium for storing speech ...

Подробнее
11-01-2018 дата публикации

System and method for automatic driver identification

Номер: US20180012092A1
Принадлежит: Nauto Global Ltd

A method for driver identification including recording a first image of a vehicle driver; extracting a set of values for a set of facial features of the vehicle driver from the first image; determining a filtering parameter; selecting a cluster of driver identifiers from a set of clusters, based on the filtering parameter; computing a probability that the set of values is associated with each driver identifier of the cluster; determining, at the vehicle sensor system, driving characterization data for the driving session; and in response to the computed probability exceeding a first threshold probability: determining that the new set of values corresponds to one driver identifier within the selected cluster, and associating the driving characterization data with the one driver identifier.

Подробнее
11-01-2018 дата публикации

REGION SELECTION FOR IMAGE MATCH

Номер: US20180012102A1
Принадлежит:

The accuracy of an image matching process can be improved by determining relevant swatch regions of the images, where those regions contain representative patterns of the items of interest represented in those images. Various processes examine a set of visual cues to determine at least one candidate object region, and then collate these regions to determine one or more representative swatch images. For apparel items, this can include locating regions such as an upper body region, torso region, clothing region, foreground region, and the like. Processes such as regression analysis or probability mapping can be used on the collated region data (along with confidence and/or probability values) to determine the appropriate swatch regions. 1. A system , comprising:at least one processor; and receive an image containing a representation of an object;', 'determine a set of candidate swatch regions based at least in part on a selection of the area of the image;', 'for each swatch region in the set of candidate swatch regions, identify a subset of pixels of the image based at least in part on the subset of pixels having at least a minimum probability of similarity to the representation of the object in the image;', 'selecting at least one swatch region from the set of candidate swatch regions based at least in part on the minimum probability of similarity to the representation of the object;', 'determine at least two or more sub-regions within the at least one swatch region;', 'collating data for the at least one swatch region to determine a sub-region representative of at least one visual aspect of the object; and', 'generating a swatch image using pixel values from within the sub-region, the swatch image used to perform an similarity matching process against a set of candidate images., 'memory storing instructions that, when executed by the at least one processor, cause the system to2. The system of claim 1 , wherein the at least two or more sub-regions within the at least ...

Подробнее
14-01-2021 дата публикации

TWO-STAGE PERSON SEARCHING METHOD COMBINING FACE AND APPEARANCE FEATURES

Номер: US20210012094A1
Автор: LI Liangqi, YANG Hua
Принадлежит: SHANGHAI JIAO TONG UNIVERSITY

A two-stage person searching method combining face and appearance features, comprises: detecting a face of a person utilizing a face detector, and outputting a face representation vector based on a face comparing model; ranking person sets to be matched according to an Euclidean distance to acquire a face ranking result; selecting a plurality of samples as multi-matching targets at the next stage according to the ranking result; using the selected multi-matching targets of different persons at the next stage in the same data set as mutual negative samples, so as to compress the sample space matched at the next stage; and finally re-recognizing multi-target persons, and ranking the image sets to be matched according to an average distance or similarity with multiple targets to output a final result.
1. A two-stage person searching method combining face and appearance features, comprising: acquiring an image I_C(x, y) containing a target person; acquiring a panoramic image to be identified and person coordinate information in the panoramic image, and determining an image I_G(x, y) containing a candidate person; calculating a score of face similarity between the target person and the candidate person according to the image I_C(x, y) and the image I_G(x, y); ranking according to scores of face similarity, if the score is larger than or equal to a preset threshold, using corresponding image containing the candidate person as a target image I_Q(x, y) for person re-identification; and if the score is less than the preset threshold, using the corresponding image containing the candidate person as the panoramic image for person re-identification; filtering each panoramic image for person re-identification corresponding to the target person to obtain a processed candidate person image I′_G(x, y) if the image I_C(x, y) contains two or more target persons; calculating an initial ...
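A heavily simplified rendering of the two-stage ranking (face-similarity gate first, average re-identification distance second); the dictionaries, the threshold and the way stage-one hits are promoted are all assumptions made for the sake of a runnable example.

import numpy as np

def two_stage_rank(face_scores, reid_distances, face_thresh=0.6):
    # face_scores: gallery_id -> face similarity to the target person.
    # reid_distances: gallery_id -> list of appearance distances to the multi-matching targets.
    stage_one = sorted((g for g, s in face_scores.items() if s >= face_thresh),
                       key=lambda g: -face_scores[g])
    stage_two = sorted(reid_distances, key=lambda g: float(np.mean(reid_distances[g])))
    return stage_one + [g for g in stage_two if g not in stage_one]

print(two_stage_rank({"a": 0.9, "b": 0.4, "c": 0.7},
                     {"b": [0.8, 1.1], "d": [0.5, 0.6]}))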

Подробнее
03-02-2022 дата публикации

METHOD AND SYSTEM FOR INCORPORATING PHYSIOLOGICAL SELF-REGULATION CHALLENGE INTO GEOSPATIAL SCENARIO GAMES AND/OR SIMULATIONS

Номер: US20220032174A1
Принадлежит:

A method of providing physiological self-regulation challenges prior to participating in a series of activities or exercises at a series of predefined locations includes determining a physiological goal associated with each location. A sensing device measures a physiological state of a user, and a mobile communication device communicates to a user whether or not the user has achieved the physiological goal for the challenge. The level of difficulty of the physiological goal may be reduced if the user does not meet the goal. The physiological goal may comprise a brain state that is conducive to learning, and the sensing device may be configured to measure brain state values representing cognitive engagement. Upon achievement of each physiological goal, the participant is provided with a reward such as information concerning the current predefined location and/or information concerning the next predefined location. 1. A non-transitory computer-readable medium comprising computer-executable instructions that when executed by a processor , cause the processor to at least:provide a series of physiological self-regulation challenges for a participant, with each of the physiological self-regulation challenges separate but related to a respective one of a series of predefined activities for the participant, with the predefined activities each at a respective predefined spaced-apart location and related to a respective one of the physiological self-regulation skills;receive respective physiological self-regulation goals for the self-regulation challenges, wherein the physiological self-regulation goals comprise achieving by self-regulation a respective target brain state conducive to learning or problem solving, and wherein the target brain states are selected for enhanced performing of the respective predefined activities;receive a current location of the participant is within a range of one of the predefined spaced apart locations, determine the predefined activity ...

Подробнее
09-01-2020 дата публикации

MODEL TRAINING METHOD, APPARATUS, AND DEVICE, AND DATA SIMILARITY DETERMINING METHOD, APPARATUS, AND DEVICE

Номер: US20200012969A1
Автор: Jiang Nan, ZHAO Hongwei
Принадлежит:

A model training method includes: acquiring a plurality of user data pairs, wherein data fields of two sets of user data in each user data pair have an identical part; acquiring a user similarity corresponding to each user data pair, wherein the user similarity is a similarity between users corresponding to the two sets of user data in each user data pair; determining, according to the user similarity corresponding to each user data pair and the plurality of user data pairs, sample data for training a preset classification model; and training the classification model based on the sample data to obtain a similarity classification model. 1. A model training method , comprising:acquiring a plurality of user data pairs, wherein data fields of two sets of user data in each user data pair have an identical part;acquiring a user similarity corresponding to each user data pair, wherein the user similarity is a similarity between users corresponding to the two sets of user data in each user data pair;determining, according to the user similarity corresponding to each user data pair and the plurality of user data pairs, sample data for training a preset classification model; andtraining the classification model based on the sample data to obtain a similarity classification model.2. The method according to claim 1 , wherein the acquiring the user similarity corresponding to each user data pair comprises:acquiring biological features of users corresponding to a first user data pair, wherein the first user data pair is any user data pair in the plurality of user data pairs; anddetermining a user similarity corresponding to the first user data pair according to the biological features of the users corresponding to the first user data pair.3. The method according to claim 2 , wherein the biological features comprise a facial image feature; acquiring facial images of the users corresponding to the first user data pair; and', 'performing feature extraction on the facial images to ...

Подробнее
03-02-2022 дата публикации

PASSENGER HEALTH SCREENING AND MONITORING

Номер: US20220032956A1
Принадлежит:

Among other things, techniques are described for screening and monitoring the health of a vehicle user including receiving sensor data produced by a sensor at the vehicle, processing the sensor data to determine at least one health condition of the user of the vehicle, and in response to determining the at least one health condition, executing a vehicle function selected from a plurality of vehicle functions based on the at least one health condition. 1. A vehicle , comprising:a sensor configured to produce sensor data related to a user of the vehicle;a computer-readable media storing computer-executable instructions; and receiving the sensor data produced by the sensor;', 'processing the sensor data to determine at least one health condition of the user; and', 'in response to determining the at least one health condition, executing a vehicle function selected from a plurality of vehicle functions based on the at least one health condition., 'a processor communicatively coupled to the sensor and the computer-readable media, the processor configured to execute the computer executable instructions to perform operations including2. The vehicle of claim 1 , the operations including:processing the sensor data to identify data indicative of a cough by the user; andanalyzing the data indicative of the cough to determine the at least one health condition of the user.3. The vehicle of claim 1 , the operations including:processing the sensor data to identify data indicative of motion by the user; andanalyzing the data indicative of the motion to determine the at least one health condition of the user.4. The vehicle of claim 1 , the operations including:processing the sensor data to identify data indicative of a pathogen within the vehicle; andanalyzing the data indicative of the pathogen to determine the at least one health condition of the user.5. The vehicle of claim 1 , the operations including:processing the sensor data to determine a body temperature of the user; ...

Подробнее
11-01-2018 дата публикации

Modification of post-viewing parameters for digital images using image region or feature information

Номер: US20180013950A1
Принадлежит: Fotonation Ireland Ltd

A method of generating one or more new digital images using an original digitally-acquired image including a selected image feature includes identifying within a digital image acquisition device one or more groups of pixels that correspond to the selected image feature based on information from one or more preview images. A portion of the original image is selected that includes the one or more groups of pixels. The technique includes automatically generating values of pixels of one or more new images based on the selected portion in a manner which includes the selected image feature within the one or more new images.

Подробнее
17-01-2019 дата публикации

Systems and Methods for Virtual Facial Makeup Removal and Simulation, Fast Facial Detection and Landmark Tracking, Reduction in Input Video Lag and Shaking, and a Method for Recommending Makeup

Номер: US20190014884A1
Принадлежит:

The present disclosure provides systems and methods for virtual facial makeup simulation through virtual makeup removal and virtual makeup add-ons, virtual end effects and simulated textures. In one aspect, the present disclosure provides a method for virtually removing facial makeup, the method comprising providing a facial image of a user with makeups being applied thereto, locating facial landmarks from the facial image of the user in one or more regions, decomposing some regions into first channels which are fed to histogram matching to obtain a first image without makeup in that region and transferring other regions into color channels which are fed into histogram matching under different lighting conditions to obtain a second image without makeup in that region, and combining the images to form a resultant image with makeups removed in the facial regions. The disclosure also provides systems and methods for virtually generating output effects on an input image having a face, for creating dynamic texturing to a lip region of a facial image, for a virtual eye makeup add-on that may include multiple layers, a makeup recommendation system based on a trained neural network model, a method for providing a virtual makeup tutorial, a method for fast facial detection and landmark tracking which may also reduce lag associated with fast movement and to reduce shaking from lack of movement, a method of adjusting brightness and of calibrating a color and a method for advanced landmark location and feature detection using a Gaussian mixture model. 1. A method for virtually removing facial makeup , comprising:providing a facial image of a user with makeup applied thereto;locating facial landmarks from the facial image of the user, the facial landmarks including at least a first region and a second region different from the first region;decomposing the first region of the facial image into first channels;feeding the first channels of the first region into histogram matching ...
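The histogram-matching step at the heart of the makeup-removal pipeline can be tried with scikit-image; the channel-wise treatment and the uint8 RGB assumption below are mine, and a real reference no-makeup region would come from the lighting-conditioned statistics the abstract mentions.

import numpy as np
from skimage.exposure import match_histograms

def match_region_to_reference(region_with_makeup, reference_no_makeup):
    # Push each channel of the made-up region toward the color statistics of the reference region.
    matched = [match_histograms(region_with_makeup[..., c].astype(float),
                                reference_no_makeup[..., c].astype(float))
               for c in range(region_with_makeup.shape[-1])]
    return np.clip(np.stack(matched, axis=-1), 0, 255).astype(np.uint8)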

Подробнее
16-01-2020 дата публикации

Systems and Methods for Virtual Facial Makeup Removal and Simulation, Fast Facial Detection and Landmark Tracking, Reduction in Input Video Lag and Shaking, and a Method for Recommending Makeup

Номер: US20200015575A1
Принадлежит: Shiseido Americas Corporation

The present disclosure provides systems and methods for virtual facial makeup simulation through virtual makeup removal and virtual makeup add-ons, virtual end effects and simulated textures. In one aspect, the present disclosure provides a method for virtually removing facial makeup, the method comprising providing a facial image of a user with makeups being applied thereto, locating facial landmarks from the facial image of the user in one or more regions, decomposing some regions into first channels which are fed to histogram matching to obtain a first image without makeup in that region and transferring other regions into color channels which are fed into histogram matching under different lighting conditions to obtain a second image without makeup in that region, and combining the images to form a resultant image with makeups removed in the facial regions. The disclosure also provides systems and methods for virtually generating output effects on an input image having a face, for creating dynamic texturing to a lip region of a facial image, for a virtual eye makeup add-on that may include multiple layers, a makeup recommendation system based on a trained neural network model, a method for providing a virtual makeup tutorial, a method for fast facial detection and landmark tracking which may also reduce lag associated with fast movement and to reduce shaking from lack of movement, a method of adjusting brightness and of calibrating a color and a method for advanced landmark location and feature detection using a Gaussian mixture model. 166-. (canceled)67. A method of virtually providing an eye-makeup add-on effect to a facial image , comprising:(a) creating a template for at least one eye makeup feature of an eye, manually annotating landmark points on the template related to the eye makeup feature, and saving locations of the landmark points as a text file;(b) extracting landmarks of an eye region of a facial image using a landmarks detector for the image frame ...

Подробнее
21-01-2021 дата публикации

SYSTEMS AND METHODS FOR RECOMMENDATION OF MAKEUP EFFECTS BASED ON MAKEUP TRENDS AND FACIAL ANALYSIS

Номер: US20210015242A1
Автор: JHOU Wei-Cih
Принадлежит:

A computing device generates a collection of digital images depicting makeup effects representing makeup trends, analyzes the collection of digital images, and extracts target attributes. The computing device constructs a database of makeup recommendation entries comprising the collection of digital images and extracted attributes and receives a query request from a user comprising an image of the user's face. The computing device queries the database and obtains a first number of makeup recommendations. The computing device merges makeup recommendations among the first number of makeup recommendations to generate a second number of makeup recommendations and displays at least a portion of the second number of makeup recommendations and receiving a selection from the user. The computing device performs virtual application of a makeup effect corresponding to the selection. 1. A method implemented in a computing device , comprising:generating a collection of digital images depicting makeup effects representing makeup trends;analyzing the collection of digital images and extracting target attributes;constructing a database of makeup recommendation entries comprising the collection of digital images and extracted attributes;receiving a query request from a user comprising an image of the user's face;querying the database and obtaining a first number of makeup recommendations;merging makeup recommendations among the first number of makeup recommendations to generate a second number of makeup recommendations;displaying at least a portion of the second number of makeup recommendations and receiving a selection from the user; andperforming virtual application of a makeup effect corresponding to the selection.2. The method of claim 1 , further comprising filtering the second number of makeup recommendations to generate a third number of makeup recommendations claim 1 , wherein the at least the portion of the second number of makeup recommendations displayed comprises the ...

Подробнее
03-02-2022 дата публикации

Device, system, and method for performance monitoring and feedback for facial recognition systems

Номер: US20220036047A1
Принадлежит: Motorola Solutions Inc

Disclosed is a process for performance monitoring and feedback for facial recognition systems. A first image is received from a camera capture device at a first location for purposes of image matching. A highest match confidence score of the first image to a particular stored enrollment image is determined. One or more image or user characteristics associated with the first image or first user are identified. The identified image or user characteristics and highest match confidence score are added to a facial recognition monitoring and feedback model. Subsequently, a particular one of the stored image or user characteristics consistently associated with a below-threshold highest match confidence score is identified, and a notification is displayed or transmitted including an indication of an identified facial recognition low match pattern and identifying the particular one of the stored image characteristics or user characteristics.
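A toy version of the monitoring-and-feedback model: it simply accumulates the highest match confidence per image or user characteristic and reports characteristics whose average sits consistently below a threshold. The characteristic labels, the threshold and the minimum sample count are illustrative.

from collections import defaultdict

class MatchFeedbackModel:
    def __init__(self, threshold=0.75, min_samples=20):
        self.threshold = threshold
        self.min_samples = min_samples
        self.confidences = defaultdict(list)

    def add(self, characteristics, highest_confidence):
        # characteristics: e.g. ["low_light", "mask", "camera_3"] for one submitted image.
        for c in characteristics:
            self.confidences[c].append(highest_confidence)

    def low_match_patterns(self):
        # Characteristics consistently associated with below-threshold highest match scores.
        return [c for c, scores in self.confidences.items()
                if len(scores) >= self.min_samples and sum(scores) / len(scores) < self.threshold]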

Подробнее
03-02-2022 дата публикации

Method, device, and computer program product for model updating

Номер: US20220036129A1
Принадлежит: EMC IP Holding Co LLC

The present disclosure relates to a method, a device, and a computer program product for model updating. The method includes: acquiring a first image set and first annotation information, wherein the first annotation information indicates whether a corresponding image in the first image set includes a target object; updating a first version of an object verification model using the first image set and the first annotation information to obtain a second version, wherein the first version of the object verification model has been deployed to determine whether an input image includes the target object; determining the accuracy of the second version of the object verification model; and updating, if it is determined that the accuracy is lower than a preset accuracy threshold, the second version of the object verification model using a second image set and second annotation information to obtain a third version of the object verification model.

Подробнее
03-02-2022 дата публикации

CUSTOMER INFORMATION REGISTRATION APPARATUS

Номер: US20220036359A1
Автор: IGARASHI Makoto
Принадлежит: NEC Corporation

A customer information registration apparatus includes: a matching unit configured to match face data of a customer based on image data acquired by a camera in a shop against face data stored in a storage unit; a storing unit configured to store face data which is not stored in the storage unit into the storage unit in a case where the matching by the matching unit fails; a behavior information acquisition unit configured to acquire behavior information according to a behavior in the shop of the customer; a condition determination unit configured to determine whether or not to delete the face data stored in the storage unit based on the behavior information acquired by the behavior information acquisition unit; and a deletion unit configured to delete the face data stored in the storage unit based on a result of the determination by the condition determination unit are provided. 1. A customer information registration apparatus comprising:at least one memory configured to store instructions; andat least one hardware processor configured to execute the instructions to:match face data of a customer based on image data acquired by a camera in a shop against face data stored in a storage unit;store face data which is not stored in the storage unit into the storage unit in a case where the matching fails;acquire behavior information according to a behavior in the shop of the customer;determine whether or not to delete the face data stored in the storage unit based on the acquired behavior information; anddelete the face data stored in the storage unit based on a result of the determination.2. The customer information registration apparatus according to claim 1 , wherein the at least one hardware processor is configured to execute the instructions to determine whether or not to delete the face data stored in the storage unit based on whether or not it is determined from the behavior information that the customer intends to purchase a product or the customer has considered ...

Подробнее
21-01-2016 дата публикации

Computer-Implemented System And Method For Personality Analysis Based On Social Network Images

Номер: US20160019411A1
Принадлежит:

A computer-implemented system and method for personality analysis based on social network images are provided. A plurality of images posted to one or more social networking sites by a member of these sites are accessed. An analysis of the images is performed. Personality of the member is evaluated based on the analysis of the images.

Подробнее
21-01-2016 дата публикации

AUTOMATED OBSCURITY FOR PERVASIVE IMAGING

Номер: US20160019415A1
Принадлежит:

Methods for obfuscating an image of a subject in a captured media are disclosed. For example, a method receives a communication from an endpoint device of a subject indicating that the image of the subject is to be obfuscated in a captured media. The communication may include a feature set associated with the subject, where the feature set contains facial features of the subject and motion information associated with the subject. The method then detects the image of the subject in the captured media. For example, the image of the subject is detected by matching the facial features of the subject to the image of the subject in the captured media and matching the motion information associated with the subject to a trajectory of the image of the subject in the captured media. The method then obfuscates the image of the subject in the captured media. 1. A method for obfuscating an image of a subject in a captured media , comprising:receiving, by a processor, a communication from an endpoint device of the subject indicating that the image of the subject is to be obfuscated in the captured media, wherein the communication includes a feature set associated with the subject, wherein the feature set comprises facial features of the subject and motion information associated with the subject; matching the facial features of the subject to the image of the subject in the captured media; and', 'matching the motion information associated with the subject to a trajectory of the image of the subject in the captured media; and, 'detecting, by the processor, the image of the subject in the captured media, wherein the image of the subject is detected byobfuscating, by the processor, the image of the subject in the captured media when the image of the subject is detected in the captured media.2. The method of claim 1 , wherein the receiving comprises receiving the feature set as a set of quantized vectors.3. The method of claim 2 , wherein the receiving comprises receiving the motion ...

Подробнее
21-01-2016 дата публикации

IMAGE SEARCH APPARATUS, METHOD OF CONTROLLING OPERATION OF SAME, AND IMAGE SEARCH SERVER

Номер: US20160019416A1
Автор: NOGUCHI Yukinori
Принадлежит:

An image search apparatus includes a display control device; a feature quantity calculation device; a scoring device for scoring each image based upon the values of the feature quantities calculated by the feature quantity calculation device; a first scoring control device which, responsive to application of a first move command that moves an image being displayed in the candidate area to a search result area, controls the scoring device to raise the value of the feature quantities corresponding to the feature quantities of the image for which the first move command has been applied and to score the multiplicity of images based upon the raised values of the feature quantities; and an image placement decision device for deciding image placement in such a manner that a predetermined number of images having high scores are displayed in the search result area and the other images are displayed in the candidate area.
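One way the "raise the feature values and re-score" behaviour could look in code, assuming per-image feature vectors and a per-feature weight vector (both of which the abstract only implies); the median test and the boost factor are arbitrary choices for the sketch.

import numpy as np

def rescore_after_move(feature_matrix, weights, moved_index, boost=1.2):
    # feature_matrix: (num_images, num_features); weights: (num_features,).
    moved = feature_matrix[moved_index]
    strong = moved > np.median(feature_matrix, axis=0)   # features the moved image is strong in
    new_weights = np.where(strong, weights * boost, weights)
    scores = feature_matrix @ new_weights
    ranking = np.argsort(-scores)                        # top scores go to the search result area
    return new_weights, scores, ranking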

Подробнее
19-01-2017 дата публикации

RE-WANDERING ALARM SYSTEM AND METHOD

Номер: US20170018091A1
Принадлежит: HANWHA TECHWIN CO., LTD.

Provided are a re-wandering detecting device and method. The method includes: detecting an object and positional information about the object from an input image; determining whether the object wanders based on the positional information about the object; in response to determining that the object wanders, determining whether the object re-wanders by determining whether a database stores information about an object identical to the object detected from the input image; and providing information about wandering of the object according to whether the object re-wanders. 1. A method of detecting re-wandering , the method comprising:detecting, by at least one processor, an object and positional information about the object from an input image;determining, by the processor, whether the object wanders based on the positional information about the object;in response to determining that the object wanders, determining whether the object re-wanders by determining whether a database stores information about an object identical to the object detected from the input image; andproviding, by the processor, information about wandering of the object according to whether the object re-wanders.2. The method of claim 1 , wherein the input image is a motion picture comprising a plurality of frames claim 1 ,wherein the detecting the object comprises detecting the object and positional information about the object from each of the plurality of frames, andwherein, in the determining whether the object wanders, the object is determined as a wandering object if the object satisfies a preset wandering condition.3. The method of claim 2 , wherein the wandering condition is that a number of consecutive frames from which the object is detected is equal to or greater than a preset critical number.4. The method of claim 2 , wherein the wandering condition is that at least one closed curve is formed by a path of the object calculated using the positional information about the object.5. The method ...

Подробнее
03-02-2022 дата публикации

ESTIMATION DEVICE, ESTIMATION METHOD, AND STORAGE MEDIUM

Номер: US20220036581A1
Автор: MORISHITA Yusuke
Принадлежит: NEC Corporation

An estimation device according to one aspect of the present disclosure includes: at least one memory storing a set of instructions; and at least one processor configured to execute the set of instructions to: generate a plurality of extraction regions by adding a perturbation to an extraction region of a partial image determined based on positions of feature points extracted from a face image; estimate a plurality of directions of at least one of a face and a line of sight and a reliability of each of the plurality of directions based on a plurality of partial images in the plurality of extraction regions of the face image; and calculate an integrated direction obtained by integrating the plurality of directions based on the estimated reliability. 1. An estimation device comprising:at least one memory storing a set of instructions; andat least one processor configured to execute the set of instructions to:generate a plurality of extraction regions by adding a perturbation to an extraction region of a partial image determined based on positions of feature points extracted from a face image;estimate a plurality of directions of at least one of a face and a line of sight and a reliability of each of the plurality of directions based on a plurality of partial images in the plurality of extraction regions of the face image; andcalculate an integrated direction obtained by integrating the plurality of directions based on the estimated reliability.2. The estimation device according to claim 1 , whereinthe at least one processor is configured to execute the set of instructions todetermine, based on the positions of the feature points, the perturbation added to the extraction region determined based on the positions of the feature points.3. The estimation device according to claim 1 , whereinthe at least one processor is configured to execute the set of instructions toextract a face region that is a region of the face from the face image;extract the feature points from the ...
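The integration step can be illustrated with a reliability-weighted circular mean over the per-perturbation direction estimates; treating "integrating ... based on the estimated reliability" as a weighted average is my reading, and the angles below are yaw values in degrees.

import numpy as np

def integrate_directions(directions_deg, reliabilities):
    # Weight each estimated direction by its reliability; use a circular mean to avoid wrap-around.
    d = np.radians(np.asarray(directions_deg, dtype=float))
    w = np.asarray(reliabilities, dtype=float)
    w = w / w.sum()
    return float(np.degrees(np.arctan2((w * np.sin(d)).sum(), (w * np.cos(d)).sum())))

# Four perturbed extraction regions; the unreliable outlier barely moves the result.
print(integrate_directions([10.0, 14.0, 11.0, 40.0], [0.9, 0.8, 0.85, 0.1]))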

Подробнее
18-01-2018 дата публикации

METHOD, TERMINAL, AND STORAGE MEDIUM FOR TRACKING FACIAL CRITICAL AREA

Номер: US20180018503A1
Автор: WANG Chengjie
Принадлежит:

Method, terminal, and storage medium for tracking facial critical area are provided. The method includes accessing a frame of image in a video file; obtaining coordinate frame data of a facial part in the image; determining initial coordinate frame data of a critical area in the facial part according to the coordinate frame data of the facial part; obtaining coordinate frame data of the critical area according to the initial coordinate frame data of the critical area in the facial part; accessing an adjacent next frame of image in the video file; obtaining initial coordinate frame data of the critical area in the facial part for the adjacent next frame of image by using the coordinate frame data of the critical area in the frame; and obtaining coordinate frame data of the critical area for the adjacent next frame of image according to the initial coordinate frame data thereof. 1. A facial critical area tracking method , comprising:accessing a frame of image in a video file;obtaining coordinate frame data of a facial part in the image by detecting a position of the facial part in the frame of the image;determining initial coordinate frame data of a critical area in the facial part according to the coordinate frame data of the facial part;obtaining coordinate frame data of the critical area according to the initial coordinate frame data of the critical area in the facial part;accessing an adjacent next frame of image in the video file;obtaining initial coordinate frame data of the critical area in the facial part for the adjacent next frame of image by using the coordinate frame data of the critical area in the frame; andobtaining coordinate frame data of the critical area for the adjacent next frame of image according to the initial coordinate frame data of the critical area in the adjacent next frame of image.2. The method according to claim 1 , further including:obtaining coordinate frame data of each of a plurality of frames of image, wherein the plurality of ...

Подробнее
18-01-2018 дата публикации

METHOD FOR DETECTING SKIN REGION AND APPARATUS FOR DETECTING SKIN REGION

Номер: US20180018505A1
Автор: TAN Guofu
Принадлежит:

A method for identifying skin region in an image includes: obtaining a target image; and for each of a plurality of pixels in the target image, calculating a probability that the pixel corresponds to skin captured in the image, wherein calculating the probability includes: calculating a first probability of said each pixel in the target image in a first color space the first probability being a probability that the pixel is skin in the first color space; calculating a second probability of said each pixel in the target image in a second color space, the second probability being a probability that the pixel is skin in the second color space; and determining a combined probability that said each pixel in the target image is skin, the combined probability that the pixel is skin being an arithmetic mean value of the first probability and the second probability of the pixel. 1. A method for detecting a skin region in an image , comprising:obtaining a target image; and calculating a first probability of said each pixel in the target image in a first color space the first probability being a probability that the pixel is skin in the first color space;', 'calculating a second probability of said each pixel in the target image in a second color space that is distinct from the first color space, the second probability being a probability that the pixel is skin in the second color space; and', 'determining a combined probability that said each pixel in the target image is skin, the combined probability that the pixel is skin being an arithmetic mean value of the first probability and the second probability of the pixel., 'for each of a plurality of pixels in the target image, calculating a probability that the pixel corresponds to skin captured in the image, wherein calculating the probability includes2. The method of claim 1 , wherein the first color space is an RGB color space and the second color space is an YUV color space.3. The method of claim 2 , wherein calculating the ...
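The combination rule itself is explicit (an arithmetic mean of the two per-color-space probabilities); the per-space skin models below are simple stand-ins rather than the claimed ones, so only the final averaging step should be read as the described method.

import cv2
import numpy as np

def skin_prob_rgb(img_bgr):
    # Stand-in RGB score: fraction of simple skin heuristics each pixel satisfies.
    b, g, r = (img_bgr[..., i].astype(np.float32) for i in range(3))
    rules = [r > 95, g > 40, b > 20, r > g, r > b, (r - g) > 15]
    return np.mean(np.stack(rules), axis=0)

def skin_prob_yuv(img_bgr):
    # Stand-in YUV-family score: closeness of Cr/Cb to a nominal skin centre.
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    d = np.hypot((ycrcb[..., 1] - 150.0) / 20.0, (ycrcb[..., 2] - 110.0) / 20.0)
    return np.clip(1.0 - d / 2.0, 0.0, 1.0)

def combined_skin_probability(img_bgr):
    # The claimed combination: arithmetic mean of the first and second probabilities per pixel.
    return 0.5 * (skin_prob_rgb(img_bgr) + skin_prob_yuv(img_bgr))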

Подробнее
18-01-2018 дата публикации

IMAGE SEARCH SERVER, IMAGE SEARCH APPARATUS, AND METHOD OF CONTROLLING OPERATION OF SAME

Номер: US20180018506A1
Автор: NOGUCHI Yukinori
Принадлежит:

An image search server constituting an image search system having a client computer and the image search server includes at least one hardware processor configured to calculate, with regard to each image of a multiplicity of images, the values of feature quantities representing characteristics of the image, score the image based upon the calculated values of the feature quantities, and responsive to application of a first move command which moves one image among a plurality of images displayed in a candidate area, which has been formed on a display screen of said client computer, to a search result area, raise the value of feature quantities, which correspond to the feature quantities of the one image for which the first move command has been applied, and score said multiplicity of images based upon the raised values of the feature quantities. 1. An image search server constituting an image search system having a client computer and the image search server , comprising:at least one hardware processor configured tocalculate, with regard to each image of a multiplicity of images, the values of feature quantities representing characteristics of the image,score the image based upon the calculated values of the feature quantities, andresponsive to application of a first move command which moves one image among a plurality of images displayed in a candidate area, which has been formed on a display screen of said client computer, to a search result area, raise the value of feature quantities, which correspond to the feature quantities of the one image for which the first move command has been applied, and score said multiplicity of images based upon the raised values of the feature quantities.2. The image search server according to claim 1 ,responsive to application of a second move command which moves one image among a plurality of images being displayed in the candidate area, which has been formed on the display screen of said client computer, or one image among a ...

More
18-01-2018 publication date

METHOD AND APPARATUS FOR GENERATING PERSONALIZED 3D FACE MODEL

Number: US20180018819A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

A method of generating a three-dimensional (3D) face model includes extracting feature points of a face from input images comprising a first face image and a second face image; deforming a generic 3D face model to a personalized 3D face model based on the feature points; projecting the personalized 3D face model to each of the first face image and the second face image; and refining the personalized 3D face model based on a difference in texture patterns between the first face image to which the personalized 3D face model is projected and the second face image to which the personalized 3D face model is projected.

1. A method of generating a three-dimensional (3D) face model, the method comprising: projecting a personalized 3D face model to each of images associated with a user face; and refining the personalized 3D face model based on the images to which the personalized 3D face model is projected, wherein the refining comprises: extracting a correspondence point between the images to which the personalized 3D face model is projected; and refining a shape of the personalized 3D face model such that a similarity between texture patterns of the images to which the personalized 3D face model is projected increases in a peripheral area of the correspondence point.

2. The method of claim 1, wherein the refining comprises: generating a comparison between the texture patterns of the images to which the personalized 3D face model is projected in the peripheral area of the correspondence point; and refining the shape of the personalized 3D face model based on the comparison.

3. The method of claim 2, wherein the refining includes, iteratively, determining if a first condition is satisfied, and refining the shape of the personalized 3D face model based on the comparison in texture patterns, until the first condition is satisfied.

4. The method of claim 1, wherein the refining comprises: refining the shape of the personalized 3D face model such that a ...
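The iterative refinement in claims 1-3 (adjust the shape so that texture patches sampled around a correspondence point in the two projected images agree, and stop once an agreement condition holds) can be caricatured with a random-search loop. The patch-sampling stub, the similarity measure, the step size, and the stopping threshold are all assumptions; real morphable-model fitting and projection are not reproduced here.

```python
import math
import random

def texture_similarity(patch_a, patch_b):
    """Negative mean absolute difference of two texture patches (higher is more similar)."""
    return -sum(abs(a - b) for a, b in zip(patch_a, patch_b)) / len(patch_a)

def sample_patch(shape_params, camera_offset):
    """Stand-in for 'project the current 3D shape into one input image and sample the
    texture around the correspondence point'; a wrong shape makes the two views disagree."""
    return [math.sin(p + camera_offset) for p in shape_params]

def refine_shape(shape_params, n_iters=200, step=0.05, target=-0.01):
    """Keep random perturbations that raise the similarity of the two projected patches;
    stop once the agreement condition (the 'first condition') is satisfied."""
    best = list(shape_params)
    best_sim = texture_similarity(sample_patch(best, 0.0), sample_patch(best, 0.3))
    for _ in range(n_iters):
        candidate = [p + random.uniform(-step, step) for p in best]
        sim = texture_similarity(sample_patch(candidate, 0.0), sample_patch(candidate, 0.3))
        if sim > best_sim:
            best, best_sim = candidate, sim
        if best_sim >= target:
            break
    return best, best_sim

print(refine_shape([0.2, 0.4, 0.1]))
```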

More
18-01-2018 publication date

NEURAL NETWORK FOR RECOGNITION OF SIGNALS IN MULTIPLE SENSORY DOMAINS

Number: US20180018970A1
Assignee:

Apparatus and method for training a neural network for signal recognition in multiple sensory domains, such as audio and video domains, are provided. For example, an identity of a speaker in a video clip may be identified based on audio and video features extracted from the video clip and comparisons of the extracted audio and video features to stored audio and video features with their associated labels obtained from one or more training video clips. In another example, a direction of sound propagation or a location of a sound source in a video clip may be determined based on the audio and video features extracted from the video clip and comparisons of the extracted audio and video features to stored audio and video features with their associated direction or location labels obtained from one or more training video clips.

1. A method of determining an identity of a speaker, comprising: extracting a first audio feature from a first audio content of a first video clip that includes a prescribed utterance of a first speaker who is identified by a speaker identifier; extracting a first video feature from a first video content of the first video clip that includes an image of the first speaker; obtaining an authentication signature based on the first audio feature and the first video feature; extracting a second audio feature from a second audio content of a second video clip that includes an utterance of a second speaker who is not pre-identified; extracting a second video feature from a second video content of the second video clip that includes an image of the second speaker; obtaining a signature of the second speaker based on the second audio feature and the second video feature; and determining whether the second speaker in the second video clip is the same as the first speaker in the first video clip based on a comparison between the signature of the second speaker and the authentication signature.

2. The method of claim 1, further comprising time-aligning the first ...
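A compact sketch of the verification step in claim 1: fuse an audio feature vector and a video feature vector into a single signature, then compare the enrolled (authentication) signature with the probe signature. Concatenation, cosine similarity, and the 0.8 threshold are assumptions; the application leaves the fusion and comparison to a trained network, which is not reproduced here.

```python
import math

def signature(audio_features, video_features):
    """Fuse audio and video feature vectors into one signature (here, by concatenation)."""
    return audio_features + video_features

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def same_speaker(auth_signature, probe_signature, threshold=0.8):
    """Decide whether the probe clip shows the enrolled speaker."""
    return cosine_similarity(auth_signature, probe_signature) >= threshold

# Enrollment clip (known speaker) and probe clip (unknown speaker), illustrative vectors.
auth = signature([0.90, 0.10, 0.40], [0.20, 0.80, 0.50])
probe = signature([0.85, 0.15, 0.38], [0.22, 0.79, 0.52])
print(same_speaker(auth, probe))  # True: the signatures are nearly parallel
```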

More
16-01-2020 publication date

METHOD AND APPARATUS FOR PASSENGER RECOGNITION AND BOARDING SUPPORT OF AUTONOMOUS VEHICLE

Number: US20200019761A1
Assignee:

Disclosed is an apparatus for passenger recognition and boarding/alighting support of an autonomous vehicle. An apparatus for passenger recognition and boarding/alighting support of an autonomous vehicle according to an embodiment of the present disclosure may include a vehicle communicator configured to receive scheduled passenger information, a sensor configured to sense people outside the vehicle, and a vehicle controller configured to extract a passenger candidate group by analyzing the sensed people outside the vehicle and calculate the number of reserved passengers using the extracted passenger candidate group and the received scheduled passenger information. One or more of an autonomous vehicle, a server, and a terminal of the present disclosure may be associated or combined with an artificial intelligence module, a drone (Unmanned Aerial Vehicle, UAV), a robot, an AR (Augmented Reality) device, a VR (Virtual Reality) device, a device associated with 5G network services, etc.

1. An apparatus for passenger recognition and boarding/alighting support of an autonomous vehicle, the apparatus comprising: a vehicle communicator configured to receive scheduled passenger information about a scheduled passenger from a user terminal or a stop terminal; a sensor configured to sense people within a predetermined area outside a vehicle on the basis of an appointed stop place; and a vehicle controller configured to extract a passenger candidate group by analyzing the people outside the vehicle sensed by the sensor and calculate the number of reserved passengers using the extracted passenger candidate group and the received scheduled passenger information.

2. The apparatus of claim 1, wherein the vehicle controller extracts the passenger candidate group by analyzing at least one of faces, motions, and movement directions of the people sensed in the predetermined area outside the vehicle.

3. The apparatus of claim 2, wherein the vehicle controller compares ...
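As an illustration of the controller logic, the sketch below filters sensed people into a passenger candidate group (here, simply those moving toward the stop) and counts how many candidates match the scheduled passenger list. The data layout and the matching rule are assumptions made for the example.

```python
# People sensed near the stop: (person_id, moving_toward_stop, matched_reservation_id)
sensed_people = [
    ("p1", True, "R-100"),
    ("p2", True, None),      # a candidate, but no reservation on file
    ("p3", False, "R-200"),  # walking away, not a boarding candidate
]
scheduled_passengers = {"R-100", "R-200", "R-300"}

def passenger_candidate_group(people):
    """Candidates are the sensed people whose movement suggests they intend to board."""
    return [person for person in people if person[1]]

def reserved_passenger_count(candidates, scheduled):
    """Count candidates whose identity matches a scheduled (reserved) passenger."""
    return sum(1 for _, _, reservation in candidates if reservation in scheduled)

candidates = passenger_candidate_group(sensed_people)
print(len(candidates), reserved_passenger_count(candidates, scheduled_passengers))  # 2 1
```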

More
17-04-2014 publication date

Tracking apparatus

Number: US20140105454A1
Author: Hisashi Yoneyama
Assignee: Olympus Imaging Corp

A tracking apparatus includes a grouping setting unit, a tracking feature detection unit, and a tracking unit. The grouping setting unit groups a plurality of focus detection areas that are in an in-focus state. The tracking feature detection unit detects a feature amount of the tracking target in the areas of the groups formed by the grouping setting unit. The tracking unit tracks the tracking target in accordance with a first or a second tracking position, depending on the number of the set groups.
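The abstract does not say what the first and second tracking positions are; one plausible reading, used purely for illustration, is that a single in-focus group lets the tracker follow that group's centroid, while several scattered groups make it fall back to a position found by the feature (for example, color) search.

```python
def choose_tracking_position(groups, feature_position):
    """Pick a tracking position depending on how many in-focus groups were formed.
    The mapping of 'first' and 'second' tracking positions here is an assumption."""
    if len(groups) == 1:
        xs, ys = zip(*groups[0])
        return (sum(xs) / len(xs), sum(ys) / len(ys))  # centroid of the single group
    return feature_position                            # fall back to the feature-search result

print(choose_tracking_position([[(10, 10), (12, 11)]], (50, 50)))    # (11.0, 10.5)
print(choose_tracking_position([[(10, 10)], [(80, 80)]], (50, 50)))  # (50, 50)
```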

More
16-01-2020 publication date

AUGMENTED REALITY PLANNING AND VIEWING OF DENTAL TREATMENT OUTCOMES

Number: US20200020170A1
Assignee:

In an embodiment, a processing device receives image data of a face from an image capture device associated with an augmented reality (AR) display. The processing device processes the image data to a) identify a mouth in the image data, b) identify a dental arch in the mouth, and c) determine a position of the dental arch relative to a position of the AR display. The processing device determines a treatment outcome for the dental arch, generates a visual overlay comprising an indication of the treatment outcome at the determined position of the dental arch, and outputs the visual overlay to the AR display, wherein the visual overlay is superimposed over a view of the dental arch on the AR display.

1. A method comprising: receiving image data of a face from an image capture device associated with an augmented reality (AR) display; processing, by a processing device, the image data to a) identify a mouth in the image data, b) identify a dental arch in the mouth, and c) determine a position of the dental arch relative to a position of the AR display; determining, by the processing device, a treatment outcome for the dental arch; generating, by the processing device, a visual overlay comprising an indication of the treatment outcome at the determined position of the dental arch; and outputting the visual overlay to the AR display, wherein the visual overlay is superimposed over a view of the dental arch on the AR display.

2. The method of claim 1, further comprising: tracking the position of the dental arch, a shape of the mouth, and exposed portions of the dental arch; and updating the visual overlay in response to an update to at least one of the position of the dental arch, the shape of the mouth or the exposed portions of the dental arch.

3. The method of claim 2, wherein tracking the position of the dental arch comprises: determining an offset vector from the AR display to the position of the dental arch; identifying a change in the position of the AR display; and updating ...
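Claim 3's tracking step reduces to simple vector arithmetic: keep an offset vector from the AR display to the dental arch and re-derive the arch position whenever the display moves. The coordinates below are made up, and treating the offset as unchanged between frames is an assumption of the sketch.

```python
def subtract(a, b):
    return tuple(x - y for x, y in zip(a, b))

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

# Initial world-space positions in metres (illustrative values).
ar_display_pos = (0.0, 1.6, 0.0)
dental_arch_pos = (0.1, 1.5, 0.4)

# Offset vector from the AR display to the dental arch (claim 3, first step).
offset = subtract(dental_arch_pos, ar_display_pos)

# The wearer moves the display; re-anchor the overlay using the stored offset.
new_display_pos = (0.3, 1.6, 0.1)
predicted_arch_pos = add(new_display_pos, offset)
print(offset, predicted_arch_pos)
```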

More
21-01-2021 publication date

Capturing Audio Impulse Responses of a Person with a Smartphone

Number: US20210021946A1
Assignee:

A portable electronic device (PED) divides an area around a user into a three-dimensional (3D) zone. A wearable electronic device worn on a head of the user displays the 3D zone in response to the wearable electronic device detecting that the user is leaving the zone. The wearable electronic device plays binaural sound that emanates to the user from sound localization points (SLPs) inside the zone.

1.-20. (canceled)

21. A method comprising: dividing, with a portable electronic device (PED) held in a hand of a user, an area around the user into a zone that extends around the user and that includes a sound localization point (SLP) from where binaural sound in empty space originates to the user; playing, with speakers in a wearable electronic device (WED) worn on a head of the user, the binaural sound that originates from the SLP in empty space; tracking, with one or more sensors in the WED worn on the head of the user, the PED to determine when the PED held in the hand of the user is moving outside the zone that extends around the user; and displaying, with the WED worn on the head of the user, a virtual reality (VR) image that shows the zone in response to the WED determining that the PED is moving outside the zone.

22. The method of further comprising: highlighting, while the user is located inside the zone and with a display of the WED, a VR image at the SLP in the zone when the PED held in the hand of the user is pointed at the SLP in the zone; and playing, with the WED worn on the head of the user, the binaural sound that emanates from the SLP in response to the PED pointing at the SLP.

23. The method of further comprising: displaying, with the WED worn on the head of the user, the VR image that shows a size and a shape of the zone in response to the WED determining that the PED is moving outside the zone.

24. The method of further comprising: storing, in memory of the WED, a location of the zone, a size and a shape of the zone, and a location of the SLP in the zone from ...
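The trigger in claim 21 (detecting that the handheld device has moved outside the zone surrounding the user) is, in the simplest case, a point-in-volume test. A spherical zone is assumed below purely for illustration; the claims do not fix the zone's geometry.

```python
import math

def inside_spherical_zone(point, center, radius):
    """True if a 3D point lies within a sphere of the given radius around center."""
    return math.dist(point, center) <= radius

user_center = (0.0, 0.0, 0.0)
zone_radius = 1.5                 # metres, illustrative
ped_position = (1.2, 0.4, 0.9)    # current position of the handheld PED

if not inside_spherical_zone(ped_position, user_center, zone_radius):
    print("PED left the zone: display the zone image on the WED")
else:
    print("PED is still inside the zone")
```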

More
22-01-2015 publication date

Method And Terminal For Associating Information

Number: US20150026209A1
Author: Xiang Yueyun
Assignee:

A method and a terminal for associating information, which relate to the field of computer technologies, are disclosed. The method includes obtaining image information, extracting facial feature information from the image information, and determining whether facial feature information corresponding to the facial feature information in the image information exists in contact information. The image information is associated with the matched contact information when the corresponding facial feature information is matched. Whether the facial feature information extracted from the image information exists in facial feature information that is stored in advance is determined. A contact corresponding to the facial feature information that is stored in advance is associated with the image information when the facial feature information exists, so that automatic association between image information and contact information is implemented, which saves setting time for a user and improves user experience.

1. A method for associating information, comprising: obtaining image information; extracting facial feature information from the image information; determining whether facial feature information corresponding to the facial feature information in the image information exists in contact information; and associating the image information with the matched contact information when the corresponding facial feature information is matched.

2. The method according to claim 1, wherein the obtaining image information comprises obtaining locally stored image information, or obtaining image information input by an imaging device.

3. The method according to claim 1, wherein determining whether facial feature information corresponding to the facial feature information in the image information exists in contact information, and associating the image information with the matched contact information when the corresponding facial feature information is matched comprises: matching ...
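The association step amounts to comparing a facial feature vector extracted from the photo against the feature vectors stored with each contact and linking the photo to the closest match under a threshold. The feature vectors, the Euclidean metric, and the threshold below are illustrative assumptions.

```python
import math

# Facial feature vectors stored in advance with each contact (illustrative values).
contacts = {
    "Alice": [0.11, 0.80, 0.32],
    "Bob":   [0.90, 0.12, 0.55],
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def associate_image(image_features, contacts, threshold=0.3):
    """Return the contact whose stored facial features match the image, or None."""
    best_name, best_dist = None, float("inf")
    for name, features in contacts.items():
        dist = euclidean(image_features, features)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

photo_features = [0.12, 0.78, 0.30]               # extracted from the new photo
print(associate_image(photo_features, contacts))  # Alice
```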

More
26-01-2017 publication date

LIVENESS DETECTOR FOR FACE VERIFICATION

Number: US20170024608A1
Author: Kons Zvi
Assignee:

A method, product and system for implementing a liveness detector for face verification. One method comprises detecting a symmetry line of the face, and verifying that the subject moved the mouth by computing a score based on values of a pair of images along the symmetry lines, wherein the score is indicative of a difference in the shape of the mouth between the pair of images. Another method comprises verifying the identity of a subject based on facial recognition and voice recognition, the verifying comprising determining that there is mouth movement in an image sequence by detecting, in each image of the sequence, a symmetry line of the face, and verifying that the subject moved the mouth by computing a score based on a comparison of the symmetry lines of the face in different images of the set of images and comparing the score with a threshold.

1. A computer-implemented method comprising: obtaining a set of images of a face of a subject having a mouth, wherein the set of images are used for facial recognition of the subject; in at least two images of the set of images, automatically detecting a symmetry line of the face, wherein the symmetry line intersects at least a mouth region of the face; and automatically verifying that the subject moved the mouth, wherein said verifying comprises: computing a score based on values of a pair of images in the symmetry lines, wherein the score is indicative of a difference in the shape of the mouth between the pair of images; and determining the score is above a threshold.

2. The computer-implemented method of claim 1, wherein said computing the score is performed with respect to the mouth region.

3. The computer-implemented method of claim 1, wherein said automatically detecting a symmetry line is performed with respect to each image of the set of images; wherein the score is an aggregated score, wherein said computing the aggregated score comprises computing, for each pair of ...
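A toy version of the liveness check in claim 1: sample the pixel values along the facial symmetry line in the mouth region of two frames, score their difference, and accept the subject as live only if the score exceeds a threshold. The intensity profiles and the threshold are invented for the example.

```python
def mouth_movement_score(profile_a, profile_b):
    """Mean absolute difference between the symmetry-line intensity profiles of the
    mouth region in two frames; a larger score means the mouth changed shape."""
    return sum(abs(a - b) for a, b in zip(profile_a, profile_b)) / len(profile_a)

def subject_is_live(profile_a, profile_b, threshold=10.0):
    return mouth_movement_score(profile_a, profile_b) > threshold

# Symmetry-line pixel values in the mouth region: closed mouth vs. open mouth.
frame_closed = [120, 118, 115, 60, 58, 112, 119]
frame_open   = [121, 119, 90, 30, 25, 85, 118]
print(mouth_movement_score(frame_closed, frame_open))  # about 16.9
print(subject_is_live(frame_closed, frame_open))       # True
```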

More
28-01-2016 publication date

Apparatus for Automated Monitoring of Facial Images and a Process Therefor

Number: US20160026855A1

Provided herein is an apparatus for automated monitoring of facial images. The apparatus includes a cabinet having at least one video capturing device for continuously capturing video. The apparatus also includes means for analyzing frames to identify human facial images and for cropping detected facial images with date and time information, and at least one means for instantaneously transmitting the cropped facial images with date and time to at least one predefined storage unit operating in an unattended mode. The predefined storage unit is operatively connected to the cabinet. The apparatus can be configured for headless startup by means of built-in application software. A process for automated monitoring of facial images for surveillance purposes in a monitoring apparatus is also provided.
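A minimal sketch of the capture-detect-crop-store loop the apparatus performs, using OpenCV's stock Haar cascade as a stand-in for the face detector. The camera index, the frame limit, and writing crops to local files (in place of transmission to the predefined storage unit) are assumptions of the example.

```python
import datetime
import cv2  # pip install opencv-python

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
camera = cv2.VideoCapture(0)  # first attached video capturing device

for _ in range(300):          # process a bounded number of frames for the demo
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S_%f")
        # Crop the face and store it with date/time in the filename; a local write
        # stands in for the unattended transfer to the storage unit.
        cv2.imwrite(f"face_{stamp}.jpg", frame[y:y + h, x:x + w])
camera.release()
```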

More
26-01-2017 publication date

Image processing method

Number: US20170024626A1
Author: Yasushi Inaba
Assignee: Canon Imaging Systems Inc

An image processing method for a picture of a participant photographed at an event, such as a marathon race, increases the accuracy of race bib number recognition by performing image processing on a detected race bib area, and associates the recognized race bib number with a person included in the picture. The method detects a person in an input image, estimates the area in which a race bib is likely to exist based on the face position of the detected person, detects an area containing a race bib number within the estimated area, applies image processing to the detected area, performs character recognition of the race bib number on the processed image, and associates the recognition result with the input image.
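The bib-area estimation step (the bib normally hangs on the torso, below the detected face) can be approximated by a fixed geometric rule before the region is handed to character recognition. The proportions used below are assumptions for the example, not values from the application.

```python
def estimate_bib_area(face_box, image_size):
    """Estimate the race-bib search area from a detected face bounding box.
    face_box = (x, y, w, h) in pixels; the bib region is assumed to start just
    below the face, extend about three face-heights down, and span roughly
    three face-widths centred on the face."""
    x, y, w, h = face_box
    img_w, img_h = image_size
    bib_x = max(0, x - w)                 # widen by one face-width on each side
    bib_y = min(img_h, y + h)             # start just below the chin
    bib_w = min(img_w - bib_x, 3 * w)
    bib_h = min(img_h - bib_y, 3 * h)
    return (bib_x, bib_y, bib_w, bib_h)

print(estimate_bib_area((400, 200, 100, 120), (1920, 1080)))  # (300, 320, 300, 360)
```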

More