Total found: 410. Displayed: 100.
Publication date: 07-01-2021

ATTRIBUTE RECOGNITION SYSTEM, LEARNING SERVER AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM

Number: US20210004568A1
Authors: TSUCHIDA YASUHIRO, UNO Reo
Assignee: AWL, Inc.

An attribute recognition system has a person face detection circuitry to detect a suitable person or face for recognition of at least one attribute from persons or faces captured in frame images input from at least one camera to capture a given capture area, an identification information assignment circuitry to identify the persons or faces captured in the frame images having been subjected to the detection by the person face detection circuitry so as to assign an identification information to each identified person or face, and an attribute recognition circuitry to recognize the attribute of a person or face assigned with the identification information, only if the person or face is yet without being subjected to recognition of the attribute, and at the same time if the person or face has been detected by the person face detection circuitry as a suitable person or face for the recognition of the attribute. 1. An attribute recognition system comprising:a person face detection circuitry configured to detect a suitable person or face for recognition of at least one attribute from persons or faces captured in frame images input from at least one camera to capture a given capture area;an identification information assignment circuitry configured to identify the persons or faces captured in the frame images having been subjected to the detection by the person face detection circuitry so as to assign an identification information to each identified person or face; andan attribute recognition circuitry configured to recognize the at least one attribute of a person or face assigned with the identification information, only if the person or face is yet without being subjected to recognition of the at least one attribute, and at the same time if the person or face has been detected by the person face detection circuitry as a suitable person or face for the recognition of the at least one attribute.2. The attribute recognition system according to claim 1 ,wherein the person ...
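
A minimal Python sketch of the gating described in the abstract, assuming hypothetical detector, tracker and recognizer components in place of the claimed circuitry: attributes are computed at most once per assigned identification.

    # Sketch only: detector, tracker and recognizer are hypothetical stand-ins.
    def process_frame(frame, detector, tracker, recognizer, recognized):
        # recognized maps an assigned ID to its attributes, filled at most once per ID
        for face in detector.detect_suitable_faces(frame):    # faces deemed suitable for recognition
            face_id = tracker.assign_id(face)                  # identification information
            if face_id in recognized:                          # attribute already recognized for this ID
                continue
            recognized[face_id] = recognizer.recognize(face)   # run attribute recognition once
        return recognized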

Publication date: 07-01-2021

METHODS AND APPARATUS FOR MULTI-TASK RECOGNITION USING NEURAL NETWORKS

Number: US20210004572A1
Assignee:

Methods and apparatus for multi-task recognition using neural networks are disclosed. An example apparatus includes a filter engine to generate a facial identifier feature map based on image data, the facial identifier feature map to identify a face within the image data. The example apparatus also includes a sibling semantic engine to process the facial identifier feature map to generate an attribute feature map associated with a facial attribute. The example apparatus also includes a task loss engine to calculate a probability factor for the attribute, the probability factor identifying the facial attribute. The example apparatus also includes a report generator to generate a report indicative of a classification of the facial attribute. 1. An apparatus to perform multi-task recognition comprising:a filter engine to generate a facial identifier feature map based on image data, the facial identifier feature map to identify a face within the image data;a sibling semantic engine to process the facial identifier feature map to generate an attribute feature map associated with a facial attribute;a task loss engine to calculate a probability factor for the attribute, the probability factor identifying the facial attribute; anda report generator to generate a report indicative of a classification of the facial attribute.2. The apparatus of claim 1 , wherein the filter engine to generate the facial identifier map using a phase-convolution engine claim 1 , a phase-residual engine and an inception-residual engine.3. The apparatus of claim 1 , wherein the sibling semantic engine to generate the attribute feature map using at least one of a face semantic engine claim 1 , a local-part semantic engine or a hybrid coupled semantic engine.4. The apparatus of claim 3 , wherein the face semantic engine to convolve the facial identifier feature map to identify at least gender or age.5. The apparatus of claim 4 , wherein the face semantic engine to convolve the facial identifier map ...
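
A rough multi-task sketch, assuming PyTorch rather than the patent's actual implementation: a shared "filter" stage produces a facial-identifier feature map, per-attribute "sibling" heads derive attribute feature maps, and softmax yields the probability factors used for classification. The attribute names and layer sizes are placeholders.

    import torch
    import torch.nn as nn

    class MultiTaskFace(nn.Module):
        def __init__(self, attributes=(("gender", 2), ("age_group", 8))):
            super().__init__()
            self.filter = nn.Sequential(                       # shared feature extractor
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
            self.heads = nn.ModuleDict({                       # one sibling head per attribute
                name: nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n))
                for name, n in attributes})

        def forward(self, image):
            feature_map = self.filter(image)                   # facial identifier feature map
            return {name: torch.softmax(head(feature_map), dim=1)   # per-attribute probabilities
                    for name, head in self.heads.items()}

    # MultiTaskFace()(torch.rand(1, 3, 64, 64)) -> {"gender": (1, 2) probs, "age_group": (1, 8) probs}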

Publication date: 03-01-2019

COMPUTER SYSTEM, DIALOGUE CONTROL METHOD, AND COMPUTER

Number: US20190005311A1
Assignee:

A computer system that performs in dialogue with a user and provides a prescribed service, comprising: an imaging device; a computer; and a generation device generating dialogue content on a basis of an algorithm for generating dialogue content. The computer couples to a database that stores an authentication image used for an authentication process that uses an image. The computer calculates a distance between the user and the imaging device; executes an attribute estimation process in a case where the distance is larger than a threshold, selects the algorithm on the basis of results of the attribute estimation process, and issues a notification of the selected algorithm to the generation device. 1. A computer system that performs in dialogue with a user and provides a prescribed service , comprising:an imaging device being configured to obtain an image;a computer being configured to select an algorithm for generating dialogue content to be outputted to the user; anda generation device being configured to generate dialogue content on the basis of the algorithm,the computer having an arithmetic device, a storage device coupled to the arithmetic device, and an interface coupled to the arithmetic device, and coupling, through the interface, to a database that stores an authentication image used for an authentication process that uses an image obtained by the imaging device,the arithmetic device being configured to:calculate a distance between the user and the imaging device;execute an attribute estimation process that estimates an attribute that characterizes the user using the image obtained by the imaging device in a case where the distance is larger than a first threshold, and select the algorithm on the basis of results of the attribute estimation process;execute an authentication process that identifies the user on the basis of the image obtained by the imaging device and the database in a case where the distance is less than or equal to the first threshold, and ...
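
A minimal sketch of the distance-based branching in the abstract; the system facade and its method names are hypothetical, and the 2 m threshold is an arbitrary placeholder.

    def choose_dialogue_algorithm(system, image, threshold_m=2.0):
        distance = system.estimate_distance(image)          # distance between user and imaging device
        if distance > threshold_m:
            attributes = system.estimate_attributes(image)   # too far to authenticate: estimate attributes
            return system.algorithm_for_attributes(attributes)
        user = system.authenticate(image)                    # close enough: image-based authentication
        return system.algorithm_for_user(user)               # personalized dialogue-generation algorithm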

Publication date: 03-01-2019

SCENE AND ACTIVITY IDENTIFICATION IN VIDEO SUMMARY GENERATION

Number: US20190005333A1
Assignee:

Video and corresponding metadata is accessed. Events of interest within the video are identified based on the corresponding metadata, and best scenes are identified based on the identified events of interest. A video summary can be generated including one or more of the identified best scenes. The video summary can be generated using a video summary template with slots corresponding to video clips selected from among sets of candidate video clips. Best scenes can also be identified by receiving an indication of an event of interest within video from a user during the capture of the video. Metadata patterns representing activities identified within video clips can be identified within other videos, which can subsequently be associated with the identified activities. 1. A method for identifying video scenes , the method comprising:accessing a video of an activity, the activity including an event at a moment within the video;obtaining an identification of a type of the activity;obtaining an identification of a type of the event; the length is a first length based on the type of the activity being of a first activity type and the type of the event being of a first event type;', 'the length is a second length based on the type of the activity being of the first activity type and the type of the event being of a second event type;', 'the length is a third length based on the type of the activity being of a second activity type and the type of the event being of the first event type; and', 'the length is a fourth length based on the type of the activity being of the second activity type and the type of the event being of the second event type,', the first length is different from the second length, the third length, and the fourth length;', 'the second length is different from the third length and the fourth length; and', 'the third length is different from the fourth length; and, 'wherein], 'identifying a scene of the video for the event, the scene including a length of ...
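
The claim makes the scene length a function of the (activity type, event type) pair, with all four lengths distinct. A small lookup-table sketch; the type names and durations are illustrative, not taken from the patent.

    CLIP_LENGTH_S = {                        # seconds of video kept around the event
        ("surfing", "jump"): 8.0,            # first activity type, first event type
        ("surfing", "crash"): 5.0,           # first activity type, second event type
        ("biking", "jump"): 6.0,             # second activity type, first event type
        ("biking", "crash"): 4.0,            # second activity type, second event type
    }

    def scene_for_event(event_time_s, activity_type, event_type):
        length = CLIP_LENGTH_S[(activity_type, event_type)]
        return (event_time_s - length / 2, event_time_s + length / 2)   # clip centred on the event

    print(scene_for_event(42.0, "surfing", "jump"))    # -> (38.0, 46.0)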

Publication date: 03-01-2019

MOBILE TERMINAL AND METHOD FOR CONTROLLING SAME

Number: US20190005571A1
Assignee: LG ELECTRONICS INC.

The present invention relates to a mobile terminal and a method for controlling the same. The mobile terminal according to the present invention comprises: a display unit configured to display a preview image of a specific object in a camera photographing mode; and a controller configured to: enter a first mode for displaying a preview image focused on the specific object based on a preset initial first user input being applied, and displaying information related to the specific object based on a preset subsequent first user input being applied in the camera photographing mode, enter a second mode for capturing the preview image, and then storing the captured image and the information related to the specific object in a preset folder, and displaying a plurality of captured images stored in the folder and information. 120-. (canceled)21. A mobile terminal , comprising:a display unit configured to display a preview image of a specific object in a camera photographing mode; and enter a first mode for displaying a preview image focused on the specific object based on a preset initial first user input being applied, and displaying information related to the specific object based on a preset subsequent first user input being applied in the camera photographing mode,', 'enter a second mode for capturing the preview image, and then storing the captured image and the information related to the specific object in a preset folder, and displaying a plurality of captured images stored in the folder and information corresponding to each of the plurality of captured images based on a preset second user input being applied in the first mode, and', 'enter a third mode for capturing the preview image, and then displaying screen information for purchasing the specific object based on a preset third user input being applied in the first mode., 'a controller configured to22. The mobile terminal of claim 21 , wherein the controller enters the first mode based on a touch input being ...

Publication date: 04-01-2018

Systems and Methods for Assessing Viewer Engagement

Number: US20180007431A1
Assignee:

A system for quantifying viewer engagement with a video playing on a display includes at least one camera to acquire image data of a viewing area in front of the display. A microphone acquires audio data emitted by a speaker coupled to the display. The system also includes a memory to store processor-executable instructions and a processor. Upon execution of the processor-executable instructions, the processor receives the image data and the audio data and determines an identity of the video displayed on the display based on the audio data. The processor also estimates a first number of people present in the viewing area and a second number of people engaged with the video. The processor further quantifies the viewer engagement of the video based on the first number of people and the second number of people. 1. A method of quantifying viewer engagement with a video shown on a display , the method comprising:acquiring, with at least one camera, images of a viewing area in front of the display while the video is being shown on the display;acquiring, with a microphone, audio data representing a soundtrack of the video emitted by a speaker coupled to the display;determining, with a processor operably coupled to the at least one camera and the processor, an identity of the video based at least in part on the audio data;estimating, with the processor and based at least in part on the image data, a first number of people present in the viewing area while the video is being shown on the display and a second number of people engaged with the video in the viewing area; andtransmitting, by the processor, the identity of the video, the first number of people, and the second number of people to a remote server.2. The method of claim 1 , wherein acquiring the images comprises acquiring a first image of the viewing area using a visible camera and acquiring a second image of the viewing area using an infrared (IR) camera.3. The method of claim 2 , wherein estimating the first ...
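
The abstract quantifies engagement from two estimates: people present and people engaged. One simple reading is the engaged share of the audience; the ratio itself is an assumption, since the patent only states that both numbers are used.

    def viewer_engagement(num_present, num_engaged):
        if num_present == 0:
            return 0.0                        # empty viewing area: no engagement to report
        return num_engaged / num_present      # share of detected viewers engaged with the video

    print(round(viewer_engagement(3, 2), 2))  # 3 people in the viewing area, 2 engaged -> 0.67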

Publication date: 14-01-2021

FACE REENACTMENT

Number: US20210012090A1
Assignee:

Provided are systems and a method for photorealistic real-time face reenactment. An example method includes receiving a target video including a target face and a scenario including a series of source facial expressions, determining, based on the target face, one or more target facial expressions, and synthesizing, using the parametric face model, an output face. The output face includes the target face. The one or more target facial expressions are modified to imitate the source facial expressions. The method further includes generating, based on a deep neural network, a mouth region and an eyes region, and combining the output face, the mouth region, and the eyes region to generate a frame of an output video. 1. A method for face reenactment , the method comprising:receiving, by a computing device, a target video, the target video including at least one target frame, the at least one target frame including a target face;receiving, by the computing device, a scenario including a series of source facial expressions;determining, by the computing device and based on the target face in the at least one frame of the target video, one or more target facial expressions;synthesizing, by the computing device and using a parametric face model and a texture model, an output face, the output face including the target face, wherein the one or more target facial expressions are modified to imitate the source facial expressions of the series of source facial expressions;generating, by the computing device and based on a deep neural network (DNN) and at least one previous frame of the target video, a mouth region and an eyes region; andcombining, by the computing device, the output face, the mouth region, and the eyes region to generate a frame of an output video.2. The method of claim 1 , wherein the parametric face model depends on a facial expression claim 1 , a facial identity claim 1 , and a facial texture.3. The method of claim 1 , wherein the parametric face model includes ...

Publication date: 09-01-2020

HEIGHT CALCULATION SYSTEM, INFORMATION PROCESSING APPARATUS, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM

Number: US20200013180A1
Assignee: FUJI XEROX CO., LTD.

A height calculation system includes a capturing section that captures an image, a detection section that detects a specific body part of a human in the image captured by the capturing section, and a calculation section that calculates the height of the human based on the size of the part when the one part detected by the detection section overlaps with a specific area present in the image. 1. A height calculation system comprising:a capturing section that captures an image;a detection section that detects a specific body part of a human in the image captured by the capturing section; anda calculation section that calculates a height of the human based on a size of the part when the part detected by the detection section overlaps with a specific area present in the image.2. The height calculation system according to claim 1 , further comprising:an acquiring section that acquires part information related to the size of the part when the specific body part detected by the detection section overlaps with the specific area; anda determination section that determines a size to be associated with a predetermined height based on the part information acquired by the acquiring section,wherein the calculation section calculates the height of the person based on a relationship between the size of the part when the one part overlaps with the specific area and the size associated with the predetermined height.3. The height calculation system according to claim 2 , further comprising:an estimation section that estimates an attribute of the person related to the part information,wherein the determination section determines each size to be associated with a height determined for each attribute, andthe calculation section calculates the height of the person based on a relationship between the size of the part when the one part overlaps with the specific area and a size that is the closest to the size of the one part among the sizes associated with the height for each attribute.4. ...
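
A sketch of the proportional calculation implied by the claims: once a reference part size has been associated with a known height, a detected part overlapping the same image area is scaled accordingly. The function and the example numbers are illustrative assumptions.

    def estimate_height_cm(part_size_px, reference_size_px, reference_height_cm):
        # simple proportionality between the detected part size and the calibrated reference
        return reference_height_cm * part_size_px / reference_size_px

    print(round(estimate_height_cm(52, 48, 170)))   # a 52 px part where 48 px ~ 170 cm -> 184 cm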

Publication date: 21-01-2016

CONTENT PRESENTATION IN HEAD WORN COMPUTING

Number: US20160018640A1
Assignee:

Aspects of the present invention relate to providing assistance to medical professionals during the performance of medical procedures through the use of technologies facilitated through a head-worn computer. 1. A head-worn computer , comprising:a. A forward-facing camera adapted to capture images of an environment proximate the head-worn computer;b. A processor adapted to cause the camera to capture an image of a person's face, wherein the processor is further adapted to initiate a facial recognition process based on the image to verify that the person is a known medical patient with a scheduled known medical procedure; andc. The processor further adapted to display an indication that the person is the known person with the scheduled known medical procedure in a see-through display of the head-worn computer.2. The head-worn computer of claim 1 , wherein the facial recognition process is performed remote from the head-worn computer.3. The head-worn computer of claim 1 , wherein the facial recognition process is performed by the processor.4. The head-worn computer of claim 1 , further comprising an eye-imaging system adapted to verify an identity of a person wearing the head-worn computer to maintain confidential the displayed indication.5. The head-worn computer of claim 4 , wherein the person wearing the head-worn computer is verified as a medical professional. This application claims the benefit of priority to and is a continuation of U.S. non-provisional application Ser. No. 14/331,481, filed Jul. 15, 2014.The above application is hereby incorporated by reference in its entirety.1. Field of the InventionThis invention relates to head worn computing. More particularly, this invention relates to technologies used in connection with medical procedures with the assistance of head worn computing.2. Description of Related ArtWearable computing systems have been developed and are beginning to be commercialized. Many problems persist in the wearable computing field that ...

Publication date: 17-01-2019

INTEGRATED SYSTEM FOR DETECTION OF DRIVER CONDITION

Number: US20190019068A1
Assignee:

Methods, apparatus, and systems are provided for integrated driver expression recognition and vehicle interior environment classification to detect driver condition for safety. A method includes obtaining an image of a driver of a vehicle and an image of an interior environment of the vehicle. Using a machine learning method, the images are processed to classify a condition of the driver and of the interior environment of the vehicle. The machine learning method includes general convolutional neural network (CNN) and CNN with adaptive filters. The adaptive filters are determined based on influence of filters. The classification results are combined and compared with predetermined thresholds to determine if a decision can be made based on existing information. Additional information is requested by self-motivated learning if a decision cannot be made, and safety is determined based on the combined classification results. A warning is provided to the driver based on the safety determination. 1. A computer-implemented warning method , comprising:obtaining, by one or more processors, at least one image of a driver of a vehicle;obtaining, by the one or more processors, at least one image of an interior environment of the vehicle;classifying, by the one or more processors, a condition of the driver by machine learning method, based on information from the at least one image of the driver;classifying, but the one or more processors, a condition of the interior environment of the vehicle using the machine learning method, based on information from the at least one image of the interior environment of the vehicle;combining, by the one or more processors, classification results from the at least one image of the driver and classification results from the at least one image of the interior environment of the vehicle;comparing, by the one or more processors, the combined classification results with predefined thresholds to determine if a decision can be made based on existing ...
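
A sketch of the decision step: the driver-condition and cabin-condition classifications are combined and compared with thresholds, and more information is requested when no confident decision is possible. The weights and thresholds here are placeholders, not values from the patent.

    def assess_safety(driver_unsafe_conf, cabin_unsafe_conf,
                      unsafe_threshold=0.7, safe_threshold=0.3, w_driver=0.6, w_cabin=0.4):
        combined = w_driver * driver_unsafe_conf + w_cabin * cabin_unsafe_conf
        if combined >= unsafe_threshold:
            return "warn_driver"                # confident enough to warn the driver
        if combined <= safe_threshold:
            return "safe"                       # confident enough to stay silent
        return "request_more_information"       # undecided: trigger self-motivated learning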

Publication date: 16-01-2020

SYSTEM, APPARATUS AND METHOD FOR PROVIDING SERVICES BASED ON PREFERENCES

Number: US20200021886A1
Assignee: LG ELECTRONICS INC.

Disclosed is a preference-based service providing method for operating preference-based service providing system and device by executing an artificial intelligence (AI) algorithm and/or a machine learning algorithm in a 5G environment connected for the Internet of things. A preference-based service providing method according to an embodiment of the present disclosure may include acquiring user video information obtained by imaging a user who is using an electronic device, analyzing a preference of the user for a service provided by the electronic device on the basis of the user video information including a face image and a posture image of the user, setting a priority of the service provided by the electronic device on the basis of the preference of the user, and providing a recommendation list of services provided by the electronic device on the basis of priorities of the services. 1. A preference-based service providing method , comprising:Acquiring, by a controller, user video information obtained by imaging a user who is using an electronic device;analyzing a preference of the user for a service provided by the electronic device on a basis of the user video information comprising a face image and a posture image of the user;setting a priority of the service provided by the electronic device on a basis of the preference of the user; andproviding a recommendation list of services provided by the electronic device on a basis of priorities of the services.2. The preference-based service providing method of claim 1 , wherein the analyzing of the preference of the user comprises:extracting the face image and the posture image of the user from the user video information; andanalyzing the face image and the posture image of the user to extract user characteristic information comprising at least one of an emotions state, an attention, or an age and a gender.3. The preference-based service providing method of claim 2 , wherein the analyzing of the preference of the user ...

Publication date: 24-01-2019

BRAKE PREDICTION AND ENGAGEMENT

Number: US20190023208A1
Assignee: FORD GLOBAL TECHNOLOGIES, LLC

A computing device in a vehicle, programmed to predict a collision risk by comparing an acquired occupant facial expression to a plurality of stored occupant facial expressions, and, brake a vehicle based on the collision risk. The computing device can be programmed to predict a collision risk by determining a number of seconds until a negative event, including a collision, a near-miss, or vehicle miss-direction, is predicted to occur at a current vehicle trajectory. 1. A method , comprising:predicting a collision risk by comparing an acquired occupant facial expression to a plurality of previously acquired occupant facial expressions; andbraking a vehicle based on the collision risk.2. The method of claim 1 , further comprising predicting the collision risk by determining a number of seconds until a negative traffic event that includes a collision claim 1 , a near-miss claim 1 , or vehicle miss-direction is predicted to occur at a current vehicle trajectory.3. The method of claim 2 , wherein the current vehicle trajectory includes a speed claim 2 , a direction claim 2 , and steering torque.4. The method of claim 3 , further comprising acquiring the occupant facial expression by acquiring video data including an occupant's face and extracting features from the video data that represent the occupant facial expression.5. The method of claim 4 , further comprising extracting features from the video data including determining an occupant's gaze direction and comparing the occupant's gaze direction with a direction to the negative traffic event.6. The method of claim 5 , wherein comparing the occupant facial expression to the previously acquired occupant facial expressions includes processing the occupant facial expression with a machine learning program.7. The method of claim 6 , wherein the previously acquired facial expressions are associated with negative traffic events including their proximity in time to negative traffic events.8. The method of claim 7 , further ...

Publication date: 25-01-2018

Information output device, camera, information output system, information output method, and program

Number: US20180025175A1
Author: Akira Kato
Assignee: NEC Corp

An information output device includes: a first output unit that outputs acquired information acquired by a sensor; and a second output unit that converts personal information included in the acquired information into attribute information from which identification of an individual is impossible, and outputs the attribute information.

Publication date: 25-01-2018

VIDEO SENTIMENT ANALYSIS TOOL FOR VIDEO MESSAGING

Number: US20180025221A1
Assignee:

Embodiments of the invention provide a method, system and computer program product for video sentiment analysis in video messaging. In an embodiment of the invention, a method for video sentiment analysis in video messaging includes receiving different video contributions to a thread in a social system executing in memory of a computer and sensing from a plurality of the video contributions a contributor sentiment. Thereafter, a sentiment value for the different video contributions is computed and a sentiment value for a selected one of the video contributions is displayed in a user interface to the thread for an end user contributing a new video contribution to the thread. 1. A method for video sentiment analysis in video messaging , the method comprising:receiving different video contributions to a thread in a social system executing in memory of a computer;sensing from a plurality of the video contributions a contributor sentiment;computing a sentiment value for the sensed contributor sentiment of the different video contributions; and,displaying a sentiment value computed for a selected one of the video contributions in a user interface to the thread for an end user contributing a new video contribution to the thread.2. The method of claim 1 , wherein the sensing is facial recognition in order to determine an emotion of a contributor of a corresponding one of the video contributions.3. The method of claim 1 , wherein the sensing is a sound analysis in order to determine a tone of a contributor of a corresponding one of the video contributions.4. The method of claim 1 , wherein the sensing is a textual parsing of content of each of the video contributions in order to determine a level of jargon of a corresponding one of the video contributions.5. The method of claim 1 ,wherein the sensing is each of facial recognition in order to determine an emotion of a contributor of each of the video contributions, a sound analysis in order to determine a tone of a ...

Publication date: 23-01-2020

ADAPTIVE VIDEO ADVERTISING USING EAS PEDESTALS OR SIMILAR STRUCTURE

Number: US20200027135A1
Assignee: SENSORMATIC ELECTRONICS, LLC

An EAS pedestal system includes a video display device and a video camera supported on an EAS pedestal housing. The video camera is oriented to facilitate capture of video images. The video images include facial images associated with a plurality of human faces under conditions where the human faces are positioned to observe the video display device. The system applies an analytical process to determine when an imaged human face is looking toward the video display device. The system then determines one or more human demographic traits based on the facial images and adaptively selects advertising content to be displayed based on information derived from the analytical process. 1. An EAS pedestal system for directed advertising in a retail store environment , comprising:a pedestal housing in which at least one EAS sensor is provided;at least one video display device supported on the pedestal housing;at least one video camera supported on the pedestal housing in a location and orientation that facilitates capture of a plurality of video images including facial images associated with a plurality of human faces under conditions where the human faces are positioned to observe the video display device; anda computer processing system configured toapply an analytical process to the plurality of video images provided by the at least one video camera to identify instances of the one or more of the facial images where the human face contained therein is at least one of passing in front of or looking toward the at least one video display device,determine at least one human demographic trait data based on each of the facial images; andadaptively determine advertising content to be displayed on the video display device based on information derived from the analytical process.2. The EAS pedestal system of claim 1 , wherein the computer processing system is configured to use the human demographic trait data from the plurality of facial images to determine demographic statistical ...

Publication date: 02-02-2017

METHOD AND DEVICE FOR DETERMINING ASSOCIATED USER

Number: US20170032180A1
Assignee:

A method and device for determining an associated user are provided. The method includes: acquiring a face album including face sets of multiple users; determining a target user in the face album; selecting, from the face album, at least one associated-user candidate of the target user; acquiring attribute information of the at least one associated-user candidate, and determining an associated user of the target user according to the attribute information; and setting tag information for the associated user. 1. A method for a device to determine an associated user , comprising:acquiring a face album including face sets of multiple users;determining a target user in the face album;selecting, from the face album, at least one associated-user candidate of the target user;acquiring attribute information of the at least one associated-user candidate, and determining an associated user of the target user according to the attribute information; andsetting tag information for the associated user.2. The method according to claim 1 , wherein the selecting claim 1 , from the face album claim 1 , of the at least one associated-user candidate of the target user comprises:acquiring face-source photos of each user in the face album;comparing face-source photos of the target user with face-source photos of other users in the face album; anddetermining the at least one associated-user candidate based on a comparison result, wherein the at least one associated-user candidate has more than a preset number of same face-source photos as the target user has.3. The method according to claim 1 , wherein the acquiring of the attribute information of the at least one associated-user candidate and the determining an associated user of the target user according to the attribute information comprises:acquiring age and gender information of a first associated-user candidate;when the first associated-user candidate does not meet an age requirement, discarding the first associated-user candidate ...
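
Claim 2 selects associated-user candidates as people who share more than a preset number of face-source photos with the target user. A compact sketch, assuming a hypothetical face_album mapping from user ID to the set of photo IDs their faces came from:

    def associated_candidates(face_album, target_user, min_shared_photos=3):
        target_photos = face_album[target_user]
        return [user for user, photos in face_album.items()
                if user != target_user and len(photos & target_photos) > min_shared_photos]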

Publication date: 01-02-2018

FACIAL RECOGNITION ENCODE ANALYSIS

Number: US20180032795A1
Author: Aas Cecilia J.
Assignee:

A method for facial recognition encode analysis comprises providing a training set of Gabor encoded arrays of face images from a database; and, for each encode array in the training set, evaluating the Gabor data to determine the accuracy of the fiducial points on which the encode array is based. The method also comprises training an outlier detection algorithm based on the evaluation of the encode arrays to obtain a decision function for a strength of accuracy of fiducial points in the encode arrays; and outputting the decision function for application to an encode array to be tested. 1. A computer-implemented method for facial recognition encode analysis , comprising:providing a training set of Gabor encoded arrays of face images from a database;for each encode array in the training set, evaluating the Gabor data to determine the accuracy of the fiducial points on which the encode array is based;training an outlier detection algorithm based on the evaluation of the encode arrays to obtain a decision function for a strength of accuracy of fiducial points in the encode arrays; andoutputting the decision function for application to an encode array to be tested.2. The method as claimed in claim 1 , wherein evaluating the Gabor data to determine the accuracy of the fiducial points on which the encode array is based includes determining an average across multiple sampling locations of a face image of a Gabor response for multiple Gabor wavelet orientations relative to an eye-to-eye horizontal line of the face image.36. The method as claimed in claim 2 , wherein the multiple Gabor wavelet orientations are where angle θ=0 claim 2 , pi/6 claim 2 , pi/3 claim 2 , pi/2 claim 2 , 2*pi/3 claim 2 , and 5*pi/.4. The method as claimed in claim 1 , wherein evaluating the Gabor data to determine the accuracy of the fiducial points on which the encode array is based includes determining a set of correlations between Gabor wavelets of different frequencies at pairs of sampling ...
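
One reading of claims 2 and 3, sketched with NumPy: the Gabor responses at the sampling locations are averaged per orientation, with orientations measured from the eye-to-eye horizontal line (the last value is truncated in this listing; 5*pi/6 is assumed to complete the series). The random array only stands in for precomputed responses.

    import numpy as np

    ORIENTATIONS = [0, np.pi / 6, np.pi / 3, np.pi / 2, 2 * np.pi / 3, 5 * np.pi / 6]

    def mean_gabor_response(responses):
        # responses: shape (num_sampling_locations, num_orientations)
        return responses.mean(axis=0)         # average over locations, one value per orientation

    demo = np.random.rand(40, len(ORIENTATIONS))   # 40 sampling locations on the face image
    print(mean_gabor_response(demo).shape)          # -> (6,)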

Publication date: 02-02-2017

METHOD AND APPARATUS FOR RECOMMENDING CONTACT INFORMATION

Number: US20170034324A1
Assignee: Xiaomi Inc.

The present disclosure, pertaining to the field of terminal technologies, relates to a method and apparatus for recommending contact information. The method may include: determining an age of an owner of a mobile terminal; determining, based on the determined age of the owner and stored contact photos of a plurality of contacts, a candidate contact from the plurality of contacts; and displaying, in response to receiving a share instruction with respect to a designated image, contact information of the determined candidate contact, the designated image being any image in a photo album to be shared. According to the present disclosure, one or more candidate contacts are determined from a plurality of contacts, and upon receipt of a share instruction with respect to a designated image, contact information of the determined candidate contact is displayed. This may improve speed and accuracy of searching for contact information, reduce search time, and enhance image sharing efficiency. 1. A method for recommending contact information , comprising:determining, by a processor, an age of an owner of a mobile terminal;determining, by a processor, based on the determined age of the owner and stored contact photos of a plurality of contacts, a candidate contact from the plurality of contacts; anddisplaying, by a processor, in response to receiving a share instruction with respect to a designated image, contact information of the determined candidate contact, the designated image being any image in a photo album to be shared.2. The method of claim 1 , wherein the determining of the age of the owner comprises:acquiring a face image set of the owner;obtaining a respective age corresponding to each face image in the face image set of the owner by performing age identification for each face image in the face image set of the owner; anddetermining the age of the owner based on the respective age corresponding to each face image in the face image set of the owner.3. The method of ...

Publication date: 30-01-2020

FACIAL MODELLING AND MATCHING SYSTEMS AND METHODS

Number: US20200034604A1
Assignee:

A matching apparatus for characterising the human face in order to facilitate the search for people with similar faces. The apparatus uses 3D modelling of a variety of image sources including video to characterise a subject's face using a set of parameters. These parameters are then used to identify other people or image sources which have a set of parameters which are similar to the subject's. Feedback from the users is used to improve future matching. 1. A matching apparatus comprising a processor and memory , the processor and memory configured to:receive multiple subject image sources of a subject person including at least one video;process the subject image sources using 3D facial modelling to determine a set of facial characteristics of the subject person;perform a similarity search for image sources of faces which are similar to the subject's face based on the determined set of facial characteristics;in response to the similarity search, identify one or more matching image sources of one or more persons that are similar in appearance to the subject person; andprovide search results based on the identified one or more matching image sources.2. The matching apparatus of claim 1 , wherein the apparatus is configured to:determine an overall similarity score based on the degree of similarity of each of the determined set of facial characteristics, wherein the contribution to the overall similarity score for each facial characteristic is weighted by a weighting value associated with the relative importance of the associated facial characteristic in determining the overall similarity score; andadjust the weighting values based on user input.3. The matching apparatus of claim 2 , wherein the user input comprises one or more of:the subject requesting a digital connection with at least one of the identified matches; andestablishing a digital connection between the subject and at least one of the identified matches.4. The matching apparatus of claim 2 , wherein the user ...
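
Claim 2 combines per-characteristic similarities into an overall score using adjustable weights. A short sketch; the characteristic names, weights and the normalised weighted sum are illustrative assumptions.

    def overall_similarity(similarities, weights):
        # similarities and weights are dicts keyed by facial characteristic
        total_weight = sum(weights.values())
        return sum(similarities[k] * weights[k] for k in similarities) / total_weight

    print(round(overall_similarity(
        {"jaw_shape": 0.8, "eye_spacing": 0.6, "nose_profile": 0.9},
        {"jaw_shape": 2.0, "eye_spacing": 1.0, "nose_profile": 1.0}), 3))   # -> 0.775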

Publication date: 30-01-2020

SYSTEM AND METHOD OF ANALYZING FEATURES OF THE HUMAN FACE AND BREASTS USING ONE OR MORE OVERLAY GRIDS

Number: US20200034979A1
Assignee:

The present invention generally relates to human feature analysis. Specifically, embodiments of the present invention relate to a system and method for utilizing one or more overlay grids in conjunction with imagery of a human face or breast area in order to analyze beauty and attractiveness of the face or breast area in the underlying imagery. In an exemplary embodiment, the system utilizes computerized image capture features and processing features to analyze a human face or breast area in relation to a plurality of overlay grids in order to identify and empirically measure beauty and attractiveness based on the alignment of said overlay grids with specific features of the human face or breast area and whether a successful fit exists with specifically defined facial or breast grids or by how close the individual's features align with specifically defined facial or breast grids. 1. A system for analyzing the attractiveness of a human breast area , the system comprising:an image capture and processing module, comprising computer-executable code stored in non-volatile memory,an overlay retrieval and analysis module, comprising computer-executable code stored in non-volatile memory,a processor, anda display element, receive image data associated with one or more images of the human breast area;', 'process said one or more images of the human breast area in order to identify a plurality of breast features to be analyzed;', 'generate and associate placement points for each identified breast feature;', 'group the placement points into respective breast zones, where each breast zone comprises one or more identified breast features, wherein each breast zone is configured as one or more quadrilateral grids having dimensions that are defined by said placement points;', 'retrieve one or more quadrilateral breast overlay grids for each breast zone, wherein the quadrilateral breast overlay grids are selected from a group of specifically defined breast grid overlays; align each ...

Publication date: 08-02-2018

PERSISTING IMAGE MODIFICATIONS FOR USER PROFILE

Number: US20180040110A1
Assignee:

Aspects saves modifications made to a depiction of a person within a photographic image uploaded to a networked service. In response to determining a presence of another depiction of the identified person in a different photographic image uploaded to the networked service, the modification saved to the profile data is automatically applied to another depiction of the identified person within the different photographic image to an initial publication of the uploaded, different photographic image on the networked service. 1. A computer-implemented method for persistent depiction modification across multiple images , comprising executing on a computer processor the steps of:in response to an upload to a networked service of first photographic data comprising a first photographic image, making a modification to a depiction of a person within the first photographic image;determining an identity of a person of the modified depiction;saving the modification to the depiction of the person within the first photographic image to profile data of the identified person;determining a presence of another depiction of the identified person in a second photographic image that is uploaded to the networked service, wherein the second photographic image is different from the first photographic image, and the another depiction of the identified person in the second photographic image is different from the depiction of the identified person within the first photographic image; andautomatically applying the modification to the depiction of the person within the first photographic image that is saved to the profile data of the identified person to the another depiction of the identified person within the uploaded second photographic image prior to an initial publication of the uploaded second photographic image on the networked service.2. The method of claim 1 , further comprising:integrating computer-readable program code into a computer system comprising a processor, a computer readable ...

Publication date: 04-02-2021

Audio Adjustment System

Number: US20210037318A1
Assignee:

An audio adjustment system for a vehicle includes a detection module that is configured to receive an image and detect a face of at least one of a passenger of the vehicle and a pedestrian near the vehicle. An age estimation module is configured to estimate an age of the at least one of the passenger and the pedestrian based on the detected face. An adjustment determination module is configured to determine an audio adjustment parameter for an audio signal based on the estimated age of the at least one of the passenger and the pedestrian. 1. An audio adjustment system for a vehicle , comprising:a detection module that is configured to receive an image and detect a face of at least one of a passenger of the vehicle and a pedestrian nearby the vehicle;an age estimation module that is configured to estimate an age of the at least one of the passenger and the pedestrian based on the detected face;a gender estimation module that is configured to estimate a gender of the at least one of the passenger and the pedestrian based on the detected face; andan adjustment determination module that is configured to determine an audio adjustment parameter for an audio signal based on the estimated age and the estimated gender of the at least one of the passenger and the pedestrian.2. (canceled)3. The audio adjustment system as recited in claim 1 , wherein the adjustment determination module is further configured to send the audio adjustment parameter to an audio generating device that outputs the audio signal.4. The audio adjustment system as recited in claim 3 , wherein the vehicle includes a plurality of sensors that are configured to detect presence of the at least one of the passenger and the pedestrian.5. The audio adjustment system as recited in claim 4 , wherein the detection module determines a location of the at least one of the passenger and the pedestrian based on signals received from the plurality of sensors.6. The audio adjustment system as recited in claim 5 , wherein ...

Publication date: 06-02-2020

FACE IMAGE PROCESSING METHODS AND APPARATUSES, AND ELECTRONIC DEVICES

Number: US20200042769A1
Assignee: SHENZHEN SENSETIME TECHNOLOGY CO., LTD.

A face image processing method includes: performing face detection on an image to be processed, and obtaining at least one face region image included in the image to be processed and face attribute information in the at least one face region image; and for the at least one face region image, processing an image corresponding to a first region and/or an image corresponding to a second region in the face region image at least according to the face attribute information in the face region image, wherein the first region is a skin region, and the second region includes at least a non-skin region. 1. A face image processing method , comprising:performing face detection on an image to be processed, and obtaining at least one face region image comprised in the image to be processed and face attribute information in the at least one face region image; andfor the at least one face region image, processing, at least according to the face attribute information in the face region image, at least one of an image corresponding to a first region in the face region image or an image corresponding to a second region in the face region image, wherein the first region is a skin region, and the second region comprises at least a non-skin region.2. The method according to claim 1 , whereinthe method further comprises: obtaining face key-point information in the at least one face region image;the for the at least one face region image, processing, at least according to the face attribute information in the face region image, at least one of an image corresponding to a first region in the face region image or an image corresponding to a second region in the face region image comprises: for the at least one face region image, processing, according to the face attribute information and the face key-point information in the face region image, at least one of the image corresponding to the first region in the face region image or the image corresponding to the second region in the face region ...

Publication date: 18-02-2021

System And Method For Scalable Cloud-Robotics Based Face Recognition And Face Analysis

Number: US20210049349A1
Assignee:

A system and method for performing distributed facial recognition divides processing steps between a user engagement device/robot, having lower processing power, and a remotely located server, having significantly more processing power. Images captured by the user engagement device/robot are processed at the device/robot by applying a first set of image processing steps that includes applying a first face detection. First processed images having at least one detected face is transmitted to the server, whereat a second set of image processing steps are applied to determine a stored user facial image matching the detected face of the first processed image. At least one user property associated to the given matching user facial image is then transmitted to the user engagement device/robot. An interactive action personalized to the user can further be performed at the user engagement device/robot. 1. A system for performing distributed facial analysis comprising: controlling the image capture device to capture an image of a scene;', 'applying a first set of one or more image processing steps to the captured image to selectively output at least a first processed image, the first set of image processing steps comprising applying a first face detection to detect at least one face in the captured image and the first processed image having the detected at least one face; and', 'transmitting the first processed image by the communication device; and, 'a computerized device having an image capture device, a communication device, and a first processor configured for receiving the first processed image transmitted from the computerized device;', 'applying a second set of image processing steps to determine a given one of the stored user facial images matching the face of the first processed image; and', 'transmitting the at least one user property associated to the given matching user facial image to the computerized device; and, 'a server located remotely of the computerized ...
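
A sketch of the device/server split described above: the low-power device only runs face detection and transmits the processed crop, while the server matches it against stored user images and returns the associated user properties. All component names are hypothetical stand-ins for the claimed modules.

    def device_side(frame, face_detector, send_to_server):
        faces = face_detector.detect(frame)        # first set of image processing steps
        if faces:
            return send_to_server(faces[0])        # transmit the first processed image
        return None                                # no face detected, nothing transmitted

    def server_side(face_crop, gallery, matcher):
        # second set of image processing steps: find the stored user image that best matches
        best_user = max(gallery, key=lambda user: matcher.similarity(face_crop, user.image))
        return best_user.properties                # user property returned to the device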

Publication date: 18-02-2016

Embedding Biometric Data From a Wearable Computing Device in Metadata of a Recorded Image

Number: US20160048722A1
Assignee:

According to an example method implemented by an imaging device, an image is recorded of a subject to which a wearable computing device is secured. Responsive to the recording, the imaging device wirelessly receives data from the wearable computing device. The data includes an identifier of the wearable computing device and biometric data of the subject. The identifier and biometric data are embedded as metadata in the recorded image. 120-. (canceled)21. A method implemented by an imaging device , comprising:recording an image of a subject to which a wearable computing device is secured;responsive to the recording, wirelessly receiving data from the wearable computing device, the data including an identifier of the wearable computing device and biometric data of the subject; andembedding the identifier and the biometric data as metadata in the recorded image.22. The method of claim 21 , wherein recording the image comprises transmitting a request for biometric data to the wearable computing device.23. The method of claim 21 , wherein the biometric data includes an address of a biometric data server from which biometric data of the subject can be retrieved based on the device identifier.24. The method of claim 21 , wherein the biometric data includes one or more sensor readings recorded by the wearable computing device.25. The method of claim 21 , wherein the biometric data includes one or more mood indicators derived from sensor readings recorded by the wearable computing device.26. The method claim 21 , further comprising:estimating a position of the subject within the recorded image; andembedding an indication of the position as metadata in the recorded image.27. The method of claim 26 , wherein estimating a position of the subject within the recorded image comprises:receiving a reference image of the subject's face from the wearable computing device; andperforming a facial recognition algorithm to determine the position of the subject within the recorded image ...

Publication date: 16-02-2017

Electronic device, controlling method and storage medium

Number: US20170045934A1
Author: Yen-Hsing Lee
Assignee: Fih Hong Kong Ltd

A method for controlling an electronic device includes activating a camera to acquire an image of a user of the electronic device. A distance from the user to the electronic device is acquired. A distance range of the acquired distance is obtained by searching a mapping table, and a value is calculated based on the image of the user. When the user is determined to be a specific type based on the calculated value and the predetermined value corresponding to the determined distance range, a first function is executed, and a second function is executed when the user is determined not to be the specific type.

Publication date: 15-02-2018

DISPLAY CONTROL APPARATUS, DISPLAY CONTROL METHOD, AND DISPLAY CONTROL PROGRAM

Number: US20180046263A1
Authors: Noda Takuro, Shigeta Osamu
Assignee: SONY CORPORATION

A display control apparatus includes a recognizing unit configured to recognize a position of an operator and a position of a hand or the like of the operator, a calculating unit configured to regard a position of the operator in a screen coordinate system set on a screen as an origin of an operator coordinate system and multiply a position of the hand or the like with respect to the origin of the operator coordinate system by a predetermined function, thereby calculating a position of display information corresponding to the hand or the like in the screen coordinate system, and a control unit configured to cause the display information to be displayed at the position in the screen coordinate system calculated by the calculating unit. 1a recognizing unit configured to recognize a position of an operator and a position of operation means of the operator;a calculating unit configured to regard a position of the operator in a screen coordinate system set on a screen as an origin of an operator coordinate system and multiply a position of the operation means with respect to the origin of the operator coordinate system by a predetermined function, thereby calculating a position of display information corresponding to the operation means in the screen coordinate system; anda control unit configured to cause the display information to be displayed at the position in the screen coordinate system calculated by the calculating unit.. A display control apparatus comprising: The present application is a continuation of U.S. application Ser. No. 15/095,308, filed Apr. 11, 2016 which is a continuation of U.S. application Ser. No. 12/806,966, filed on Aug. 25, 2010, issued as U.S. Pat. No. 9,342,142 which claims priority from Japanese Patent Application No. JP 2009-204958 filed in the Japanese Patent Office on Sep. 4, 2009, the entire content of which is incorporated herein by reference.The present invention relates to a display control apparatus, a display control method, and a ...
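
A sketch of the coordinate mapping in the claim: the operator's on-screen position is taken as the origin of the operator coordinate system, and the hand offset from that origin is passed through a predetermined function (here just a constant gain, an assumption) to place the display information in screen coordinates.

    def display_position(operator_screen_xy, hand_offset_xy, gain=2.5):
        ox, oy = operator_screen_xy                # operator position = origin of operator coordinates
        hx, hy = hand_offset_xy                    # hand position relative to that origin
        return (ox + gain * hx, oy + gain * hy)    # display position in the screen coordinate system

    print(display_position((960, 540), (40, -30)))   # -> (1060.0, 465.0)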

Publication date: 25-02-2016

Providing Subject Information Regarding Upcoming Images On A Display

Number: US20160055379A1
Author: Svendsen Hugh Blake
Assignee: Ikorongo Technology, LLC.

Methods are described for presenting in an user interface information regarding the subject faces that will appear in upcoming images. In general, many of the images available to display will be comprised of images containing subject faces. Based on a subject affinity score between the viewer and the subjects, an image affinity is computed. Based on the image affinity scores, images are selected for presentation on the display. As images are displayed, the system analyzes one or more upcoming images to determine information to display. As each image is displayed, subject information comprising the subject's face is presented in an area of the display adjacent to the current image. 1. A device comprising: select, by the device, a sequence of images;', 'select, by the device, one or more images from the sequence of images that are to be presented subsequent to a current image to form one or more analysis window images; and', 'determine updated subject information from the one or more analysis window images, the updated subject information comprising subject representations of those subjects appearing in the one or more analysis window images; and, 'a control system comprising a processor and a memory storing program codes operable to effect presentation of the current image from the sequence of images in a first area of a display, wherein only the current image from the sequence of images is presented during a presentation period; and', 'effect presentation of the updated subject information in a second area of the display., 'electronic video circuitry, coupled to the control system, operable to2. The device of further comprising: communicate with an internet image source over the Internet; and', 'communicate with a private network image source over a local area network., 'a network transceiver, coupled to the control system, operable to3. The device of further comprising: determine, based on timing information, that it is time to present a next image from the ...
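
A sketch of the scoring described above: a viewer's affinity for an image is derived from the subject affinities of the faces it contains (summed here, which is an assumption) and images are presented in decreasing affinity order; the data layout is hypothetical.

    def rank_images(images, subject_affinity):
        # images: dict image_id -> set of subject IDs appearing in that image
        def image_affinity(subjects):
            return sum(subject_affinity.get(s, 0.0) for s in subjects)
        return sorted(images, key=lambda img: image_affinity(images[img]), reverse=True)

    print(rank_images({"a.jpg": {"mom", "ann"}, "b.jpg": {"stranger"}},
                      {"mom": 0.9, "ann": 0.7}))    # -> ['a.jpg', 'b.jpg']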

03-03-2022 publication date

VIDEO PROGRAM PLAYING DEVICE AND VIDEO PROGRAM SHIELDING METHOD THEREOF

Number: US20220070516A1
Author: Zhang Hui-Dong
Assignee: Realtek Semiconductor Corp.

A video program playing device implements a video program shielding method. The video program shielding method includes receiving a video program signal and predetermined age information, and obtaining video program information from the video program signal; determining, according to the video program information, whether the video program signal matches the predetermined age information; determining whether at least one target meeting the predetermined age information is detected within a predetermined range of a video program playing device; and stopping outputting the video program signal when the video program signal fails to match the predetermined age information and the at least one target is detected.

10-03-2022 publication date

Alcolock device and system

Number: US20220073079A1
Author: Anders Nilsson, Haibo Li
Assignee: GAZELOCK AB

The present disclosure relates to an ignition interlock device and system for accurately detecting a drunk driver by running an interactive visual test presented for the driver to visualize on a screen of the alcolock device, which further comprises at least an eye gaze tracking module for recording eye movements and measuring gaze data, and a motor skill computing module for computing motion parameters from the sensor data measured during the interactive visual test. The alcolock device may further comprise a drunk detection module for measuring drunkenness of the driver by mapping gaze parameters and motion parameters to measure the mismatch between motor skills and cognitive processing performance, and a decision module for allowing the driver to drive the vehicle, or not, based on the measured drunkenness.

20-02-2020 publication date

UNLOCKING METHOD FOR ELECTRONIC CIGARETTE, UNLOCKING SYSTEM, STORAGE MEDIUM AND UNLOCKING DEVICE

Number: US20200054071A1
Author: OUYANG Junwei
Assignee: SHENZHEN IVPS TECHNOLOGY CO., LTD.

The present invention discloses an unlocking method for an electronic cigarette based on face recognition, comprising the steps of: acquiring user image information, extracting facial feature information from the user image information, and analyzing the age of the user according to the facial feature information; determining whether the age of the user is within a preset age range; and, if the user is within the preset age range, generating an instruction of unlocking the electronic cigarette. The present invention further discloses a face recognition-based electronic cigarette unlocking system, a mobile terminal, a computer readable storage medium, an electronic cigarette and a computer readable storage device. The present invention makes the control method of the electronic cigarette more engaging and, at the same time, addresses the shortcomings that current electronic cigarette control methods are rudimentary and cannot restrict use of electronic cigarettes by minors.

1. An unlocking method for an electronic cigarette based on face recognition, the unlocking method comprising the steps of: acquiring image information of a user, extracting facial feature information from the image information of the user, analyzing an age of the user according to the facial feature information; determining whether the age of the user is within a preset age range; and if the user is within the preset age range, generating an instruction of unlocking the electronic cigarette, wherein: it is confirmed whether the age of the user is within a preset age range; if the age of the user is not within a preset age range, it is confirmed whether to switch to the password verification mode; if switched to the password verification mode, the user password information is acquired, and the user password information is matched with a preset password database to confirm whether there is preset password information matching the user password information in the preset password database; and if there is ...
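A minimal sketch of the unlock flow as described, assuming a stand-in age estimate and a toy password store; the preset range, function names and fallback order are illustrative assumptions, not the patented implementation.

# Sketch of the described flow: unlock if the estimated age falls in the preset
# range, otherwise optionally fall back to password verification.

PRESET_AGE_RANGE = (18, 120)  # assumed legal range

def try_unlock(estimated_age, password=None, password_db=frozenset({"1234"})):
    low, high = PRESET_AGE_RANGE
    if low <= estimated_age <= high:
        return "unlock"                      # age check passed
    if password is not None:                 # switch to password verification mode
        return "unlock" if password in password_db else "deny"
    return "deny"

print(try_unlock(25))          # unlock
print(try_unlock(15, "1234"))  # unlock via password fallback
print(try_unlock(15, "0000"))  # deny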

21-02-2019 publication date

MULTIMEDIA FOCALIZATION

Number: US20190057150A1
Author: An Eunsook
Assignee:

Example implementations are directed to methods and systems for individualized multimedia navigation and control including receiving metadata for a piece of digital content, where the metadata comprises a primary image and text that is used to describes the digital content; analyzing the primary image to detect one or more objects; selecting one or more secondary images corresponding to each detected object; and generating a data structure for the digital content comprising the one or more secondary images, where the digital content is described by a preferred secondary image. 1. A method comprising:receiving metadata for a piece of digital content, wherein the metadata comprises a primary image and text that is used to describe the piece of digital content;analyzing the primary image to detect one or more objects;selecting one or more secondary images corresponding to each detected object; andgenerating a data structure for the piece of digital content comprising the one or more secondary images, wherein, in response to a user request, the piece of digital content is to be described by a preferred secondary image.2. The method of claim 1 , wherein the preferred secondary image is to be determined based on at least a user preference.3. The method of claim 1 , further comprising:determining a label for each secondary image based at least on the text information, wherein the data structure includes the labels, wherein the preferred secondary image is to be determined based on at least the label associated with the preferred secondary image and a user preference.4. The method of claim 3 , further comprising:receiving a request to describe the piece of digital content;receiving a set of user information;in response to the data structure comprising a label corresponding to a user preference of the set of user information, presenting the secondary image for the label as the preferred secondary image to describe the piece of digital content.5. The method of claim 1 , ...

03-03-2016 publication date

IMAGE PROCESSING APPARATUS FOR FACIAL RECOGNITION

Number: US20160063314A1
Author: Samet Shai
Assignee: SAMET PRIVACY, LLC

An apparatus including an image capture device including a lens, a shutter, an image sensor, and an aperture is provided. The image capture device receives, via the lens, a plurality of images. The apparatus further includes a display, a memory, a receiver, a transmitter, and a processor. The receiver receives facial recognition data. The transmitter transmits an instruction to capture a series of images via the image capture device. The series of images may include a randomly generated pose. The apparatus further includes a processor to analyze the facial recognition data to determine an estimated age of a user. 1. An information processing apparatus comprising:an image capture device for capturing a plurality of images of a user;a memory storing at least one data array including pupil distance in correspondence with age information;a processor configured to generate a random facial pose;a transmitter configured to transmit instructions to a user, including an instruction to pose for the image capture device in a first facial configuration for capturing a first image and to pose in a second facial configuration for capturing a second image, wherein the first facial configuration is different than the second facial configuration, and one of the first and second facial configurations corresponds to the randomly generated facial pose;a receiver configured to receive facial image information based on the plurality of images captured by the image capture apparatus, said facial image information comprising:the first image of the user;the second image of the user; andone or more feature inputs corresponding to one or more facial features, wherein at least one of the one or more of the feature inputs corresponds to an age-indicator;wherein the processor determines, based on the facial image information, one or more of:a positive detection of a full image of the user's face in the first and second images;whether the pose of the user in first image corresponds to the first ...

02-03-2017 publication date

LARGE FORMAT DISPLAY APPARATUS AND CONTROL METHOD THEREOF

Number: US20170060319A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

A large format display (LFD) apparatus is provided. The LFD apparatus includes a display configured to display a content comprising at least one content element, a sensor configured to detect at least one user, and a processor configured to estimate a height of the at least one user detected by the sensor, and change a display location of the at least one content element on a screen of the display based on the estimated height. 1. A display apparatus comprising:a display configured to display a content comprising at least one content element;a sensor configured to detect at least one user; anda processor configured to estimate a height of the at least one user based on information collected by the sensor, and change a display location of the at least one content element on a screen of the display based on the estimated height.2. The display apparatus of claim 1 , wherein the sensor comprises at least one camera claim 1 , andwherein the processor is configured to estimate the height of the at least one user based on at least one image of the at least one user, the at least one image being photographed by the at least one camera.3. The display apparatus of claim 1 , wherein the processor is configured to determine a field of view of the at least one user based on the estimated height and divide the screen into a plurality of display areas for displaying the at least one content element according to the field of view of the at least one user in response to the at least one user being detected.4. The display apparatus of claim 3 , wherein the content comprises a plurality of content elements claim 3 , andwherein the processor is configured to display the plurality of content elements on the plurality of display areas based on a priority order of the plurality of content elements.5. The display apparatus of claim 1 , wherein the processor is configured to activate a touch recognition function on a first display area of the screen and inactivate the touch recognition ...

01-03-2018 publication date

SYSTEMS AND METHODS FOR PROCESSING MEDIA CONTENT THAT DEPICT OBJECTS

Number: US20180060659A1
Assignee:

Access to a set of media content items is acquirable. Identified processors can perform, in parallel, object detection for the set. In some cases, information about a current system state, a user, and/or object popularity metrics is acquirable for selecting a subset of object models. Object recognition is performable, based on the subset, for the set of media content items. In some instances, a camera view can be provided. Object recognition is performable for representations of the view. An object depicted in the representations is identifiable. An interface portion is presentable to provide a label for the object. In some cases, object recognition is performable for the set of media content items to identify an object depicted in a content item. A label is associable with the content item. A search through the set of media content items can identify, based on the label, a subset that depicts the object. 1. A computer-implemented method comprising:providing, by a computing system, a live camera view for a camera of the computing system;performing, by the computing system, as one or more background processes, object recognition with respect to one or more representations of the live camera view;identifying, by the computing system, at least one object depicted in the one or more representations of the live camera view; andpresenting, by the computing system, an interface portion overlaying the live camera view, the interface portion providing at least one label for the at least one object.2. The computer-implemented method of claim 1 , further comprising:receiving a first command to acquire a first image represented via the live camera view at a first time, wherein the at least one object is depicted in the first image, wherein the at least one object includes a first face associated with a first recognized user, and wherein the at least one label includes a first identifier for the first recognized user;storing the at least one label as metadata for the first image ...

20-02-2020 publication date

FACIAL ATTRIBUTE RECOGNITION METHOD, ELECTRONIC DEVICE, AND STORAGE MEDIUM

Number: US20200057883A1

A face attribute recognition method, electronic device, and storage medium. The method may include obtaining a face image, inputting the face image into an attribute recognition model, performing a forward calculation on the face image using the attribute recognition model to obtain a plurality of attribute values according to different types of attributes, and outputting the plurality of attribute values, the plurality of attribute values indicating recognition results of a plurality of attributes of the face image. The attribute recognition model may be obtained through training based on a plurality of sample face images, a plurality of sample attribute recognition results of the plurality of sample face images, and the different types of attributes. 1. A face attribute recognition method , performed by an electronic device , the method comprising:obtaining a face image;inputting the face image into an attribute recognition model;performing a forward calculation on the face image using the attribute recognition model to obtain a plurality of attribute values according to different types of attributes; andoutputting the plurality of attribute values, the plurality of attribute values indicating recognition results of a plurality of attributes of the face image, andwherein the attribute recognition model is obtained through training based on a plurality of sample face images, a plurality of sample attribute recognition results of the plurality of sample face images, and the different types of attributes.2. The method according to claim 1 , wherein the performing the forward calculation on the face image using the attribute recognition model further comprises:extracting face features of the face image;performing the forward calculation on the face features according to sub-models corresponding to the different types of attributes to obtain feature values corresponding to the plurality of attributes of the face image;based on a first attribute being a regression ...

20-02-2020 publication date

METHOD FOR SELLING ELECTRONIC CIGARETTE, TERMINAL, STORAGE MEDIUM AND ELECTRONIC CIGARETTE

Number: US20200058057A1
Author: OUYANG Junwei
Assignee: SHENZHEN IVPS TECHNOLOGY CO., LTD.

The present invention discloses a selling method for an electronic cigarette based on face recognition, comprising the steps of: acquiring user image information, extracting facial feature information in user image information, analyzing the age of the user according to the facial feature information; determining whether the age of the user is within a preset age range; and if the user is within a preset age range, generating an instruction of approving selling an electronic cigarette. The present invention further discloses a selling terminal for an electronic cigarette based on face recognition, and a computer readable storage medium. The present invention effectively solves the inadequacies that the current selling method for an electronic cigarette is original and the use of electronic cigarettes cannot be limited for minors. 1. A selling method for an electronic cigarette based on face recognition , the selling method comprising the steps of:acquiring image information of a user, extracting facial feature information from the image information of the user, analyzing an age of the user according to the facial feature information;determining whether the age of the user is within a preset age range; andif the user is within the preset age range, generating an instruction of approving selling the electronic cigarette.2. The selling method for an electronic cigarette according to claim 1 , wherein the step of extracting facial feature information in user image information claim 1 , analyzing the age of the user according to the facial feature information claim 1 , and determining whether the age of the user is within a preset age range further comprises:if the age of the user is not within a preset age range, recording the current number of times that user image information is acquired;determining whether the number of times is greater than a preset threshold; andif the number of times is greater than a preset threshold, limiting the operation of acquiring user ...

02-03-2017 publication date

DETECTION DEVICE, DETECTION METHOD, COMPUTER PROGRAM PRODUCT, AND INFORMATION PROCESSING SYSTEM

Number: US20170061203A1
Assignee:

According to an embodiment, a detection device includes a camera, a memory, and processor circuitry. The camera is connected to an internal bus and configured to acquire an image including an area in which a mobile body is movable. The memory is connected to the internal bus and configured to store data and a program. The processor circuitry is connected to the internal bus and configured to detect at least the area, a mark on the area, and a person, from the image, calculate a first distance between the person and a position of the device when the person is in the area, and set a range according to a result of the detection and the first distance.

1. A detection device comprising: processor circuitry configured to: acquire an image including an area in which a mobile body is movable from a camera connected to the processor circuitry; detect at least the area, a mark on the area, and a person, from the image; calculate a first distance between the person and a position of the device when the person is in the area; and set a range according to a result of the detection and the first distance. 2. The device according to claim 1, wherein the processor circuitry is configured to detect a crosswalk mark, as the mark, from the image, set the range based on the crosswalk mark and the first distance, and reduce the set range as a second distance between the crosswalk mark and the person is reduced. 3. The device according to claim 1, wherein the processor circuitry is further configured to estimate an age of the person detected, and increase the set range when the age is not more than a first threshold or not less than a second threshold larger than the first threshold. 4. The device according to claim 1, wherein the processor circuitry is further configured to detect at least one vehicle from the image, recognize a vehicle being stopped of the at least one vehicle detected, as a stopped vehicle, set the range based on the crosswalk mark and the first ...
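The range-setting logic of claims 2 and 3 can be illustrated with a toy rule. The concrete formula, thresholds and names below are assumptions; only the qualitative behaviour (the range shrinks as the person nears the crosswalk and grows for very young or elderly pedestrians) follows the text.

# Illustrative sketch of the described range setting.

def detection_range(person_device_dist, person_crosswalk_dist, age,
                    base=5.0, young_threshold=12, old_threshold=70):
    rng = base + 0.5 * person_device_dist          # assumed base rule from first distance
    rng *= min(1.0, person_crosswalk_dist / 10.0)  # shrink as crosswalk distance shrinks
    if age <= young_threshold or age >= old_threshold:
        rng *= 1.5                                 # enlarge for children / elderly
    return rng

print(detection_range(8.0, 3.0, age=9))   # larger factor applied for a child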

28-02-2019 publication date

SKIN AGING STATE ASSESSMENT METHOD AND ELECTRONIC DEVICE

Number: US20190059806A1
Assignee: CAL-COMP BIG DATA, INC.

The invention provides a skin aging state assessment method and an electronic device. The method includes: acquiring a first image and a second image; acquiring a characteristic parameter of the first image and a characteristic parameter of the second image; acquiring an aging parameter according to the characteristic parameter of the first image and the characteristic parameter of the second image; and deciding an aging assessment result corresponding to the first image according to the aging parameter. The skin state detection method of the invention makes it possible to use two face images captured at different times to acquire the skin aging condition of the face images. 1. A skin aging state assessment method , comprising:acquiring a first image and a second image;acquiring a characteristic parameter of the first image and a characteristic parameter of the second image;acquiring an aging parameter according to the characteristic parameter of the first image and the characteristic parameter of the second image; anddeciding an aging assessment result corresponding to the first image according to the aging parameter.2. The skin aging state assessment method according to claim 1 , wherein the characteristic parameter of the first image and the characteristic parameter of the second image respectively comprise a wrinkle ratio claim 1 , an age spot ratio claim 1 , and a sag parameter claim 1 , wherein acquiring the aging parameter according to the characteristic parameter of the first image and the characteristic parameter of the second image comprises:calculating a difference between the wrinkle ratio of the first image and the wrinkle ratio of the second image to acquire a wrinkle difference parameter;calculating a difference between the age spot ratio of the first image and the age spot ratio of the second image to acquire an age spot difference parameter;calculating a difference between the sag parameter of the first image and the sag parameter of the second ...
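The aging parameter of the first claim reduces to a combination of three differences between the two images' characteristic parameters. The sketch below assumes equal weighting and simple dictionaries for those parameters; both are illustrative choices, not the patented formula.

# Sketch: differences of wrinkle ratio, age-spot ratio and sag parameter
# between two face images are combined into one aging parameter.

def aging_parameter(first, second, weights=(1.0, 1.0, 1.0)):
    """first/second: dicts with 'wrinkle', 'age_spot' and 'sag' values."""
    d_wrinkle = first["wrinkle"] - second["wrinkle"]
    d_spot = first["age_spot"] - second["age_spot"]
    d_sag = first["sag"] - second["sag"]
    w1, w2, w3 = weights
    return w1 * d_wrinkle + w2 * d_spot + w3 * d_sag

recent = {"wrinkle": 0.18, "age_spot": 0.07, "sag": 0.32}
older = {"wrinkle": 0.15, "age_spot": 0.05, "sag": 0.30}
print(aging_parameter(recent, older))   # positive value -> more aged appearance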

04-03-2021 publication date

AGE RECOGNITION METHOD, STORAGE MEDIUM AND ELECTRONIC DEVICE

Number: US20210064851A1
Author: Zhang Huanhuan
Assignee:

The present disclosure provides an age recognition method, a non-transitory computer-readable storage medium, and an electronic device. The method includes extracting feature points of a face in an image, preprocessing the image to extract global features of the face, extracting local features of the face based on the feature points, determining an age feature of the face according to the global features and the local features, and inputting the age feature into a pre-trained age recognition model to obtain an age value corresponding to the face in the image. 1. An age recognition method , comprising:extracting feature points of a face in an image and preprocessing the image to extract global features of the face;extracting local features of the face based on the feature points and determining an age feature of the face according to the global features and the local features; andinputting the age feature into a pre-trained age recognition model to obtain an age value corresponding to the face in the image.2. The age recognition method according to claim 1 , wherein extracting local features of the face based on the feature points comprises:performing size transformation on the face according to the feature points to determine regions of interest of the face; anddetermining target regions in the regions of interest, and extracting target features of the target regions to determine the local features of the face.3. The age recognition method according to claim 2 , wherein:the target regions comprise first target regions and at least one second target region, and the target features comprise first target features and second target features; and determining first target region pairs in the regions of interest, wherein each of the first target region pairs comprises two first target regions which have a symmetrical relationship with each other, and obtaining the first target features of each first target region pair;', 'determining at least one second target region in ...
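A rough sketch of the described pipeline, with scikit-learn's Ridge regressor standing in for the unspecified pre-trained age recognition model and concatenation standing in for the way global and local features are combined; every name, dimension and model choice here is an assumption.

# Hypothetical sketch: concatenate global and local face features into an age
# feature and feed it to a stand-in regression model.
import numpy as np
from sklearn.linear_model import Ridge

def age_feature(global_features: np.ndarray, local_features: np.ndarray) -> np.ndarray:
    # Concatenation is an assumed way of combining global and local features.
    return np.concatenate([global_features, local_features])

# Pretend training data: 100 faces, 64 global + 32 local feature dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 96))
y = rng.uniform(18, 80, size=100)
model = Ridge().fit(X, y)                      # stands in for the pre-trained model

sample = age_feature(rng.normal(size=64), rng.normal(size=32))
print(float(model.predict(sample.reshape(1, -1))[0]))   # predicted age value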

28-02-2019 publication date

DISPLAY CONTROL APPARATUS AND METHOD FOR ESTIMATING ATTRIBUTE OF A USER BASED ON THE SPEED OF AN INPUT GESTURE

Number: US20190065049A1
Assignee:

A display control apparatus comprising circuitry configured to obtain information of an input speed of a gesture input of at least one user from a sensor configured to detect a hand of the user, estimate an attribute of the user based on the input speed, and control a display apparatus to display a layout image for the gesture input based on the estimated attribute of the user.

1. An apparatus comprising: circuitry configured to: obtain a gesture input of at least one user; estimate an attribute of the user based on the obtained gesture input and an image of the user; and generate a layout image based on the estimated attribute of the user. 2-20. (canceled)

This application claims the benefit of Japanese Priority Patent Application JP 2013-004578 filed Jan. 15, 2013, the entire contents of which are incorporated herein by reference.

The present disclosure relates to an input apparatus, an output apparatus, and a storage medium.

Regarding control of an information processing apparatus according to a user environment, for example, JP 2009-536415T discloses a system and method for managing, route setting, and controlling connections among apparatuses arranged within the environment in order to express a specific desired state.

Further, in recent years, information processing apparatuses having various characteristics have been developed. For example, a utilization method has been proposed that utilizes the characteristics of a plurality of apparatuses, such as a laptop PC (personal computer) and a tablet PC, by connecting the apparatuses to each other and enabling one of the apparatuses to be controlled by another of the apparatuses. For example, specifically, JP 2012-108658A discloses a computer system that displays the same image on a plurality of computers by simultaneously processing the same content on the plurality of computers.

In addition, regarding control of an output device such as a printer or a video camera, JP 2002-281277A discloses a network system capable ...

17-03-2022 publication date

FACE IMAGE PROCESSING METHODS AND APPARATUSES, AND ELECTRONIC DEVICES

Number: US20220083763A1
Assignee: SHENZHEN SENSETIME TECHNOLOGY CO., LTD.

A face image processing method includes: performing face detection on an image to be processed, and obtaining at least one face region image included in the image to be processed and face attribute information in the at least one face region image; and for the at least one face region image, processing an image corresponding to a first region and/or an image corresponding to a second region in the face region image at least according to the face attribute information in the face region image, wherein the first region is a skin region, and the second region includes at least a non-skin region.

1. A face image processing method, comprising: performing face detection on an image to be processed, and obtaining at least one face region image comprised in the image to be processed and face attribute information in the at least one face region image; for the at least one face region image, responsive to the face attribute information comprising face attachment information, determining a facial processing parameter according to the face attachment information, wherein the determined facial processing parameter fails to comprise a processing parameter of a facial specific part occluded by a facial attachment in a facial region image; or responsive to the face attribute information comprising facial angle information, determining a facial processing parameter corresponding to a face angle of the face region image indicated by the facial angle information, wherein different face angles correspond to different facial processing parameters; and processing, at least according to the facial processing parameter, at least one of an image corresponding to a first region in the face region image or an image corresponding to a second region in the face region image, wherein the first region is a skin region, and the second region comprises at least a non-skin region. 2. The method according to claim 1, wherein the method further comprises: obtaining ...

10-03-2016 publication date

ELECTRONIC DEVICE AND METHOD FOR AUTOMATICALLY ADJUSTING DISPLAY RATIO OF USER INTERFACE

Number: US20160070340A1
Assignee:

An electronic device and a method for automatically adjusting display ratio of user interface are provided. The electronic device includes storage device, a proximity sensor and a touch screen. The method includes the following blocks. The proximity sensor is controlled to measure a distance between the display screen and a user. The distance measured by the proximity sensor is obtained. A display ratio of the user interface is calculated according to the obtained distance and a algorithm which is pre-stored in the storage device. The display ratio of the user interface is adjusted using the calculated display ratio. 1. An electronic device comprising:a display screen;at least one processor coupled to the display screen;a proximity sensor coupled to the at least one processor; anda non-transitory storage device storing an algorithm which defines a relationship between display ratios of the user interface and the distance between the display screen and a user; andthe storage device further storing one or more programs which, when executed by the at least one processor, cause the at least one processor to:control the proximity sensor to measure distance between the display screen and the user;obtain the distance measured by the proximity sensor;calculate a display ratio of the user interface according to the obtained distance and the algorithm; andadjust the display ratio of the user interface using the calculated display ratio.2. The electronic device according to claim 1 , wherein the storage device is further configured to pre-store a normal display ratio range claim 1 , and the at least one processor further:determines whether the calculated display ratio falls into the normal display ratio range;if the calculated display ratio does not fall in the normal display ratio range, determines whether the calculated display ratio is greater than a maximum value of the normal display ratio range; andif the calculated display ratio is greater than the maximum value, ...
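The distance-to-ratio step can be pictured with a toy algorithm. The linear mapping and the normal-range bounds below are assumptions; the claims only require that a stored algorithm map the measured distance to a display ratio, which is then kept within a normal range.

# Sketch: compute a display ratio from the measured viewer distance and clamp
# it to an assumed normal range.

NORMAL_RANGE = (0.75, 2.0)   # assumed minimum / maximum display ratios

def display_ratio(distance_cm, ratio_per_cm=0.02, base=0.5):
    ratio = base + ratio_per_cm * distance_cm     # assumed stored algorithm
    low, high = NORMAL_RANGE
    return max(low, min(high, ratio))             # clamp into the normal range

for d in (20, 60, 120):
    print(d, display_ratio(d))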

28-02-2019 publication date

SYSTEM FOR EXECUTION OF MULTIPLE EVENTS BASED ON IMAGE DATA EXTRACTION AND EVALUATION

Number: US20190065832A1
Assignee:

Embodiments of the present invention provide a system for executing multiple events in response to receiving an image and extracting identity and contact information from that image. As such, a facial recognition and image hashing process is applied to an image of multiple individuals associated with the multiple events to extract image hashes for each individual. These image hashes are then compared to known, stored image hashes to determine an identity and contact information for each individual. Once this information is collected, the system executes the multiple events based on the determined information about each individual.

1. A system for execution of multiple events based on image data extraction and evaluation, the system comprising: a memory device; and a processing device operatively coupled to the memory device, wherein the processing device is configured to execute computer-readable program code to: receive, from a computing device of a user, a prompt to request a contribution from one or more individuals; receive, from the computing device of the user, an image that includes the one or more individuals; determine an identity for each of the one or more individuals in the image by applying a facial recognition process to the image to extract image hashes for each face in the image and comparing the extracted image hashes to stored hashes of the computing device of the user or to stored hashes of a social network of the user; identify contact information for each of the one or more individuals based on the identity of each of the one or more individuals; and transmit the request for the contribution to each individual of the one or more individuals using the contact information. 2. The system of claim 1, wherein the processing device is further configured to execute computer-readable program code to: determine that a first extracted image hash of the extracted image hashes is not associated with the stored hashes of the computing device of ...

10-03-2016 publication date

IDENTIFICATION APPARATUS AND METHOD FOR CONTROLLING IDENTIFICATION APPARATUS

Number: US20160070987A1
Assignee: Omron Corporation

An identification apparatus performs classification using a plurality of classifiers, and calculates the reliability of its classification result. A data obtaining unit obtains input data. A feature quantity obtaining unit obtains a feature quantity corresponding to the input data. A plurality of classifiers receive input of the feature quantity and perform classification based on the input feature quantity. An identification unit inputs the feature quantity into each of the classifiers, and generates a single second classification result based on a plurality of classification results obtained from the classifiers. A reliability generation unit generates a reliability of the second classification result based on variations across the plurality of classification results. 1. An identification apparatus , comprising:a data obtaining unit that obtains input data;a feature quantity obtaining unit that obtains a feature quantity corresponding to the input data;a plurality of classifiers that receive input of the feature quantity and perform classification based on the input feature quantity;an identification unit that inputs the feature quantity into each of the classifiers, and generates a single second classification result based on a plurality of classification results obtained from the classifiers; anda reliability generation unit that generates a reliability of the second classification result based on variations across the plurality of classification results.2. The identification apparatus according to claim 1 , wherein each classifier is a multiclass classifier claim 1 , and outputs a class value corresponding to a class indicated by a classification result obtained by the classifier.3. The identification apparatus according to claim 2 , wherein the reliability generation unit generates the reliability of the second classification result based on a variance or a standard deviation of class values output from the classifiers.4. The identification apparatus according ...
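The reliability computation maps naturally to a small example: combine the classifiers' class values by majority vote and derive a reliability from their spread. The 1/(1 + standard deviation) mapping is an assumed choice; the claims only require that the reliability be based on the variance or standard deviation of the class values.

# Sketch: ensemble classification with a reliability derived from the spread
# of the individual classifiers' class values.
from collections import Counter
import statistics

def identify(class_values):
    final = Counter(class_values).most_common(1)[0][0]   # combined (second) result
    spread = statistics.pstdev(class_values)              # variation across classifiers
    reliability = 1.0 / (1.0 + spread)                    # assumed mapping to (0, 1]
    return final, reliability

print(identify([2, 2, 2, 2]))   # unanimous -> high reliability
print(identify([2, 3, 1, 2]))   # disagreement -> lower reliability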

27-02-2020 publication date

AUTOMATED RELATIONSHIP CATEGORIZER AND VISUALIZER

Number: US20200065567A1
Assignee:

Aspects differentially drive the graphic display of links overlaid between people in a group photograph as a function of associated relationship type. Individuals are distinguished within the photograph and relationships are identified among the individuals that differ from one another with respect to type of relationship, by extracting relationship data via natural language processing relative to social network data of one or more of the identified individuals. Thus, a graphic display device is driven to display the identified relationships via each of different overlay elements that are depicted directly on the photograph, link respective ones of the identified individuals, and use different graphic elements to convey differences in respective types of the relationships that are determined among the identified individuals. 1. A computer-implemented method , comprising executing on a computer processor:identifying each of a plurality of different individuals that are each visible within a photograph and distinguished from other ones of the visible individuals;identifying relationships among the identified individuals by extracting relationship data via natural language processing relative to social network data of at least one of the identified individuals;in response to determining that a plurality of the identified relationships differ from one another with respect to a type of relationship, driving a graphic display device to display the identified plurality of relationships via each of a plurality of different overlay elements that link respective ones of the identified individuals within the photograph and each comprise labels indicating respective ones of the types of relationship identified for the linked individuals, wherein each of the different overlay elements use different graphic elements with respect to others of the different overlay elements to convey differences in the respective ones of the types of the relationships that are determined among the ...

11-03-2021 publication date

MULTIMEDIA FOCALIZATION

Number: US20210073277A1
Author: An Eunsook
Assignee: OpenTV, Inc.

Example implementations are directed to methods and systems for individualized multimedia navigation and control including receiving metadata for a piece of digital content, where the metadata comprises a primary image and text that is used to describes the digital content; analyzing the primary image to detect one or more objects; selecting one or more secondary images corresponding to each detected object; and generating a data structure for the digital content comprising the one or more secondary images, where the digital content is described by a preferred secondary image. 1. (canceled)2. A method for identifying an individual in a stream of media content , the method comprising:analyzing a stream of media content to identify at least two objects;performing facial recognition on the at least two objects to identify a first individual represented in the stream of media content;based at least in part on user information and the facial recognition result, associating a first menu with the first individual represented in the stream of media content;performing facial recognition on the at least two objects to determine that a second individual represented in the stream of media content is not the first individual;based at least in part on the user information and the facial recognition result, associating a second menu with the second individual represented in the stream of media content; anddisplaying the stream of media content, including at least one of the first menu or the second menu.3. The method of claim 2 , further comprising associating the first menu with the first individual based at least on user interests.4. The method of claim 2 , wherein the first menu includes an image from a database.5. The method of claim 2 , wherein the first menu includes a label.6. The method of claim 2 , wherein the stream of media content is video content.7. The method of claim 2 , wherein performing facial recognition is performed locally on the same device that displays the ...

11-03-2021 publication date

Method Apparatus and System for Generating a Neural Network and Storage Medium Storing Instructions

Number: US20210073590A1
Assignee:

The present disclosure includes a method, apparatus and system for generating a neural network and a non-transitory computer readable storage medium storing instructions. The method comprises: recognizing at least an attribute of an object in a sample image according to a feature extracted from the sample image, using the neural network; determining a loss function value at least according to a margin value determined based on a semantic relationship between attributes, wherein the semantic relationship is obtained from a predefined table at least according to a real attribute and the recognized attribute of the object, wherein the predefined table is composed of the attributes and the semantic relationship between the attributes; updating a parameter in the neural network according to the determined loss function value. When using the neural network generated according the present disclosure, the accuracy of object attribute recognition can be improved. 1. A method for generating a neural network for recognizing an attribute of an object , comprising:recognizing at least an attribute of an object in a sample image according to a feature extracted from the sample image, using the neural network;determining a loss function value at least according to a margin value determined based on a semantic relationship between attributes, wherein the semantic relationship is obtained from a predefined table at least according to a real attribute and the recognized attribute of the object, and wherein the predefined table is composed of the attributes and the semantic relationship between the attributes; andupdating a parameter in the neural network according to the determined loss function value.2. The method according to claim 1 , wherein the semantic relationship between the attributes in the predefined table represents an order relationship between the attributes or a similarity relationship between the attributes.3. The method according to claim 2 , wherein the semantic ...

24-03-2022 publication date

PLAYING SOUND ADJUSTMENT METHOD AND SOUND PLAYING SYSTEM

Number: US20220091811A1
Assignee:

A method for adjusting playing sound in a sound playing system is disclosed. The sound playing system includes a camera module, a facial analysis module, a sound inputting module, a sound adjustment module, and a speaker. The method includes: obtaining an image of a face of each one of a plurality of people through the camera module; analyzing the image to estimate the age of each one of the plurality of people through the facial analysis module, and obtaining a target age according to the age of each person; obtaining a playing sound through the sound inputting module; adjusting the playing sound according to the target age through the sound adjustment module to obtain an output sound; and playing the output sound through the speaker. 1. A method for adjusting playing sound , applied to a sound playing system , wherein the sound playing system comprises a camera module , a facial analysis module , a sound inputting module , a sound adjustment module , and a speaker , the method comprising:obtaining an image of a face of each one of a plurality of people through the camera module;analyzing the image to estimate an age of each one of the plurality of people through the facial analysis module, andobtaining a target age according to the age of each person;obtaining a playing sound through the sound inputting module;adjusting the playing sound according to the target age through the sound adjustment module to obtain an output sound; andplaying the output sound through the speaker.2. The method for adjusting playing sound as claimed in claim 1 , wherein the sound adjustment module selects an equalizer target setting according to the target age to process the playing sound to obtain the output sound.3. The method for adjusting playing sound as claimed in claim 1 , wherein the sound adjustment module performs a frequency reduction adjustment by processing the playing sound according to the target age to obtain the output sound.4. The method for adjusting playing sound as ...
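A compact sketch of the age-driven adjustment. Taking the oldest detected person as the target age and the specific equalizer settings are assumptions made for the example; the method only requires that a target age be derived from the estimated ages and used to adjust the playing sound.

# Sketch: reduce the estimated ages to a target age and pick an equalizer
# setting for it (the frequency-reduction path would be analogous).

def target_age(ages):
    return max(ages)   # assumed rule: adapt to the oldest listener

def select_equalizer(t_age):
    if t_age >= 65:
        return {"bass": +2, "mid": +4, "treble": +6}   # assumed high-frequency boost
    if t_age >= 40:
        return {"bass": +1, "mid": +2, "treble": +3}
    return {"bass": 0, "mid": 0, "treble": 0}

ages = [12, 38, 71]
print(select_equalizer(target_age(ages)))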

16-03-2017 publication date

SYSTEMS AND METHODS FOR LARGE SCALE FACE IDENTIFICATION AND VERIFICATION

Number: US20170076143A1
Assignee:

Methods and systems for large-scale face recognition. The system includes an electronic processor to receive at least one image of a subject of interest and apply at least one subspace model as a splitting binary decision function on the at least one image of the subject of interest. The electronic processor is further configured to generate at least one binary code from the at least one splitting binary decision function. The electronic processor is further configured to apply a code aggregation model to combine the at least one binary codes generated by the at least one subspace model. The electronic processor is further configured to generate an aggregated binary code from the code aggregation model and use the aggregated binary code to provide a hashing scheme. 1. A method of large-scale face representation comprising:receiving, with an electronic processor, at least one image of a subject of interest;applying, with the electronic processor, at least one subspace model as a splitting binary decision function on the at least one image of the subject of interest;generating, with the electronic processor, at least one binary code from the at least one splitting binary decision function;applying, with the electronic processor, a code aggregation model to combine the at least one binary codes generated by the at least one subspace model;generating, with the electronic processor, an aggregated binary code from the code aggregation model; andusing the aggregated binary code to provide a hashing scheme.2. The method of claim 1 , further comprising executing the hashing scheme and performing a face recognition.3. The method of claim 1 , further comprising executing the hashing scheme and performing a face verification.4. The method of claim 1 , wherein applying claim 1 , with the electronic image processor claim 1 , at least one subspace model includes applying at least one subspace model using dictionary learning.5. The method of claim 1 , further comprising using the ...
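The hashing idea can be mimicked with random projections standing in for the learned subspace models: each model contributes one bit, and the bits are aggregated into the binary code used for hashing. This is only an illustrative stand-in, not the dictionary-learning approach the claims describe.

# Sketch: each subspace model acts as a splitting binary decision function
# over a face descriptor; the bits are aggregated into one binary code.
import numpy as np

rng = np.random.default_rng(1)
SUBSPACES = rng.normal(size=(16, 128))          # 16 assumed stand-in subspace models

def binary_code(descriptor: np.ndarray) -> str:
    bits = (SUBSPACES @ descriptor > 0).astype(int)   # one bit per subspace model
    return "".join(map(str, bits))                     # aggregated binary code

face_descriptor = rng.normal(size=128)
print(binary_code(face_descriptor))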

05-03-2020 publication date

METHODS AND APPARATUS FOR REDUCING FALSE POSITIVES IN FACIAL RECOGNITION

Number: US20200074151A1
Assignee: 15 Seconds of Fame, Inc.

An apparatus can include a memory, a communication interface, and a processor. The processor is configured to receive image data from an imaging device and first contextual data associated with the image data. The image data includes at least one image of a field of view. The processor is also configured to receive second contextual data associated with a user of a user device. The second contextual data is generated in response to the user device receiving a wireless signal sent by an antenna operably coupled to the imaging device. The processor is further configured to determine a potential presence of the user in the image data based on comparing the first contextual data with the second contextual data, analyze the image data to identify the user in the image data, and send the image data to the user. 1. An apparatus , comprising:a memory;a communication interface in communication with the memory and configured to communicate via a network; anda processor in communication with the memory and the communication interface, the processor configured to receive image data via the network and the communication interface from an imaging device and first contextual data associated with the image data, the image data including at least one image of a field of view,the processor configured to receive, via the network and the communication interface, second contextual data associated with a user of a user device, the second contextual data being generated in response to the user device receiving a wireless signal (1) sent, in response to the imaging device generating the image data, by an antenna operably coupled to the imaging device and (2) covering at least a portion of the field of view, determine a potential presence of the user in the image data based on comparing the first contextual data with the second contextual data,', 'analyze the image data based on at least one of two-dimensional facial recognition analytics, three-dimensional facial recognition analytics, or ...

14-03-2019 publication date

BLOOD PRESSURE MEASUREMENT STATE DETERMINATION METHOD, BLOOD PRESSURE MEASUREMENT STATE DETERMINING DEVICE, AND RECORDING MEDIUM

Number: US20190076064A1
Assignee:

A method for determining a blood pressure measurement state, using a device that is held in a hand of a user to whom a blood pressure meter is mounted. The method includes: obtaining (i) image data including a face of the user by a camera that the device has, (ii) first information indicating an inclination angle of the determining device as to the gravitational direction, by an angle sensor that the device has, and (iii) second information indicating a position of the face of the user in the image data, and the proportion of the size of the face of the user in the image data; determining whether or not the user is correctly using the blood pressure meter based on the image data, the first information and the second information; and providing a notification indicating the determination result. 1. A blood pressure measurement state determination method for determining a blood pressure measurement state , using a determining device that is held in a hand of a user to whom a blood pressure meter is mounted , the method comprising:obtaining image data including a face of the user by a camera that the determining device has;obtaining first information indicating an inclination angle of the determining device as to the gravitational direction, by an angle sensor that the determining device has;obtaining second information indicating a position of the face of the user in the image data, and the proportion of the size of the face of the user in the image data;determining whether or not the angle indicated in the first information is within a first range;determining whether or not the position of the face of the user indicated in the second information is within a second range;determining whether or not the proportion of the size of the face of the user indicated in the second information is within a third range;determining whether or not the user is correctly using the blood pressure meter;providing a first notification indicating that the blood pressure meter is being ...
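The three range checks are straightforward to sketch; the numeric ranges and names below are assumptions, since the method only states that the tilt angle, the face position and the face-size proportion must each lie within a respective range for the meter to be considered correctly used.

# Sketch of the three checks that gate the "being used correctly" decision.

ANGLE_RANGE = (60.0, 90.0)        # degrees relative to the gravitational direction
POSITION_RANGE = (0.3, 0.7)       # normalised horizontal face-centre position
PROPORTION_RANGE = (0.15, 0.5)    # face area / image area

def in_range(value, rng):
    return rng[0] <= value <= rng[1]

def measurement_state_ok(angle_deg, face_center_x, face_proportion):
    return (in_range(angle_deg, ANGLE_RANGE)
            and in_range(face_center_x, POSITION_RANGE)
            and in_range(face_proportion, PROPORTION_RANGE))

print(measurement_state_ok(75.0, 0.5, 0.3))   # True  -> correctly used
print(measurement_state_ok(30.0, 0.5, 0.3))   # False -> notify the user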

05-03-2020 publication date

Generating training data for natural language processing

Number: US20200074229A1
Author: Waseem Alshikh
Assignee: Qordoba Inc

A training data system enables the generation of training data based on video content received from one or more outside video sources. For example, the generated training data can include a transcript of a word or phrase alongside emotion, language style, and brand perception data associated with that word or phrase. To generate the training data from a video, the subtitles, video frame, metadata, and audio levels of the video can be analyzed by the training data system. The generated training data (potentially from a plurality of videos) can then be grouped into a set of training data and used to train machine learning modules for Natural Language Processing (NLP) techniques.

22-03-2018 publication date

Image classification and information retrieval over wireless digital networks and the internet

Number: US20180082110A1
Assignee: Avigilon Patent Holding 1 Corp

A method and system for matching an unknown facial image of an individual with an image of a celebrity using facial recognition techniques and human perception is disclosed herein. The invention provides an internet-hosted system to find, compare, contrast and identify similar characteristics among two or more individuals using a digital camera, cellular telephone camera, or wireless device, for the purpose of returning information regarding similar faces to the user. The system features classification of unknown facial images from a variety of internet accessible sources, including mobile phones, wireless camera-enabled devices, images obtained from digital cameras or scanners that are uploaded from PCs, third-party applications and databases. Once classified, the matching person's name, image and associated meta-data is sent back to the user. The method and system uses human perception techniques to weight the feature vectors.

14-03-2019 publication date

FACE AUTHENTICATION TO MITIGATE SPOOFING

Number: US20190080155A1
Assignee:

Embodiments provide, in at least one aspect, methods and systems that authenticate at least one face in at least one digital image using techniques to mitigate spoofing. For example, methods and systems trigger an image capture device to capture a sequence of images of the user performing the sequence of one or more position requests based on the pitch and yaw movements. The methods and systems generate a series of face signatures for the sequence of images of the user performing the sequence of one or more position requests. The methods and systems compare the generated series of face signatures to stored face signatures corresponding to the requested sequence of the one or more position requests.

1. A security platform comprising: an image capture device configured to detect a user within a field of view of a camera; and a processor configured to: provide an electronic prompt to request the user to perform a sequence of random poses, each pose defined by a change in pitch, yaw, or roll; trigger the image capture device to capture frames of the user performing the sequence of poses; determine if any frame is missing a face of the user; verify that the user has performed the sequence of poses by comparing landmark values within a threshold; for each frame, generate a face signature of the user performing a pose of the sequence of poses; compare the generated face signatures to stored face signatures corresponding to the poses; verify the identity of the user as being an authorized user; and store the captured frames and generated face signatures in a data storage as an audit trail in an encrypted format. 2. The security platform of comprising: a display screen configured to display the frames of the user to the user; provide a calibration prompt directing the user to come to a resting place in front of the camera with the user's eyes on a horizontal plane; measure at least one key landmark on the face and an overall bounding box of the face and ...

24-03-2016 publication date

METHOD AND SYSTEM FOR CONTROLLING USAGE RIGHTS AND USER MODES BASED ON FACE RECOGNITION

Number: US20160085950A1
Author: CHEN Xiling
Assignee:

A method and system for controlling usage rights and user modes based on face recognition are provided. The method herein includes the steps: recognizing face data of a current user of the mobile terminal by a face-recognition technology; and providing a usage right and/or a user mode suited to an identity of the current user for the current user according to the recognized face data. The chance of misuse is decreased by the present invention. 1. A method for controlling usage rights and user modes of a mobile terminal based on face recognition , comprising:recognizing face data of a current user of the mobile terminal by a face-recognition technology through a camera of the mobile terminal; andproviding a usage right and/or a user mode suited to an identity of the current user for the current user according to the recognized face data;wherein the step of providing the usage right suited to the identity of the current user for the current user according to the recognized face data comprises:acquiring a usage right grade of predetermined functions when the user uses the predetermined functions of the mobile terminal; anddirectly starting the predetermined functions corresponding to the usage right grade of a stranger if the usage right grade of the predetermined functions belongs to the usage right grade of the stranger;wherein the step providing the user mode suited to the identity of the current user for the current user according to the recognized face data comprises:analyzing a face information which is photographed, determining age and sex of the user; andautomatically setting the user mode of the mobile terminal according to the age and sex of the user.2. The method for controlling usage rights and user modes of a mobile terminal based on face recognition according to claim 1 , wherein the step of providing the usage right suited to the identity of the current user for the current user according to the recognized face data further comprises:starting the camera ...

23-03-2017 publication date

TEMPLATE SELECTION SYSTEM, TEMPLATE SELECTION METHOD AND RECORDING MEDIUM STORING TEMPLATE SELECTION PROGRAM

Number: US20170084066A1
Author: FURUYA Hiroyuki
Assignee: FUJIFILM Corporation

Provided are a template selection system, as well as a template selection method, and recording medium storing a template selection program, for selecting a template that will not appear incompatible with a target image when the target image is combined with the template. Specifically, a target image is selected and target image data representing the selected target image is transmitted to an image compositing server. An impression evaluation value of the target image is calculated and templates for which a discrepancy with respect to the calculated impression evaluation value is less than a threshold value are selected. The target image is combined with the templates and image data representing the resulting composite images are transmitted to a smartphone. A desired composite image is selected by the user from among the composite images displayed on the smartphone. 1. A template selection system comprising:a template impression evaluation value storage unit for storing template impression evaluation values with regard to multiple templates;a target image impression evaluation value calculation unit for calculating an impression evaluation value of a target image to be combined with a template; anda template selection unit for selecting templates in order of increasing discrepancy between multiple template impression evaluation values stored in said template impression evaluation value storage unit and a target image impression evaluation value calculated by said target image impression evaluation value calculation unit.2. The system according to claim 1 , wherein said template selection unit selects a template having an impression evaluation value claim 1 , from among the multiple template impression evaluation values stored in said template impression evaluation value storage unit claim 1 , for which the discrepancy with respect to the impression evaluation value of the target image calculated by said target image impression evaluation value calculation unit is ...
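The selection rule, ranking templates by the discrepancy between their stored impression evaluation values and the target image's value and keeping those under a threshold, can be sketched as follows; scalar impression values and the threshold are assumptions made for the example.

# Sketch: select templates in order of increasing discrepancy from the target
# image's impression evaluation value, keeping only those below a threshold.

def select_templates(target_value, template_values, threshold=0.2):
    ranked = sorted(template_values.items(),
                    key=lambda kv: abs(kv[1] - target_value))
    return [name for name, value in ranked
            if abs(value - target_value) < threshold]

templates = {"warm": 0.62, "cool": 0.10, "vivid": 0.55, "retro": 0.45}
print(select_templates(0.58, templates))   # ['vivid', 'warm', 'retro']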

Publication date: 23-03-2017

METHOD AND SYSTEM FOR PRIVACY PRESERVING LAVATORY MONITORING

Number: US20170085839A1
Author: Valdhorn Dan
Assignee:

Privacy preserving methods and apparatuses for capturing and processing optical information are provided. Optical information may be captured by a privacy preserving optical sensor. The optical information may be processed, analyzed, and monitored. Based on the optical information, information and indications may be provided. Such methods and apparatuses may be used in environments where privacy may be a concern, including in a lavatory environment. 1. A system for monitoring lavatories , comprising:at least one optical sensor configured to capture optical information from an environment; and monitor the optical information to determine that a number of people present in the environment equals or exceeds a maximum threshold; and', 'provide an indication to a user based on the determination that the number of people present in the environment equals or exceeds the maximum threshold., 'at least one processing module configured to2. The system of claim 1 , wherein the maximum threshold is one person.3. The system of claim 1 , wherein the maximum threshold is two people.4. The system of claim 1 , wherein the maximum threshold is at least three people.5. The system of claim 1 , wherein the at least one optical sensor is one optical sensor.6. The system of claim 1 , wherein the at least one optical sensor is two optical sensors.7. The system of claim 1 , wherein the at least one optical sensor is at least three optical sensors.8. The system of claim 1 , designed to monitor lavatories in an airplane claim 1 , and wherein the user is a member of an aircrew.9. The system of claim 1 , designed to monitor lavatories in a bus claim 1 , and wherein the user is a bus driver.10. The system of claim 1 , wherein the at least one processing module is further configured to ignore people under a certain age in the determination that the number of people present in the environment equals or exceeds the maximum threshold.11. The system of claim 1 , wherein the at least one processing ...

Publication date: 25-03-2021

Age Recognition Method, Computer Storage Medium and Electronic Device

Number: US20210089753A1
Author: Zhang Huanhuan
Assignee:

The present invention provides an age recognition method, a computer program and an electronic device. The method comprises: acquiring a face image to be recognized; extracting face characteristic points in the face image to be recognized, and characteristic point coordinates of the face characteristic points in the face image to be recognized; extracting face global features from the face image to be recognized according to the characteristic point coordinates; extracting face local features from the face image to be recognized according to the face characteristic points; and determining an age recognition result corresponding to the face image to be recognized according to the face global features, the face local features and an age recognition model obtained by pre-training. 1. An age recognition method , comprising:acquiring a face image to be recognized;extracting face characteristic points in the face image to be recognized and characteristic point coordinates of the face characteristic points in the face image to be recognized;extracting face global features from the face image to be recognized according to the characteristic point coordinates;extracting face local features from the face image to be recognized according to the face characteristic points; anddetermining an age recognition result corresponding to the face image to be recognized according to the face global features, the face local features and an age recognition model obtained by pre-training.2. The method according to claim 1 , wherein after extracting face characteristic points in the face image to be recognized and characteristic point coordinates of the face characteristic points in the face image to be recognized claim 1 , the method further comprises:denoising the face image to be recognized to obtain a denoised face image;performing a geometric correction process on a face region in the denoised face image according to the characteristic point coordinates, to generate a corrected face ...
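A structural sketch of this pipeline is shown below, with every stage stubbed out; the landmark detector, the two feature extractors and the pre-trained age model are placeholders whose internals the abstract does not specify.

```python
import numpy as np

def detect_landmarks(face_img) -> np.ndarray:
    """Placeholder: return N x 2 characteristic-point coordinates."""
    raise NotImplementedError

def global_features(face_img, landmarks: np.ndarray) -> np.ndarray:
    """Placeholder: whole-face features derived from the landmark coordinates."""
    raise NotImplementedError

def local_features(face_img, landmarks: np.ndarray) -> np.ndarray:
    """Placeholder: patch features around the characteristic points (eyes, mouth, ...)."""
    raise NotImplementedError

def recognize_age(face_img, model) -> int:
    """Combine global and local features and let a pre-trained model
    map them to an age (or age-group) label."""
    pts = detect_landmarks(face_img)
    feats = np.concatenate([global_features(face_img, pts),
                            local_features(face_img, pts)])
    return int(model.predict(feats[np.newaxis, :])[0])
```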

Publication date: 29-03-2018

AUTOMATED RELATIONSHIP CATEGORIZER AND VISUALIZER

Number: US20180089502A1
Assignee:

Aspects differentially drive the graphic display of links overlaid between people in a group photograph as a function of associated relationship type. Individuals are distinguished within the photograph and relationships are identified among the individuals that differ from one another with respect to type of relationship, by extracting relationship data via natural language processing relative to social network data of one or more of the identified individuals. Thus, a graphic display device is driven to display the identified relationships via each of different overlay elements that are depicted directly on the photograph, link respective ones of the identified individuals, and use different graphic elements to convey differences in respective types of the relationships that are determined among the identified individuals. 1. A computer-implemented method for differentially driving a graphic display of links overlaid between people in a group photograph as a function of associated relationship type , the method comprising executing on a computer processor the steps of:identifying each of a plurality of different individuals that are each visible within a photograph and distinguished from other ones of the visible individuals;identifying relationships among the identified individuals by extracting relationship data via natural language processing relative to social network data of at least one of the identified individuals; andin response to determining that a plurality of the identified relationships differ from one another with respect to a type of relationship, driving a graphic display device to display the identified plurality of relationships via each of a plurality of different overlay elements that link respective ones of the identified individuals within the photograph and each comprise labels indicating respective ones of the types of relationship identified for the linked individuals, wherein each of the different overlay elements use different graphic ...

Publication date: 30-03-2017

ELECTRONIC DEVICE FOR PROCESSING IMAGE AND CONTROL METHOD THEREOF

Number: US20170091532A1
Assignee:

Disclosed is an electronic device that acquires an image including a first object, identifies a first part of the first object in the image, identifies a second part related to the first part based on a result of the identification of the first part, and performs an operation based on a result of the identification of the second part when the instructions are executed. 1. An electronic device comprising:a processor; anda memory that stores instructions to instruct the processor to acquire an image including a first object, to identify a first part of the first object in the image, to identify a second part of the first object, related to the first part, based on a result of the identification of the first part, and to perform an operation based on a result of the identification of the second part when the instructions are executed.2. The electronic device of claim 1 , wherein the memory further stores instructions to instruct the processor to determine an area to be identified corresponding to the first part and to identify the second part by identifying an object of the area to be identified when the instructions are executed.3. The electronic device of claim 2 , wherein the memory further stores instructions to instruct the processor to compare the object of the area to be identified with a pre-stored database and to identify the second part based on a result of comparison when the instructions are executed.4. The electronic device of claim 1 , wherein the memory further stores instructions to instruct the processor to perform an authentication by using the identification result of the first part and to perform an operation based on the identification result of the second part and the authentication when the instructions are executed.5. The electronic device of claim 1 , wherein the memory further stores instructions to instruct the processor to perform an authentication by using the identification result of the first part and the identification result of the ...

Publication date: 30-03-2017

SYSTEM AND METHOD FOR INTELLIGENTLY INTERACTING WITH USERS BY IDENTIFYING THEIR GENDER AND AGE DETAILS

Number: US20170092150A1
Assignee:

The embodiments herein provide a system and method for intelligently delivering the appropriate content to the right people. It is carried out by automatically identifying the gender and age details from the user images. The system includes a display device and an image processing unit. The image processing unit detects the face area from the captured image and extracts the various face features for the analysis. The image processing unit classifies the user into male or female and estimates the age group based on the wrinkles and other age marks detected. Based on the information of gender and age the system will interact with the user and deliver appropriate information in an efficient way. 1. An efficient user interaction system comprising:a) a digital screen;b) an image capturing device attached to the digital screen to capture an image of a user;c) an image processing unit attached to the digital screen for analyzing a face image captured, detecting a gender and estimating an age of the user; andd) wherein the digital screen delivers correct and appropriate contents to the user based on the gender detected and the age estimated from the image processing unit.2. The system of claim 1 , wherein the digital screen includes a display unit for displaying the contents to the user after detecting the gender and estimating the age of the user.3. The system of claim 1 , wherein the image processing unit includes:a) face detection module for detecting a face area from an user image;b) a facial region extraction module for analyzing every part of a face of the user;c) a gender identification module for identifying the gender based on features extracted;d) an age estimation module for calculating the age of the user; ande) an intelligent interactive information delivery module based on the gender and the age of the user.4. The system of claim 1 , wherein the digital screen delivers appropriate information to the user.5. The system of claim 1 , where in the image capturing ...

Publication date: 19-03-2020

Anti-Spoofing

Number: US20200089979A1
Assignee:

Anti-spoofing technology is provided for verifying a user of a fixed computer terminal. Image data of at least one verification image is received, as captured by an image capture device of the fixed computer terminal at a time corresponding to a request for access to a restricted function of the fixed computer terminal. User verification is applied to determine whether to grant access to the restricted function of the fixed computer terminal. A differential feature descriptor is determined, which encodes feature differences between the verification image data and image data of at least one unobstructed background image as captured by the image capture device. An anti-spoofing classifier processes the differential feature descriptor to classify it in relation to real and spoofing classes. Access to the restricted function of the fixed computer terminal is refused or granted based on the classification of the differential feature descriptor by the anti-spoofing classifier. 1. An anti-spoofing method for verifying a user of a fixed computer terminal , the anti-spoofing method comprising implementing , in a user verification processing system , the following:receiving image data of at least one verification image as captured by an image capture device of the fixed computer terminal at a time corresponding to a request for access to a restricted function of the fixed computer terminal; andin response to the access request, applying user verification to determine whether to grant access to the restricted function of the fixed computer terminal, by: determining, by a differential feature extractor, a differential feature descriptor encoding feature differences between the verification image data and image data of at least one unobstructed background image as captured by the image capture device,processing, by an anti-spoofing classifier, the differential feature descriptor to classify it in relation to real and spoofing classes corresponding, respectively, to verification ...
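One way to read the claimed flow is sketched below: a differential descriptor is formed from features of the verification frame and of a stored unobstructed background frame, and a binary classifier decides between the real and spoofing classes. The feature extractor and the classifier object are stand-ins, not the patented implementation.

```python
import numpy as np

def extract_features(image: np.ndarray) -> np.ndarray:
    """Placeholder feature extractor (e.g. embeddings from some vision model)."""
    raise NotImplementedError

def differential_descriptor(verification_img, background_img) -> np.ndarray:
    """Encode feature differences between the verification image and the
    stored unobstructed background image of the same fixed terminal."""
    return extract_features(verification_img) - extract_features(background_img)

def is_real_user(verification_img, background_img, classifier) -> bool:
    """classifier.predict(...) is assumed to return 1 for the 'real' class."""
    d = differential_descriptor(verification_img, background_img)
    return classifier.predict(d.reshape(1, -1))[0] == 1
```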

Publication date: 06-04-2017

Automated relationship categorizer and visualizer

Number: US20170098120A1
Assignee: International Business Machines Corp

Aspects differentially drive the graphic display of links overlaid between people in a group photograph as a function of associated relationship type. Individuals are distinguished within the photograph and relationships are identified among the individuals that differ from one another with respect to type of relationship, by extracting relationship data via natural language processing relative to social network data of one or more of the identified individuals. The social network data includes business association data, family data and social network contact data. Thus, a graphic display device is driven to display the identified relationships via each of different overlay elements that are depicted directly on the photograph, link respective ones of the identified individuals, and use different graphic elements to convey differences in respective types of the relationships that are determined among the identified individuals.

Publication date: 28-03-2019

RECEPTION APPARATUS, RECEPTION SYSTEM, RECEPTION METHOD, AND STORAGE MEDIUM

Number: US20190095750A1
Assignee: NEC Corporation

Provided are a reception apparatus, a reception system, a reception method, and a storage medium that can naturally provide a personal conversation in accordance with a user without requiring the user to register the personal information thereof in advance. A disclosure includes a face information acquisition unit that acquires face information of a user; a conversation processing unit that acquires reception information including a content of conversation with the user; a face matching unit that matches, against the face information of one user, the face information registered in a user information database in which user information including the face information of the user and the reception information is registered; and a user information management unit that, when a result of matching of the face information performed by the face matching unit is unmatched, registers the user information of the one user to the user information database. 116-. (canceled)17. A reception apparatus comprising:a face information acquisition unit that acquires face information of a user;a processing unit that acquires reception information of the user;a face matching unit that matches, against the face information of one user, the face information registered in a user information database in which user information including the face information of the user and the reception information is registered; anda user information management unit that, when a result of matching of the face information performed by the face matching unit is unmatched, registers the user information of the one user to the user information database,wherein, when a result of matching of the face information performed by the face matching unit is matched, the processing unit outputs audio data based on the user information of a user determined as the same person as the one user.18. A reception apparatus comprising:a face information acquisition unit that acquires face information of a user;a processing unit that ...
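A compact sketch of the matching logic follows, using a toy in-memory store and a hypothetical embedding-distance matcher in place of the user information database and face matching unit.

```python
import numpy as np

class UserDatabase:
    """Toy in-memory stand-in for the user information database."""

    def __init__(self, match_threshold: float = 0.6):
        self.records = []                  # each record: face embedding + reception history
        self.match_threshold = match_threshold

    def match(self, embedding: np.ndarray):
        """Return the closest registered user, or None if no stored embedding
        is within the matching threshold."""
        best, best_dist = None, float("inf")
        for rec in self.records:
            dist = float(np.linalg.norm(rec["embedding"] - embedding))
            if dist < best_dist:
                best, best_dist = rec, dist
        return best if best_dist < self.match_threshold else None

    def handle_visit(self, embedding: np.ndarray, reception_info: dict):
        """If the face matches a known user, reuse that record (e.g. to drive a
        personalised spoken greeting); otherwise register a new user."""
        rec = self.match(embedding)
        if rec is None:
            rec = {"embedding": embedding, "history": []}
            self.records.append(rec)
        rec["history"].append(reception_info)
        return rec
```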

Publication date: 28-03-2019

METHOD OF USING APPARATUS, PRODUCT, AND SYSTEM FOR A NO TOUCH POINT-OF-SALE SELF-CHECKOUT

Number: US20190096198A1
Assignee:

A method, computer program product, and system to perform a sale transaction are provided. The method includes identifying each item of a plurality of items, based on at least one image of the plurality of items, determining a cost for each item, optionally identifying a person based on an image of the person, adding each of the items and each of the costs to a sale transaction, and charging the person for the sale transaction. 1. A method of completing a sale transaction at a checkout area of an environment , the method comprising:acquiring images using one or more cameras in the environment outside the checkout area, wherein one or more items are represented in the images;identifying, by one or more processors communicatively coupled with the one or more cameras, the one or more items as they are being transported by a customer through the environment;maintaining, by the one or more processors, a transaction record associated with the customer, wherein maintaining the transaction record comprises updating a list of selected items using the identified one or more items;authenticating an identity of the customer at the checkout area; andresponsive to authenticating the customer at the checkout area, charging a cost of the selected items to a payment method having a predefined association with the customer.2. The method of claim 1 , wherein authenticating the customer at the checkout area comprises at least one of the following:identifying the customer using one or more other cameras in the environment at the checkout area;identifying the customer using a voice sample acquired by a microphone disposed at the checkout area;detecting a mobile computing device of the customer using a wireless network receiver disposed at the checkout area; anddetecting a predefined pass-code associated with the customer by the microphone.3. The method of claim 1 , wherein authenticating the customer at the checkout area comprises multiple-factor authentication.4. The method of claim 1 , ...

Publication date: 14-04-2016

APPARATUS AND METHOD FOR GENERATING FACIAL COMPOSITE IMAGE, RECORDING MEDIUM FOR PERFORMING THE METHOD

Number: US20160104309A1

Disclosed is an apparatus for generating a facial composite image, which includes: a database in which face image and partial feature image information is stored; a wireframe unit configured to apply a face wireframe to a basic face sketch image, the face wireframe applying an active weight to each intersecting point; a face composing unit configured to form a two-dimensional face model to which the wireframe is applied, by composing images selected from the database; and a model transforming unit configured to transform the two-dimensional face model according to a user input on the basis of the two-dimensional face model to which the wireframe is applied. Accordingly, a facial composite image with improved accuracy may be generated efficiently. 1. An apparatus for generating a facial composite image , comprising:a database in which face image and partial feature image information is stored;a wireframe unit configured to apply a face wireframe to a basic face sketch image, the face wireframe applying an active weight to each intersecting point;a face composing unit configured to form a two-dimensional face model to which the wireframe is applied, by composing images selected from the database; anda model transforming unit configured to transform the two-dimensional face model according to a user input on the basis of the two-dimensional face model to which the wireframe is applied.2. The apparatus for generating a facial composite image according to claim 1 ,wherein the model transforming unit includes a facial impression transforming unit configured to automatically transform an appearance of the two-dimensional face model according to a user selection.3. The apparatus for generating a facial composite image according to claim 2 , wherein the facial impression transforming unit includes:an appearance estimating unit configured to generate an appearance estimation function by using a front face image, appearance scores collected through user evaluation and feature ...

Publication date: 13-04-2017

SYSTEMS AND METHODS FOR DETECTING, IDENTIFYING AND TRACKING OBJECTS AND EVENTS OVER TIME

Number: US20170103256A1
Assignee:

A system for detecting, identifying and tracking objects of interest over time is configured to derive object identification data from images captured from one or more image capture devices. In some embodiments of the system, the one or more image capture devices perform a first object detection and identification analysis on images captured by the one or more image capture devices. The system may then transmit the captured images to a server that performs a second object detection and identification analysis on the captures images. In various embodiments, the second analysis is more detailed than the first analysis. The system may also be configured to compile data from the one or more image capture devices and server into a timeline of object of interest detection and identification data over time. 1. A system for detecting and tracking one or more events in an area of interest , the system comprising an image capture device comprising one or more cameras , at least one processor operatively coupled to the one or more cameras and memory operatively coupled to the at least one processor , wherein the at least one processor is configured to:a. capture a first plurality of images at the area of interest between a first start time and a first stop time;b. after capturing each one of the first plurality of images, analyze each one of the first plurality of images using a first detection method to detect a presence of one or more objects;c. after capturing each one of the first plurality of images, analyze each one of the first plurality of images using a second detection method to detect a presence of one or more faces near the one or more objects;d. in response to detecting the presence of the one or more objects, analyze each one of the first plurality of images using a third detection method to recognize each one of the one or more objects;e. in response to detecting the presence of the one or more faces near the one or more objects, analyze each one of the first ...

Publication date: 08-04-2021

MULTI-MODAL DETECTION ENGINE OF SENTIMENT AND DEMOGRAPHIC CHARACTERISTICS FOR SOCIAL MEDIA VIDEOS

Number: US20210103762A1

A system and method for determining a sentiment, a gender and an age group of a subject in a video while the video is being played back. The video is separated into visual data and audio data, the video data is passed to a video processing pipeline and the audio data is passed to both an acoustic processing pipeline and a textual processing pipeline. The system and method performs, in parallel, a video feature extraction process in the video processing pipeline, an acoustic feature extraction process in the acoustic processing pipeline, and a textual feature extraction process in the textual processing pipeline. The system and method combines a resulting visual feature vector, acoustic feature vector, and a textual feature vector into a single feature vector, and determines the sentiment, the gender and the age group of the subject by applying the single feature vector to a machine learning model. 1. A system determining a sentiment , a gender and an age group of a subject in a video , the system comprising:a video playback device;a display device; anda computer system having circuitry,the circuitry configured towhile the video is being played back by the video playback device on the display device,separate the video into visual data and audio data,pass the video data to a video processing pipeline and pass the audio data to both an acoustic processing pipeline and a textual processing pipeline,perform, in parallel, a video feature extraction process in the video processing pipeline to obtain a visual feature vector, an acoustic feature extraction process in the acoustic processing pipeline to obtain an acoustic feature vector, and a textual feature extraction process in the textual processing pipeline to obtain a textual feature vector,combine the visual feature vector, the acoustic feature vector, and the textual feature vector into a single feature vector, anddetermine the sentiment, the gender and the age group of the subject by applying the single feature ...
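The fusion step amounts to concatenating the three per-modality vectors before a single model; a sketch with stubbed extractors is given below (all function names are assumptions, and the parallel-pipeline execution is omitted).

```python
import numpy as np

def visual_vector(frames) -> np.ndarray:      # placeholder: pooled visual features
    raise NotImplementedError

def acoustic_vector(audio) -> np.ndarray:     # placeholder: pooled acoustic features
    raise NotImplementedError

def textual_vector(audio) -> np.ndarray:      # placeholder: features of the transcript
    raise NotImplementedError

def predict_attributes(frames, audio, model):
    """Fuse the three modality vectors into a single feature vector and let a
    machine learning model predict (sentiment, gender, age group)."""
    fused = np.concatenate([visual_vector(frames),
                            acoustic_vector(audio),
                            textual_vector(audio)])
    return model.predict(fused[np.newaxis, :])[0]
```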

Publication date: 08-04-2021

Methods and systems for predicting wait time of queues at service area

Number: US20210103941A1
Assignee: Tata Consultancy Services Ltd

This disclosure relates generally to methods and systems for predicting the wait time of queues at a service area such as a market place, including retail stores and supermarkets. The present methods and systems accurately predict the wait times of a plurality of queues by utilizing various visual cues of the customers, along with the number of service items and the efficiency of the service operator. The visual cues include a demographic factor, such as the age, gender and ethnicity of the customer, and a senti-motional factor, such as the customer's sentiments (positive or negative attitude) and emotions (happy, sad or irritated state). Based on the predicted wait times of the queues, customers may choose the queue with the least predicted wait time and make an informed decision in the hope of a faster check-out. Hence, customer experience and customer satisfaction may be improved.
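As a loose illustration only: a per-customer service time scaled by demographic and senti-motional factors and by operator efficiency, summed over the queue. Every coefficient and field name below is invented; the actual prediction model is not described in this abstract.

```python
def predict_wait_time(queue, base_time_per_item=4.0, operator_efficiency=1.0):
    """queue: list of dicts with 'items', 'demographic_factor', 'sentiment_factor'.
    Returns a predicted wait time in seconds (illustrative model only)."""
    total = 0.0
    for customer in queue:
        service = customer["items"] * base_time_per_item
        service *= customer.get("demographic_factor", 1.0)   # e.g. age/gender effect on speed
        service *= customer.get("sentiment_factor", 1.0)      # e.g. irritation slows checkout
        total += service / max(operator_efficiency, 1e-6)
    return total

queue = [{"items": 12, "demographic_factor": 1.1, "sentiment_factor": 1.0},
         {"items": 3,  "demographic_factor": 0.9, "sentiment_factor": 1.2}]
print(round(predict_wait_time(queue, operator_efficiency=1.2), 1))  # ~54.8 seconds
```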

Publication date: 26-03-2020

DYNAMIC PROVISIONING OF DATA EXCHANGES BASED ON DETECTED RELATIONSHIPS WITHIN PROCESSED IMAGE DATA

Number: US20200098048A1
Assignee:

The disclosed exemplary embodiments include computer-implemented systems, apparatuses, devices, and processes that, among other things, dynamically provision exchanges of data based on detected relationships within processed image data. For example, a network-connected apparatus may receive, from a device, image data that identifies a plurality of individuals associated with an exchange of data. Based on an analysis of the image data, the apparatus may determine a value of a first characteristic associated with each of the individuals and generate relationship data characterizing a relationship between the individuals. The apparatus may also determine candidate values of parameters that characterize the data exchange based on portions of the first characteristic values and the relationship data, transmit the candidate parameter values to the device. An application program executed by the device may cause the device to present at least a portion of the candidate parameter values within a digital interface. 1. An apparatus , comprising:a communications unit;a storage unit storing instructions; and receive a first signal from a device via the communications unit, the first signal comprising image data that identifies a plurality of individuals, the individuals being associated with an exchange of data;', 'based on an analysis of the image data, determine a value of a first characteristic associated with each of the individuals and generate relationship data characterizing a relationship between the individuals;', 'determine candidate values of parameters that characterize the data exchange based on portions of the first characteristic values and the relationship data; and', 'generate and transmit, to the device via the communications unit, a second signal that includes the candidate parameter values, the second signal comprising information that causes an application program executed by the device to present at least a portion of the candidate parameter values within a ...

Publication date: 26-03-2020

SYSTEM AND METHOD FOR COLLECTING AND USING FILTERED FACIAL BIOMETRIC DATA

Number: US20200098223A1
Assignee:

Disclosed are a system and method for the application of filtering in the collection and application of facial biometrics to a number of problems common to the vending and gaming environments. A succession of video frames of a scene are analyzed to determine if one or more faces are present. If so, the face most relevant to the application based in its position in the scene is selected. The selected face in each of the succession of video frames is then quality rated according to certain criteria to select the best frame for computing a biometric value of the selected face. One the biometric value has been computed, the value may be compared against a database to determine if a biometric value for a matching face was previously stored. If so, the quality rating of the new image and a quality rating previously stored with the stored biometric are compared. If the new image has a higher quality rating than the stored quality rating, the new biometric replaces the stored biometric and its quality rating replaces the associated quality rating in storage. The biometric may then be applied to solving problems of patron analytics, patron loyalty, security and the like. 1. A method of operating a vending device comprising a processor , a memory and a camera for providing a video stream of images to the processor , the method comprising:providing at least one video stream of images from the camera to a facial recognition unit;detecting with the facial recognition unit at least one face from a plurality of faces in the video stream of images;selecting the one face from the plurality of faces; determining with the processor a distance of the selected face to the camera and rejecting the selected face if the distance exceeds a predetermined distance;', 'receiving from the camera a succession of images of the selected face from the video stream and determining with the processor, by analysis of the succession of images, whether the selected face is present for more than a ...
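The keep-the-best-quality rule can be captured in a few lines; the biometric values and quality ratings below are opaque placeholders.

```python
def update_stored_biometric(store, person_id, new_biometric, new_quality):
    """store maps person_id -> (biometric, quality_rating).
    Replace the stored biometric only if the new frame was rated higher.
    Returns True if the stored record was replaced or newly created."""
    old = store.get(person_id)
    if old is None or new_quality > old[1]:
        store[person_id] = (new_biometric, new_quality)
        return True
    return False

store = {}
update_stored_biometric(store, "patron_42", b"\x01\x02", 0.55)          # created
print(update_stored_biometric(store, "patron_42", b"\x03\x04", 0.40))   # False: lower quality
```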

Publication date: 08-04-2021

ANALYZING FACIAL RECOGNITION DATA AND SOCIAL NETWORK DATA FOR USER AUTHENTICATION

Number: US20210105272A1
Assignee: SOCURE INC.

Tools, strategies, and techniques are provided for evaluating the identities of different entities to protect business enterprises, consumers, and other entities from fraud by combining biometric activity data with facial recognition data for end users. Risks associated with various entities can be analyzed and assessed based on a combination of user liveliness check data, facial image data, social network data, and/or professional network data, among other data sources. In various embodiments, the risk assessment may include calculating an authorization score or authenticity score based on different portions or combinations of the collected and processed data. 1. A computer-implemented method , comprising:calculating, via a processor, a liveliness score based on a synchronization between an anatomical change of a user and a feature output via the software application;calculating, via the software application, a first score for the user based on the calculated liveliness score;determining that the first score is insufficient for authentication of the user; andcombining the first score with a second score to form a combined score, in response to determining that the first score is insufficient for authentication of the user, the second score being calculated based on social network connections of the user.2. The method of claim 1 , wherein the anatomical change includes movement of at least one lip of the user claim 1 , and the feature displayed via the facial recognition software is text displayed on a screen of a compute device.3. The method of claim 1 , wherein the anatomical change is associated with a recital of a phrase by the user.4. The method of claim 1 , wherein the anatomical change includes at least one of: an eye blink claim 1 , an eye movement claim 1 , a head movement claim 1 , a lip movement claim 1 , a hand movement claim 1 , or an arm movement.5. The method of claim 1 , wherein the anatomical change occurs during a predefined time period of a video ...

Publication date: 04-04-2019

FACIAL RECOGNITION ENCODE ANALYSIS

Number: US20190102607A1
Author: Aas Cecilia J.
Assignee:

A method for facial recognition encode analysis comprises providing a training set of Gabor encoded arrays of face images from a database; and, for each encode array in the training set, evaluating the Gabor data to determine the accuracy of the fiducial points on which the encode array is based. The method also comprises training an outlier detection algorithm based on the evaluation of the encode arrays to obtain a decision function for a strength of accuracy of fiducial points in the encode arrays; and outputting the decision function for application to an encode array to be tested. 1. A system for facial recognition encode analysis , the system comprising:an input/output interface configured to receive an encode array of a face image; anda processor communicatively coupled to the input/output interface, wherein the processor is configured to:apply, to the received encode array, a decision function resulting from training an outlier detection algorithm based on a determined accuracy of fiducial points on which a training set of Gabor encoded arrays of face images is based; anddetermine from a result of applying the decision function to the received encode array whether the received encode array is based on a face image with correctly identified fiducial point locations.2. The system of claim 1 , wherein the processor is configured to train the outlier detection algorithm to obtain the decision function.3. The system of claim 1 , wherein the determined accuracy of the fiducial points is based on an evaluation of each encode array in the training set of Gabor encoded arrays by determining an average across all sampling locations of a face image of a Gabor response for multiple Gabor wavelet orientations relative to an eye-to-eye horizontal line of the face image.4. The system of claim 1 , wherein the determined accuracy of the fiducial points is based on an evaluation of each encode array in the training set of Gabor encoded arrays by determining a set of ...
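A generic sketch of the training step follows, using scikit-learn's one-class SVM as a swapped-in outlier detector (the patent does not name a specific algorithm here) and a placeholder for the Gabor-response evaluation of each encode array.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def evaluate_fiducial_accuracy(encode_array):
    """Placeholder: summarise the Gabor responses of one encode array into a
    fixed-length vector describing how plausible its fiducial point locations are."""
    raise NotImplementedError

def train_decision_function(training_encodes):
    """Fit a one-class model on evaluations of the training encode arrays and
    return a decision function (higher score = more typical fiducial placement)."""
    X = np.vstack([evaluate_fiducial_accuracy(e) for e in training_encodes])
    model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X)
    return lambda encode: float(
        model.decision_function(evaluate_fiducial_accuracy(encode).reshape(1, -1))[0])
```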

Publication date: 26-03-2020

METHOD AND SYSTEM FOR REMOTELY CONTROLLING SMART TELEVISION BASED ON MOBILE TERMINAL

Number: US20200099980A1
Author: Zhou Xiaolu, Zhou Xiuling
Assignee:

A method and a system for remotely controlling a smart television based on a mobile terminal and the mobile terminal are disclosed. The method includes: detecting a state of the smart television every predetermined time interval by the mobile terminal, and receiving a command input from a user to acquire a type of an audience watching the smart television; determining whether a current playing time, a playing content and a playing duration conform to conditions preset by a user, when the audience watching the smart television is a minor; and detecting whether the smart television is in a turned-on state when the audience watching the smart television is an older person. 1. A mobile terminal , comprising:at least one processor; andat least one memory,wherein the at least one memory is configured to instructions and data, and the at least one processor is configured to execute the instructions to perform the steps of:establishing a link between the mobile terminal and a smart television, detecting a state of the smart television every predetermined time interval by the mobile terminal, receiving, by the mobile terminal an image located in front of the smart television and transmitted by the smart television, extracting facial features in the image according to an image processing algorithm, and determining that an audience watching the smart television is a minor or an older person according to the facial features in the image;acquiring, by the mobile terminal, a current playing time, a playing content and a playing duration of the smart television and determining whether the current playing time, the playing content and the playing duration conform to conditions preset by a user, when the audience watching the smart television is the minor; if yes, the smart television continues to play; if no, the smart television is controlled to display a hint of switching a channel or to increase or decrease watching time according to answer results of the audience after entering ...

Publication date: 21-04-2016

SYSTEM, APPARATUS AND METHOD FOR DYNAMICALLY ADJUSTING A VIDEO PRESENTATION BASED UPON AGE

Number: US20160109942A1
Assignee: Bally Gaming, Inc.

A system, method and apparatus are set forth which adjusts one or more of the brightness, vibrancy and color shift of displayed content based upon the at least approximate age of the viewer. At a display () the user's age is at least approximated by accessing an established user data file () containing age determining data and/or capturing a facial image () of the user and processing the same to determine at least the approximate age of the user. Based upon the age determination the brightness, vibrancy and/or color shift may be adjusted to account for the effects of the aging of the human eye. User overrides () may be provided for the user to alter or turn off the adjustments. Adjustment of the brightness, vibrancy and/or color shift may also take into account ambient light conditions. 1. An apparatus with a video display for displaying video content to a user , said apparatus characterized by a facility for providing age data of at least the approximate age of the user to a processor and said processor configured to receive said age data and for adjusting at least one of the brightness , the vibrancy and a color shift for the content displayed at the video display based upon said at least approximate age of the user.2. The apparatus of characterized by said facility is a camera disposed to provide image data corresponding to the capture a facial image of the user and said processor is configured for determining said age data from said facial image data.3. The apparatus of characterized by a controller for a user to control said processor to adjust at least one of the brightness claim 1 , the vibrancy and a color shift for the content displayed at the video display.4. The apparatus of characterized by said facility includes an interface for retrieval of said age data from a data structure storing said age data.5. The apparatus of characterized by a memory device storing one or more default settings for at least one of the brightness claim 1 , the vibrancy and a ...

Publication date: 21-04-2016

METHOD, COMPUTER PROGRAM PRODUCT, AND SYSTEM FOR PROVIDING A SENSOR-BASED ENVIRONMENT

Number: US20160110799A1
Assignee:

Method, computer program product, and system to provide assistance to at least a first person during a transaction within an environment having a plurality of items. The method includes identifying the first person within the environment, and analyzing acquired image information to determine at least one item interaction of the transaction and thereby associate the identified first person with the transaction. The method further includes determining whether the first person is associated with a personal profile that includes information related to the environment, the information including at least one of personal preferences and personal historical data reflecting one or more previous transactions of the first person. When the first person is determined to be associated with a personal profile, the method further includes determining, based on the information in the personal profile, an amount of assistance to provide to the first person during the transaction. 1. A computer-implemented method to provide assistance to at least a first person during a transaction within an environment having a plurality of items , the transaction including at least a first item interaction performed by the first person , the method comprising:identifying the first person within the environment;analyzing acquired image information to determine the first item interaction and thereby associate the identified first person with the transaction;determining whether the first person is associated with a personal profile that includes information related to the environment, wherein the information includes at least one of personal preferences and personal historical data reflecting one or more previous transactions of the first person; and 'determining, based on the information in the personal profile, an amount of assistance to provide to the first person during the transaction.', 'when the first person is determined to be associated with a personal profile2. The computer-implemented method ...

Publication date: 19-04-2018

METHOD, COMPUTER PROGRAM PRODUCT, AND SYSTEM FOR PRODUCING COMBINED IMAGE INFORMATION TO PROVIDE EXTENDED VISION

Number: US20180108074A1
Assignee:

Method, computer program product, and system to provide an extended vision within an environment having a plurality of items, where the extended vision is based on a field of view of a person determined using a first visual sensor, and is further based on at least a second visual sensor disposed within the environment. Image information from the first and second visual sensors is associated to produce combined image information. Selected portions of the combined image information are displayed based on input provided through a user interface. 1. A computer-implemented method to provide an extended vision within an environment having a plurality of items , the extended vision based on a field of view of a person having a first computing device coupled with a first visual sensor within the environment , the environment associated with at least a second computing device coupled with a plurality of second visual sensors disposed within the environment , at least one of the first computing device and the second computing device coupled with a display device and including a user interface (UI) for traversing the extended vision , the method comprising:analyzing first image information acquired using the first visual sensor to determine the field of view of the person, the first image information including one or more first items of the plurality of items;analyzing second image information acquired using the plurality of second visual sensors, the second image information including one or more second items of the plurality of items;determining that an arrangement of the plurality of second visual sensors results in at least a first deadspace area representing a first portion of the environment that is not acquired in the image information of any of the plurality of second visual sensors;transmitting, to the first computing device, a prompt suggesting that the person should acquire image information corresponding to the first deadspace area, wherein the prompt comprises a ...

Publication date: 11-04-2019

FACE RECOGNITION BASED ON SPATIAL AND TEMPORAL PROXIMITY

Number: US20190108389A1
Assignee:

In one embodiment, a method includes accessing an image file associated with a first user of a communication system and detecting a face in an image corresponding to the image file. The method also includes accessing an event database associated with the communication system, the event database containing one or more events, each being associated with the first user and one or more second users of the communication system. The method also includes determining one or more candidates among the second users to be matched to the face, where each candidate is associated with an event in the communication system, and where a time associated with the image is in temporal proximity to a time associated with the event. 1. A method comprising:by one or more computing devices, accessing an image file associated with a first user of a communication system;by the one or more computing devices, detecting one or more faces in an image corresponding to the image file;by the one or more computing devices, matching one or more of the detected faces to one or more second users associated with the communication system;by the one or more computing devices, retrieving profile information, social-graph information, or affiliation information associated with one or more of the second users from a user profile database associated with the communication system; andby the one or more computing devices, for at least one of the matched faces, providing for display a frame in proximity to the matched face, wherein the frame comprises the retrieved information associated with the second user matched to the face, and wherein the frame is at least partially overlaying the image.2. The method of claim 1 , wherein the frame further comprises one or more selectable icons.3. The method of claim 2 , further comprising:by the one or more computing devices, receiving a user input associated with a selection of one of the selectable icons; andby the one or more computing devices, tagging the second user ...
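The candidate-narrowing idea can be sketched as a simple filter: only users who attended an event close in time to the image are considered for face matching. The event records and the time window below are illustrative assumptions.

```python
from datetime import datetime, timedelta

def candidate_users(image_time, events, window_hours=6.0):
    """events: iterable of dicts with a 'time' (datetime) and an 'attendees' set.
    Return the users who attended an event in temporal proximity to the image,
    i.e. the candidates worth matching against the detected face first."""
    window = timedelta(hours=window_hours)
    candidates = set()
    for event in events:
        if abs(event["time"] - image_time) <= window:
            candidates.update(event["attendees"])
    return candidates

events = [{"time": datetime(2024, 5, 4, 20, 0), "attendees": {"alice", "bob"}},
          {"time": datetime(2024, 5, 1, 9, 0),  "attendees": {"carol"}}]
print(sorted(candidate_users(datetime(2024, 5, 4, 22, 30), events)))  # ['alice', 'bob']
```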

Publication date: 02-04-2020

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, PROGRAM, AND RECORDING MEDIUM

Number: US20200104576A1
Author: TANAKA Yuto, USUKI Masaya
Assignee: FUJIFILM Corporation

Provided are an image processing device, an image processing method, a program, and a recording medium which are capable of classifying a plurality of persons appearing in an image set into groups. 1. An image processing device comprising:an image set receiving unit that receives an input of an image set;a person determining unit that determines a plurality of persons appearing in the image set;a co-occurrence relation storage unit that stores co-occurrence relation information indicating that two or more persons of the plurality of persons have a co-occurrence relation in an image in a case where the two or more persons appear in the image based on a determination result of the plurality of persons using the person determining unit for each image included in the image set;a co-occurrence score calculating unit that calculates a co-occurrence score indicating strength of the co-occurrence relation of two persons in the image set based on all the co-occurrence relation information items in the image for each permutation of the two persons of the plurality of persons; anda person classifying unit that classifies at least a part of the plurality of persons into groups based on all the co-occurrence scores of the permutations of the two persons in the image set.2. The image processing device according to claim 1 ,wherein the co-occurrence relation storage unit stores, as the co-occurrence relation information, a flag indicating whether or not each of the plurality of persons appears in the image, for each image.3. The image processing device according to claim 1 ,wherein the co-occurrence score calculating unitcalculates the co-occurrence scores of the permutations of the two persons in the image by 1/(n−1) for each image in a case where n persons including the two persons appears in the image, in which n is an integer of 2 or more,sets the co-occurrence scores of the permutations of the two persons in the image as zero in a case where the two persons do not appear in ...
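Claim 3 states the scoring rule directly: when n persons appear together in an image, each ordered pair of them receives 1/(n-1) for that image, and pairs that never co-occur score zero. A direct sketch follows (the final grouping step on the accumulated scores is not modelled).

```python
from collections import defaultdict
from itertools import permutations

def co_occurrence_scores(images):
    """images: iterable of sets of person ids appearing in each image.
    Returns a dict mapping ordered (a, b) pairs to their accumulated score."""
    scores = defaultdict(float)
    for people in images:
        n = len(people)
        if n < 2:
            continue                     # pairs that never co-occur stay at zero
        for a, b in permutations(people, 2):
            scores[(a, b)] += 1.0 / (n - 1)
    return dict(scores)

imgs = [{"A", "B"}, {"A", "B", "C"}, {"C"}]
s = co_occurrence_scores(imgs)
print(round(s[("A", "B")], 2))   # 1.0 + 0.5 = 1.5
```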

Publication date: 11-04-2019

VEHICLE CONTENT RECOMMENDATION USING COGNITIVE STATES

Number: US20190110103A1
Assignee: AFFECTIVA, INC.

Content manipulation uses cognitive states for vehicle content recommendation. Images are obtained of a vehicle occupant using imaging devices within a vehicle. The one or more images include facial data of the vehicle occupant. A content ingestion history of the vehicle occupant is obtained, where the content ingestion history includes one or more audio or video selections. A first computing device is used to analyze the one or more images to determine a cognitive state of the vehicle occupant. The cognitive state is correlated to the content ingestion history using a second computing device. One or more further audio or video selections are recommended to the vehicle occupant, based on the cognitive state, the content ingestion history, and the correlating. The analyzing can be compared with additional analyzing performed on additional vehicle occupants. The additional vehicle occupants can be in the same vehicle as the first occupant or different vehicles. 1. A computer-implemented method for content manipulation comprising:obtaining one or more images of a vehicle occupant using one or more imaging devices within a vehicle, wherein the one or more images include facial data of the vehicle occupant;obtaining a content ingestion history of the vehicle occupant, wherein the content ingestion history includes one or more audio or video selections;analyzing, using a first computing device, the one or more images to determine a cognitive state of the vehicle occupant;correlating the cognitive state to the content ingestion history using a second computing device; andrecommending to the vehicle occupant one or more further audio or video selections, based on the cognitive state, the content ingestion history, and the correlating.2. The method of wherein the recommending occurs while the vehicle occupant occupies the vehicle.3. The method of wherein the recommending occurs after the vehicle occupant leaves the vehicle.4. The method of further comprising comparing the ...

Publication date: 18-04-2019

USER INTERFACE CUSTOMIZATION BASED ON FACIAL RECOGNITION

Number: US20190114060A1
Author: Resudek Timothy
Assignee:

There are provided systems and methods for user interface customization based on facial recognition. A computing device, such as a mobile smart phone, may include one or more imaging components, such as a camera. The camera may capture still or video media data of a user during use of the device. Using the media data, the user may be recognized or may be identified as an unknown user, such as an unauthorized user. If the user corresponds to a different user from an administrator or owner of the device, the device may utilize an identity, known or unknown, for the user to determine what user interface actions and data the user is allowed to view on the device. The device may restrict particular user interface data from viewing, and may also limit the user from interacting with particular interface elements or initiating interface processes or navigation. 1. A mobile device system comprising:a camera that captures first media data of a first user;a non-transitory memory storing a user interface (UI) customization settings for a UI, wherein the UI customization setting changes the UI based on identities of users; and receiving, from the camera, the first media data of the first user during use of the mobile device system by the first user;', 'determining a first identity of the first user using the first media data;', 'determining a first UI display parameter of the UI based on the first identity and the UI customization parameter; and', 'displaying, using an output component of the mobile device system, content on the UI based on the first UI display parameter., 'one or more hardware processors configured to execute instructions to cause the system to perform operations comprising2. The mobile device system of claim 1 , wherein the first media data captured by the camera comprises one of image data or video data claim 1 , wherein the one of the image data or the video data comprises a representation of the user claim 1 , and wherein the determining the first identity ...

Publication date: 09-04-2020

METHOD AND SYSTEM FOR PROVIDING CONTROL USER INTERFACES FOR HOME APPLIANCES

Number: US20200110532A1
Author: Fan Yi, MANI Suresh, OU Zhicai
Assignee:

A method and system of providing a control user interface at a first home appliance are disclosed, the method including detecting presence of a user within a threshold range of the first home appliance, and performing image processing on one or more real-time images of the user to determine one or more characteristics of a facial image of the user; determining at least a first parameter that is configured to trigger a first change in a current control user interface configuration for the first home appliance; and activating a first control user interface configuration corresponding to the first parameter for the first home appliance while the presence of the first user continues to be detected within the threshold range of the first home appliance. 1. A method of providing a control user interface at a first home appliance , comprising: detecting, via one or more cameras that are collocated with the first home appliance, presence of a first user within a first threshold range of the first home appliance;', 'in response to detecting the presence of the first user within the first threshold range of the first home appliance, performing image processing on one or more images of the first user that are captured by the one or more cameras to determine one or more characteristics of a facial image of the first user in one or more images;', 'in accordance with the one or more characteristics of the facial image of the first user that are determined from the one or more images captured by the one or more cameras, determining at least a first parameter that is configured to trigger a first change in a current control user interface configuration for the first home appliance; and', 'activating a first control user interface configuration corresponding to the first parameter for the first home appliance while the presence of the first user continues to be detected within the first threshold range of the first home appliance., 'at a computing system having one or more ...

Publication date: 27-04-2017

ANALYZING FACIAL RECOGNITION DATA AND SOCIAL NETWORK DATA FOR USER AUTHENTICATION

Number: US20170118207A1
Assignee:

Tools, strategies, and techniques are provided for evaluating the identities of different entities to protect business enterprises, consumers, and other entities from fraud by combining biometric activity data with facial recognition data for end users. Risks associated with various entities can be analyzed and assessed based on a combination of user liveliness check data, facial image data, social network data, and/or professional network data, among other data sources. In various embodiments, the risk assessment may include calculating an authorization score or authenticity score based on different portions or combinations of the collected and processed data. 1. A computer-implemented method for calculating an authenticity score for a user , the method comprising:a) calculating, by a facial evaluation system operatively associated with an electronic processor, a liveliness check score in response to analysis of image data associated with at least one anatomical change associated with an interaction between the user and a facial recognition application; i. generating a facial vector map based on image data associated with the user,', 'ii. collecting data derived from at least one access device associated with the user or at least one identity input associated with the user, and', 'iii. comparing at least a portion of the facial vector map data, the access device data, or the identity input data against at least a portion of image data stored in at least one database; and, 'b) calculating, by the facial evaluation system, a facial confirmation score byc) calculating, by the facial evaluation system, an authorization score for the user in response to the calculated liveliness check score and the calculated facial confirmation score.2. The method of claim 1 , further comprising collecting the image data associated with the user with a camera-enabled and network connectable access device.3. The method of claim 1 , further comprising calculating at least one of the ...
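The scoring flow (liveliness check score, facial confirmation score, combined authorization score) can be expressed as a small composition. The linear weighting and the threshold below are invented for illustration; the abstract does not disclose how the scores are combined.

```python
def authorization_score(liveliness, facial_confirmation, w_live=0.4, w_face=0.6):
    """Combine the liveliness-check score and the facial-confirmation score
    into a single authorization score in [0, 1] (illustrative weights)."""
    return w_live * liveliness + w_face * facial_confirmation

def authenticate(liveliness, facial_confirmation, threshold=0.75):
    return authorization_score(liveliness, facial_confirmation) >= threshold

print(authenticate(0.9, 0.8))   # True  (score = 0.84)
print(authenticate(0.9, 0.5))   # False (score = 0.66)
```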

Publication date: 25-04-2019

CONTAINER, NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR STORING PROGRAM, AND CUSTOMER MANAGEMENT METHOD

Number: US20190122231A1
Assignee: FUJITSU LIMITED

A container includes: a sensor configured to output a signal regarding a situation around the container; a memory; and a processor coupled to the memory, the processor being configured to execute a recognition process that includes recognizing the situation around the container in accordance with the signal from the sensor, execute a determination process that includes determining, in accordance with a result of the recognition, a group attribute to which one or more persons around the container belong, execute a registration process that includes registering the group attribute to the memory associatively with the container. 1. A container comprising:a sensor configured to output a signal regarding a situation around the container;a memory; and execute a recognition process that includes recognizing the situation around the container in accordance with the signal from the sensor,', 'execute a determination process that includes determining, in accordance with a result of the recognition, a group attribute to which one or more persons around the container belong,', 'execute a registration process that includes registering the group attribute to the memory associatively with the container., 'a processor coupled to the memory, the processor being configured to'}2. The container according to claim 1 ,wherein the sensor is a camera configured to capture a periphery of the container,wherein the recognition process is configured to recognize the situation around the container by performing an image recognition on an image captured by the camera.3. The container according to claim 1 ,wherein the sensor is wireless communication circuitry configured to receive a radio signal that represents an image captured by a camera,wherein the recognition process is configured to recognize the situation around the container by performing an image recognition on the image represented by the radio signal.4. The container according to claim 1 ,wherein the sensor is wireless communication ...

Publication date: 27-05-2021

INFORMATION SENDING METHOD, APPARATUS AND SYSTEM, AND COMPUTER-READABLE STORAGE MEDIUM

Number: US20210157869A1

The present disclosure relates to an information sending method, apparatus, system and a computer readable storage medium, and pertains to the technical field of computers. The method in the present disclosure includes: analyzing video data of an offline user to determine an attribute of the offline user; searching for historical access information of at least one online user matching the attribute of the offline user; and determining at least one object recommended to the offline user according to the historical access information of each of the at least one online user, and sending information of the at least one object to the offline user.

1. An information sending method, comprising: analyzing video data of an offline user to determine an attribute of the offline user; searching for historical access information of at least one online user matching the attribute of the offline user; and determining at least one object recommended to the offline user according to the historical access information of each of the at least one online user, and sending information of the at least one object to the offline user.
2. The information sending method according to claim 1, wherein determining at least one object recommended to the offline user according to the historical access information of each of the at least one online user comprises: constructing an object recommendation set according to at least one object historically accessed by each of the at least one online user; and determining the at least one object recommended to the offline user according to a recommendation metric value of each object in the object recommendation set.
3. The information sending method according to claim 2, wherein constructing an object recommendation set according to at least one object historically accessed by each of the at least one online user comprises: selecting at least one object, to which the number of times of access exceeds a corresponding threshold, from objects historically ...
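
Claims 2 and 3 build an object recommendation set from objects that matched online users accessed more than a threshold number of times, then rank it by a recommendation metric value. A small sketch of that construction, under the assumption that the metric is simply the total access count (the excerpt does not define it):

    from collections import Counter

    # access_logs: per matched online user, the object ids they accessed.
    access_logs = {
        "user_a": ["tv", "tv", "phone", "laptop"],
        "user_b": ["tv", "phone", "phone", "phone"],
    }

    def recommend(access_logs, threshold=2, top_k=2):
        # Build the recommendation set: objects whose access count for some
        # online user reaches the threshold.
        totals = Counter()
        candidates = set()
        for objects in access_logs.values():
            per_user = Counter(objects)
            totals.update(per_user)
            candidates.update(o for o, c in per_user.items() if c >= threshold)
        # Rank candidates by an assumed metric: total access count.
        return sorted(candidates, key=lambda o: totals[o], reverse=True)[:top_k]

    print(recommend(access_logs))  # ['phone', 'tv']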

Publication date: 11-05-2017

Mobile Phone

Number: US20170134566A1
Author: CHIANG Kuo-Ching

A mobile phone comprises a control unit and a memory coupled to the control unit. A sensor is formed in the mobile phone for capturing a user's face image. A mimicked age estimation module is coupled to the control unit to generate a mimicked age of the user by the captured face image. An age identification module is coupled to the control unit for age identification.

1. A mobile phone, comprising: a control unit; a memory coupled to said control unit; a sensor formed in said mobile phone for capturing a user's face image; a mimicked age estimation module coupled to said control unit to generate a mimicked age of said user by said captured face image; and an age identification module coupled to said control unit for age identification.
2. The mobile phone of claim 1, further comprising a light source formed on said mobile phone to project light to said user's face.
3. The mobile phone of claim 1, further comprising a camera formed on said mobile phone.
4. The mobile phone of claim 1, wherein a video message is sent to a remote web system by WiFi or mobile communication protocol from said mobile phone.
5. The mobile phone of claim 1, wherein a specified website can be accessed if said mimicked age of said user exceeds a set age.
6. The mobile phone of claim 1, wherein said sensor is an image capturing device.
7. The mobile phone of claim 1, further comprising a voiceprint template.
8. The mobile phone of claim 7, further comprising a microphone.
9. The mobile phone of claim 1, wherein a website or vending machine is accessed based on said mimicked age of said user.
10. The mobile phone of claim 1, wherein said mobile phone allows access based on said mimicked age of said user.
11. A mobile phone, comprising: a control unit; a memory coupled to said control unit; a sensor formed in said mobile phone for capturing a user's face image; a mimicked age estimation module coupled to said control unit to generate a mimicked age of said user by said captured face image; and a ...

Publication date: 03-06-2021

Smart badge, and method, system and computer program product for badge detection and compliance

Number: US20210166002A1
Assignee: Motorola Solutions Inc

A smart badge, and a method, system and computer program product for badge detection and compliance are disclosed. The method, carried out within a security system, includes capturing, using a camera, an image of a person. The captured image includes a face of the person within a first pixel region of the image. The method also includes performing facial recognition on the first pixel region to determine an identity of the person. The method also includes performing video analytics, on a second pixel region of the image, different than the first pixel region, to make a first determination that the identified person is wearing a badge, or to make a second determination that no badge is being properly worn by the identified person. The method also includes generating an alert, specific to the identified person, within the security system based at least in part on the first or second determination.
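
The structural point of the abstract is that identity comes from one pixel region (the face) while compliance comes from a different pixel region of the same image. A schematic sketch of that split follows; face_recognizer and badge_detector are placeholders standing in for whatever models the security system actually uses:

    # Schematic flow only; the recognizer and detector callables are stand-ins.

    def crop(image, box):
        x0, y0, x1, y1 = box
        return [row[x0:x1] for row in image[y0:y1]]

    def check_badge_compliance(image, face_box, badge_box,
                               face_recognizer, badge_detector):
        face_region = crop(image, face_box)     # first pixel region
        badge_region = crop(image, badge_box)   # second, different pixel region
        identity = face_recognizer(face_region)
        badge_worn = badge_detector(badge_region)
        alert = None if badge_worn else f"{identity}: no badge properly worn"
        return {"identity": identity, "badge_worn": badge_worn, "alert": alert}

    demo_image = [[0] * 8 for _ in range(8)]
    result = check_badge_compliance(demo_image, (0, 0, 4, 4), (4, 4, 8, 8),
                                    face_recognizer=lambda r: "employee_17",
                                    badge_detector=lambda r: False)
    print(result["alert"])  # employee_17: no badge properly worn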

Publication date: 03-06-2021

INFORMATION OUTPUT DEVICE, METHOD, AND PROGRAM

Number: US20210166265A1

An information output apparatus according to an embodiment includes: first estimation means for estimating an attribute indicating a feature unique to a user, based on video data; second estimation means for estimating a current action state of the user, based on face orientation data and position data of the user; determination means for, in an action-merit table that defines combinations each composed of an action for inducing a user to use a service according to an attribute and a state, and a value indicating a magnitude of a merit of the action, determining an action for inducing the user to use a service with a high value indicating the magnitude of the merit of the action, out of combinations corresponding to the estimated attribute and state; setting means for setting a reward value for the action, based on action states estimated before and after the action; and update means for updating the value of the action merit, based on the reward value.

1. An information output apparatus comprising: a processor; and a storage medium having computer program instructions stored thereon, when executed by the processor, perform to: detect face orientation data and position data regarding a user, based on video data regarding the user; estimate an attribute indicating a feature unique to the user, based on the video data; estimate a current action state of the user, based on the face orientation data and the position data detected; a storage unit having stored therein an action-merit table that defines combinations each composed of an action for inducing a user to use a service according to an attribute and an action state of the user, and a value indicating a magnitude of a merit of the action; determination means for determining an action for inducing the user to use a service with a high value indicating the magnitude of the merit of the action, out of combinations corresponding to ...
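
The update step, which derives a reward from the action states observed before and after the action and folds it into the stored merit value, resembles a standard incremental value update. A sketch assuming an exponential moving average with an invented learning rate; the excerpt does not specify the actual update rule:

    # Sketch of updating an action-merit table from a reward signal.
    # The learning rate and reward definition are assumptions.

    merit_table = {
        # (attribute, state) -> {action: merit value}
        ("adult", "browsing"): {"show_coupon": 0.2, "greet": 0.5},
    }

    def choose_action(attribute, state):
        actions = merit_table[(attribute, state)]
        return max(actions, key=actions.get)  # action with the highest merit

    def update_merit(attribute, state, action, reward, lr=0.1):
        actions = merit_table[(attribute, state)]
        actions[action] += lr * (reward - actions[action])

    action = choose_action("adult", "browsing")           # 'greet'
    update_merit("adult", "browsing", action, reward=1.0)
    print(round(merit_table[("adult", "browsing")][action], 2))  # 0.55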

Publication date: 03-06-2021

VEHICLE CONTROL APPARATUS AND METHOD USING SPEECH RECOGNITION

Number: US20210166683A1
Author: JOH Jae Min

A vehicle control apparatus and method use speech recognition and include: a passenger recognizing device configured to recognize passengers including a first passenger and at least one second passenger in a vehicle; a voice recognizing device configured to receive and to recognize a voice utterance by the first passenger or the at least one second passenger and to output a speech recognition result based on the received voice utterance; and a processor configured to additionally query the at least one second passenger or the first passenger based on the speech recognition result of the voice utterance of the first passenger or the at least one second passenger, respectively, to provide each of the first passenger and the at least one second passenger with a customized service.

1. A vehicle control apparatus, the apparatus comprising: a passenger recognizing device configured to recognize passengers including a first passenger and at least one second passenger in a vehicle; a voice recognizing device configured to receive and to recognize a voice utterance by the first passenger or the at least one second passenger and configured to output a speech recognition result based on the received voice utterance; and a processor configured to additionally query the at least one second passenger or the first passenger based on the speech recognition result of the voice utterance of the first passenger or the at least one second passenger, respectively, to provide each of the first passenger and the at least one second passenger with a customized service.
2. The apparatus of claim 1, wherein the passenger recognizing device is configured to recognize a presence and a location of the passengers through a weight sensor installed for each seat.
3. The apparatus of claim 2, wherein the passenger recognizing device is configured to recognize a face of each of the first passenger and the at least one second passenger through a camera to estimate an age of each of the first passenger ...

Publication date: 17-05-2018

Image processing apparatus, image processing method, and storage medium

Number: US20180137344A1
Author: Minoru Kusakabe
Assignee: Canon Inc

An image processing apparatus includes an object detection unit configured to detect objects from a plurality of images captured at different times, based on a degree of matching with a predetermined criterion, a determination unit configured to determine whether the objects detected by the object detection unit include an object that is the same object as an object detected within an image captured at another time, and an attribute detection unit configured to perform processing for extracting one or more object images, based on the degree of matching, from a plurality of object images corresponding to the object determined to be the same object by the determination unit, and detecting an attribute of the same object with respect to the extracted one or more object images.
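
The abstract describes tracking the same object across frames and then extracting, by degree of matching, the object images on which attribute detection is run. A compact sketch of that selection step, assuming each detection carries a track identifier and a matching score (both names are illustrative):

    from collections import defaultdict

    # detections: (track_id, matching_score, image_reference)
    detections = [
        ("person_1", 0.62, "frame_10_crop"),
        ("person_1", 0.91, "frame_12_crop"),
        ("person_2", 0.55, "frame_12_crop"),
        ("person_1", 0.74, "frame_15_crop"),
    ]

    def best_crops_per_object(detections, top_n=2):
        """Keep the top-N highest-scoring crops for each tracked object."""
        by_track = defaultdict(list)
        for track_id, score, crop in detections:
            by_track[track_id].append((score, crop))
        return {t: [c for _, c in sorted(v, reverse=True)[:top_n]]
                for t, v in by_track.items()}

    print(best_crops_per_object(detections)["person_1"])
    # ['frame_12_crop', 'frame_15_crop']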

Publication date: 09-05-2019

METHOD AND APPARATUS FOR FACIAL AGE IDENTIFICATION, AND ELECTRONIC DEVICE

Number: US20190138787A1
Author: Li Cheng, Zhang Yunxuan

A method and an apparatus for facial age identification, an electronic device, and a computer readable medium include: obtaining estimated facial age of a person in an image to be identified; selecting N image samples from an image sample set of known age according to the estimated facial age and age of two or more preset age gaps with the estimated facial age, the N being not less than 2; obtaining a comparison result of ages between the image to be identified and the selected N image samples; and obtaining probability information for determining a person's facial age attribute information according to statistical information formed by the comparison result.

1. A method for facial age identification, comprising: obtaining estimated facial age of a person in the image to be identified; selecting N image samples from an image sample set of known age according to the estimated facial age and age of two or more preset age gaps with the estimated facial age, N being not less than 2; obtaining a comparison result of facial ages between the image to be identified and the selected N image samples; and obtaining probability information for determining the person's facial age attribute information according to statistical information formed by the comparison result.
2. The method according to claim 1, wherein the obtaining probability information for determining a person's facial age attribute information according to statistical information formed by the comparison result comprises: obtaining a first facial age posterior probability distribution according to a first preset facial age prior probability distribution and a first likelihood function formed based on the comparison result; wherein the facial age posterior probability distribution is configured to determine a person's facial age attribute information.
3. The method according to claim 1, wherein the obtaining estimated facial age of a person in an image to be identified comprises: inputting the image to be identified to ...
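
Claim 2 forms a posterior age distribution from a prior and a likelihood built on the pairwise comparison results. A toy sketch of that Bayes step, with an assumed likelihood model (a fixed comparator error rate) that the excerpt does not specify:

    # Toy posterior over candidate ages from pairwise comparison outcomes.
    # The comparator error rate and uniform prior are assumptions.

    def age_posterior(candidate_ages, comparisons, error_rate=0.2):
        """comparisons: list of (sample_age, 'older' | 'younger') outcomes,
        meaning the query face was judged older/younger than the sample."""
        prior = {a: 1.0 / len(candidate_ages) for a in candidate_ages}
        posterior = {}
        for age in candidate_ages:
            likelihood = 1.0
            for sample_age, outcome in comparisons:
                consistent = (age > sample_age) == (outcome == "older")
                likelihood *= (1.0 - error_rate) if consistent else error_rate
            posterior[age] = prior[age] * likelihood
        z = sum(posterior.values())
        return {a: p / z for a, p in posterior.items()}

    post = age_posterior([20, 30, 40], [(25, "older"), (35, "younger")])
    print(max(post, key=post.get))  # 30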

Publication date: 30-04-2020

Method and Device for Face Recognition, Storage Medium, and Electronic Device

Number: US20200134294A1
Author: LIANG Kun

A method and device for face recognition, a storage medium, and an electronic device are provided. The method includes the following. Face data to-be-tested is obtained. A first derived face data set related to the face data to-be-tested is generated according to the face data to-be-tested. For each of multiple derived face data in the first derived face data set, perform age determination, and generate an age distribution interval corresponding to the first derived face data set. Whether the age distribution interval matches a first reference age interval is determined. Upon determining that the age distribution interval matches the first reference age interval, age data corresponding to the face data to-be-tested is obtained according to the age distribution interval.

1. A method for face recognition, comprising: obtaining face data to-be-tested; generating, according to the face data to-be-tested, a first derived face data set related to the face data to-be-tested, the first derived face data set comprising multiple different derived face data; for each of the multiple different derived face data in the first derived face data set, performing age determination, and generating an age distribution interval corresponding to the first derived face data set; determining whether the age distribution interval matches a first reference age interval; and obtaining, according to the age distribution interval, age data corresponding to the face data to-be-tested, upon determining that the age distribution interval matches the first reference age interval.
2. The method of claim 1, wherein the first derived face data set is generated according to the face data to-be-tested based on a predetermined face data generating model; and the method further comprises: obtaining reference face data; generating, according to the reference face data, a second derived face data set related to the reference face data, wherein the second derived face data set comprises multiple different ...
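
The method derives multiple variants of the face data, estimates an age for each, and checks whether the resulting age interval matches a reference interval. A minimal sketch, assuming the match test is simple interval overlap and using placeholder stand-ins for the face data generating model and the age estimator:

    # Sketch only: derive_variants and estimate_age are placeholders for the
    # face-data generating model and age model referenced in the claims.

    def age_interval(face_data, derive_variants, estimate_age):
        ages = [estimate_age(v) for v in derive_variants(face_data)]
        return min(ages), max(ages)

    def intervals_match(interval, reference, min_overlap=1.0):
        lo = max(interval[0], reference[0])
        hi = min(interval[1], reference[1])
        return (hi - lo) >= min_overlap

    # Toy usage with dummy stand-ins.
    interval = age_interval(
        face_data="probe_face",
        derive_variants=lambda f: [f + f"_v{i}" for i in range(3)],
        estimate_age=lambda v: 30 + int(v[-1]),
    )
    print(interval, intervals_match(interval, reference=(28, 35)))  # (30, 32) True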

Publication date: 10-06-2021

Method and device for age estimation

Number: US20210174066A1
Author: Xuanping LI

Provided in embodiments of the present application are a method and device for age estimation. The method comprises: performing gender training with respect to a gender model on the basis of facial image samples so as to allow the gender model to converge, where the gender model comprises at least two convolution layers; performing age training with respect to an age model on the basis of the facial samples so as to allow the age model to converge, where the age model comprises the at least two convolution layers, the converged age model comprises the weights of the at least two convolution layers, and the weights of the at least two convolution layers that the converged gender model comprises; and performing age estimation with respect to an inputted facial image on the basis of the converged age model. The technical solution provided in the embodiments of the present application eliminates the problem of inaccurate age estimation as a result of gender differences of facial images, thus increasing the accuracy of age estimation.
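
The central idea is that the age model reuses the convolution-layer weights of the converged gender model. A brief PyTorch sketch of that weight sharing, using two toy networks with identically shaped convolutional trunks; the layer sizes are illustrative assumptions, not the architectures of the patent:

    import torch.nn as nn

    # Two toy models that share the same convolutional trunk shape; only the
    # heads differ. Layer sizes are illustrative assumptions.
    def make_trunk():
        return nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    gender_model = nn.Sequential(make_trunk(), nn.Linear(32, 2))  # trained first
    age_model = nn.Sequential(make_trunk(), nn.Linear(32, 1))     # trained second

    # After gender training converges, copy the trunk weights into the age
    # model so age training starts from the shared convolutional filters.
    age_model[0].load_state_dict(gender_model[0].state_dict())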

Publication date: 09-05-2019

ANALYZING FACIAL RECOGNITION DATA AND SOCIAL NETWORK DATA FOR USER AUTHENTICATION

Number: US20190141034A1
Assignee: SOCURE INC.

Tools, strategies, and techniques are provided for evaluating the identities of different entities to protect business enterprises, consumers, and other entities from fraud by combining biometric activity data with facial recognition data for end users. Risks associated with various entities can be analyzed and assessed based on a combination of user liveliness check data, facial image data, social network data, and/or professional network data, among other data sources. In various embodiments, the risk assessment may include calculating an authorization score or authenticity score based on different portions or combinations of the collected and processed data.

1. (canceled)
2. A computer-implemented method, comprising: detecting, via a facial recognition software application running on an electronic processor, an anatomical change of a user; calculating, via the facial recognition software application running on the electronic processor, a liveliness score based on a synchronization between the anatomical change and a feature displayed via the facial recognition software; receiving, via the facial recognition software application, user data including at least one of (1) identity data associated with the user, (2) image data associated with the user, and (3) device data associated with the user; comparing the received user data with stored user data; calculating, via the facial recognition software application, an authorization score for the user based on at least one of the calculated liveliness check score and the comparison of the received user data with the stored user data; and one of combining or replacing the authorization score with a risk score if the authorization score is determined insufficient for authentication of the user.
3. The method of claim 2, wherein the anatomical change includes movement of at least one lip of the user, and the feature displayed via the facial recognition software is text displayed on a screen of a compute device.
4. The ...

Publication date: 31-05-2018

SYSTEMS, METHODS, AND DEVICES FOR INFORMATION SHARING AND MATCHING

Number: US20180150683A1
Author: Gordon Simon, Wood Andrew

The present invention relates to systems, methods, and devices for correlating and sharing information. In particular, the invention relates to systems, methods, and devices for identifying subjects of interest suspected of involvement in one or more crimes, and sharing and correlating information relating to subjects and events. In further aspects of the invention, methods and system are provided for sharing alerts between users of electronic devices.

1-79. (canceled)
80. A method comprising: receiving an image of a first subject of interest at a remote server and storing the image in a memory of the remote server; transmitting the image from the remote server to one or more local electronic devices together with a first identifier; at each local electronic device: receiving the image and first identifier from the remote server, and processing the image using facial recognition software to create first biometric data; at a first one of the local electronic devices: capturing an image of a second subject of interest at a surveillance system connected to the first local electronic device, processing the image of the second subject of interest with the facial recognition software of the first local electronic device to produce second biometric data, determining whether the first subject of interest is the same as the second subject of interest by comparing the first biometric data to the second biometric data, and, upon determining that the first subject of interest is the same as the second subject of interest, transmitting a first alert associated with the first identifier to the remote server; receiving the first alert at the remote server; and upon receipt of the first alert, transmitting a second alert from the remote server to one or more local electronic devices, optionally including the first local electronic device.
81. The method of claim 80, further comprising, prior to the step of transmitting the image from the remote server to the ...
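
At each local device the method reduces to comparing two biometric vectors and raising an alert on a match. A sketch of that comparison with a cosine-similarity threshold; the actual facial recognition software and its matching rule are not specified in the excerpt, so both are stand-ins:

    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    def compare_and_alert(first_biometric, second_biometric, identifier,
                          send_alert, threshold=0.9):
        """If the two subjects appear to be the same, send the first alert."""
        if cosine_similarity(first_biometric, second_biometric) >= threshold:
            send_alert({"identifier": identifier, "type": "first_alert"})
            return True
        return False

    sent = []
    compare_and_alert([0.1, 0.9, 0.3], [0.12, 0.88, 0.31], "subject-001", sent.append)
    print(sent)  # [{'identifier': 'subject-001', 'type': 'first_alert'}]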

Publication date: 31-05-2018

AGE AND GENDER ESTIMATION USING SMALL-SCALE CONVOLUTIONAL NEURAL NETWORK (CNN) MODULES FOR EMBEDDED SYSTEMS

Number: US20180150684A1
Assignee: Shenzhen AltumView Technology Co., Ltd.

Embodiments described herein provide various examples of an age and gender estimation system capable of performing age and gender classifications on face images having sizes greater than the maximum number of input pixels supported by a given small-scale hardware convolutional neural network (CNN) module. In some embodiments, the proposed age and gender estimation system can first divide a high-resolution input face image into a set of image patches with judiciously designed overlaps among neighbouring patches. Each of the image patches can then be processed with a small-scale CNN module, such as the built-in CNN module in Hi SoC. The outputs corresponding to the set of image patches can be subsequently merged to obtain the output corresponding to the input face image, and the merged output can be further processed by subsequent layers in the age and gender estimation system to generate age and gender classifications for the input face image.

1. A method for performing age and gender estimation on face images using a small-scale convolutional neural network (CNN) module associated with a maximum input size constraint, the method comprising: receiving, by a computer, an input face image which is primarily occupied by a human face; determining, using the computer, if the size of the input face image is greater than the maximum input image size supported by the small-scale CNN module according to the maximum input size constraint; and if so: partitioning the input face image into a set of subimages of the second size; processing the set of subimages using the small-scale CNN module to generate an array of feature maps; merging the array of feature maps into a set of merged feature maps corresponding to the input face image; and processing the set of merged feature maps with two or more fully-connected layers to generate one or both of age and gender classifications for the person in the input face image; if so, determining if the size of the input face ...
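
The core trick is slicing an oversized face image into fixed-size patches whose neighbours overlap, so that every patch fits the hardware CNN input limit and the per-patch outputs can later be merged. A sketch of the partitioning step only; the patch size and overlap below are arbitrary illustrative values:

    # Partition an H x W image into fixed-size patches with overlap between
    # neighbours. Patch size and overlap here are illustrative, not the
    # values used by any particular hardware CNN module.

    def partition(height, width, patch=64, overlap=8):
        """Return (top, left) offsets of patches covering the whole image."""
        stride = patch - overlap
        tops = list(range(0, max(height - patch, 0) + 1, stride))
        lefts = list(range(0, max(width - patch, 0) + 1, stride))
        # Make sure the last row/column of patches reaches the image border.
        if tops[-1] + patch < height:
            tops.append(height - patch)
        if lefts[-1] + patch < width:
            lefts.append(width - patch)
        return [(t, l) for t in tops for l in lefts]

    offsets = partition(height=120, width=120)
    print(len(offsets), offsets[:3])  # 4 [(0, 0), (0, 56), (56, 0)]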

Publication date: 31-05-2018

DETECTING USER VIEWING DIFFICULTY FROM FACIAL PARAMETERS

Number: US20180150692A1
Author: CHO Young Eun

A method to determine whether a user is experiencing difficulty visually resolving content is disclosed. The method includes capturing one or more images of the user while the user is viewing the content. The method also includes obtaining facial parameters related to a visual acuity of the user from the captured one or more images. The method further includes determining whether the user is experiencing difficulty visually resolving the content based on the obtained one or more facial parameters. The method is implemented in a device such as a smartphone, tablet computer, or television. The facial parameters include information about the extent to which the user has their eyes open or closed, whether the user is wearing glasses, and the distance at which the user is viewing the content.

1. A method for operating an electronic device, the method comprising: obtaining an image of a face of a user viewing a first content; obtaining a wrinkle parameter related to the user from the obtained image; and displaying a second content, wherein the second content is a content in which the first content is enlarged according to the wrinkle parameter.
2. The method of claim 1, wherein the displaying the second content comprises: using artificial intelligence (AI) algorithm based on the wrinkle parameter; and displaying the second content according to an output of the AI algorithm.
3. The method of claim 1, further comprising: determining that the user views the first content displayed on the electronic device by an eye tracking; detecting an eye scroll pattern by performing the eye tracking using a plurality of images of the user captured while the first content is displayed; and comparing the detected eye scroll pattern to a predefined eye scroll pattern for the displayed first content.
4. The method of claim 1, wherein the displaying the second content comprises changing a brightness or contrast of the second content.
5. The method of claim 1, wherein the image comprises a video for ...

Publication date: 31-05-2018

PASSENGER INFORMATION DETECTION DEVICE AND PROGRAM

Number: US20180150707A1
Author: Fujii Hiroyuki, OSUGA Shin
Assignee: AISIN SEIKI KABUSHIKI KAISHA

A passenger information detection device includes: an acquisition unit that acquires an image imaged by an imaging device that is provided in an interior space of a vehicle to image a passenger seated on a seat and a detection value of a load sensor provided on the seat; a first calculation unit that calculates first information that is information on a face of the passenger from the image; and a second calculation unit that calculates second information that is information on a body size of the passenger based on the first information and the detection value.

1. A passenger information detection device comprising: an acquisition unit that acquires an image imaged by an imaging device that is provided in an interior space of a vehicle to image a passenger seated on a seat and a detection value of a load sensor provided on the seat; a first calculation unit that calculates first information that is information on a face of the passenger from the image; and a second calculation unit that calculates second information that is information on a body size of the passenger based on the first information and the detection value.
2. The passenger information detection device according to claim 1, wherein the first information includes a position of the face, and the second calculation unit calculates a center of gravity of a distributed load acting on the seat based on the position of the face, and corrects the detection value based on a relationship between the center of gravity and a position of the load sensor.
3. The passenger information detection device according to claim 2, wherein the first information further includes a tilt of the face, and the second calculation unit calculates a sitting height of the passenger based on the position of the face and the tilt of the face, and calculates the center of gravity based on the sitting height and the tilt of the face.
4. The passenger information detection device according to claim 3, wherein the load sensor includes a first load ...
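
Claims 2 and 3 combine the face position and tilt seen by the camera with the seat load to estimate body size. A toy sketch of that geometry; the camera-to-seat calibration constants and the linear load correction are invented for illustration:

    import math

    # Toy geometry: all calibration constants below are assumptions made for
    # illustration, not values from the patent.

    def sitting_height_cm(face_y_px, face_tilt_deg,
                          seat_y_px=480.0, cm_per_px=0.20):
        """Estimate sitting height from the face position, compensating for a
        forward/backward tilt of the head."""
        raw = (seat_y_px - face_y_px) * cm_per_px
        return raw / math.cos(math.radians(face_tilt_deg))

    def corrected_load_kg(load_kg, cog_offset_cm, sensor_offset_cm=10.0):
        """Correct the seat-sensor reading for the distance between the centre
        of gravity and the sensor position (simple linear model)."""
        return load_kg * (1.0 + (cog_offset_cm - sensor_offset_cm) / 100.0)

    print(round(sitting_height_cm(face_y_px=40.0, face_tilt_deg=10.0), 1))  # 89.4
    print(round(corrected_load_kg(62.0, cog_offset_cm=14.0), 1))            # 64.5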

Publication date: 16-05-2019

METHODS AND SYSTEMS FOR DISPLAYING A KARAOKE INTERFACE

Number: US20190147841A1

Exemplary embodiments relate to applications for facial detection technology and facial overlays to provide a karaoke experience. For example, an identifier associated with a celebrity or singer may be mapped to an image or facial overlay, and to a set of predefined music tracks configured for karaoke. In some embodiments, the music tracks may include metadata with lyrics or other karaoke information. The music tracks may also be mapped to media elements, which may be interactive. The karaoke experience may be gamified, such as by performing a sound analysis to determine how close a user's performance is to the lyrics or pitch of the original singer. The song may be performed in a live video, and a leaderboard may be used to track performance across multiple users. The leaderboard score for each user may be partially based on engagement of a user base with the live broadcast.

1. A method, comprising: retrieving a karaoke element, the karaoke element comprising a music track configured to be performed in a karaoke performance and a facial overlay graphic; accessing a video recording of the karaoke performance, the video recording comprising a face of a user performing the music track; and generating a karaoke interface, the karaoke interface comprising the facial overlay graphic applied over the face of the user; wherein the facial overlay graphic replaces the face of the user with another facial image.
2. The method of claim 1, the karaoke element further comprising lyrics to the music track, the karaoke interface further displaying the lyrics.
3. The method of claim 1, the karaoke element further comprising metadata associated with the music track, the karaoke interface further displaying the metadata.
4. The method of claim 1, further comprising: receiving the user's performance of the music track on an audio input device; performing a sound analysis to determine a quality of the performance as compared to at least one of lyrics of the music track or ...
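
Claim 4 scores the user's performance against the lyrics or pitch of the original track. A small sketch of one plausible pitch-comparison score, assuming both performances have already been reduced to per-beat pitch values in semitones; the tolerance is arbitrary:

    # Compare per-beat pitch tracks of the user and the original singer and
    # report the fraction of beats within a tolerance. The representation and
    # tolerance are assumptions for illustration.

    def pitch_score(user_pitch, reference_pitch, tolerance=1.0):
        beats = list(zip(user_pitch, reference_pitch))
        if not beats:
            return 0.0
        hits = sum(1 for u, r in beats if abs(u - r) <= tolerance)
        return hits / len(beats)

    reference = [60, 62, 64, 65, 67, 67, 65, 64]
    user      = [60, 61, 64, 66, 69, 67, 65, 63]
    print(pitch_score(user, reference))  # 0.875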
