Total found: 413. Displayed: 100.
Publication date: 01-01-2015

Gesture recognition apparatus using vehicle steering wheel, and method for recognizing hand

Number: US20150003682A1
Assignee: Honda Access Corp, NIPPON SYSTEMWARE CO LTD

A gesture recognition apparatus capable of recognizing a hand by means of a binary image regardless of direction of light contacting the hand. A binarizing processing section binarizes an input image from a camera to a hand recognizing section to prepare a first binary image by a predetermined method. A rebinarizing processing section only rebinarizes a predetermined area of the input image to prepare a second binary image. A contraction processing section performs a contraction processing on the second binary image. The rebinarizing processing can increase the possibility of recognizing the hand by classifying a portion of the hand that was classified into black in the first binary image into white. The hand is determined to be recognizable if the hand can be recognized in the first binary image and/or in the second binary image.
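The two-pass scheme above can be sketched in a few lines of NumPy. This is a minimal illustration, not Honda's actual implementation: the thresholds, the rectangular ROI, and the skipped contraction (erosion) step are all assumptions. The key idea it shows is the final "and/or" combination, where a pixel counts as hand if either binarization pass accepted it.

```python
import numpy as np

def binarize(img, thresh):
    """Return a binary mask: True where pixel intensity exceeds thresh."""
    return img > thresh

def recognize_hand_mask(img, global_thresh, roi, roi_thresh):
    """First pass: global binarization of the whole frame (first binary image).
    Second pass: re-binarize only a predetermined area (roi) with a lower
    threshold, so hand pixels lost to shadow in the first pass can flip from
    black to white (second binary image). Accept a pixel if either pass
    classified it as hand."""
    first = binarize(img, global_thresh)            # first binary image
    second = np.zeros_like(first)
    r0, r1, c0, c1 = roi                            # assumed rectangular ROI
    second[r0:r1, c0:c1] = binarize(img[r0:r1, c0:c1], roi_thresh)
    # a contraction (erosion) of `second` would go here; omitted for brevity
    return first | second
```

A pixel of value 90 under a 128 global threshold is rejected in the first pass but recovered by an 80-threshold second pass inside the ROI.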

Publication date: 05-01-2017

VIRTUAL REALITY SYSTEM WITH CONTROL COMMAND GESTURES

Number: US20170003750A1
Author: Li Adam

A virtual reality system that uses gestures to obtain commands from a user. Embodiments may use sensors mounted on a virtual reality headset to detect head movements, and may recognize selected head motions as gestures associated with commands. Commands associated with gestures may modify the user's virtual reality experience, for example by selecting or modifying a virtual world or by altering the user's viewpoint within the virtual world. Embodiments may define specific gestures to place the system into command mode or user input mode, for example to temporarily disable normal head tracking within the virtual environment. Embodiments may also recognize gestures of other body parts, such as wrist movements measured by a smart watch. 1. A virtual reality system with control command gestures, comprising: at least one display viewable by a user; at least one sensor that generates sensor data that measures one or more aspects of a pose of one or more body parts of said user; a pose analyzer coupled to said at least one sensor, that calculates pose data of said pose of one or more body parts of said user, based on said sensor data generated by said at least one sensor; a control state; one or more control commands, each configured to modify said control state when executed, each associated with one or more gestures of one or more of said one or more body parts of said user; a gesture recognizer coupled to said pose analyzer and to said one or more control commands, wherein said gesture recognizer receives said pose data from said pose analyzer, determines whether said user has performed a gesture associated with a control command, and executes said control command to modify said control state when said user has performed said gesture associated with said control command; a 3D model of a scene; and, optionally modifies or selects said 3D model of a scene based on said control state; receives said pose data from said pose analyzer; calculates one or more ...

Publication date: 07-01-2016

SHAPE RECOGNITION DEVICE, SHAPE RECOGNITION PROGRAM, AND SHAPE RECOGNITION METHOD

Number: US20160004908A1
Author: Lundberg Johannes
Assignee: BRILLIANTSERVICE CO., LTD.

Provided are a shape recognition device, a shape recognition program, and a shape recognition method capable of obtaining more accurate information for recognizing an outer shape of a target object. A shape recognition device according to the present invention includes: an outer shape detection unit that detects an outer shape of a hand; an extraction point setting unit that sets a plurality of points inside of the detected outer shape as extraction points; a depth level detection unit that measures respective spatial distances to points on a surface of the hand as depth levels, the points respectively corresponding to the plurality of extraction points; and a hand orientation recognition unit that determines which of a palmar side and a back side the hand shows, on the basis of a criterion for fluctuations in the measured depth levels. 1. A shape recognition device comprising: an outer shape detection unit that detects an outer shape of a hand; an extraction point setting unit that sets a plurality of points inside of the detected outer shape as extraction points; a depth level detection unit that measures respective spatial distances to target points on a surface of the hand as depth levels, the target points respectively corresponding to the plurality of extraction points; and a hand orientation recognition unit that determines which of a palmar side and a back side the hand shows, on the basis of a criterion for fluctuations in the measured depth levels. 2. The shape recognition device according to claim 1, further comprising a reference point extraction unit that extracts, from the detected outer shape, a central point of a maximum inscribed circle of the outer shape as a reference point, wherein the extraction point setting unit sets a chord of the maximum inscribed circle such that the chord passes through the reference point, and sets the plurality of extraction points at predetermined intervals onto the set chord. 3. The shape ...
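The reference point of claim 2, the centre of the maximum inscribed circle, has a straightforward brute-force formulation: the inside pixel farthest from any background pixel. The sketch below is an illustration of that definition only (quadratic in the mask size; a real device would use a distance transform), and the chord of extraction points mentioned in the claim is left out.

```python
import numpy as np

def max_inscribed_circle_center(mask):
    """For every pixel inside the outer shape (mask==True), compute its
    distance to the nearest background pixel; the pixel maximising that
    distance is the centre of the maximum inscribed circle, and the
    distance itself is the circle's radius."""
    inside = np.argwhere(mask)
    outside = np.argwhere(~mask)
    best, best_d = None, -1.0
    for p in inside:
        d = float(np.min(np.linalg.norm(outside - p, axis=1)))
        if d > best_d:
            best, best_d = tuple(p), d
    return best, best_d
```

On a 3x3 square of hand pixels centred in a 5x5 frame, the centre pixel wins with radius 2.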

Publication date: 07-01-2016

IMAGE PROCESSOR WITH EVALUATION LAYER IMPLEMENTING SOFTWARE AND HARDWARE ALGORITHMS OF DIFFERENT PRECISION

Number: US20160004919A1

An image processor comprises image processing circuitry implementing a plurality of processing layers including at least an evaluation layer and a recognition layer. The evaluation layer comprises a software-implemented portion and a hardware-implemented portion, with the software-implemented portion of the evaluation layer being configured to generate first object data of a first precision level using a software algorithm, and the hardware-implemented portion of the evaluation layer being configured to generate second object data of a second precision level lower than the first precision level using a hardware algorithm. The evaluation layer further comprises a signal combiner configured to combine the first and second object data to generate output object data for delivery to the recognition layer. By way of example only, the evaluation layer may be implemented in the form of an evaluation subsystem of a gesture recognition system of the image processor. 1. An image processor comprising: image processing circuitry implementing a plurality of processing layers including at least an evaluation layer and a recognition layer; the evaluation layer comprising a software-implemented portion and a hardware-implemented portion; the software-implemented portion of the evaluation layer being configured to generate first object data of a first precision level using a software algorithm; the hardware-implemented portion of the evaluation layer being configured to generate second object data of a second precision level lower than the first precision level using a hardware algorithm; wherein the evaluation layer further comprises a signal combiner configured to combine the first and second object data to generate output object data for delivery to the recognition layer. 2. The image processor of claim 1, wherein the evaluation layer comprises an evaluation subsystem of a gesture recognition system. 3. The image processor of claim 1, wherein the plurality of processing layers further comprises a ...
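The abstract does not say how the signal combiner fuses the two estimates; one simple possibility is a fixed convex weighting that favours the higher-precision software result. The sketch below is purely illustrative: the 0.8 weight and the weighted-average rule are assumptions, not details from the patent.

```python
import numpy as np

def combine_object_data(sw_data, hw_data, sw_weight=0.8):
    """Signal-combiner sketch: fuse the slower, higher-precision software
    estimate with the faster, lower-precision hardware estimate via a
    convex combination. sw_weight=0.8 is an assumed tuning value."""
    sw = np.asarray(sw_data, dtype=float)
    hw = np.asarray(hw_data, dtype=float)
    return sw_weight * sw + (1.0 - sw_weight) * hw
```

With a software estimate of 1.0 and a hardware estimate of 0.0, the combined output is 0.8.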

Publication date: 07-01-2021

Wearable Electronic Device Having a Light Field Camera

Number: US20210004444A1

A method of authenticating a user of a wearable electronic device includes emitting light into a dorsal side of a forearm near a wrist of the user; receiving, using a light field camera, remissions of the light from the dorsal side of the forearm near the wrist of the user; generating a light field image from the remissions of the light; performing a synthetic focusing operation on the light field image to construct at least one image of at least one layer of the forearm near the wrist; extracting a set of features from the at least one image; determining whether the set of features matches a reference set of features; and authenticating the user based on the matching. In some embodiments, the method may further include compensating for a tilt of the light field camera prior to or while performing the synthetic focusing operation. 1. A watch body, comprising: a housing; a cover mounted to the housing, the cover having a first surface exterior to the watch body and a second surface interior to the watch body; a light emitter positioned to emit light through the cover into a dorsal side of a forearm near a wrist of a user when the first surface of the cover is positioned adjacent the dorsal side of the forearm near the wrist of the user; a light field camera positioned adjacent the second surface to receive remissions of the light through the cover from the dorsal side of the forearm near the wrist; and a processor configured to operate the light emitter and the light field camera, obtain a light field image from the light field camera, and perform a synthetic focusing operation on the light field image to construct at least one image of at least one layer of the forearm near the wrist. 2. The watch body of claim 1, wherein the light field camera comprises: an array of non-overlapping image sensing regions; a pinhole mask positioned between the array of non-overlapping image sensing regions and the second surface of the cover; and a spacer between the array of non- ...
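Synthetic focusing on a light field is classically done by shift-and-add refocusing: each sub-aperture view (here, the image behind one pinhole of the mask) is shifted in proportion to its angular offset and the views are averaged, so features at the depth selected by the scale factor add coherently while other depths blur. The sketch below shows that textbook operation under strong simplifications (integer shifts via `np.roll`, no tilt compensation); it is not the patent's implementation.

```python
import numpy as np

def synthetic_focus(subviews, offsets, alpha):
    """Shift-and-add refocusing sketch.
    subviews: list of equally sized 2D arrays (one per pinhole/sub-aperture).
    offsets:  per-view angular offsets (du, dv) on the pinhole grid.
    alpha:    depth-selection factor; each view is shifted by alpha*(du, dv)."""
    acc = np.zeros_like(subviews[0], dtype=float)
    for view, (du, dv) in zip(subviews, offsets):
        shift = (int(round(alpha * du)), int(round(alpha * dv)))
        acc += np.roll(view, shift, axis=(0, 1))
    return acc / len(subviews)
```

With alpha = 0 no view is shifted, so the result is just the mean of the sub-aperture images (the "all-in-focus at infinity" plane in this toy model).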

Publication date: 04-01-2018

OBJECT MODELING AND REPLACEMENT IN A VIDEO STREAM

Number: US20180005026A1

Systems, devices, and methods are presented for segmenting an image of a video stream with a client device by receiving one or more images depicting an object of interest and determining pixels within the one or more images corresponding to the object of interest. The systems, devices, and methods identify a position of a portion of the object of interest and determine a direction for the portion of the object of interest. Based on the direction of the portion of the object of interest, a histogram threshold is dynamically modified for identifying pixels as corresponding to the portion of the object of interest. The portion of the object of interest is replaced with a graphical interface element aligned with the direction of the portion of the object of interest. 1. A method, comprising: receiving, by one or more processors, one or more images depicting at least a portion of a hand; determining pixels within the one or more images corresponding to the portion of the hand in a predetermined portion of a field of view of an image capture device, the portion of the hand having a finger; based on the pixels corresponding to the portion of the hand, identifying a finger position of the finger; determining a direction of the finger based on the finger position; based on the direction of the finger, dynamically modifying a histogram threshold for identifying pixels as corresponding to the portion of the hand; and replacing the portion of the hand and the finger with a graphical interface element aligned with the direction of the finger. 2. The method of claim 1, wherein identifying the finger position further comprises: forming a convex polygon encompassing at least a part of the portion of the hand; and identifying one or more defects within the convex polygon, a defect indicating a space between two fingers located on the portion of the hand. 3. The method of claim 1, wherein determining the direction of the finger further comprises: identifying a tip of the finger, the tip ...
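The "dynamically modified histogram threshold" step can be pictured as picking a percentile of the pixel-value histogram and nudging that percentile as a function of the finger direction. The rule below (base percentile plus a linear gain on the angle) and its constants are illustrative assumptions; the patent only states that the threshold is modified based on the direction.

```python
import numpy as np

def dynamic_threshold(pixels, direction_deg, base_pct=50.0, gain=0.2):
    """Sketch: start from a base percentile of the candidate-pixel histogram
    and shift it with the finger direction (e.g. because illumination on the
    finger varies with orientation). base_pct and gain are assumed values."""
    pct = float(np.clip(base_pct + gain * direction_deg, 0.0, 100.0))
    return np.percentile(pixels, pct)

# Pixels at or above the returned value would then be labelled as hand.
```

For a uniform 0..100 intensity ramp, a 0-degree finger keeps the median threshold (50), while a 100-degree direction raises it to the 70th percentile.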

Publication date: 02-01-2020

DISPLAY CONTROL SYSTEM AND RECORDING MEDIUM

Number: US20200005099A1

There is provided a display control system including a plurality of display units, an imaging unit configured to capture a subject, a predictor configured to predict an action of the subject according to a captured image captured by the imaging unit, a guide image generator configured to generate a guide image that guides the subject according to a prediction result from the predictor, and a display controller configured to, on the basis of the prediction result from the predictor, select a display unit capable of displaying an image at a position corresponding to the subject from the plurality of display units, and to control the selected display unit to display the guide image at the position corresponding to the subject. 1. (canceled) 2. A system comprising: an action history information acquirer for acquiring information of user motion around a table; a guide image generator for generating one or more images suggesting an action of the user based on the acquired information; and a display controller for controlling display of the one or more images on the table. 3. The system according to claim 2, further comprising a predictor for generating one or more predicted actions of the user according to the information of user motion. 4. The system according to claim 3, wherein the guide image generator generates the one or more images based on at least one of the predicted actions. 5. The system according to claim 2, further comprising a learning unit for learning one or more patterns of items placed on the table, and wherein the guide image generator generates the one or more images based on at least one of the patterns. 6. The system according to claim 5, wherein the one or more patterns comprises a pattern of dishes. 7. The system according to claim 5, wherein the one or more patterns comprises a pattern of cutlery. 8. The system according to claim 2, further comprising one or more imaging units for generating the ...

Publication date: 02-01-2020

Systems and Methods for Authenticating a User According to a Hand of the User Moving in a Three-Dimensional (3D) Space

Number: US20200005530A1
Author: Holz David
Assignee: Ultrahaptics IP Two Limited

Methods and systems for capturing motion and/or determining the shapes and positions of one or more objects in 3D space utilize cross-sections thereof. In various embodiments, images of the cross-sections are captured using a camera based on edge points thereof. 1. A system for authenticating a user according to a hand of the user moving in a three-dimensional (3D) space, the system comprising: one or more processors coupled to a memory storing instructions that, when executed by the one or more processors, implement actions including: analyzing a sequence of images including the hand of the user moving in the 3D space, as captured by a camera from a particular vantage point, to (i) computationally determine a shape of the hand of the user according to one or more mathematically represented 3D surfaces of the hand and (ii) computationally determine a jitter pattern of the hand; and in response to a received authentication determination obtained by performing a comparison of the shape of the hand and the jitter pattern of the hand to a database of hand shapes and jitter patterns, authenticating the user and granting access to the user when the authentication determination indicates that the user is authorized and denying access to the user when the authentication determination indicates that the user is not authorized. 2. The system of claim 1, further including: at least one source that casts an output onto a portion of the hand of the user. 3. The system of claim 1, further including transmitting to at least one further process a signal that includes at least one selected from (i) trajectory information determined from a reconstructed position of a portion of the hand of the user that the at least one further process interprets, and (ii) gesture information interpreted from trajectory information for the portion of the hand of the user. 4. The system of claim 1, further comprising a time-of-flight camera, and wherein a plurality of ...
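The comparison step of claim 1 (measured hand shape and jitter pattern against an enrolment database) can be sketched as a nearest-template check. Everything specific here is an assumption standing in for whatever matcher the real system uses: the descriptors are plain vectors, the metric is a summed Euclidean distance, and `tol` is an arbitrary acceptance tolerance.

```python
import numpy as np

def authenticate(shape_vec, jitter_vec, database, tol=0.5):
    """Sketch: grant access when some enrolled (shape, jitter) pair is
    within `tol` of the measured descriptors; deny otherwise."""
    for ref_shape, ref_jitter in database:
        d = (np.linalg.norm(shape_vec - ref_shape)
             + np.linalg.norm(jitter_vec - ref_jitter))
        if d <= tol:
            return True     # authorized: grant access
    return False            # not authorized: deny access
```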

Publication date: 08-01-2015

Gesture recognizer system architecture

Number: US20150009135A1
Assignee: Microsoft Technology Licensing LLC

Systems, methods and computer readable media are disclosed for a gesture recognizer system architecture. A recognizer engine is provided, which receives user motion data and provides that data to a plurality of filters. A filter corresponds to a gesture that may then be tuned by an application receiving information from the gesture recognizer, so that the specific parameters of the gesture, such as an arm acceleration for a throwing gesture, may be set on a per-application level, or multiple times within a single application. Each filter may output to an application using it a confidence level that the corresponding gesture occurred, as well as further details about the user motion data.
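The engine/filter split described above can be sketched as a small object model: one filter per gesture with a per-application tunable parameter, and an engine that fans motion data out to every filter and reports confidences. The class names, the single `min_speed` parameter, and the saturating-ratio confidence rule are illustrative assumptions, not details from the patent.

```python
class GestureFilter:
    """One filter corresponds to one gesture; its parameters (e.g. the
    minimum arm speed for a throw) are tunable per application."""
    def __init__(self, name, min_speed):
        self.name = name
        self.min_speed = min_speed      # per-application tunable parameter

    def confidence(self, motion_speed):
        # Assumed rule: confidence grows with speed, saturating at 1.0
        # once the tuned threshold is reached.
        return min(1.0, motion_speed / self.min_speed)

class RecognizerEngine:
    """Receives user motion data and provides it to all registered filters,
    returning each gesture's confidence to the application."""
    def __init__(self):
        self.filters = []

    def add_filter(self, gesture_filter):
        self.filters.append(gesture_filter)

    def process(self, motion_speed):
        return {f.name: f.confidence(motion_speed) for f in self.filters}
```

An application tuning the throw gesture to require a speed of 2.0 would see confidence 0.5 for a half-speed arm motion and 1.0 once the threshold is exceeded.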

Publication date: 08-01-2015

BIOMETRIC AUTHENTICATION APPARATUS, BIOMETRIC AUTHENTICATION METHOD, AND COMPUTER PROGRAM FOR BIOMETRIC AUTHENTICATION

Number: US20150010215A1

A biometric authentication apparatus includes a storage unit which stores first shape data representing a shape of biometric information of a registered user's hand with fingers at a first posture and second shape data representing a shape of biometric information of the hand with the fingers at a second posture; a posture specification unit which calculates an index representing a third posture of fingers of a user's hand in a biometric image; a biometric information extraction unit which generates third shape data representing a shape of biometric information of the user's hand in the biometric image; and a correction unit which obtains corrected shape data by correcting the first or the second shape data to cancel a shape difference of the biometric information due to a difference between the third posture and the first or the second posture based on the index for matching. 1. A biometric authentication apparatus comprising: a storage unit which stores first shape data representing a shape of biometric information of a hand of a registered user in a state in which fingers of the hand take a first posture, a first index representing the first posture, second shape data representing a shape of biometric information of the hand of the registered user in a state in which the fingers of the hand take a second posture, and a second index representing the second posture; a biometric information acquisition unit which generates a biometric image representing biometric information of a hand of a user; a posture specification unit which calculates a third index representing a third posture of fingers of the hand of the user captured in the biometric image from the biometric image; a biometric information extraction unit which generates third shape data representing a shape of the biometric information of the hand of the user captured in the biometric image based on the biometric image; a correction unit which obtains corrected shape data by correcting the first shape data or
...

Publication date: 14-01-2021

Method and device for measuring biometric information in electronic device

Number: US20210012130A1
Assignee: SAMSUNG ELECTRONICS CO LTD

Disclosed in various embodiments of the present invention are a method and a device for measuring a user's biometric information in an electronic device and providing information related to the biometric information. An electronic device according to various embodiments of the present invention comprises a sensor module, a camera module, a display device, and a processor, wherein the processor can be configured to: execute an application; acquire a user's first biometric information on the basis of the sensor module while the operation relating to the application is performed; estimate the user's health information at least on the basis of the first biometric information; and link the health information with the operation relating to the application so as to display the same through the display device. Various embodiments are possible.

Publication date: 21-01-2016

Gesture Recognition in Vehicles

Number: US20160018904A1
Author: El Dokor Tarek

A method and system for performing gesture recognition of a vehicle occupant employing a time of flight (TOF) sensor and a computing system in a vehicle. An embodiment of the method of the invention includes the steps of receiving one or more raw frames from the TOF sensor, performing clustering to locate one or more body part clusters of the vehicle occupant, calculating the location of the tip of the hand of the vehicle occupant, determining whether the hand has performed a dynamic or a static gesture, retrieving a command corresponding to one of the determined static or dynamic gestures, and executing the command.
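One concrete step in the pipeline above, "calculating the location of the tip of the hand", can be sketched once clustering has isolated the hand's 3D points from a TOF frame. The rule below (tip = hand point farthest from a wrist anchor) is an assumption for illustration; the patent only states that the tip location is calculated.

```python
import numpy as np

def hand_tip(hand_points, wrist):
    """After clustering has located the hand cluster in a TOF frame,
    take the cluster point farthest from the wrist anchor as the tip.
    The wrist anchor itself is an assumed input."""
    pts = np.asarray(hand_points, dtype=float)
    dists = np.linalg.norm(pts - np.asarray(wrist, dtype=float), axis=1)
    return tuple(pts[np.argmax(dists)])
```

The tip position would then feed the static/dynamic gesture decision and the command lookup described in the abstract.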

Publication date: 17-01-2019

Non-tactile interface systems and methods

Number: US20190018495A1
Assignee: Leap Motion Inc

Methods and systems for processing an input are disclosed that detect a portion of a hand and/or other detectable object in a region of space monitored by a 3D sensor. The method further includes determining a zone corresponding to the region of space in which the portion of the hand or other detectable object was detected. Also, the method can include determining from the zone a correct way to interpret inputs made by a position, shape or a motion of the portion of the hand or other detectable object.

Publication date: 17-01-2019

SIGN LANGUAGE METHOD USING CLUSTERING

Number: US20190019018A1

A sign language recognizer is configured to detect interest points in an extracted sign language feature, wherein the interest points are localized in space and time in each image acquired from a plurality of frames of a sign language video; apply a filter to determine one or more extrema of a central region of the interest points; associate features with each interest point using a neighboring pixel function; cluster a group of extracted sign language features from the images based on a similarity between the extracted sign language features; represent each image by a histogram of visual words corresponding to the respective image to generate a code book; train a classifier to classify each extracted sign language feature using the code book; detect a posture in each frame of the sign language video using the trained classifier; and construct a sign gesture based on the detected postures. 1: A computer-implemented method of recognizing sign language, the method comprising: detecting, via circuitry, one or more interest points in an extracted sign language feature, wherein the one or more interest points are localized in space and time in each of a plurality of images acquired from a plurality of frames of a sign language video including the extracted sign language feature, wherein the detecting is carried out using a Scale Invariant Features Transform (SIFT) descriptor and the interest points represent corners in each image; applying a digital filter to determine one or more extrema of a central region of the one or more interest points; associating one or more features with each interest point of the one or more interest points using a neighboring pixel function; clustering, via the circuitry, a group of extracted sign language features from the plurality of images based on a similarity between the extracted sign language features according to the associating to form from 800 to 1,200 clusters; representing each image of the plurality of images by a histogram of ...
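The bag-of-visual-words step in the claim (cluster features into a code book, then represent each image by a histogram of visual words) reduces to: assign every local feature to its nearest code-book entry and count. The sketch below shows only that assignment-and-histogram step, assuming the clustering has already produced the code book; the nearest-neighbour assignment by Euclidean distance is the standard choice, not a detail from the patent.

```python
import numpy as np

def visual_word_histogram(features, codebook):
    """Assign each local feature (row of `features`) to its nearest
    code-book entry (visual word), then return the normalised histogram
    of word counts — the image's bag-of-visual-words representation."""
    feats = np.asarray(features, dtype=float)
    book = np.asarray(codebook, dtype=float)
    # pairwise distances: (num_features, num_words)
    dists = np.linalg.norm(feats[:, None, :] - book[None, :, :], axis=2)
    words = np.argmin(dists, axis=1)
    hist = np.bincount(words, minlength=len(book)).astype(float)
    return hist / hist.sum()
```

The resulting fixed-length histograms are what the classifier in the claim is trained on, one per frame.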

Publication date: 22-01-2015

Gesture recognition method and apparatus based on analysis of multiple candidate boundaries

Number: US20150023607A1
Assignee: LSI Corp

An image processing system comprises an image processor configured to identify a plurality of candidate boundaries in an image, to obtain corresponding modified images for respective ones of the candidate boundaries, to apply a mapping function to each of the modified images to generate a corresponding vector, to determine sets of estimates for respective ones of the vectors relative to designated class parameters, and to select a particular one of the candidate boundaries based on the sets of estimates. The designated class parameters may include sets of class parameters for respective ones of a plurality of classes each corresponding to a different gesture to be recognized. The candidate boundaries may comprise candidate palm boundaries associated with a hand in the image. The image processor may be further configured to select a particular one of the plurality of classes to recognize the corresponding gesture based on the sets of estimates.
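The selection logic above (map each candidate boundary to a vector, score each vector against each class's parameters, pick the best candidate) can be sketched as a nested search. The scoring rule here, distance to a per-class mean vector, is an assumption; the patent leaves the form of the "designated class parameters" and "estimates" open.

```python
import numpy as np

def select_boundary(candidate_vectors, class_means):
    """For every candidate boundary's vector, compute an estimate against
    every class's parameters (assumed here: Euclidean distance to a class
    mean); return the (candidate index, class index) with the best score,
    i.e. the selected palm boundary and the gesture class it matches."""
    best = None                      # (candidate_idx, class_idx, score)
    for i, v in enumerate(candidate_vectors):
        for j, mean in enumerate(class_means):
            score = float(np.linalg.norm(np.asarray(v, float) - np.asarray(mean, float)))
            if best is None or score < best[2]:
                best = (i, j, score)
    return best[0], best[1]
```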

Publication date: 26-01-2017

Wearable Camera for Reporting the Time Based on Wrist-Related Trigger

Number: US20170024612A1

A wearable device and method are provided for reporting the time based on a wrist-related trigger. In one implementation, a wearable apparatus for providing time information to a user includes a wearable image sensor configured to capture real-time image data from an environment of a user of the wearable apparatus. The wearable apparatus also includes at least one processing device programmed to identify in the image data a wrist-related trigger associated with the user. The processing device is also programmed to provide an output to the user, the output including the time information, based on at least the identification of the wrist-related trigger. 1. A wearable apparatus for providing time information to a user, the wearable apparatus comprising: a wearable image sensor configured to capture real-time image data from an environment of the user of the wearable apparatus; and at least one processing device programmed to: identify in the image data a wrist-related trigger associated with the user; and provide an output to the user, the output including the time information, based on at least the identification of the wrist-related trigger. 2. The wearable apparatus of claim 1, wherein the wrist-related trigger includes identification of at least a portion of a wrist region of the user. 3. The wearable apparatus of claim 2, wherein the at least one processing device is further programmed to determine that the wrist-related trigger is associated with the user based on at least a threshold amount of space that the portion of the wrist region occupies in at least one image of the image data. 4. The wearable apparatus of claim 3, wherein the threshold amount of space that the wrist region occupies is at least 10 percent of the at least one image. 5. The wearable apparatus of claim 3, wherein the threshold amount of space that the wrist region occupies is at least 20 percent of the at least one image. 6. The wearable apparatus of claim 1, wherein the at least one ...

Publication date: 25-01-2018

METHOD AND SYSTEM FOR 3D HAND SKELETON TRACKING

Number: US20180024641A1

A tracking system is disclosed. The system may comprise a processor and a non-transitory computer-readable storage medium coupled to the processor and storing instructions that, when executed by the processor, cause the system to perform a method. The method may comprise training a detection model and an extraction model, capturing one or more images of at least a portion of an object, detecting the portion of the object in each of the one or more images through the trained detection model, tracking the detected portion of the object in real-time, obtaining 2D positions of one or more locations on the tracked portion of the object through the trained extraction model, and obtaining 3D positions of the one or more locations on the tracked portion of the object based at least in part on the obtained 2D positions. 1. A tracking system, comprising: a processor; and a non-transitory computer-readable storage medium coupled to the processor and storing instructions that, when executed by the processor, cause the system to perform: training a detection model and an extraction model; capturing one or more images of at least a portion of an object; detecting the portion of the object in each of the one or more images through the trained detection model; tracking the detected portion of the object in real-time; obtaining 2D positions of one or more locations on the tracked portion of the object through the trained extraction model; and obtaining 3D positions of the one or more locations on the tracked portion of the object based at least in part on the obtained 2D positions. 2. The tracking system of claim 1, wherein: the one or more images comprise two stereo images of the portion of the object; and the system further comprises two infrared cameras configured to capture the two stereo images. 3. The tracking system of claim 1, wherein: the portion of the object comprises a hand; and the one or more locations comprise one or more joints of the hand. 4. The tracking ...
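Claim 2's two infrared cameras suggest how the 2D-to-3D step can work: once a hand joint's 2D position is found in both stereo images, depth follows from the textbook pinhole relation Z = f * B / disparity. The sketch below is standard rectified-stereo geometry, not a detail taken from the patent; the focal length and baseline values are assumed calibration inputs.

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Rectified stereo: depth Z = f * B / d, where d = x_left - x_right
    is the horizontal disparity of the same joint in the two images."""
    disparity = x_left - x_right
    return focal_px * baseline_m / disparity

def joint_3d(x_left, y, x_right, focal_px, baseline_m):
    """Back-project a tracked 2D joint location to 3D camera coordinates
    (X, Y, Z) of the left camera."""
    z = depth_from_disparity(x_left, x_right, focal_px, baseline_m)
    return (x_left * z / focal_px, y * z / focal_px, z)
```

For a 500 px focal length and a 10 cm baseline, a 5 px disparity puts the joint 10 m away; larger disparities mean closer joints.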

Publication date: 28-01-2016

DISPLAY DEVICE AND METHOD FOR CONTROLLING THE SAME

Number: US20160029014A1
Author: Kim Jihwan, PARK Sihwa

A display device and a method for controlling the same are disclosed. The method for controlling a display device comprises a display unit configured to display visual information, including a private region and a public region; a control input sensing unit configured to detect a control input and to deliver the detected control input to a processor; and the processor configured to control the display unit and the control input sensing unit. In this case, the processor may display a control object in the private region, detect a first control input, move the control object from the private region to a first position of the public region based on the detected first control input and display a control indicator corresponding to the control object in a second position of the private region. In this case, the second position may be set based on the first position of the control object. 1. A display device comprising:a display unit configured to display visual information, wherein the display unit includes a private region and a public region;a control input sensing unit configured to detect a control input and to deliver the detected control input to a processor; andthe processor configured to control the display unit and the control input sensing unit,wherein the processor is further configured to:display a control object in the private region,detect a first control input,move the control object from the private region to a first position of the public region based on the detected first control input, anddisplay a control indicator corresponding to the control object in a second position of the private region,wherein the second position is determined based on the first position of the control object.2. The display device according to claim 1 , wherein the first position is determined based on at least one of a moving direction and a moving speed of the detected first control input.3. 
The display device according to claim 2, wherein the first position is determined to ...

24-01-2019 publication date

AUTOMATED SIGN LANGUAGE RECOGNITION METHOD

Number: US20190026546A1

A sign language recognizer is configured to detect interest points in an extracted sign language feature, wherein the interest points are localized in space and time in each image acquired from a plurality of frames of a sign language video; apply a filter to determine one or more extrema of a central region of the interest points; associate features with each interest point using a neighboring pixel function; cluster a group of extracted sign language features from the images based on a similarity between the extracted sign language features; represent each image by a histogram of visual words corresponding to the respective image to generate a code book; train a classifier to classify each extracted sign language feature using the code book; detect a posture in each frame of the sign language video using the trained classifier; and construct a sign gesture based on the detected postures. 1: An automated sign language recognition method, the method comprising: detecting, via circuitry, one or more interest points in an extracted sign language feature, wherein the one or more interest points are localized in space and time in each of a plurality of images acquired from a plurality of frames of a sign language video including the extracted sign language feature, wherein the detecting is carried out using a Scale Invariant Features Transform (SIFT) descriptor and the interest points represent corners in each image; applying a digital filter to determine one or more extrema of a central region of the one or more interest points; associating one or more features with each interest point of the one or more interest points using a neighboring pixel function; clustering, via the circuitry, a group of extracted sign language features from the plurality of images based on a similarity between the extracted sign language features according to the associating; representing each image of the plurality of images by a histogram of visual words corresponding to the respective image to ...
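The clustering-and-histogram stage described above (local descriptors quantized against a codebook of visual words) can be sketched in a few lines of numpy. This is an illustrative toy, not the patented method: the 2-D "descriptors" and the 3-word codebook stand in for real SIFT descriptors and a codebook learned by clustering.

```python
import numpy as np

def assign_visual_words(descriptors, codebook):
    """Assign each local descriptor to its nearest codebook centroid."""
    # Squared Euclidean distance from every descriptor to every centroid.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def bovw_histogram(descriptors, codebook):
    """Represent one image as a normalized histogram of visual words."""
    words = assign_visual_words(descriptors, codebook)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)

# Toy 2-D "descriptors" and a 3-word codebook (both invented).
codebook = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
descs = np.array([[0.1, 0.2], [9.8, 0.1], [0.2, 9.9], [0.0, 0.1]])
hist = bovw_histogram(descs, codebook)
```

In practice the codebook would be learned from descriptors pooled over training frames, and the per-image histograms would feed the classifier recited in the claims.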

31-01-2019 publication date

Systems and Methods of Tracking Moving Hands and Recognizing Gestural Interactions

Number: US20190033975A1
Assignee: Leap Motion, Inc.

The technology disclosed relates to providing command input to a machine under control. It further relates to gesturally interacting with the machine. The technology disclosed also relates to providing monitoring information about a process under control. The technology disclosed further relates to providing biometric information about an individual. The technology disclosed yet further relates to providing abstract features information (pose, grab strength, pinch strength, confidence, and so forth) about an individual. 1. A method of determining command input to a machine responsive to control object gestures in three dimensional (3D) sensory space, the method comprising: configuring a 3D model representing a control object by fitting one or more 3D capsules to observation information based on an image captured at time t0 of gestural motion of the control object in three dimensional (3D) sensory space; responsive to modifications in the observation information based on another image captured at time t1, improving alignment of the 3D capsules to the observation information by: determining variance between a point on another set of observation information based on the image captured at time t1 and a corresponding point on at least one of the 3D capsules fitted to the observation information based on the image captured at time t0, by pairing point sets from points on the observation information with points on the 3D capsules, wherein normal vectors to points on the observation information are parallel to normal vectors to points on the 3D capsules, and determining the variance comprising a reduced root mean squared deviation (RMSD) of distances between paired point sets; responsive to the variance, adjusting the 3D capsules and determining a gesture performed by the control object based on the 3D capsules as adjusted; and interpreting the gesture as providing command input to a machine.
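The variance measure recited in the claim, a root mean squared deviation over paired point sets, reduces to a short computation once observed surface points have been paired with points on the fitted capsules. A minimal numpy sketch under that assumption (the pairing via parallel normals is taken as already done; the coordinates are made up):

```python
import numpy as np

def rmsd(points_a, points_b):
    """Root mean squared deviation of distances between paired 3D points."""
    diffs = points_a - points_b
    return float(np.sqrt((diffs ** 2).sum(axis=1).mean()))

# Observed surface points vs. the corresponding points on the capsule model.
observed = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
model    = np.array([[0.1, 0.0, 0.0], [1.0, 0.1, 0.0], [0.0, 1.0, 0.1]])
err = rmsd(observed, model)
```

A fitting loop would then adjust the capsule parameters to reduce this value, as in the "responsive to the variance, adjusting the 3D capsules" step.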

01-02-2018 publication date

INFANT MONITORING SYSTEM

Number: US20180035082A1
Author: Patil Radhika
Assignee:

A monitoring system includes a sensor, a processing block and an alert generator. The sensor is used to generate images of an infant. The processing block processes one or more of the images and identifies a condition with respect to the infant. The alert generator causes generation of an alert signal if the identified condition warrants external notification. In an embodiment, the sensor is a 3D camera, and the 3D camera, the processing block and the alert generator are part of a unit placed in the vicinity of the infant. The monitoring system further includes microphones and motion sensors to enable detection of sounds and movement. 1. A monitoring system comprising: a camera to generate images of an infant; a processing block to process one or more of said images and identify a condition with respect to said infant; and an alert generator to cause generation of an alert signal if said identified condition warrants external notification, wherein said sensor, said processing block and said alert generator are part of a unit placed in the vicinity of said infant. 2. The monitoring system of claim 1, wherein said camera comprises: a 3D (three-dimensional) camera to generate said one or more images as one or more 3D (three-dimensional) images of said infant located in a field-of-view (FoV) of said 3D camera, wherein said infant is placed on a stable surface within said FoV. 3. The monitoring system of claim 2, further comprising: a motion sensor to generate motion signals representative of motion of said infant, and to generate filtered motion signals by filtering out the components of said motion signals that are due to motion of said surface on which said infant is placed. 4. The monitoring system of claim 2, wherein said condition is a posture of said infant. 6. The monitoring system of claim 5, wherein said extracting skeletal information, determining body part of interest and determining said posture from said body parts of interest employ machine learning ...

31-01-2019 publication date

SYSTEM AND METHOD FOR DETECTING HAND GESTURES IN A 3D SPACE

Number: US20190034714A1
Assignee:

A system for detecting hand gestures in a 3D space comprises a 3D imaging unit and a processing unit. The processing unit generates a foreground map of the at least one 3D image by segmenting foreground from background and a 3D sub-image of the at least one 3D image that includes the image of a hand by scaling a 2D intensity image, a depth map and a foreground map of the at least one 3D image such that the 3D sub-image has a predetermined size and by rotating the 2D intensity image, the depth map and the foreground map of the at least one 3D image such that a principal axis of the hand is aligned to a predetermined axis in the 3D sub-image. Classifying a 3D image comprises distinguishing the hand in the 2D intensity image of the 3D sub-image from other body parts and other objects and/or verifying whether the hand has a configuration from a predetermined configuration catalogue. Further, the processing unit uses a convolutional neural network for the classification of the at least one 3D image. 1. A system for detecting hand gestures in a 3D space, comprising: a 3D imaging unit configured to capture 3D images of a scene, wherein each of the 3D images comprises a 2D intensity image and a depth map of the scene, and a processing unit coupled to the 3D imaging unit, wherein the processing unit is configured to receive the 3D images from the 3D imaging unit, use at least one of the 3D images to classify the at least one 3D image, and detect a hand gesture in the 3D images based on the classification of the at least one 3D image, wherein the processing unit is further configured to generate a foreground map of the at least one 3D image by segmenting foreground from background, wherein the processing unit is further configured to generate a 3D sub-image of the at least one 3D image that includes the image of a hand, wherein the processing unit is further configured to generate the 3D sub-image by scaling the 2D intensity image, the depth map and the foreground map of the at least one 3D image such
...
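Aligning the principal axis of the hand to a predetermined axis, as described above, is commonly done by taking the leading eigenvector of the covariance of the foreground pixel coordinates (a PCA step). A small numpy sketch under that assumption; the mask below is a toy stand-in for a real foreground map:

```python
import numpy as np

def principal_axis_angle(foreground_mask):
    """Angle (radians, folded into [0, pi)) of the dominant axis of the
    foreground pixels, via the leading eigenvector of their covariance."""
    ys, xs = np.nonzero(foreground_mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)                    # center the point cloud
    cov = pts.T @ pts / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    vx, vy = eigvecs[:, np.argmax(eigvals)]    # leading eigenvector
    return float(np.arctan2(vy, vx) % np.pi)

# A thin horizontal bar of foreground pixels: dominant axis is ~0 rad,
# so rotating the image by -angle aligns the hand with the x axis.
mask = np.zeros((5, 20), dtype=bool)
mask[2, 2:18] = True
angle = principal_axis_angle(mask)
```

The system would then rotate the intensity image, depth map and foreground map by the negative of this angle before feeding the sub-image to the convolutional neural network.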

30-01-2020 publication date

Systems and Methods of Tracking Moving Hands and Recognizing Gestural Interactions

Number: US20200033951A1
Assignee:

The technology disclosed relates to providing command input to a machine under control. It further relates to gesturally interacting with the machine. The technology disclosed also relates to providing monitoring information about a process under control. The technology disclosed further relates to providing biometric information about an individual. The technology disclosed yet further relates to providing abstract features information (pose, grab strength, pinch strength, confidence, and so forth) about an individual. 1. A method of determining command input to a machine responsive to gestures in three dimensional (3D) sensory space, the method comprising: aligning 3D capsules to observation information based on images captured of gestural motion made by at least a portion of a hand in three dimensional (3D) sensory space by: determining a variance between a point on a set of observation information based on an image captured at time t1 and a corresponding point on at least one of the 3D capsules fitted to another set of observation information based on an image captured at time t0, by pairing point sets from points on a surface of the observation information with points on the 3D capsules, wherein normal vectors to points on the set of observation information are parallel to normal vectors to points on the 3D capsules, and determining the variance comprising a reduced root mean squared deviation (RMSD) of distances between paired point sets; and responsive to the variance, adjusting the 3D capsules; determining a gesture performed by the at least a portion of a hand based on the 3D capsules as adjusted; and interpreting the gesture as providing command input to a machine. 2. The method of claim 1, wherein adjusting the 3D capsules further includes improving conformance of the 3D capsules to at least one of length, width, orientation, and arrangement of portions of the observation ...

30-01-2020 publication date

SYSTEMS AND METHODS TO USE IMAGE DATA TO PERFORM AUTHENTICATION

Number: US20200034610A1
Assignee:

Image data from two different devices is used to identify a physical interaction between two users to authenticate a digital interaction between the users. 1. A device , comprising:at least one processor;at least one computer storage with instructions executable by the at least one processor to:receive at least a first image from a first camera and receive at least a second image from a second camera;receive time-related metadata for the first image and the second image;based on the first and second images and based on the time-related metadata, identify a gesture performed between a first user and a second user; andperform authentication based on the identification of the gesture.2. The device of claim 1 , wherein the instructions are executable by the at least one processor to:identify the gesture performed between the first user and the second user at least in part by identifying a gesture indicated in both the first image and the second image and identifying the first and second images as both being generated at a particular time that is indicated in the time-related metadata.3. The device of claim 1 , wherein the instructions are executable by the at least one processor to:identify the gesture performed between the first user and the second user at least in part by identifying a gesture indicated in both the first image and the second image and identifying the first and second images as both being generated within a threshold time of each other as indicated in the time-related metadata.4. The device of claim 1 , wherein the instructions are executable by the at least one processor to:identify the gesture performed between the first user and the second user at least in part by identifying the first image and the second image as being generated at a same location.5. The device of claim 1 , wherein the instructions are executable by the at least one processor to:identify the gesture performed between the first user and the second user at least in part by ...

11-02-2016 publication date

SYSTEMS AND METHODS FOR RECOGNITION AND TRANSLATION OF GESTURES

Number: US20160042228A1
Author: Kellard Wade, Opalka Alex
Assignee:

A system for recognizing hand gestures, comprising a gesture database configured to store information related to a plurality of gestures; a recognition controller configured to capture data related to a hand gesture being performed by a user; a recognition module configured to: determine hand characteristic information from the captured data, determine finger characteristic information from the captured data, compare the hand and finger characteristic information to the information stored in the database to determine a most likely gesture, and outputting the determined most likely gesture. 1. A system for recognizing a hand gesture , comprising:a gesture database configured to store information related to a plurality of gestures;a recognition controller configured to capture data related to a hand gesture being performed by a user; determine hand characteristic information from the captured data,', 'determine finger characteristic information from the captured data,', 'compare the hand and finger characteristic information to the information stored in the database to determine a most likely gesture, and', 'outputting the determined most likely gesture., 'a recognition module configured to2. The system of claim 1 , wherein the gesture corresponds to a sign language number.3. The system of claim 1 , wherein the gesture corresponds to a sign language letter.4. The system of claim 1 , wherein the gesture corresponds to a sign language sign.5. The system of claim 1 , wherein the recognition module is configured to determine hand characteristic information by checking the number of hands present in the captured data claim 1 , checking a palm visible time for each hand present in the captured data claim 1 , checking the palm position for each visible palm claim 1 , and checking a palm velocity for each visible palm.6. The system of claim 1 , wherein the recognition module is configured to determine finger characteristics by checking a number of fingers present in the ...

11-02-2016 publication date

DETECTING APPARATUS, DETECTING METHOD AND COMPUTER READABLE RECORDING MEDIUM RECORDING PROGRAM FOR DETECTING STATE IN PREDETERMINED AREA WITHIN IMAGES

Number: US20160044222A1
Assignee: CASIO COMPUTER CO., LTD.

An imaging apparatus of an embodiment of the present invention includes a detecting unit for detecting a state in a detection area T within an image displayed in a display panel, an identifying unit for identifying a subject from the image, an acquiring unit for acquiring information relating to a predetermined subject in the case that the identifying unit identifies the predetermined subject outside the detection area T, and a control unit (the detecting unit) for controlling detection of the state in the detection area T based on the information relating to the predetermined subject acquired by the acquiring unit. 1. A detecting apparatus comprising: a detecting section configured to detect a state in a predetermined area within an image; an identifying section configured to identify a subject from the image; an acquiring section configured to acquire information relating to a predetermined subject in the case that the identifying section identifies the predetermined subject outside the predetermined area; and a control section configured to control detection of the state in the predetermined area by the detecting section based on the information relating to the predetermined subject acquired by the acquiring section. 2. The detecting apparatus of claim 1, wherein the detecting section is operable to detect a state of an object at a specific distance in the predetermined area provided at a predetermined position within the image, the acquiring section acquires focus distance information relating to a focus distance of the predetermined subject as the information relating to the predetermined subject, and the control section controls the detecting section to detect a state of an object within a predetermined distance from the focus distance of the predetermined subject in the predetermined area within the image based on the focus distance information acquired by the acquiring section. 3. The detecting apparatus of claim 1, wherein the detecting section is ...

24-02-2022 publication date

BIOMETRIC USER AUTHENTICATION

Number: US20220058248A1
Assignee:

Embodiments herein disclose computer-implemented methods, computer program products and computer systems for authenticating a user. The computer-implemented method may include receiving biographical data corresponding to a user. A change rate may be determined based on user biographical data. The computer-implemented method may include receiving first biometric data having a time-varying characteristic from the user at a first time and receiving second biometric data having the time-varying characteristic from the user at a second time that is later in time than the first time. Further, the computer-implemented method may include determining third biometric data based at least on the first biometric data, the second time, and the time-varying characteristic, and authenticating the user if the third biometric data is within a predetermined threshold of the second biometric data at the second time. 1. A computer-implemented method for authenticating a user , the computer-implemented method comprising:receiving, by one or more processors, first biometric data comprising a time-varying characteristic at a first time, wherein the first biometric data is associated with a user;receiving, by the one or more processors, second biometric data comprising the time-varying characteristic at a second time that is later in time than the first time, wherein the second biometric data is associated with the user;determining, by the one or more processors, third biometric data based at least on the first biometric data, the second time, and the time-varying characteristic; andauthenticating, by the one or more processors, the user if the third biometric data is within a predetermined threshold of the second biometric data.2. 
The computer-implemented method of claim 1, wherein determining the third biometric data is further based on biographical data corresponding to the user, wherein the biographical data comprises data corresponding to at least one of age, gender ...
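One simple reading of "determining third biometric data based on the first biometric data, the second time, and the time-varying characteristic" is linear extrapolation of the enrolled value. The drift-rate model, units and numbers below are assumptions for illustration, not the claimed algorithm:

```python
def predict_biometric(first_value, first_time, second_time, drift_rate):
    """Extrapolate an enrolled biometric value to a later time,
    assuming the time-varying characteristic drifts linearly."""
    return first_value + drift_rate * (second_time - first_time)

def authenticate(predicted, measured, threshold):
    """Accept the user when the fresh measurement lies within a
    predetermined threshold of the predicted (third) value."""
    return abs(predicted - measured) <= threshold

# Enroll at t=0, verify at t=10 (years); value drifts ~0.2 units/year.
pred = predict_biometric(100.0, 0.0, 10.0, drift_rate=0.2)
ok = authenticate(pred, measured=102.3, threshold=0.5)
```

The change rate itself could, per the abstract, be derived from biographical data such as age, rather than being a fixed constant as here.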

06-02-2020 publication date

Multi Media Computing Or Entertainment System For Responding To User Presence And Activity

Number: US20200042096A1
Assignee:

Intelligent systems are disclosed that respond to user intent and desires based upon activity that may or may not be expressly directed at the intelligent system. In some embodiments, the intelligent system acquires a depth image of a scene surrounding the system. A scene geometry may be extracted from the depth image and elements of the scene may be monitored. In certain embodiments, user activity in the scene is monitored and analyzed to infer user desires or intent with respect to the system. The interpretation of the user's intent as well as the system's response may be affected by the scene geometry surrounding the user and/or the system. In some embodiments, techniques and systems are disclosed for interpreting express user communication, e.g., expressed through hand gesture movements. In some embodiments, such gesture movements may be interpreted based on real-time depth information obtained from, e.g., optical or non-optical type depth sensors. 1. A non-transitory program storage device , readable by a processor and comprising instructions stored thereon to cause one or more processors to:acquire a depth image of a scene in a vicinity of a first device;store the depth image in a memory;develop a scene geometry based upon the depth image;monitor the activity of one or more humans present in the scene geometry, wherein one of the one or more humans comprises a user of the first device;determine whether the user is engaged in conversation with at least one of the one or more humans; andin response to a determination that the user is engaged in conversation with the at least one of the one or more humans, adjust an output of the first device based, at least in part, upon a characteristic of the determined conversation.2. The non-transitory program storage device of claim 1 , wherein the output of the first device comprises an audio output.3. 
The non-transitory program storage device of claim 1, wherein the characteristic of the determined conversation comprises ...

18-02-2016 publication date

METHOD AND SYSTEM FOR RECOGNIZING AN OBJECT

Number: US20160048727A1
Assignee: KONICA MINOLTA LABORATORY U.S.A., INC.

A method, a system, and a non-transitory computer readable medium for recognizing an object are disclosed, the method including: emitting an array of infrared rays from an infrared emitter towards a projection region, the projection region including a first object; generating a reference infrared image by recording an intensity of ray reflection from the projection region without the first object; generating a target infrared image by recording the intensity of ray reflection from the projection region with the first object; comparing the target infrared image to the reference infrared image to generate a predetermined intensity threshold; and extracting the first object from the target infrared image, if the intensity of ray reflection of the target infrared image of the first object exceeds the predetermined intensity threshold. 1. A method for recognizing an object , the method comprising:emitting an array of infrared rays from an infrared emitter towards a projection region, the projection region including a first object;generating a reference infrared image by recording an intensity of ray reflection from the projection region without the first object;generating a target infrared image by recording the intensity of ray reflection from the projection region with the first object;comparing the target infrared image to the reference infrared image to generate a predetermined intensity threshold; andextracting the first object from the target infrared image, if the intensity of ray reflection of the target infrared image of the first object exceeds the predetermined intensity threshold.2. 
The method of claim 1, wherein the first object is a hand of a user, and wherein if the intensity of ray reflection of the target infrared image of the hand exceeds the predetermined intensity threshold: generating a binarized image of the hand from the infrared image; and comparing the binarized image of the hand to a model hand to detect and track movement of the one or ...
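The reference-versus-target comparison described above amounts to a per-pixel threshold on the increase in reflected infrared intensity. A toy numpy sketch; the 4×4 images and the margin value are invented:

```python
import numpy as np

def extract_object(reference, target, margin):
    """Binary mask of pixels whose reflected intensity rose by more than
    `margin` relative to the object-free reference image."""
    return (target.astype(float) - reference.astype(float)) > margin

# Reference: projection region with no object; target: same region with
# a reflective object (e.g. a hand) present.
reference = np.full((4, 4), 10.0)
target = reference.copy()
target[1:3, 1:3] = 50.0
mask = extract_object(reference, target, margin=20.0)
```

The resulting mask is the binarized hand image that the claim then compares against a model hand for tracking.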

25-02-2016 publication date

USER INTERFACE APPARATUS AND CONTROL METHOD

Number: US20160054859A1
Author: Oshima Soshi
Assignee:

A three-dimensional image of an operation surface and a region upward thereof is acquired, a hand region is extracted from the three-dimensional image, and the position of a fingertip is specified based on the extracted hand region. A touch on the operation surface is detected based on the operation surface included in the three-dimensional image and the specified position of the fingertip, and if a touch is detected, the direction of the fingertip is specified, and a position obtained by shifting the position of the fingertip by a predetermined amount in the direction opposite to the specified direction of the fingertip is determined as the touch position. 1. A user interface apparatus for specifying an operation performed on an operation surface , comprising:an acquisition unit that acquires a three-dimensional image of a region of the operation surface and a three-dimensional space whose bottom surface is the operation surface;an extraction unit that extracts a hand region from the three-dimensional image;a first specification unit that specifies a position of a fingertip based on the hand region;a detection unit that detects a touch on the operation surface based on the operation surface included in the three-dimensional image and the position of the fingertip;a second specification unit that, in a case where a touch on the operation surface was detected, specifies a direction of the fingertip based on the hand region; anda determination unit that determines, as a touch position, a position obtained by shifting the position of the fingertip by a predetermined amount on the operation surface in a direction opposite to the direction of the fingertip.2. 
The user interface apparatus according to claim 1, wherein the second specification unit specifies the direction of the fingertip based on the hand region projected onto the operation surface, and the determination unit determines, as the touch position, a position obtained by shifting the position of the fingertip ...
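The final step above, shifting the detected fingertip against the finger direction to approximate the actual contact point, is a one-line vector operation. A sketch with made-up coordinates and offset:

```python
import numpy as np

def touch_position(fingertip, finger_direction, offset):
    """Shift the detected fingertip by `offset` in the direction opposite
    to the finger's pointing direction."""
    d = np.asarray(finger_direction, dtype=float)
    d /= np.linalg.norm(d)                       # unit direction of the finger
    return np.asarray(fingertip, dtype=float) - offset * d

# Fingertip detected at (100, 80) px, finger pointing along +x,
# with a hypothetical 5 px correction toward the finger pad.
pos = touch_position([100.0, 80.0], [1.0, 0.0], offset=5.0)
```

The correction compensates for the fingertip apex being detected slightly beyond where the finger pad actually contacts the operation surface.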

14-02-2019 publication date

SUPPORT VECTOR MACHINE ADAPTED SIGN LANGUAGE CLASSIFICATION METHOD

Number: US20190050637A1

A sign language recognizer is configured to detect interest points in an extracted sign language feature, wherein the interest points are localized in space and time in each image acquired from a plurality of frames of a sign language video; apply a filter to determine one or more extrema of a central region of the interest points; associate features with each interest point using a neighboring pixel function; cluster a group of extracted sign language features from the images based on a similarity between the extracted sign language features; represent each image by a histogram of visual words corresponding to the respective image to generate a code book; train a classifier to classify each extracted sign language feature using the code book; detect a posture in each frame of the sign language video using the trained classifier; and construct a sign gesture based on the detected postures. 1: A computer-implemented method of recognizing sign language , the method comprising:detecting, via circuitry, one or more interest points in an extracted sign language feature, wherein the one or more interest points are localized in space and time in each of a plurality of images acquired from a plurality of frames of a sign language video including the extracted sign language feature, wherein the images include 33 sign primitive postures;applying a digital filter to determine one or more extrema of a central region of the one or more interest points;associating one or more features with each interest point of the one or more interest points using a neighboring pixel function;clustering, via the circuitry, a group of extracted sign language features from the plurality of images based on a similarity between the extracted sign language features according to the associating;representing each image of the plurality of images by a histogram of visual words corresponding to the respective image to generate a code book;training, via the circuitry, a classifier based on labels ...

25-02-2016 publication date

IMAGE PROCESSING SYSTEM, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM

Number: US20160055373A1
Author: MATSUDA Kouichi
Assignee:

There is provided an image processing apparatus including: a communication unit receiving first feature amounts, which include coordinates of feature points in an image acquired by another image processing apparatus, and position data showing a position in the image of a pointer that points at a location in a real space; an input image acquisition unit acquiring an input image by image pickup of the real space; a feature amount generating unit generating second feature amounts including coordinates of feature points set in the acquired input image; a specifying unit comparing the first feature amounts and the second feature amounts and specifying, based on a comparison result and the position data, a position in the input image of the location in the real space being pointed at by the pointer; and an output image generating unit generating an output image displaying an indicator indicating the specified position. 1. A first information processing apparatus comprising:circuitry configured toacquire an image seen by a first user;acquire an object as a pointer, wherein the object is pointed to by a finger of the first user of the first information processing apparatus in the image seen by the first user; andshare the object acquired with a second user of a second information processing apparatus,wherein the second information processing apparatus recognizes the object acquired by the first information processing apparatus in a second image seen by the second user.2. The first information processing apparatus of claim 1 , wherein the circuitry is further configured to recognize the pointer by detecting a finger image appearing in the image seen by the first user.3. The first information processing apparatus of claim 2 , wherein the circuitry is further configured to generate first feature amounts claim 2 , including coordinates of a plurality of feature points claim 2 , set in the image seen by the first user.4. 
The first information processing apparatus of claim 3, ...

13-02-2020 publication date

MACHINE RESPONSIVENESS TO DYNAMIC USER MOVEMENTS AND GESTURES

Number: US20200050281A1
Assignee: Ultrahaptics IP Two Limited

Methods and systems for processing an input are disclosed that detect a portion of a hand and/or other detectable object in a region of space monitored by a 3D sensor. The method further includes determining a zone corresponding to the region of space in which the portion of the hand or other detectable object was detected. Also, the method can include determining from the zone a correct way to interpret inputs made by a position, shape or a motion of the portion of the hand or other detectable object. 1. A method of interacting with a machine using input gestures , the method comprising:sensing, using a 3D sensor, positional information of one or more fingers of a user in a region of space monitored by the 3D sensor;defining a first user-specific virtual plane in the space according to a sensed position of a first finger of the user;defining a second user-specific virtual plane in the space according to a sensed position of a second finger of the user;detecting, by the 3D sensor, a first finger state of the first finger relative to the first user-specific virtual plane, the first finger state being one of the first finger moving closer to or further away from the first user-specific virtual plane;detecting, by the 3D sensor, a second finger state of the second finger relative to the second user-specific virtual plane;interpreting the first finger state as a first input gesture command to interact with a first functionality of the machine; andinterpreting the second finger state as a second input gesture command to interact with a second functionality of the machine.2. The method of claim 1 , further comprising interpreting the first input gesture command to be a pinch gesture command to zoom-in.3. The method of claim 1 , further comprising interpreting the second input gesture command to be a spreading gesture command to zoom-out.4. The method of claim 1 , further comprising interpreting the first input gesture command to be a pressure gesture command.5. 
The method ...
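The finger-state detection described above (a finger moving closer to or further from a user-specific virtual plane) can be sketched as a signed-distance test. The plane representation (a point plus a normal vector) and the function names are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def signed_distance(point, plane_point, plane_normal):
    """Signed distance from a 3D point to a plane (positive on the normal side)."""
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    return float(np.dot(np.asarray(point, dtype=float) - np.asarray(plane_point, dtype=float), n))

def finger_state(prev_pos, curr_pos, plane_point, plane_normal):
    """Classify whether a fingertip is approaching or receding from the plane."""
    d_prev = abs(signed_distance(prev_pos, plane_point, plane_normal))
    d_curr = abs(signed_distance(curr_pos, plane_point, plane_normal))
    if d_curr < d_prev:
        return "closer"
    if d_curr > d_prev:
        return "further"
    return "unchanged"
```

Each finger's state against its own virtual plane can then be mapped to a separate gesture command (e.g., zoom-in vs. zoom-out).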

More details
14-02-2019 publication date

USER RECOGNITION SYSTEM AND METHODS FOR AUTONOMOUS VEHICLES

Number: US20190051069A1
Author: Cooley Robert B.
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC

Methods and a system are disclosed for providing autonomous driving system functions. The system includes a controller providing functions for automated user recognition in the autonomous vehicle, at least one environmental sensor configured to scan an environment of the autonomous vehicle and to transmit scan data of the environment to a biometric recognition module of the autonomous vehicle, and a biometric recognition module configured to analyze the scan data of the environment based on a gesture recognition algorithm by using a processor. The gesture recognition algorithm analyzes the scan data of the environment based on at least one biometric feature by using the processor. The at least one biometric feature comprises at least a flagging down gesture and the controller is configured to stop the autonomous vehicle at a position relative to the user and to configure the autonomous vehicle to offer to the user the use of the autonomous vehicle. 1. A user recognition system for automated user recognition in an autonomous vehicle , comprising:a controller with at least one processor providing functions for automated user recognition in the autonomous vehicle;at least one environmental sensor configured to scan an environment of the autonomous vehicle and to transmit scan data of the environment to a biometric recognition module of the autonomous vehicle;a biometric recognition module configured to analyze the scan data of the environment based on a gesture recognition algorithm by using the at least one processor;wherein the at least one processor is configured to analyze the scan data of the environment based on at least one biometric feature;wherein the at least one biometric feature comprises at least a flagging down gesture; andwherein, the controller is configured to stop the autonomous vehicle at a position relative to the user and to configure the autonomous vehicle to offer to the user the use of the autonomous vehicle, if the gesture recognition 
algorithm ...

More details
01-03-2018 publication date

INFORMATION PROCESSING DEVICE

Number: US20180059798A1
Assignee: Clarion Co., Ltd.

An in-vehicle device includes a gesture detection unit for recognizing a user's hand at a predetermined range; an information control unit controlling information output to a display; and an in-vehicle device control unit receiving input from a control unit equipped in a vehicle to control the in-vehicle device. When the gesture detection unit detects a user's hand at a predetermined position, the output information control unit triggers the display to display candidates of an operation executed by the in-vehicle device control unit by associating the candidates with the user's hand motions. When the gesture detection unit detects the user's hand, and the user's hand has thereafter moved from the position, the information control unit changes a selection method or a control guide of the operation to be executed by the in-vehicle device control unit, which is displayed on the display, to subject matter which matches the control unit. 1. An in-vehicle device , comprising:a gesture detection unit which recognizes a position of a user's hand located within a predetermined range;an output information control unit which controls output information to be output to a display unit; andan in-vehicle device control unit which receives an input from a control unit equipped in a vehicle and thereby controls the in-vehicle device, wherein:when the gesture detection unit detects that the user's hand has been placed at a predetermined position for a given length of time, the output information control unit triggers the display unit to display candidates of an operation to be executed by the in-vehicle device control unit by associating the candidates with motions of the user's hand; andwhen the gesture detection unit detects that the user's hand has been placed at the predetermined position for a given length of time and the user's hand has thereafter been moved from the predetermined position, the output information control unit changes a selection method or a control guide of 
the ...

More details
05-03-2015 publication date

Communication device and method using editable visual objects

Number: US20150067558A1

A communication device and method are disclosed. The communication device includes an intention input unit, a visual object processing unit, and a message management unit. The intention input unit receives a user's intention through an interface. The visual object processing unit outputs a recommended visual object related to the user's intention to the interface, and generates the metadata of an edited visual object when the user edits the recommended visual object through the interface. The message management unit sends a message, including the generated metadata of the visual object, to a counterpart terminal.

More details
05-03-2015 publication date

Wearable user device authentication system

Number: US20150067824A1
Assignee: Individual

Systems and methods for authenticating a user include a wearable user device receiving a first request to access a secure system. A plurality of authentication elements are then displayed on a display device to a user eye in a first authentication orientation about a perimeter of an authentication element input area. A user hand located opposite the display device from the user eye is then detected selecting a sequence of the plurality of authentication elements. For each selected authentication element in the sequence, the wearable user device moves the selected authentication element based on a detected movement of the user hand and records the selected authentication element as a portion of an authentication input in response to the user hand moving the selected authentication element to the authentication element input area. The user is authenticated for the secure system if the authentication input matches stored user authentication information.

More details
28-02-2019 publication date

HAND SEGMENTATION IN A 3-DIMENSIONAL IMAGE

Number: US20190066300A1
Author: Bar Zvi Asaf, Viente Kfir
Assignee: Intel Corporation

Techniques are provided for segmentation of a hand from a forearm in an image frame. A methodology implementing the techniques according to an embodiment includes estimating a wrist line within an image shape that includes a forearm and a hand. The wrist line estimation is based on a search for a minimum width region of the shape that is surrounded by adjacent regions of greater width on each side of the minimum width region. The method also includes determining a forearm segment, and a hand segment that is separated from the forearm segment by the wrist line. The method further includes labeling the forearm segment and the hand segment. The labeling is based on a connected component analysis of the forearm segment and the hand segment. The method further includes removing the labeled forearm segment from the image frame to generate the image segmentation of the hand. 1. A processor-implemented method for image segmentation of a hand , the method comprising:receiving an image frame including a shape that is representative of a forearm and a hand;estimating, by a processor-based system, a wrist line associated with the shape, the wrist line estimation identifying a minimum width region of the shape that is adjacent to regions of greater width on each side of the minimum width region;identifying, by the processor-based system, a forearm segment and a hand segment, the forearm segment separated from the hand segment by the wrist line; andremoving, by the processor-based system, the identified forearm segment from the image frame, thereby providing image segmentation of the hand.2. The method of claim 1 , wherein the wrist line estimation further comprises performing a contour smoothing operation on the shape to generate a smoothed shape.3. 
The method of claim 2 , wherein the wrist line estimation further comprises determining a major axis associated with the smoothed shape, the determination based on a search for a maximum Euclidean distance between points on ...
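The minimum-width wrist-line search described in the abstract can be sketched on a binary mask. Using row-wise foreground-pixel counts as the shape width (rather than widths measured along the major axis) is a simplifying assumption for illustration:

```python
import numpy as np

def estimate_wrist_row(mask):
    """Estimate the wrist row of a binary forearm+hand mask.

    The wrist is taken as the narrowest row of the shape that is flanked
    by wider regions on both sides (a local width minimum), mirroring the
    minimum-width search described in the abstract.
    """
    widths = mask.sum(axis=1)          # foreground pixels per row
    rows = np.nonzero(widths)[0]       # rows that contain the shape
    best_row, best_width = None, None
    for r in rows[1:-1]:               # interior rows only
        w = widths[r]
        # flanked by strictly wider regions on each side of this row
        if widths[:r].max() > w and widths[r + 1:].max() > w:
            if best_width is None or w < best_width:
                best_row, best_width = r, w
    return best_row
```

Everything above the returned row would be labeled the hand segment and everything below the forearm segment (or vice versa, depending on image orientation).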

More details
27-02-2020 publication date

SELF-DRIVING MOBILE ROBOTS USING HUMAN-ROBOT INTERACTIONS

Number: US20200064827A1
Assignee: FORD GLOBAL TECHNOLOGIES, LLC

Systems, methods, and computer-readable media are disclosed for enhanced human-robot interactions. A device such as a robot may send one or more pulses. The device may identify one or more reflections associated with the one or more pulses. The device may determine, based at least in part on the one or more reflections, a cluster. The device may associate the cluster with an object identified in an image. The device may determine, based at least in part on an image analysis of the image, a gesture associated with the object. The device may determine, based at least in part on the gesture, a command associated with an action. The device may perform the action. 1. A device comprising storage and processing circuitry configured to:determine, based at least in part on one or more reflections associated with one or more pulses, a cluster;associate the cluster with an object identified in an image;determine, based at least in part on an image analysis of the image, a gesture associated with the object;determine, based at least in part on the gesture, a command associated with an action; andcause the device to perform the action.2. The device of claim 1 , wherein the object is associated with a person, wherein to determine the gesture comprises the processing circuitry being further configured to:determine a first set of pixel coordinates of the object, wherein the first set of pixel coordinates is associated with a first portion of the person's body;determine a second set of pixel coordinates of the image, wherein the second set of pixel coordinates is associated with a second portion of the person's body;determine, based at least in part on the first set of pixel coordinates and the second set of pixel coordinates, that the person's body is in a pose associated with the gesture.3.
The device of claim 2 , wherein the first portion of the person's body is associated with a hand or wrist and the second portion of the person's body is associated with a neck or ...

More details
08-03-2018 publication date

BIOMETRIC IMAGE PROCESSING APPARATUS, BIOMETRIC IMAGE PROCESSING METHOD AND STORAGE MEDIUM

Number: US20180068201A1
Assignee: FUJITSU LIMITED

A biometric image processing apparatus including a memory and a processor coupled to the memory. The processor obtains a Y value, a U value and a V value in a YUV space from each pixel of an image, determines, for each pixel, whether or not the U value and the V value are in a range that is in accordance with the Y value, and extracts a pixel having been determined to be in the range. 1. A biometric image processing apparatus comprising:a memory; anda processor coupled to the memory and configured to obtain a Y value, a U value and a V value in a YUV space from each pixel of an image, determine, for each pixel, whether or not the U value and the V value are in a range that is in accordance with the Y value, and extract a pixel having been determined to be in the range.2. The biometric image processing apparatus according to claim 1 , whereinthe processor divides the image into a plurality of regions, and extracts, in each of the plurality of regions, a pixel from the image on a basis of a number of pixels having been determined to be in the range.3. The biometric image processing apparatus according to claim 2 , whereinthe processor extracts, in each of the plurality of regions, a pixel having been determined to be in the range and a pixel having been determined to be not in the range when the number of pixels having been determined to be in the range is greater than the number of pixels having been determined to be not in the range.4. The biometric image processing apparatus according to claim 1 , whereinthe processor extracts a pixel having been determined to be not in the range and enclosed by pixels having been determined to be in the range.5. 
A biometric image processing method comprising:obtaining, by a processor, a Y value, a U value and a V value in a YUV space from each pixel of an image;determining, by the processor and for each pixel, whether or not the U value and the V value are in a range that is in accordance with the Y value; andextracting, by the ...
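The per-pixel test described above (is each pixel's U and V inside a range that depends on its Y value?) can be sketched with a vectorized mask. The patent does not publish concrete range bounds, so the luminance-dependent chroma spread below is a hand-picked placeholder:

```python
import numpy as np

def extract_in_range(yuv):
    """Per-pixel mask: True where U and V fall in a Y-dependent range.

    `yuv` is an HxWx3 array of Y, U, V channels. The allowed chroma
    deviation from the neutral value 128 widens with luminance; the
    exact formula here is an illustrative assumption.
    """
    y = yuv[..., 0].astype(int)
    u = yuv[..., 1].astype(int)
    v = yuv[..., 2].astype(int)
    spread = 20 + y // 8   # brighter pixels tolerate a wider chroma range
    return (np.abs(u - 128) <= spread) & (np.abs(v - 128) <= spread)
```

The resulting boolean mask marks the extracted pixels; the region-wise majority vote of claims 2 and 3 could then be applied per image block.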

More details
09-03-2017 publication date

DETECTION OF IMPROPER VIEWING POSTURE

Number: US20170068313A1
Assignee:

Embodiments of the present invention provide efficient and automatic systems and methods for regulating the viewing posture of a user. Embodiments of the present invention can be used to regulate the viewing posture of both juveniles and adults, by providing real-time data analysis of a viewing distance and viewing angle of a device, and generating feedback to a user related to their current viewing posture, while also providing increased supervision of the viewing posture of juvenile device users. 111-. (canceled)12. A computer program product , for regulating viewing posture , the computer program product comprising:a computer readable storage medium and program instructions stored on the computer readable storage medium, the program instructions comprising:program instructions to identify an eye of a device user, based on a set of eye attributes;program instructions to calculate a distance between the eye of the device user and a screen of a device;program instructions to determine whether the distance between the eye of the device user and the screen of the device is below a threshold, wherein the threshold is a predetermined distance, based on a type of the device in use; andprogram instructions to, responsive to determining that the distance between the eye of the device user and the screen of the device is below the threshold, send an alert to the device user.13. The computer program product of claim 12 , further comprising:program instructions to receive information detailing an angle of the device relative to a vantage point;program instructions to determine whether the angle of the device relative to the vantage point is greater than zero degrees and less than 90 degrees; andprogram instructions to, responsive to determining that the angle of the device relative to the vantage point is greater than or equal to zero degrees and less than 90 degrees, send an indication to the device user.14. The computer program product of claim 13 , wherein the program ...
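The two checks in the claims (viewing distance below a device-specific threshold; device angle between 0 and 90 degrees) amount to simple comparisons. The threshold table and function name below are hypothetical, for illustration only:

```python
# Hypothetical per-device viewing-distance thresholds in centimetres.
THRESHOLDS_CM = {"phone": 30.0, "tablet": 40.0, "laptop": 50.0}

def check_posture(distance_cm, device_type, angle_deg=None):
    """Return a list of alerts for the current viewing posture.

    Mirrors the two claim checks: distance below a device-specific
    threshold triggers an alert; a device angle in (0, 90) degrees
    relative to the vantage point triggers an indication.
    """
    alerts = []
    if distance_cm < THRESHOLDS_CM[device_type]:
        alerts.append("too close")
    if angle_deg is not None and 0 < angle_deg < 90:
        alerts.append("adjust angle")
    return alerts
```

A real system would run this on every frame of eye-tracking data and debounce repeated alerts.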

More details
11-03-2021 publication date

HAND POSE ESTIMATION FROM STEREO CAMERAS

Number: US20210074016A1
Assignee:

Systems and methods herein describe using a neural network to identify a first set of joint location coordinates and a second set of joint location coordinates and identifying a three-dimensional hand pose based on both the first and second sets of joint location coordinates. 1. A method comprising:receiving, from a camera, a plurality of images of a hand; generating a plurality of sets of joint location coordinates by, for each given image in the plurality of images: cropping, using one or more processors, a portion of the given image comprising the hand; identifying, using a neural network, a first set of joint location coordinates in the cropped portion of the given image; and generating a second set of joint location coordinates using the first set of joint location coordinates; and identifying a three-dimensional hand pose based on the plurality of sets of joint location coordinates.2. The method of claim 1 , wherein the plurality of images comprises a plurality of views of the hand.3. The method of claim 1 , further comprising:prompting a user of a client device to initialize a hand position;receiving the initialized hand position; andtracking the hand based on the initialized hand position.4. The method of claim 1 , wherein the camera is a stereo camera.5. The method of claim 1 , wherein the first set of joint location coordinates is measured based on pixel location.6. The method of claim 1 , further comprising:converting the first set of joint location coordinates to a third set of joint location coordinates, wherein the third set of joint location coordinates is measured relative to an uncropped version of the given image; andconverting the third set of joint location coordinates to the second set of joint location coordinates.7.
The method of claim 1 , further comprising:generating a synthetic training dataset comprising stereo image pairs of virtual hands and corresponding ground truth labels, wherein the corresponding ground truth labels comprise ...
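The coordinate conversion in claim 6 (crop-local pixel coordinates to full-frame coordinates, then on to a second coordinate set) can be sketched in two small helpers. Treating the second set as normalized coordinates is an assumption for illustration:

```python
def crop_to_full(joints_crop, crop_origin):
    """Convert joint pixel coordinates measured in a cropped image to
    coordinates in the uncropped frame by adding the crop's top-left offset."""
    ox, oy = crop_origin
    return [(x + ox, y + oy) for (x, y) in joints_crop]

def full_to_normalized(joints_full, frame_size):
    """Normalize full-frame pixel coordinates to the [0, 1] range
    (an assumed representation for the second coordinate set)."""
    w, h = frame_size
    return [(x / w, y / h) for (x, y) in joints_full]
```

Chaining the two mirrors the claim: first → third (full-frame) → second set.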

More details
17-03-2016 publication date

SCANNER GESTURE RECOGNITION

Number: US20160078290A1
Assignee:

A scanner having an integrated camera is used to capture gestures made in a field of view of the camera. The captured gestures are translated to scanner commands recognized by the scanner. The scanner executes the recognized commands. 1. A method , comprising:detecting in a field of view of a scanner a hand;capturing an image of the hand as a gesture; andautomatically performing a command on the scanner based on the gesture.2. The method of claim 1 , wherein capturing further includes activating a camera interfaced to the scanner to capture the image.3. The method of claim 1 , wherein capturing further includes activating a video camera interfaced to the scanner to capture the image as a frame of video.4. The method of claim 1 , wherein automatically performing further includes receiving the command from a Point-Of-Sale (POS) system that the gesture was provided to.5. The method of claim 1 , wherein automatically performing further includes matching the gesture to a known gesture that is mapped to the command.6. The method of claim 1 , wherein automatically performing further includes placing the scanner in a configuration mode in response to performing the command.7. The method of claim 6 , further comprising identifying a series of additional gestures in the field of view that direct the scanner to navigate a configuration menu and make selections from the configuration menu to configure the scanner.8. The method of claim 1 , wherein automatically performing further includes placing the scanner in a programming mode in response to performing the command.9. The method of claim 8 , further comprising identifying a series of additional gestures provided by the hand in the field of view, the series representing a program to process on the scanner.10.
The method of claim 1 , wherein automatically performing further includes performing one or more of, in response to the command: changing a setting on the scanner, requesting assistance at the ...
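The "matching the gesture to a known gesture that is mapped to the command" step of claim 5 is essentially a lookup table plus a dispatcher. The gesture names and command strings below are hypothetical; real mappings would be device-specific:

```python
# Hypothetical gesture-to-command table for a scanner.
GESTURE_COMMANDS = {
    "open_palm": "enter_config_mode",
    "fist": "enter_programming_mode",
    "thumbs_up": "confirm_selection",
}

def dispatch(gesture, executor):
    """Match a recognized gesture to a known command and execute it.

    `executor` is any callable that carries out a command string on the
    scanner; unrecognized gestures are ignored and None is returned.
    """
    command = GESTURE_COMMANDS.get(gesture)
    if command is None:
        return None
    executor(command)
    return command
```

A series of gestures (claims 7 and 9) would simply be a sequence of such dispatches while the scanner is in configuration or programming mode.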

More details
16-03-2017 publication date

Method of controlling mobile terminal using fingerprint recognition and mobile terminal using the same

Number: US20170076139A1
Assignee: SAMSUNG ELECTRONICS CO LTD

Provided are a method and an apparatus for providing an efficient user interface (UI) to a user by using fingerprint recognition. A method of controlling a mobile terminal includes: registering a plurality of fingerprint signatures in a fingerprint database; generating fingerprint image data by using a fingerprint recognition module included in the mobile terminal; determining a fingerprint signature that corresponds to the fingerprint image data, from among the plurality of fingerprint signatures; and executing a process corresponding to the determined fingerprint signature.

More details
18-03-2021 publication date

Method and System for Hand Pose Detection

Number: US20210081055A1
Assignee:

A method for hand pose identification in an automated system includes providing depth map data of a hand of a user to a first neural network trained to classify features corresponding to a joint angle of a wrist in the hand to generate a first plurality of activation features and performing a first search in a predetermined plurality of activation features stored in a database in the memory to identify a first plurality of hand pose parameters for the wrist associated with predetermined activation features in the database that are nearest neighbors to the first plurality of activation features. The method further includes generating a hand pose model corresponding to the hand of the user based on the first plurality of hand pose parameters and performing an operation in the automated system in response to input from the user based on the hand pose model. 1. A system for computer human interaction comprising:a depth camera configured to generate depth map data of a hand of a user;an output device;a memory storing at least a first neural network, and a recommendation engine; and a processor operatively connected to the depth camera, the output device, and the memory, the processor being configured to: receive depth map data of a hand of a user from the depth camera; generate, using the first neural network, a first plurality of activation features based at least in part on the depth map data; perform a first search in a predetermined plurality of activation features stored in a database stored in the memory to identify a first plurality of hand pose parameters for the wrist using nearest neighbor identification; generate a hand pose model corresponding to the hand of the user based on the first plurality of hand pose parameters; and generate an output with the output device in response to input from the user based at least in part on the hand pose model.2.
The system of claim 1 , wherein the processor is further configured to:identify a second ...
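The nearest-neighbor search over stored activation features can be sketched as a brute-force Euclidean lookup. The array shapes and the pairing of activation vectors with pose-parameter sets are assumptions for illustration:

```python
import numpy as np

def nearest_pose_params(activation, db_activations, db_params, k=1):
    """Return the pose-parameter sets whose stored activation vectors are
    the k nearest (Euclidean) neighbors of the query activation.

    `db_activations` is an NxD array; `db_params[i]` holds the hand-pose
    parameters associated with row i.
    """
    dists = np.linalg.norm(db_activations - activation, axis=1)
    idx = np.argsort(dists)[:k]
    return [db_params[i] for i in idx]
```

A production system would replace the linear scan with an approximate index (e.g., a k-d tree) once the database grows large.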

More details
24-03-2016 publication date

METHODS AND APPARATUS FOR MULTI-FACTOR USER AUTHENTICATION WITH TWO DIMENSIONAL CAMERAS

Number: US20160085958A1
Author: Kang Xiaozhu
Assignee:

A data processing system (DPS) includes a user authentication module that uses a hand recognition module and a gesture recognition module to authenticate users, based on video data from a two-dimensional (2D) camera. When executed, the hand recognition module performs operations comprising (a) obtaining 2D video data of a hand of the current user; and (b) automatically determining whether the hand of the current user matches the hand of an authorized user, based on the 2D video data. When executed, the gesture recognition module performs operations comprising (a) presenting a gesture challenge to the current user, wherein the gesture challenge asks the current user to perform a predetermined hand gesture; (b) obtaining 2D video response data; and (c) automatically determining whether the current user has performed the predetermined hand gesture, based on the 2D video response data. Other embodiments are described and claimed. 1. A data processing system with features for authenticating a user of the data processing system , the data processing system comprising:a processor;a two-dimensional (2D) camera responsive to the processor;at least one machine accessible medium responsive to the processor; anda user authentication module stored at least partially in the at least one machine accessible medium, wherein the user authentication module comprises a hand recognition module and a gesture recognition module; andwherein the user authentication module, when executed, uses the hand recognition module and the gesture recognition module to determine whether a current user of the data processing system is an authorized user; wherein the hand recognition module is executable to perform operations comprising: obtaining 2D video data of a hand of the current user from the camera; and automatically determining whether the hand of the current user matches a hand of the authorized user, based on the 2D video data of the hand of the current user; and presenting a gesture ...

More details
31-03-2022 publication date

CAPTURING AND QUANTIFYING BODY POSTURES

Number: US20220100992A1
Assignee:

Disclosed are techniques for quantifying body postures of a player employing a loop drive technique to strike a ball, such as performed in table tennis activities. A video recording of a player striking a ball with a loop drive technique is received and divided, using image processing techniques, into two segments: the first concerning player body postures before the ball is hit, and the second concerning body postures from the moment of impact between the ball and racket and the subsequent follow-through body postures. Then, image processing techniques are again leveraged to isolate and quantify specific body postures contributing to a loop drive technique in a given segment. 1. A computer-implemented method (CIM) comprising:receiving a table tennis video recording data set, with the table tennis video recording data set including a video recording observing a table tennis player and their body as the table tennis player performs a sequence of motions leading up to and inclusive of follow-through motions after hitting a table tennis ball with a table tennis racket;determining, automatically, at least two segments of video recordings from the table tennis video recording data set, including a first segment corresponding to motions leading up to hitting the table tennis ball with the table tennis racket, and a second segment corresponding to hitting the table tennis ball with the table tennis racket and subsequent follow-through motions; anddetermining, automatically, for the first segment, a first set of data points corresponding to quantitative values for changes in body posture of the player as they perform the sequence of motions leading up to hitting the table tennis ball with the table tennis racket.2. 
The CIM of claim 1 , further comprising:determining, automatically, for the second segment, a second set of data points corresponding to quantitative values for changes in body posture of the player as the player hits the table tennis ball with the table tennis ...

More details
31-03-2022 publication date

DATA AUGMENTATION INCLUDING BACKGROUND MODIFICATION FOR ROBUST PREDICTION USING NEURAL NETWORKS

Number: US20220101047A1
Assignee:

In various examples, a background of an object may be modified to generate a training image. A segmentation mask may be generated and used to generate an object image that includes image data representing the object. The object image may be integrated into a different background and used for data augmentation in training a neural network. Data augmentation may also be performed using hue adjustment (e.g., of the object image) and/or rendering three-dimensional capture data that corresponds to the object from selected views. Inference scores may be analyzed to select a background for an image to be included in a training dataset. Backgrounds may be selected and training images may be added to a training dataset iteratively during training (e.g., between epochs). Additionally, early or late fusion may be employed that uses object mask data to improve inferencing performed by a neural network trained using object mask data. 1. A method comprising:identifying, in a first image, a region that corresponds to an object having a first background in the first image;determining image data representative of the object based at least on the region of the object;generating a second image that includes the object having a second background based at least on integrating, using the image data, the object with the second background; andtraining at least one neural network to perform a predictive task using the second image.2. The method of claim 1 , wherein the identifying of the region comprises determining at least a first segment of the first image corresponds to the object and at least a second segment of the first image corresponds to the first background based at least on performing image segmentation on the first image.3. The method of claim 1 , wherein the training of the at least one neural network is to classify one or more poses of the objects.
The method of claim 1 , wherein the determining the image data comprises generating a mask based at least on the region in the ...
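The core augmentation step (integrating the masked object into a different background) reduces to a per-pixel select between object and background images. This is a minimal sketch assuming a boolean segmentation mask and same-sized images:

```python
import numpy as np

def composite(object_img, mask, background):
    """Paste the masked object pixels onto a new background.

    `mask` is a boolean HxW array from segmentation: True where the
    object is. The output keeps object pixels where the mask is True
    and fills the rest from `background` (same HxWx3 shape), as in
    mask-based background-swap data augmentation.
    """
    mask3 = mask[..., None]                      # broadcast over channels
    return np.where(mask3, object_img, background)
```

Repeating this with many candidate backgrounds, then keeping those with the most informative inference scores, yields an iteratively grown training dataset.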

More details
02-04-2015 publication date

Posture detection method and system

Number: US20150092998A1
Author: Jinjun Liu, Wei Li
Assignee: Neusoft Medical Systems Co Ltd

A posture detection method and system are provided. The posture detection method includes: obtaining skeleton data of a target person; analyzing the skeleton data to obtain actual posture information of the target person; and recording the actual posture information of the target person. Determining posture information from the skeleton data of a body is highly accurate. In addition, the actual posture information is recorded automatically, so a doctor does not need to record it manually. Therefore, missed scans or wrong scanning directions caused by a mismatch between the patient's real posture and the recorded posture information may be avoided, which ensures the reliability of medical diagnosis.

More details
25-03-2021 publication date

Touchscreen with Three-Handed Gestures System and Method

Number: US20210089203A1
Author: Buettner Jonathan R.
Assignee: BBY SOLUTIONS, INC.

A user interface verification device and a method of use are presented for recognizing a three-hand gesture on a touchscreen of the device. The gesture is recognized by detecting a plurality of contact points in at least two, disparate touching zones, and simultaneously detecting additional contact points in a third, disparate touching zone. In one embodiment, the device displays content in a review mode that can be reviewed in a normal manner. In the execution mode, the device requires a signature, a touch, or another sign of acceptance involving a touch within the third touching zone. The device can ensure that a customer directly provides consent by requiring that the two, multi-point touching zones are tactilely engaged by both hands of a presenter while in the execution mode. 1. A method of using a device having a touchscreen to ensure input from two parties for the purpose of assuring proper consent to a document comprising:a) on the device, displaying the document on the touchscreen;b) on the device, detecting a plurality of points of contact on the touchscreen at each of two different touch zones located distant from each other on the touchscreen to verify the device is being properly held, wherein neither touch zone is pre-defined to any particular location on the touchscreen but is identified after detecting and analyzing the plurality of points of contact;c) on the device, after detecting the plurality of points of contact at the two different touch zones, displaying a mechanism for receiving consent at a third touch zone;d) on the device, only while still detecting the plurality of points of contact at the two different touch zones, recognizing consent by detecting an input of consent at the third touch zone; ande) recording the consent in a data memory.2. The method of claim 1 , wherein the two different touch zones are located proximal to opposite sides of the touchscreen.3.
The method of claim 2 , wherein the third touch zone is located proximal to a ...

Подробнее
31-03-2022 дата публикации

Touchless photo capture in response to detected hand gestures

Number: US20220103748A1
Author: Ilteris CANBERK
Assignee: Individual

Example systems, devices, media, and methods are described for capturing still images in response to hand gestures detected by an eyewear device that is capturing frames of video data with its camera system. A localization system determines the eyewear location relative to the physical environment. An image processing system detects a hand shape in the video data and determines whether the detected hand shape matches a border gesture or a shutter gesture. In response to a border gesture, the system establishes a border that defines the still image to be captured. In response to a shutter gesture, the system captures a still image from the frames of video data. The system determines a shutter gesture location relative to the physical environment. The captured still image is presented on the display at or near the shutter gesture location, such that the still image appears anchored relative to the physical environment. The captured still image is viewable by other devices that are using the image capture system.

29-03-2018 publication date

Method and system for gesture-based interactions

Number: US20180088663A1
Author: Lei Zhang, Wuping Du
Assignee: Alibaba Group Holding Ltd

Gesture-based interaction is presented, including determining, based on an application scenario, a virtual object associated with a gesture under the application scenario, the gesture being performed by a user and detected by a virtual reality (VR) system, outputting the virtual object to be displayed, and in response to the gesture, subjecting the virtual object to an operation associated with the gesture.
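A minimal sketch of the scenario-dependent binding the abstract describes, where the same gesture resolves to a different virtual object and operation per application scenario; the table entries and names are invented for illustration.

```python
# Hypothetical binding table: (scenario, gesture) -> (virtual object, operation)
BINDINGS = {
    ("shooting_game", "grip"): ("pistol", "fire"),
    ("painting_app", "grip"): ("brush", "stroke"),
}

def resolve(scenario, gesture):
    """Look up the virtual object and operation for a detected gesture
    under the current application scenario."""
    return BINDINGS.get((scenario, gesture), (None, None))
```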

19-06-2014 publication date

CONTROL APPARATUS, VEHICLE, AND PORTABLE TERMINAL

Number: US20140172231A1
Assignee: Clarion Co., Ltd.

A control apparatus to be connected to a route guidance apparatus, comprising: a hand information detection part for detecting information on a hand of a user from a taken image; and an operation command generation part for outputting a control command to at least one of the route guidance apparatus and a plurality of apparatus connected to the route guidance apparatus, the operation command generation part being configured to output the control command to the at least one of the route guidance apparatus and the plurality of apparatus based on a direction of the hand of the user detected by the hand information detection part. 1. A control apparatus to be connected to a route guidance apparatus , comprising:a hand information detection part for detecting information on a hand of a user from a taken image; andan operation command generation part for outputting a control command to at least one of the route guidance apparatus and a plurality of apparatus connected to the route guidance apparatus,the operation command generation part being configured to output the control command to the at least one of the route guidance apparatus and the plurality of apparatus based on a direction of the hand of the user detected by the hand information detection part.2. The control apparatus according to claim 1 , wherein: a hand detection part for detecting an area of the hand of the user;', 'a shape recognition part for recognizing a shape of the hand of the user based on the detected area of the hand;', 'a direction recognition part for recognizing the direction of the hand of the user based on the detected area of the hand; and', 'a motion recognition part for recognizing a motion of the hand of the user based on the area of the hand and the direction of the hand; and, 'the hand information detection part includesthe operation command generation part outputs the control command also based on the motion of the hand recognized by the motion recognition part.3. 
The control apparatus ...

01-04-2021 publication date

Method and device for detecting hand gesture key points

Number: US20210097270A1

A method for detecting gesture key points can include: acquiring a target image to be detected; determining a gesture category according to the target image, the gesture category being a category of a gesture contained in the target image; determining a target key point detection model corresponding to the gesture category from a plurality of key point detection models; and performing a key point detection on the target image by the target key point detection model.
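The two-stage pipeline in this abstract (classify the gesture category first, then run the category-specific key-point model) might look like the following sketch; the class name and callable signatures are assumptions, not the patent's API.

```python
class KeypointDetector:
    """Dispatch to a per-category key-point model: a classifier picks the
    gesture category, then that category's model detects the key points."""

    def __init__(self, classifier, models):
        self.classifier = classifier  # callable: image -> category label
        self.models = models          # dict: category label -> model callable

    def detect(self, image):
        category = self.classifier(image)
        model = self.models.get(category)
        points = model(image) if model else []  # no model for this category
        return category, points
```

Keeping one specialized model per category is what lets each model stay small while the classifier handles the coarse decision.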

05-04-2018 publication date

Motion training device, program, and display method

Number: US20180096615A1
Assignee: Seiko Epson Corp

A motion training device includes: a display on which a user can visually recognize a training target site, as a part of the user's body, while motion images are displayed; and a controller which causes the display to display motion images that appear as if the training target site is moving, at a position different from the position of the user's training target site.

28-03-2019 publication date

METHODS AND SYSTEMS FOR CREATING VIRTUAL AND AUGMENTED REALITY

Number: US20190094981A1
Assignee: Magic Leap, Inc.

Configurations are disclosed for presenting virtual reality and augmented reality experiences to users. The system may comprise an image capturing device to capture one or more images, the one or more images corresponding to a field of the view of a user of a head-mounted augmented reality device, and a processor communicatively coupled to the image capturing device to extract a set of map points from the set of images, to identify a set of sparse points and a set of dense points from the extracted set of map points, and to perform a normalization on the set of map points. 1. A system for generating a three-dimensional (3D) model of a face of a user , the system comprising:a head-mounted display (HMD) configured to present virtual content to a user;an inward-facing imaging system comprising at least one eye camera, wherein the inward-facing imaging system is configured to image at least a portion of the face of the user while the user is wearing the HMD;an inertial measurement unit (IMU) associated with the HMD and configured to detect movements of the HMD; and detect a trigger to initiate imaging of a face of the user, wherein the trigger comprises a movement detected by the IMU involving putting the HMD onto a head of the user or taking the HMD off of the head of the user;', 'activate, in response to detecting the trigger, the at least one eye camera to acquire images;, 'a hardware processor programmed todetect a stopping condition for stopping the imaging based on data acquired from at least one of the IMU or the inward-facing imaging system;analyze the images acquired by the at least one eye camera with a stereo vision algorithm; andfuse the images to generate a face model of the user's face based at least partly on an output of the stereo vision algorithm.2. 
The system of claim 1 , wherein to detect the trigger claim 1 , the hardware processor is programmed to:determine an acceleration of the HMD;compare the acceleration of the HMD with a threshold acceleration ...

06-04-2017 publication date

METHOD AND SYSTEM FOR HUMAN-TO-COMPUTER GESTURE BASED SIMULTANEOUS INTERACTIONS USING SINGULAR POINTS OF INTEREST ON A HAND

Number: US20170097687A1
Assignee:

Described herein is a method for enabling human-to-computer three-dimensional hand gesture-based natural interactions from depth images provided by a range finding imaging system. The method enables recognition of simultaneous gestures from detection, tracking and analysis of singular points of interest on a single hand of a user and provides contextual feedback information to the user. The singular points of interest of the hand include hand tip(s), fingertip(s), palm centre and centre of mass of the hand, and are used for defining at least one representation of a pointer. The point(s) of interest is/are tracked over time and analysed to enable the determination of sequential and/or simultaneous “pointing” and “activation” gestures performed by a single hand. 1. A method for providing natural human-to-computer interaction based on a three-dimensional hand gesture recognition system , the method comprising the steps of:a) imaging a scene including at least one hand of at least one user;b) processing the imaged scene to determine at least two points of interest associated with said at least one hand;c) tracking said at least two points of interest to provide a tracked movement of each point of interest with respect to time;d) analysing said tracked movement of each point of interest;e) determining, from the analysis of said tracked movement, the simultaneous performance of an activation gesture based on two points of interest, and a pointing gesture based on a single point of interest; andf) using said determined performance of said activation gesture and said pointing gesture for human-to-computer interaction.2. A method according to claim 1 , wherein any of said points of interest comprise one of: a finger tip; a hand tip; a palm centre; a centre of mass of the hand; and a derivative of a combination of at least two of: the finger tip claim 1 , the hand tip claim 1 , the palm centre and the centre of mass of the hand.3. 
A method according to claim 2 , further ...

28-03-2019 publication date

Wearable Electronic Device Having a Light Field Camera Usable to Perform Bioauthentication from a Dorsal Side of a Forearm Near a Wrist

Number: US20190095602A1
Assignee: Apple Inc

A method of authenticating a user of a wearable electronic device includes emitting light into a dorsal side of a forearm near a wrist of the user; receiving, using a light field camera, remissions of the light from the dorsal side of the forearm near the wrist of the user; generating a light field image from the remissions of the light; performing a synthetic focusing operation on the light field image to construct at least one image of at least one layer of the forearm near the wrist; extracting a set of features from the at least one image; determining whether the set of features matches a reference set of features; and authenticating the user based on the matching. In some embodiments, the method may further include compensating for a tilt of the light field camera prior to or while performing the synthetic focusing operation.
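The final matching step above (comparing an extracted feature set against an enrolled reference set) could look like the sketch below; the abstract does not specify a metric, so the Euclidean distance and the threshold value are assumptions.

```python
import math

def authenticate(features, reference, threshold=0.35):
    """Accept the user when the extracted feature vector lies within a
    distance threshold of the enrolled reference vector (assumed metric)."""
    if len(features) != len(reference):
        return False  # incompatible feature sets never match
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(features, reference)))
    return dist <= threshold
```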

28-03-2019 publication date

IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD

Number: US20190095675A1
Assignee: FUJITSU LIMITED

An image processing apparatus includes a memory and a processor configured to acquire an image in which a subject is captured by a camera, calculate a plurality of spatial frequency characteristics on the basis of each of a plurality of regions included in the image, and perform determination of a tilt of the subject with respect to the camera in accordance with the plurality of spatial frequency characteristics. 1. An image processing apparatus comprising:a memory; and calculate a plurality of spatial frequency characteristics on the basis of each of a plurality of regions included in the image, and', 'perform determination of a tilt of the subject with respect to the camera in accordance with the plurality of spatial frequency characteristics., 'a processor coupled to the memory and the processor configured to acquire an image in which a subject is captured by a camera,'}2. The image processing apparatus according to claim 1 , whereinthe processor is further configured to, when it is determined that the determined tilt of the subject is appropriate, compare the image with a registered image stored in the memory to determine whether the subject of the image matches a subject of the registered image.3. The image processing apparatus according to claim 1 , whereinthe subject is a palm.4. The image processing apparatus according to claim 1 , whereinthe plurality of regions includes at least one of a right-side region of the image, a left-side region of the image, an upper-side region of the image, or a lower-side region of the image.5. 
The image processing apparatus according to claim 1 , whereinthe determination includescomparing the plurality of spatial frequency characteristics with each other, andwhen difference regarding components of spatial frequencies in a specific range included in the plurality of spatial frequency characteristics is no less than a threshold, determining that a first part of the subject corresponding to a first region of the image is closer ...
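The region comparison in the claims (a markedly higher spatial-frequency content on one side suggests that part of the palm is closer to the camera, i.e. the subject is tilted) can be illustrated with a crude sharpness proxy; the adjacent-pixel difference below is only a stand-in for a real spatial-frequency characteristic, and all names and thresholds are assumptions.

```python
def high_freq_energy(region):
    """Crude spatial-frequency proxy: mean absolute difference between
    horizontally adjacent pixels (sharper content scores higher)."""
    total, n = 0.0, 0
    for row in region:
        for a, b in zip(row, row[1:]):
            total += abs(a - b)
            n += 1
    return total / n if n else 0.0

def tilt_direction(left_region, right_region, threshold=5.0):
    """Compare the two regions' characteristics; a large difference means
    one side of the subject is closer to the camera."""
    diff = high_freq_energy(left_region) - high_freq_energy(right_region)
    if abs(diff) < threshold:
        return "level"
    return "left-closer" if diff > 0 else "right-closer"
```

A real implementation would compare band-limited FFT energies per region rather than pixel differences, but the decision rule is the same.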

12-04-2018 publication date

IMAGE PROCESSOR, DETECTION APPARATUS, LEARNING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER PROGRAM STORAGE MEDIUM

Number: US20180101741A1
Author: ARAI YUKO, Arata Koji
Assignee:

An image processor includes an image converter. The image converter transforms data of an image that is photographed with a camera for photographing a seat, based on a transformation parameter that is calculated in accordance with a camera-position at which the camera is disposed. The image converter outputs the thus-transformed data of the image. The transformation parameter is a parameter for transforming the data of the image such that an appearance of the seat depicted in the image is approximated to a predetermined appearance of the seat. 1. An image processor comprising an image converter configured to transform data of an image photographed with a camera for photographing a seat , based on a transformation parameter which is calculated in accordance with a camera-position at which the camera is disposed , the image converter further configured to output the transformed data of the image ,wherein the transformation parameter is a parameter for transforming the data of the image such that an appearance of the seat depicted in the image is approximated to a predetermined appearance of the seat.2. The image processor according to claim 1 , further comprising a transformation-parameter memory configured to store the transformation parameter claim 1 ,wherein the image converter acquires the transformation parameter from the transformation-parameter memory.3. The image processor according to claim 1 , further comprising a transformation-parameter receiver configured to acquire the transformation parameter from an outside claim 1 ,wherein the image converter acquires the transformation parameter from the transformation-parameter receiver.4. The image processor according to claim 1 , wherein the transformation parameter is a parameter for transforming the data of the image such that a coordinate of a predetermined point on the seat depicted in the image matches a predetermined coordinate.5. The image processor according to claim 1 , wherein the transformation ...

26-03-2020 publication date

ADVANCED FINGER BIOMETRIC PURCHASING

Number: US20200097976A1
Author: Hause Colin Nickolas
Assignee:

A payment terminal such as is used for scanning credit cards and debit cards but which is instead/also capable of deep finger scans, NOT limited to superficial finger scans (such as finger prints), for advanced multi-factor purchases with a single financial instrument: the finger. Advanced deep finger scans can detect several factors: finger print, pulse rate, vein structure and bone structure. Thus at the checkout stand, a customer will be presented with an advanced finger scanner which may superficially look like a finger print scanner but which in fact is capable of determining all of the values of the items above. The advanced finger scanner may access various databases and thence the financial settlement (banking) system to cause payment to be processed. 1. An improved point-of-sale station , having a payment terminal , a first operative connection from the payment terminal to the point-of-sale station , and further having a second operative connection to a financial payment processing network , for use by customers having fingers having therein finger bone structure , veins , a pulse , and finger prints , wherein the improvement comprises:an advanced finger scanner, the advanced finger scanner operative to scan such finger bone structure, encrypt the scan, and transmit the encrypted scan via the second operative connection to such financial payment processing network.2. 
An improved point-of-sale station , having a payment terminal , a first operative connection from the payment terminal to the point-of-sale station , and further having a second operative connection to a financial payment processing network , for use by customers having fingers having therein finger bone structure , veins , a pulse , and finger prints , wherein the improvement comprises:an advanced finger scanner, the advanced finger scanner operative to scan such finger veins, encrypt the finger vein scan, and transmit the encrypted finger vein scan via the second operative connection to such ...

04-04-2019 publication date

VEIN SCANNING DEVICE FOR AUTOMATIC GESTURE AND FINGER RECOGNITION

Number: US20190101991A1
Author: Brennan Michael R.
Assignee:

This relates to a device capable of automatically determining a user's gesture and/or finger positions based on one or more properties of the user's veins and methods for operation thereof. The device can include one or more sensors (e.g., a camera) to capture one or more images of the user's hand. The device can convert the image(s) to digital representations and can correlate the digital image(s) of the veins to one or more poses. From the pose(s), the device can determine the user's hand movements, and one or more gestures and/or finger positions can be determined from the hand movements. The device can interpret the gestures and/or finger positions as one or more input commands, and the device can perform an operation based on the input command(s). Examples of the disclosure include using the user input commands in virtual reality applications. 1. A method for determining hand gestures by an electronic device , the method comprising:capturing one or more first images of one or more veins in a hand at a first time;capturing one or more second images of the one or more veins in the hand at a second time, different from the first time;determining a first hand pose based on the one or more first images;determining a second hand pose based on the one or more second images; anddetermining a gesture based on at least the first hand pose and the second hand pose.2. 
The method of claim 1 ,wherein the determination of the first hand pose includes correlating the one or more veins in the one or more first images to one or more joints of the hand,wherein the determination of the second hand pose includes correlating the one or more veins in the one or more second images to one or more joints of the hand, and 'detecting one or more differences in properties of the one or more veins in the first image and the second image to determine one or more hand movements, wherein the gesture is further based on the one or more hand movements.', 'wherein the determination of a gesture ...
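The pose-pair-to-gesture step above (two vein-derived hand poses at different times determine a gesture) reduces to a transition lookup in the simplest case; the pose and gesture labels below are invented for illustration.

```python
def gesture_from_poses(pose_a, pose_b):
    """Map a pair of recognized hand poses, observed at a first and a
    second time, to a gesture label (toy transition table)."""
    transitions = {
        ("open", "fist"): "grab",
        ("fist", "open"): "release",
    }
    return transitions.get((pose_a, pose_b), "unknown")
```

A production system would additionally weigh the detected vein-property differences between the frames, as claim 1's dependent claim describes.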

04-04-2019 publication date

RENDERING OF VIRTUAL HAND POSE BASED ON DETECTED HAND INPUT

Number: US20190102927A1
Author: Yokokawa Yutaka
Assignee:

In some implementations, a method is provided, including the following operations: receiving, from a controller device, controller input that identifies a pose of a user's hand; determining a degree of similarity of the controller input to a predefined target input; rendering in a virtual space a virtual hand that corresponds to the controller device, wherein when the degree of similarity exceeds a predefined threshold, then the virtual hand is rendered so that a pose of the virtual hand conforms to a predefined hand pose, and wherein when the degree of similarity does not exceed the predefined threshold, then the virtual hand is rendered so that the pose of the virtual hand dynamically changes in response to changes in the controller input. 1. A method , comprising:receiving, from a controller device, controller input that identifies a pose of a user's hand, including identifying postures of a plurality of fingers of the user's hand;determining a degree of similarity of the controller input to a predefined target input, wherein determining the degree of similarity includes determining deviations of values of the controller input, that identify the postures of the plurality of fingers, from corresponding values of the predefined target input;rendering in a virtual space a virtual hand that corresponds to the controller device,wherein when the degree of similarity exceeds a predefined threshold, then the virtual hand is rendered so that a pose of the virtual hand conforms to a predefined hand pose, such that postures of fingers of the virtual hand corresponding to the plurality of fingers of the user's hand are adjusted to conform to predefined finger postures, andwherein when the degree of similarity does not exceed the predefined threshold, then the virtual hand is rendered so that the pose of the virtual hand dynamically changes in response to changes in the controller input.2. 
The method of claim 1 , wherein the pose of the virtual hand is defined by postures of ...
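The threshold rule in this abstract (snap the virtual hand to a predefined pose when the controller input is similar enough to the target, otherwise track the raw input dynamically) can be sketched as follows; modeling finger postures as floats in [0, 1] (0 = extended, 1 = curled) and the similarity formula are assumptions.

```python
def render_pose(controller_fingers, target_fingers, threshold=0.8):
    """Return the finger postures to render: the predefined target pose
    when similarity exceeds the threshold, else the controller input."""
    deviations = [abs(c - t) for c, t in zip(controller_fingers, target_fingers)]
    similarity = 1.0 - sum(deviations) / len(deviations)
    if similarity > threshold:
        return list(target_fingers)      # conform to the predefined hand pose
    return list(controller_fingers)      # track the controller input directly
```

The snap behavior gives clean, recognizable poses (e.g. a full grip) even when the sensed input is slightly off.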

02-06-2022 publication date

METHODS AND APPARATUSES FOR RECOGNIZING GESTURE, ELECTRONIC DEVICES AND STORAGE MEDIA

Number: US20220171962A1
Author: ZU Chunshan
Assignee:

Provided are a method and an apparatus for recognizing a gesture, an electronic device and a storage medium. In one or more embodiments, the method includes: detecting at least one hand region from a video image and obtaining hand image information of each of the at least one hand region; obtaining hand motion information of each of the at least one hand region by tracking the at least one hand region; determining a gesture corresponding to each of the at least one hand region according to the hand image information and/or the hand motion information of each of the at least one hand region; wherein the gesture comprises at least one of a single-hand static gesture, a single-hand dynamic gesture, a double-hand static gesture or a double-hand dynamic gesture. 1. A method of recognizing a gesture , comprising:detecting at least one hand region from a video image and obtaining hand image information of each of the at least one hand region;obtaining hand motion information of each of the at least one hand region by tracking the at least one hand region;determining a gesture corresponding to each of the at least one hand region according to the hand image information and/or the hand motion information of each of the at least one hand region; wherein the gesture comprises at least one of a single-hand static gesture, a single-hand dynamic gesture, a double-hand static gesture or a double-hand dynamic gesture.2. The method according to claim 1 , wherein 'determining that there is only one hand region by detecting the video image; and', 'detecting the at least one hand region from the video image comprises obtaining a first recognition result by performing a single-hand static gesture recognition for the hand image information of the hand region; and', 'in response to that the first recognition result is yes, determining that the gesture corresponding to the hand region is the single-hand static gesture., 'determining the gesture corresponding to each of the at least one ...
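The four gesture classes named in the abstract fall out of two axes, hand count and motion, so the final labeling step is a trivial combination (label strings assumed):

```python
def classify_gesture(num_hands, moving):
    """Combine the hand-region count and the tracked-motion flag into one
    of the four gesture classes the method distinguishes."""
    hands = "single-hand" if num_hands == 1 else "double-hand"
    kind = "dynamic" if moving else "static"
    return f"{hands} {kind} gesture"
```

The hard work sits upstream, in detecting the hand regions and tracking their motion; this lookup only names the result.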

29-04-2021 publication date

METHOD FOR AUTOMATICALLY GENERATING HAND MARKING DATA AND CALCULATING BONE LENGTH

Number: US20210124917A1
Author: SUN Tianyuan, Wang Bo
Assignee:

The present disclosure relates to a method for automatically generating labeled data of a hand, comprising: acquiring at least three images to be processed of the hand under different angles of view; detecting key points on the at least three images to be processed respectively; screening the detected key points by using an association relation among the at least three images to be processed, the association relation being that the at least three images to be processed are from the same frame of image of the hand under different angles of view; reconstructing a three-dimensional space representation of the hand with regard to the key points screened on the same frame of image, in combination with a given finger bone length; projecting the key points on the three-dimensional representation of the hand onto the at least three images to be processed; and generating the labeled data of the hand on the images to be processed by using the projected key points on the at least three images to be processed. 1. A method for automatically generating labeled data of a hand , comprising:acquiring at least three images to be processed of the hand under different angles of view;detecting key points on the at least three images to be processed respectively;screening the detected key points by using an association relation among the at least three images to be processed, the association relation being that the at least three images to be processed are from the same frame of image of the hand under different angles of view;reconstructing a three-dimensional space representation of the hand with regard to the key points screened on the same frame of image, in combination with a given finger bone length;projecting the key points on the three-dimensional representation of the hand onto the at least three images to be processed; andgenerating the labeled data of the hand on the images to be processed by using the projected key points on the at least three images to be processed.2. The method ...

30-04-2015 publication date

SYSTEM FOR MULTIPLE ALGORITHM PROCESSING OF BIOMETRIC DATA

Number: US20150117724A1
Assignee: FUSIONARC, INC.

A system performs processing of biometric information to create multiple templates. This allows biometric systems to be flexible and interact with a plurality of vendors' technologies. Specifically, a biometric sample is captured from a sensor and transmitted to a processing component. The biometric sample is then processed by a first algorithm to yield a biometric template and the template is stored and associated with a record identifier. The biometric sample is also processed by a second algorithm to yield a second template. The second template is stored and associated with the record identifier. 1. A biometric information processing system for identifying or verifying a subject , the system comprising:a processing component for processing at least one biometric sample by a plurality of template generation algorithms to yield a corresponding plurality of reference templates;a storage component for storing the plurality of reference templates in association with a record identifier identifying the subject;a scanning device for capturing a first biometric sample of the subject; anda transmitter for transmitting the first biometric sample from the scanning device to the processing component; the processing component processes the first biometric sample by a first template generation algorithm to yield a first reference template, and', 'the processing component processes the first biometric sample by a second template generation algorithm to yield a second reference template., 'wherein'}2. 
The biometric information processing system according to claim 1 , further comprisinga second scanning device for receiving a second biometric sample from the subject,wherein the processing component processes the second biometric sample by the first template generation algorithm or the second template generation algorithm to generate a match template, and performs a comparison between the match template and one of the reference templates to determine a degree of similarity based ...
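The enrollment flow in the abstract (one biometric sample processed by several template-generation algorithms, with every resulting template filed under the same record identifier) might be sketched as follows; the store layout and names are assumptions.

```python
def enroll(sample, record_id, algorithms, store):
    """Run each template-generation algorithm on the sample and associate
    every resulting template with the one record identifier."""
    for name, algo in algorithms.items():
        store.setdefault(record_id, {})[name] = algo(sample)
    return store
```

Keeping one template per vendor algorithm under a single record is what lets the system match against whichever vendor's technology is available later.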

02-04-2020 publication date

MARKER FOR OCCLUDING FOREIGN MATTER IN ACQUIRED IMAGE, METHOD FOR RECOGNIZING FOREIGN MATTER MARKER IN IMAGE AND BOOK SCANNING METHOD

Number: US20200104621A1
Author: ZHOU Kang
Assignee:

A marker for occluding a foreign matter in an acquired image contains a mark part whose surface is provided with a two-side continuous pattern formed by combining at least one or multiple primitives; and a fixing part which fixes a marker to a surface of foreign matter in the acquired image with the mark part to facilitate algorithm recognition and marking. The method for recognizing a foreign matter marker in an image includes the steps of performing edge detection on a planar image to acquire an edge map in the planar image; and extracting all contours in the edge map. A certain number of alternative straight-line segments are determined using an algorithm; a region is determined according to the position of each alternative straight-line segment; and finally the region of the approximate area above or below the marker is used. 1. A marker for occluding foreign matter in an acquired image , comprising:a mark part whose surface is provided with a two-side continuous pattern formed by combining at least one or multiple primitives; anda fixing part which fixes a marker to a surface of foreign matter in an acquired target to make the surface of the foreign matter in the acquired image covered by the mark part.2. The marker for occluding a foreign matter in an acquired image according to claim 1 , wherein the primitive comprises equal-length straight-line segments and ellipses claim 1 , and the equal-length straight-line segments are parallel to each other.3. 
The marker for occluding a foreign matter in an acquired image according to claim 1 , wherein the two-side continuous pattern is concentrated in a rectangular recognition region located in the middle of the mark part claim 1 , each equal-length line segment is perpendicular to a long side of the rectangular region claim 1 , and a connecting line of focal points of the ellipses or center points of focal lengths of the ellipses is parallel to the long side of the rectangular region; and a color of the rectangular ...

02-04-2020 publication date

DEVICE CONTROL APPARATUS

Number: US20200104629A1
Author: HIROKI Daisuke
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA

A device control apparatus includes an imaging unit configured to capture an image of an occupant in a vehicle, a first recognition unit configured to recognize a posture of the occupant based on the image captured by the imaging unit, a second recognition unit configured to recognize a state of a hand including at least a shape of the hand of the occupant based on the image captured by the imaging unit, a discrimination processing unit configured to specify a device to be controlled and an operation to be executed based on the posture of the occupant recognized by the first recognition unit and the state of the hand recognized by the second recognition unit, and a controller configured to issue a control command corresponding to the specified device to be controlled and the specified operation to be executed. 1. A device control apparatus for controlling a device mounted in a vehicle , the device control apparatus comprising:an imaging unit configured to capture an image of an occupant in the vehicle;a first recognition unit configured to recognize a posture of the occupant based on the image captured by the imaging unit;a second recognition unit configured to recognize a state of a hand including at least a shape of the hand of the occupant based on the image captured by the imaging unit;a discrimination processing unit configured to specify a device to be controlled and an operation to be executed based on the posture of the occupant recognized by the first recognition unit and the state of the hand recognized by the second recognition unit; anda controller configured to issue a control command corresponding to the specified device to be controlled and the specified operation to be executed.2. The device control apparatus according to claim 1 , wherein the discrimination processing unit is configured to specify the device to be controlled and the operation to be executed claim 1 , by referring to discrimination processing information in which information ...

11-04-2019 publication date

Systems and Methods of Object Shape and Position Determination in Three-Dimensional (3D) Space

Number: US20190108676A1
Author: Holz David
Assignee: Leap Motion, Inc.

Methods and systems for capturing motion and/or determining the shapes and positions of one or more objects in 3D space utilize cross-sections thereof. In various embodiments, images of the cross-sections are captured using a camera based on edge points thereof. 1. A smart phone having an interface that identifies a position and a shape of a portion of a human hand moving in a three-dimensional (3D) space, the smart phone comprising: a fixed function logic circuit storing instructions that, when executed, implement actions including: analyzing two or more images captured by a camera from a particular vantage point to computationally represent a portion of an object as one or more mathematically represented 3D surfaces, each 3D surface corresponding to a cross-section of the portion of the object, based at least in part on a plurality of edge points of the portion of the object in the image, tangent lines extending from the camera to at least two edge points of the plurality of edge points, and a centerline corresponding to the tangent lines; and reconstructing the position of, and the shape fitting, at least the portion of the object in the 3D space based at least in part on the plurality of edge points and the centerline. 2. The smart phone of claim 1, further including: at least one source that casts an output onto the portion of the object. 3. The smart phone of claim 1, further including transmitting to at least one further process a signal that includes at least one selected from (i) trajectory information determined from the reconstructed position of, and the shape fitting of, the at least a portion of the object, which the at least one further process interprets, and (ii) gesture information interpreted from trajectory information for the portion of the object by the smart phone. 4.
The smart phone of claim 1, further comprising a time-of-flight camera, and wherein a plurality of points on at least one surface of ...

More
26-04-2018 publication date

Drone piloted in a spherical coordinate system by a gestural with multi-segment members, control method and associated computer program

Number: US20180114058A1
Author: Kahn Arthur
Assignee:

An electronic device for piloting a drone comprises an acquisition module to acquire a series of images of a scene including a user, taken by an image sensor equipping the drone, an electronic detection module for detecting, in the series of acquired images, a gesture by the user, and a control module for controlling a movement of the drone based on the detected gesture. The detection module is configured to detect a gesture with at least two separate limb segments of the user, and the electronic control module is configured to control the movement of the drone in a spherical coordinate system associated with the user, by calculating piloting instructions in the spherical coordinate system based on the detected gesture with several limb segments. 1. An electronic device for piloting a drone, the device comprising: an acquisition module configured to acquire a series of images of a scene including a user, taken by an image sensor equipping the drone, an electronic detection module configured to detect, in the series of acquired images, a gesture by the user, the user having limbs, each limb including one or several segments, an electronic control module configured to command a movement of the drone based on the detected gesture, the electronic control module being configured to calculate piloting instructions corresponding to said movement, wherein the electronic detection module is configured to detect a gesture with at least two separate limb segments, and the electronic control module is configured to control the movement of the drone in a spherical coordinate system associated with the user, the piloting instructions being calculated in said spherical coordinate system based on the detected gesture with several limb segments. 2. The electronic device according to claim 1, wherein the gesture detected to control the movement of the drone is a gesture with at least two segments of the two upper limbs of the user. 3.
The electronic device according to claim 1, wherein ...
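The claimed device calculates piloting instructions in a spherical coordinate system centred on the user. A minimal sketch of that coordinate handling (the function names and the direct gesture-to-displacement mapping are illustrative assumptions, not the patent's implementation):

```python
import math

def cartesian_to_spherical(x, y, z):
    """Convert a Cartesian offset in a user-centred frame to spherical
    coordinates (r, theta, phi): radius, polar angle measured from the
    vertical axis, and azimuth in the horizontal plane."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r) if r > 0 else 0.0
    phi = math.atan2(y, x)
    return r, theta, phi

def piloting_instruction(gesture_vector):
    """Hypothetical mapping: treat the detected multi-segment gesture
    direction as a displacement expressed in the spherical frame."""
    r, theta, phi = cartesian_to_spherical(*gesture_vector)
    return {"dr": r, "dtheta": theta, "dphi": phi}
```

Expressing the instructions radially and angularly, as the claims do, keeps "closer/farther" and "orbit around the user" motions independent of where the drone currently sits.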

More
27-04-2017 publication date

INFORMATION DISPLAY METHOD AND INFORMATION DISPLAY TERMINAL

Number: US20170116479A1
Assignee: Hitachi Maxwell, Ltd.

An information display terminal includes an arithmetic unit performing a process that selects a predetermined type of communication unit among a plurality of types of communication units on the basis of, for example, information related to first objects recognized from a captured image, which is stored in advance, and acquires the related information from a predetermined device on the Internet through the communication unit and a process that, when the presence of a second object or a predetermined positional relationship between the first objects and the second object is specified in the captured image, displays the related information of the first objects in a viewing direction, using a display unit. 1. An information display method that is performed in an information display terminal including an imaging unit that captures an image in a viewing direction of a wearer, a display unit that displays information in the viewing direction, and a plurality of types of communication units that communicate with a network or another terminal which can access the network, the method comprising: an information acquisition process that recognizes a first object in the image captured by the imaging unit according to a predetermined criterion, selects a predetermined type of communication unit among the plurality of types of communication units on the basis of at least one of information related to the first object which is stored in advance and predetermined detection information which is obtained from a predetermined element, accesses the network or another terminal through the communication unit, and acquires the related information of the first object from the network on the basis of identification information recognized from the first object; and an information display process that, when the presence of a second object or a predetermined positional relationship between the first object and the second object is determined or specified in the image captured by the imaging
...

More
05-05-2016 publication date

IMAGE IDENTIFICATION METHOD AND IMAGE IDENTIFICATION DEVICE

Number: US20160125236A1
Assignee:

An image identification method and an image identification device are provided. The method comprises acquiring a hand feature region within a sight from a first view by skin color detection; acquiring a feature and a position of a tip of a finger from the hand feature region by performing a pattern recognition for a morphological feature of a stretched hand; recording an interframe displacement of a feature point of the tip of the finger when the tip of the finger delimits a periphery of a target object to obtain a delimiting trajectory from the interframe displacement, closing the delimiting trajectory to form a full-perimeter geometry; projecting the full-perimeter geometry on a plane where a direction of the sight is perpendicular to a plane where the target object is located to obtain a projection region, performing an image identification using the projection region as an identification region of the target object. 1. A computer-implemented image identification method, comprising: acquiring, at one or more computing devices, a hand feature region within a sight from a first view by a skin color detection, and capturing and tracking the hand feature region in real time; acquiring, at the one or more computing devices, a feature and a position of a tip of a finger from the hand feature region by performing a pattern recognition for a morphological feature of a stretched hand, and capturing and tracking the feature and the position of the tip of the finger in real time in the first view; recording, using the one or more computing devices, an interframe displacement of a feature point of the tip of the finger when the tip of the finger delimits a periphery of a target object located in the first view to obtain a delimiting trajectory from the interframe displacement, and closing the delimiting trajectory to form a full-perimeter geometry; projecting, using the one or more computing devices, the full-perimeter geometry on a plane where a direction of the sight is ...
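The delimiting step closes the fingertip trajectory into a full-perimeter geometry. A compact illustration of that closing, plus a shoelace-formula area as one simple property of the resulting identification region (helper names are invented for this sketch):

```python
def close_trajectory(points):
    """Close a recorded fingertip trajectory into a full-perimeter
    polygon by joining the last recorded point back to the first."""
    return points if points[0] == points[-1] else points + [points[0]]

def region_area(points):
    """Shoelace formula: area enclosed by the closed delimiting
    trajectory on the projection plane."""
    closed = close_trajectory(points)
    s = 0.0
    for (x1, y1), (x2, y2) in zip(closed, closed[1:]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```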

More
03-05-2018 publication date

ELECTRICAL DEVICE FOR HAND GESTURES DETECTION

Number: US20180120950A1
Assignee: Microsoft Technology Licensing, LLC

Hand gesture detection electrical device for detecting hand gestures, comprising an IC electronically integrating: 1. A hand gesture detection electrical device for detecting hand gestures, comprising: an integrated circuit (IC) electronically integrating the following: a first interface configured to connect at least one imaging device; a second interface configured to connect to a controlled unit; a data storage configured to store a plurality of sequential logic models each representing one of a plurality of hand gestures, each of said sequential logic models comprises a pre-defined ordered sequence of at least one of a plurality of pre-defined hand poses and pre-defined hand motions, wherein each of said plurality of pre-defined hand poses and each of said plurality of pre-defined hand motions is represented by one of a plurality of pre-defined hand features records; a memory storing a code; and at least one processor configured to be coupled to said first interface, said second interface, said data storage and said memory, the at least one processor is configured to execute said stored code, said code comprising: code instructions to receive at least one of a plurality of timed images depicting a moving hand of a user; code instructions to generate a runtime sequence of at least one of a plurality of runtime hand datasets, said runtime sequence representing said moving hand; code instructions to match each of said at least one of said plurality of runtime hand datasets with a respective one of said plurality of pre-defined hand features records in a respective position in a sequence of each of said plurality of sequential logic models; code instructions to estimate which one of said plurality of hand gestures best matches said runtime sequence; and code instructions to initiate at least one action to said controlled unit, said at least one action is associated with a selected at least one of said plurality of hand gestures, said selection is based on said estimation; ...
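The matching step compares a runtime sequence of hand datasets against pre-defined ordered sequences. A toy sketch of such sequence scoring (the gesture models, pose labels, and scoring rule here are invented for illustration, not taken from the patent):

```python
# Hypothetical sequential logic models: ordered pose/motion labels.
GESTURE_MODELS = {
    "wave": ["open_palm", "tilt_left", "tilt_right"],
    "click": ["point", "point_down"],
}

def best_gesture(runtime_sequence):
    """Estimate which pre-defined gesture best matches the runtime
    sequence by counting position-wise label agreement."""
    def score(model):
        return sum(a == b for a, b in zip(model, runtime_sequence))
    best = max(GESTURE_MODELS, key=lambda name: score(GESTURE_MODELS[name]))
    return best if score(GESTURE_MODELS[best]) > 0 else None
```

A real matcher would compare feature records rather than string labels, but the ordered-sequence structure is the same.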

More
03-05-2018 publication date

GESTURE RECOGNITION SYSTEM USING DEPTH PERCEPTIVE SENSORS

Number: US20180121717A9
Assignee:

Acquired three-dimensional positional information is used to identify user created gesture(s), which gesture(s) are classified to determine appropriate input(s) to an associated electronic device or devices. Preferably, at at least one instance of a time interval, the posture of a portion of a user is recognized based on at least one factor such as shape, position, orientation, or velocity. Posture over each of the instance(s) is recognized as a combined gesture. Because acquired information is three-dimensional, two gestures may occur simultaneously. 1. One or more computer-storage media having computer-executable instructions embodied thereon that when executed by a computing device perform a method of three-dimensional (“3D”) image analysis, the method comprising: receiving 3D image data describing a 3D scene and comprising points having 3D coordinate information; grouping at least some of the points into a plurality of clusters; selecting, according to at least a first parameter, a specific cluster corresponding to a real-world object of interest described by the 3D image data; grouping at least some of the points of the specific cluster into a set according to points' depth positions, wherein the set has a geometric center; and associating a shape to the set, the shape being fixed to the geometric center of the set. 2. The media of claim 1, wherein the real-world object of interest is a person. 3. The media of claim 1, wherein the real-world object of interest is a person's body part. 4. The media of claim 1, wherein grouping at least some of the points into the plurality of clusters is based upon each point's z-depth value. 5. The media of claim 1, wherein the method further comprises determining the geometric center for the set, the center having an assigned depth value that is an average of depth values assigned to points forming the set. 6. The media of claim 1, wherein the 3D image data is generated by a time-of-flight 3D camera. 7. One or more computer- ...
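Grouping cluster points into sets by depth position and taking the set's geometric center (with an averaged depth value, as claim 5 describes) can be sketched as follows; the bin size and function names are assumptions for illustration:

```python
def group_by_depth(points, bin_size=50.0):
    """Group 3D points (x, y, z) into sets keyed by their z-depth bin."""
    groups = {}
    for p in points:
        groups.setdefault(int(p[2] // bin_size), []).append(p)
    return groups

def geometric_center(point_set):
    """Geometric center of a set; its depth coordinate is the average
    of the depth values assigned to the points forming the set."""
    n = len(point_set)
    return tuple(sum(coord) / n for coord in zip(*point_set))
```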

More
03-05-2018 publication date

APPARATUS, METHOD FOR OBJECT IDENTIFICATION, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

Number: US20180121745A1
Assignee: FUJITSU LIMITED

An apparatus for object identification includes: a memory; and a processor coupled to the memory and configured to execute a determination process that includes determining whether a hand is in contact with an object, execute an identification process that includes identifying a first shape of an area of the object hidden by the hand in accordance with a second shape of the hand when the hand is determined to be in contact with the object in the determination process, and execute a distinguishing process that includes distinguishing the object based on the first shape in the identification process. 1. An apparatus for object identification comprising: a memory; and a processor coupled to the memory and configured to: execute a determination process that includes determining whether a hand is in contact with an object, execute an identification process that includes identifying a first shape of an area of the object hidden by the hand in accordance with a second shape of the hand when the hand is determined to be in contact with the object in the determination process, and execute a distinguishing process that includes distinguishing the object based on the first shape identified by the identification process. 2. The apparatus according to claim 1, wherein the identification process includes identifying a third shape of an area of a part of the object as the first shape of the area of the object hidden by the hand based on a result of a comparison between a feature amount indicating the third shape of the area of the part of the object and a feature amount indicating the second shape of the hand that is in contact with the object. 3. The apparatus according to claim 1, wherein the identification processing includes identifying the first shape of the hidden area of the object when the hand is determined to be in contact with the object in the determination processing, and to be in the state of holding the object. 4.
The apparatus according to claim 1, wherein the ...

More
05-05-2016 publication date

MODEL FITTING FROM RAW TIME-OF-FLIGHT IMAGES

Number: US20160127715A1
Assignee:

Model fitting from raw time of flight image data is described, for example, to track position and orientation of a human hand or other entity. In various examples, raw image data depicting the entity is received from a time of flight camera. A 3D model of the entity is accessed and used to render, from the 3D model, simulations of raw time of flight image data depicting the entity in a specified pose/shape. The simulated raw image data and at least part of the received raw image data are compared and on the basis of the comparison, parameters of the entity are computed. 1. A method of tracking parameters of an entity comprising: receiving raw image data depicting the entity from a time of flight camera; accessing a 3D model of the entity, the 3D model having model parameters; rendering, from the 3D model having specified values of the model parameters, simulations of raw time of flight image data depicting the entity; comparing the simulated raw image data and at least part of the received raw image data; calculating, on the basis of the comparison, values of the tracked parameters of the entity. 2. The method as claimed in wherein the values of the tracked parameters of the entity are calculated without the need to compute depth from the received raw image data. 3. The method as claimed in wherein the tracked parameters comprise any of: pose parameters, shape parameters. 4. The method as claimed in comprising receiving the raw image data in the form of one or more intensity images associated with modulation frequencies of light emitted by the time of flight camera, or associated with exposure periods of the time of flight camera. 5. The method as claimed in comprising receiving the raw image data from only one channel. 6. The method as claimed in comprising receiving the raw image data on a plurality of channels and aggregating the data across channels before making the comparison. 7. The method as claimed in comprising receiving the raw image data on a ...
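The render-and-compare loop is an analysis-by-synthesis search: simulate raw data for candidate parameter values and keep the best match. A toy one-parameter sketch under stated assumptions (the renderer, candidate set, and L2 criterion are invented stand-ins for the patent's renderer and comparison):

```python
def fit_parameter(observed, render, candidates):
    """Pick the candidate parameter value whose simulated raw image is
    closest (sum of squared differences) to the observed raw data."""
    def sq_diff(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(candidates, key=lambda p: sq_diff(render(p), observed))

# Toy stand-in renderer: raw response scales linearly with the parameter.
render = lambda pose: [pose * k for k in (1, 2, 3)]
observed = [2.1, 3.9, 6.2]  # synthetic "measurement" near pose = 2
```

Note that nothing in the loop requires computing depth first; the comparison happens directly in the raw-measurement domain, as claim 2 emphasizes.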

More
14-05-2015 publication date

VEHICLE RECOGNIZING USER GESTURE AND METHOD FOR CONTROLLING THE SAME

Number: US20150131857A1
Author: HAN Jae Sun, Kim Ju Hyun
Assignee:

A vehicle is provided that is capable of preventing malfunction or inappropriate operation of the vehicle due to a passenger error by distinguishing a gesture of a driver from that of the passenger when a gesture of a user is recognized; a method for controlling the same is also provided. The vehicle includes an image capturing unit mounted inside the vehicle and configured to capture a gesture image of a gesture area including a gesture of a driver or a passenger. A controller is configured to detect an object of interest in the gesture image captured by the image capturing unit and determine whether the object of interest belongs to the driver. In addition, the controller is configured to recognize a gesture expressed by the object of interest and generate a control signal that corresponds to the gesture when the object of interest belongs to the driver. 1. A vehicle, comprising: an image capturing unit mounted inside the vehicle and configured to capture a gesture image of a gesture area including a driver gesture or a passenger gesture; and a controller configured to: detect an object of interest in the gesture image captured by the image capturing unit; determine whether the object of interest belongs to the driver; recognize a gesture expressed by the object of interest; and generate a control signal that corresponds to the gesture when the object of interest belongs to the driver. 2. The vehicle according to claim 1, wherein the controller is configured to extract a pattern of interest with respect to the object of interest and determine whether the pattern of interest has a predefined feature. 3. The vehicle according to claim 2, wherein the controller is configured to determine that the object of interest belongs to the driver when the pattern of interest has the predefined feature. 4. The vehicle according to claim 3, wherein the object of interest is an arm or a hand of a person. 5. The vehicle according to claim 4, wherein the pattern of ...

More
25-04-2019 publication date

COARSE-TO-FINE HAND DETECTION METHOD USING DEEP NEURAL NETWORK

Number: US20190122041A1
Assignee:

Embodiments provide a process to identify one or more areas containing a hand or hands of one or more subjects in an image. The detection process can start with coarsely locating one or more segments in the image that contain portions of the hand(s) of the subject(s) in the image using a coarse CNN. The detection process can then combine these segments to obtain the one or more areas capturing the hand(s) of the subject(s) in the image. The combined area(s) can then be fed to a grid-based deep neural network to finely detect area(s) in the image that contain only the hand(s) of the subject(s) captured. 1. A method for detecting a hand of a subject in an image, the method being executed by a processor configured to execute machine-readable instructions, the method comprising: receiving image data for an image, the image capturing one or more hands of one or more subjects; processing the image data using a first location network to obtain segments in the image, each of the segments containing the portion of the hand of the subject; combining the segments into a first image area; expanding the size of the first image area by a predetermined margin; and processing the first image area using a grid-based detection network to obtain a second image area, the second image area capturing a hand of the subject. 2. The method of claim 1, wherein the first location network includes a convolution neural network (CNN) having two sub stages connected in a series. 3. The method of claim 1, wherein the segments include a first segment and a second segment, the first segment containing a first portion of the hand of the subject, and the second segment containing a second portion of the at least one hand of the subject, wherein the first portion overlaps with the second portion at least in part. 4. The method of claim 1, wherein expanding the size of the first image area by the predetermined margin comprises: dividing the image into n by n grids, wherein the ...
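The combine-then-expand steps between the coarse and fine networks reduce to simple box arithmetic. A minimal sketch (box convention and names are assumptions; the patent's networks are replaced here by plain geometry):

```python
def combine_segments(boxes):
    """Union bounding box of coarse hand segments, each (x1, y1, x2, y2)."""
    xs1, ys1, xs2, ys2 = zip(*boxes)
    return min(xs1), min(ys1), max(xs2), max(ys2)

def expand(box, margin, width, height):
    """Expand the combined area by a predetermined margin, clamped to
    the image bounds, before the fine grid-based detection pass."""
    x1, y1, x2, y2 = box
    return (max(0, x1 - margin), max(0, y1 - margin),
            min(width, x2 + margin), min(height, y2 + margin))
```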

More
04-05-2017 publication date

METHOD AND SYSTEM OF GROUP INTERACTION BY USER STATE DETECTION

Number: US20170127021A1
Assignee: KONICA MINOLTA LABORATORY U.S.A., INC.

A method is disclosed for detecting interaction between two or more participants in a meeting, which includes capturing at least one three-dimensional stream of data on the two or more participants; extracting a time-series of skeletal data from the at least one three-dimensional stream of data on the two or more participants; classifying the time-series of skeletal data for each of the two or more participants based on a plurality of body position classifiers; and calculating an engagement score for each of the two or more participants. In addition, a method is disclosed for improving a group interaction in a meeting, which includes calculating, for each of the two or more participants, an individual engagement state based on attitudes of the participant, wherein the individual engagement state is an engagement state of the participant to the meeting including an engaged state and a disengaged state. 1. A method for detecting interaction between two or more participants in a meeting, the method comprising: capturing at least one three-dimensional (3D) stream of data on the two or more participants; extracting a time-series of skeletal data from the at least one 3D stream of data on the two or more participants; classifying the time-series of skeletal data for each of the two or more participants based on a plurality of body position classifiers; calculating an engagement score for each of the two or more participants based on the classifying of the time-series of skeletal data for each of the two or more participants; and providing a feedback in accordance with at least one of the engagement scores of the two or more participants. 2. The method of claim 1, comprising: capturing an audio stream of data on the two or more participants; and adding an utterance classifier to the engagement score based on utterance detected on the audio stream of data on the two or more participants. 3. The method of claim 1, comprising: capturing a weight stream of data on the two or more ...
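One simple way to turn classified skeletal frames into a per-participant engagement score and an engaged/disengaged state, as the abstract describes, is a fraction-of-engaged-frames heuristic. The labels, threshold, and scoring rule below are invented for illustration, not the patent's classifiers:

```python
# Hypothetical body-position classifier outputs per captured frame.
ENGAGED_LABELS = {"leaning_forward", "facing_speaker", "nodding"}

def engagement_score(frame_labels):
    """Fraction of classified skeletal frames showing an engaged posture."""
    if not frame_labels:
        return 0.0
    hits = sum(1 for label in frame_labels if label in ENGAGED_LABELS)
    return hits / len(frame_labels)

def state(score, threshold=0.5):
    """Map the continuous score to an engaged / disengaged state."""
    return "engaged" if score >= threshold else "disengaged"
```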

More
10-05-2018 publication date

WORK ASSISTING SYSTEM INCLUDING MACHINE LEARNING UNIT

Number: US20180126558A1
Author: OOBA Masafumi
Assignee:

A work assisting system includes a sensor unit that detects a position and an orientation of at least one body part of a worker; a supply unit that supplies a part or a tool to the worker; and a cell controller that controls the supply unit, the cell controller including a machine learning unit that constructs a model by learning a work status of the worker on the basis of the detected position and orientation, and a work status determining unit that determines the work status of the worker by using the constructed model. The supply unit selects the part or tool on the basis of the determined work status and changes the position and orientation of the part or tool on the basis of the position and orientation of the at least one body part. 1. A work assisting system comprising: a sensor unit that detects a position and an orientation of at least one body part of a worker; a supply unit that supplies a part or a tool to the worker; and a cell controller that controls the supply unit, the cell controller comprising: a machine learning unit that constructs a model by learning a work status of the worker on the basis of the detected position and orientation, and a work status determining unit that determines the work status of the worker by using the constructed model, wherein the supply unit selects the part or tool on the basis of the determined work status and changes the position and orientation of the part or tool on the basis of the position and orientation of the at least one body part. 2. The work assisting system according to claim 1, wherein the model constructed in the machine learning unit is shared as a model constructed in another cell controller connected to the cell controller via a network. 3. The work assisting system according to claim 1, wherein the supply unit includes a robot. 4. The work assisting system according to claim 1, wherein the machine learning unit includes a neural network. This application is based on and claims priority to ...

More
31-07-2014 publication date

SYSTEMS AND METHODS FOR INITIALIZING MOTION TRACKING OF HUMAN HANDS

Number: US20140211991A1
Assignee: IMIMTEK, INC.

Systems and methods for initializing motion tracking of human hands are disclosed. One embodiment includes a processor; a reference camera; and memory containing: a hand tracking application; and a plurality of edge feature templates that are rotated and scaled versions of a base template. The hand tracking application configures the processor to: determine whether any pixels in a frame of video are part of a human hand, where a part of a human hand is identified by searching the frame of video data for a grouping of pixels that have image gradient orientations that match the edge features of one of the plurality of edge feature templates; track the motion of the part of the human hand visible in a sequence of frames of video; confirm that the tracked motion corresponds to an initialization gesture; and commence tracking the human hand as part of a gesture based interactive session. 1. A real-time gesture based interactive system, comprising: a processor; a reference camera configured to capture sequences of frames of video data, where each frame of video data comprises intensity information for a plurality of pixels; memory containing: a hand tracking application; and a set of edge feature templates comprising a plurality of edge feature templates that are rotated and scaled versions of a base template; wherein the hand tracking application configures the processor to: obtain a sequence of frames of video data from the reference camera; compare successive frames of video data from the sequence of frames of video data for pixels that are moving; determine whether any of the pixels that changed are part of a human hand visible in the sequence of frames of video data, where a part of a human hand is identified by searching the frame of video data for a grouping of pixels that have image gradient orientations that match the edge features of one of the plurality of edge feature templates; and track the motion of the part of the human hand visible in the sequence of frames of video data; confirm that the tracked motion of the part of the
...
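Generating the rotated and scaled versions of a base edge-feature template is a straightforward 2D transform over the template's edge points. A minimal sketch (representing a template as a list of (x, y) edge points is an assumption of this illustration):

```python
import math

def make_templates(base_points, angles, scales):
    """Generate rotated and scaled edge-feature templates derived from
    a base template given as a list of (x, y) edge points."""
    templates = []
    for a in angles:
        ca, sa = math.cos(a), math.sin(a)
        for s in scales:
            templates.append([(s * (x * ca - y * sa),
                               s * (x * sa + y * ca)) for x, y in base_points])
    return templates
```

Precomputing this template bank once lets the per-frame search reduce to matching image gradient orientations against a fixed set, rather than transforming the template at runtime.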

More
21-05-2015 publication date

IMAGE PROCESSOR WITH STATIC POSE RECOGNITION MODULE UTILIZING SEGMENTED REGION OF INTEREST

Number: US20150139487A1
Assignee:

An image processing system comprises an image processor having image processing circuitry and an associated memory. The image processor is configured to implement a gesture recognition system comprising a static pose recognition module. The static pose recognition module is configured to identify a region of interest in at least one image, to represent the region of interest as a segmented region of interest comprising a union of segment sets from respective ones of a plurality of lines, to estimate features of the segmented region of interest, and to recognize a static pose of the segmented region of interest based on the estimated features. The lines from which the respective segment sets are taken illustratively comprise respective parallel lines configured as one of horizontal lines, vertical lines and rotated lines. A given one of the segments in one of the sets may be represented by a pair of segment coordinates. 1. A method comprising steps of: identifying a region of interest in at least one image; representing the region of interest as a segmented region of interest comprising a union of segment sets from respective ones of a plurality of lines; estimating features of the segmented region of interest; and recognizing a static pose of the segmented region of interest based on the estimated features; wherein the steps are implemented in an image processor comprising a processor coupled to a memory. 2. The method of wherein the steps are implemented in a static pose recognition module of a gesture recognition system of the image processor. 3. The method of wherein the segmented region of interest is generated in conjunction with a scanning operation. 4. The method of wherein a given one of the segments in one of the sets corresponding to a particular one of the lines is represented by a pair of segment coordinates comprising a begin coordinate and an end coordinate. 5. The method of wherein the plurality of lines from which the respective segment sets are taken ...
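Representing a region of interest as per-line segment sets, each segment a (begin, end) coordinate pair, is essentially run-length encoding of a binary mask. A minimal sketch over horizontal lines (the mask representation and feature choice are illustrative assumptions):

```python
def segment_rows(mask):
    """Represent a binary region of interest as, per horizontal line,
    a list of (begin, end) segment coordinate pairs; the segmented ROI
    is the union of these segment sets."""
    rows = []
    for row in mask:
        segments, start = [], None
        for x, v in enumerate(row + [0]):  # sentinel closes a final run
            if v and start is None:
                start = x
            elif not v and start is not None:
                segments.append((start, x - 1))
                start = None
        rows.append(segments)
    return rows

def area(rows):
    """A simple estimated feature of the segmented ROI: its pixel area."""
    return sum(e - b + 1 for segs in rows for b, e in segs)
```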

More
28-05-2015 publication date

GESTURE RECOGNITION METHOD AND APPARATUS UTILIZING ASYNCHRONOUS MULTITHREADED PROCESSING

Number: US20150146920A1
Assignee:

An image processing system comprises an image processor configured to establish a main processing thread and a parallel processing thread for respective portions of a multithreaded gesture recognition process. The parallel processing thread is configured to utilize buffer circuitry of the image processor, such as one or more double buffers of the buffer circuitry, so as to permit the parallel processing thread to run asynchronously to the main processing thread. The parallel processing thread implements one of noise estimation, background estimation and static hand pose recognition for the multithreaded gesture recognition process. Additional processing threads may be established to run in parallel with the main processing thread. For example, the image processor may establish a first parallel processing thread implementing the noise estimation, a second parallel processing thread implementing the background estimation, and a third parallel processing thread implementing the static hand pose recognition. 1. A method comprising: establishing a main processing thread and a parallel processing thread for respective portions of a multithreaded gesture recognition process in an image processor; and configuring the parallel processing thread to utilize buffer circuitry of the image processor so as to permit the parallel processing thread to run asynchronously to the main processing thread; wherein the parallel processing thread implements one of noise estimation, background estimation and static hand pose recognition for the multithreaded gesture recognition process. 2. The method of wherein the main processing thread runs in synchronization with a frame rate of an input image stream and the parallel processing thread does not run in synchronization with the frame rate of the input image stream. 3. The method of wherein the parallel processing thread runs at a rate that is less than the frame rate of the input image stream. 4. The method of wherein establishing a parallel ...
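A double buffer lets the asynchronous parallel thread publish its latest result (e.g. a noise estimate) without blocking the frame-synchronous main thread. A minimal software sketch of the idea (the class and its locking scheme are illustrative assumptions, not the patent's buffer circuitry):

```python
import threading

class DoubleBuffer:
    """Two-slot buffer: a parallel thread writes into the back slot and
    flips it to the front, so the main thread always reads the most
    recently completed estimate."""
    def __init__(self, initial):
        self._slots = [initial, initial]
        self._front = 0
        self._lock = threading.Lock()

    def read(self):
        """Main processing thread: fetch the current front slot."""
        with self._lock:
            return self._slots[self._front]

    def publish(self, value):
        """Parallel processing thread: write the back slot, then flip."""
        with self._lock:
            back = 1 - self._front
            self._slots[back] = value
            self._front = back
```

Because the writer never mutates the slot the reader is currently using, the two threads can run at different rates, matching the claim that the parallel thread need not track the input frame rate.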

More
30-04-2020 publication date

Information processing apparatus, information processing method, and storage medium

Number: US20200133388A1
Author: Kazuki Takemoto
Assignee: Canon Inc

An information processing apparatus supplies, to an image display apparatus including an image capturing unit configured to capture an image of a real space and a display unit configured to display an image generated using the image captured by the image capturing unit, an image generated using the image captured by the image capturing unit. The information processing apparatus includes a generation unit configured to generate an image depicting a specific object at a position at which the specific object is estimated to be present after a predetermined time from a time when the image display apparatus starts to move in the captured image of the real space including the specific object, and a control unit configured to shift a position at which the image generated by the generation unit is displayed on the display unit based on a change in a position and/or an orientation of the image display apparatus.

More
09-05-2019 publication date

USER AUTHENTICATION SYSTEMS AND METHODS

Number: US20190138708A1
Assignee:

Data processing systems and methods for authenticating users and for generating user authentication indications are disclosed. In one embodiment, a data processing system for authenticating a user comprises: a computer processor and a data storage device, the data storage device storing instructions operative by the processor to: receive a user indication identifying a user; receive an authentication indication for the user, the authentication indication comprising a sequence of word-gesture pair indications, each word-gesture pair indication comprising a word indication and a gesture indication; look up a stored authentication indication for the user; compare the received authentication indication with the stored authentication indication; and generate an authentication result indication indicating the result of the comparison. 1. A data processing system for authenticating a user, the data processing system comprising: a computer processor and a data storage device, the data storage device storing instructions operative by the processor to: receive a user indication identifying a user; receive an authentication indication for the user, the authentication indication comprising a sequence of word-gesture pair indications, each word-gesture pair indication comprising a word indication and a gesture indication; look up a stored authentication indication for the user; compare the received authentication indication with the stored authentication indication; and generate an authentication result indication indicating the result of the comparison. 2. The data processing system according to claim 1, wherein the gesture indications comprise images of the user or a part of the user or a hand of the user. 3. The data processing system according to claim 1, wherein the sequence of word-gesture pair indications comprises a first word-gesture pair indication and a second word-gesture pair indication, the first word-gesture pair indication comprising a ...

More
09-05-2019 publication date

Display Control System And Recording Medium

Number: US20190138859A1
Assignee:

There is provided a display control system including a plurality of display units, an imaging unit configured to capture a subject, a predictor configured to predict an action of the subject according to a captured image captured by the imaging unit, a guide image generator configured to generate a guide image that guides the subject according to a prediction result from the predictor, and a display controller configured to, on the basis of the prediction result from the predictor, select a display unit capable of displaying an image at a position corresponding to the subject from the plurality of display units, and to control the selected display unit to display the guide image at the position corresponding to the subject.

1. A display control system comprising: a plurality of display units; an imaging unit configured to capture a subject; a predictor configured to predict an action of the subject according to a captured image captured by the imaging unit; a guide image generator configured to generate a guide image that guides the subject according to a prediction result from the predictor; and a display controller configured to, on the basis of the prediction result from the predictor, select a display unit capable of displaying an image at a position corresponding to the subject from the plurality of display units, and to control the selected display unit to display the guide image at the position corresponding to the subject.

The present application is a continuation of U.S. patent application Ser. No. 15/235,406, filed on Aug. 12, 2016, which is a continuation of U.S. patent application Ser. No. 14/103,032, filed on Dec. 11, 2013, which claims the benefit of Japanese Priority Patent Application JP 2012-279520 filed Dec. 21, 2012, the disclosures of which are hereby incorporated herein by reference. The present disclosure relates to a display control system and a recording medium. Within facilities such as amusement centers, train stations, hospitals, and municipal ...

More
30-04-2020 publication date

STARTUP AUTHENTICATION METHOD FOR INTELLIGENT TERMINAL

Number: US20200134340A1
Author: HU GUOHUI
Assignee:

Disclosed is a startup authentication method for an intelligent terminal, including first performing face authentication and continuing to perform gesture-based virtual password authentication after the face authentication, wherein even if the face authentication is cracked, the gesture-based password authentication must still be performed to log in, so the disclosure can effectively improve the security of authentication. Further, in the disclosure, the gesture-based virtual password authentication is performed based on a gesture image input by a user in the air, so that, since there is no need to perform input operations on a screen of the intelligent terminal, the aesthetics of the intelligent terminal will not be affected. Moreover, in the disclosure, when the virtual password is determined by detecting binary images of fingertips, the disturbance of the binary images of the fingertips is also removed, which can improve the probability and efficiency in subsequent detection of the virtual password.

1. A startup authentication method for an intelligent terminal, comprising steps of: initiating the intelligent terminal for startup; performing a face authentication on a user; initiating a user gesture authentication after the face authentication, and capturing a gesture image input by the user in the air; processing the captured gesture image of each frame to extract a fingertip binary image corresponding to the frame, wherein the fingertip binary image comprises a black background image block and a white fingertip image block; starting from a fingertip binary image at a starting frame, detecting a displacement of the fingertip image block at a current frame with respect to the fingertip image block at a previous frame, determining the fingertip image block at the current frame as a perturbed fingertip binary image when the displacement is less than a predetermined threshold, then continuing to detect a displacement of the fingertip image block at a next frame with ...
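The displacement-based perturbation removal described above can be sketched as follows, assuming each frame's fingertip image block has been reduced to a centroid coordinate; the function name and centroid representation are illustrative assumptions:

```python
import math

def filter_perturbed(centroids, threshold):
    """Drop 'perturbed' fingertip frames: frames whose fingertip centroid
    moved less than `threshold` pixels relative to the last retained
    frame. Returns the indices of frames kept for password detection."""
    kept = [0]  # the starting frame is always the reference
    for i in range(1, len(centroids)):
        (px, py), (cx, cy) = centroids[kept[-1]], centroids[i]
        if math.hypot(cx - px, cy - py) >= threshold:
            kept.append(i)
    return kept
```

Frames that barely move relative to the last accepted frame are treated as disturbance and skipped, which is what lets the subsequent virtual-password detection run on fewer, more distinct fingertip positions.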

More
30-04-2020 publication date

INTELLIGENT TERMINAL

Number: US20200134341A1
Author: HU GUOHUI
Assignee:

Disclosed is an intelligent terminal, for which the startup authentication includes first performing face authentication and continuing to perform gesture-based virtual password authentication after the face authentication, even if the face authentication is cracked, the gesture-based password authentication is required to perform for logging in, and so the intelligent terminal of the disclosure can effectively improve the security of authentication. Further, the gesture-based virtual password authentication is performed based on a gesture image input by a user in the air, so that since there is no need to perform input operations on a screen of the intelligent terminal, the aesthetics of the intelligent terminal will not be affected. Moreover, in the disclosure, when the virtual password is determined by detecting binary images of fingertips, the disturbance of the binary images of the fingertips is also removed, which can improve the probability and efficiency in subsequent detection of the virtual password. 1. An intelligent terminal , comprising:an initiation processing module, configured to initiate the intelligent terminal for startup;a face authentication processing module, configured to perform a face authentication on a user;a gesture image capturing processing module, configured to initiate a user gesture authentication after the face authentication, and capture a gesture image input by the user in the air;a fingertip binary image extraction processing module, configured to process the captured gesture image of each frame to extract a fingertip binary image corresponding to the frame, wherein the fingertip binary image comprises a black background image block and a white fingertip image block;a fingertip binary image validity detection processing module, configured to, starting from a fingertip binary image at a starting frame, detect a displacement of the fingertip image block at a current frame with respect to the fingertip image block at a previous ...

More
10-06-2021 publication date

Method and system for imaging and analysis of anatomical features

Number: US20210174505A1
Assignee: Etreat Medical Diagnostics Inc

A method and system are provided for characterizing a portion of biological tissue. This invention comprises a smartphone and tablet deployable mobile medical application that uses device sensors, internet connectivity and cloud-based image processing to document and analyze physiological characteristics of hand arthritis. The application facilitates image capture and performs image processing that identifies hand fiduciary features and measures hand anatomical features to report and quantify the progress of arthritic disease.

More
24-05-2018 publication date

MULTI-PROCESS INTERACTIVE SYSTEMS AND METHODS

Number: US20180144030A1
Assignee:

A multi-process interactive system is described. The system includes numerous processes running on a processing device. The processes include separable program execution contexts of application programs, such that each application program comprises at least one process. The system translates events of each process into data capsules. A data capsule includes an application-independent representation of event data of an event and state information of the process originating the content of the data capsule. The system transfers the data messages into pools or repositories. Each process operates as a recognizing process, where the recognizing process recognizes in the pools data capsules comprising content that corresponds to an interactive function of the recognizing process and/or an identification of the recognizing process. The recognizing process retrieves recognized data capsules from the pools and executes processing appropriate to contents of the recognized data capsules.

1. A method comprising: executing a plurality of processes on at least one processing device; translating events of each process of the plurality of processes into data capsules; transferring the data capsules into a plurality of pools; each process operating as a recognizing process, the recognizing process recognizing in the plurality of pools data capsules comprising at least one of content that corresponds to an interactive function of the recognizing process and an identification of the recognizing process; and the recognizing process retrieving recognized data capsules from the plurality of pools and executing processing appropriate to contents of the recognized data capsules.

This application is a continuation of U.S. patent application Ser. No. 14/733,125, filed 8 Jun. 2015, which is a continuation of U.S. patent application Ser. No. 12/579,340, filed 14 Oct. 2009, now issued as U.S. Pat. No. 9,063,801, both of which are incorporated in their entirety by this reference. Embodiments are ...
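The capsule/pool pattern above can be sketched in a few lines; the names `make_capsule`, `Pool`, and the dict layout are illustrative assumptions, not the patent's actual data format:

```python
def make_capsule(origin, function, payload):
    """A data capsule: application-independent event data (`payload`)
    plus state of the originating process (`origin`, `function`)."""
    return {"origin": origin, "function": function, "payload": payload}

class Pool:
    """A shared repository: processes deposit capsules, and recognizing
    processes retrieve (and consume) the capsules that match them."""
    def __init__(self):
        self.capsules = []

    def deposit(self, capsule):
        self.capsules.append(capsule)

    def retrieve(self, predicate):
        """Remove and return all capsules matching `predicate`."""
        matched = [c for c in self.capsules if predicate(c)]
        self.capsules = [c for c in self.capsules if not predicate(c)]
        return matched

pool = Pool()
pool.deposit(make_capsule("hand-tracker", "pointing", {"x": 1, "y": 2}))
pool.deposit(make_capsule("audio-input", "speech", {"text": "hello"}))
recognized = pool.retrieve(lambda c: c["function"] == "pointing")
```

The point of the indirection is that the depositing and recognizing processes never call each other directly; they only agree on the capsule representation.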

More
04-06-2015 publication date

PROCESSING METHOD OF OBJECT IMAGE FOR OPTICAL TOUCH SYSTEM

Number: US20150153904A1
Assignee:

There is provided a processing method of an object image for an optical touch system includes the steps of: capturing, using a first image sensor, a first image frame containing a first object image; capturing, using a second image sensor, a second image frame containing a second object image; generating a polygon image according to the first image frame and the second image frame; and determining a short axis of the polygon image and at least one object information accordingly. 1. A processing method of an object image for an optical touch system , the optical touch system comprising at least two image sensors configured to capture image frames looking across a touch surface and containing at least one object operating on the touch surface and a processing unit configured to process the image frames , the processing method comprising:capturing, using a first image sensor, a first image frame containing a first object image;capturing, using a second image sensor, a second image frame containing a second object image;generating, using the processing unit, a polygon image according to the first image frame and the second image frame; anddetermining, using the processing unit, a short axis of the polygon image and determining at least one object information accordingly.2. The processing method as claimed in claim 1 , further comprising:generating two straight lines in a two dimensional space associated with the touch surface according to mapping positions of the first image sensor and borders of the first object image in the first image frame in the two dimensional space;generating two straight lines in the two dimensional space according to mapping positions of the second image sensor and borders of the second object image in the second image frame in the two dimensional space; andcalculating a plurality of intersections of the straight lines and generating the polygon image according to the intersections.3. The processing method as claimed in claim 1 , further ...
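The geometry in claims 2 and onward can be sketched as follows: intersect the border lines cast from each sensor to form the polygon's vertices, then estimate the short axis. The short-axis estimate here uses the minor principal extent of the vertices (a 2x2 eigen-analysis), which is an illustrative stand-in rather than the patent's exact procedure:

```python
def intersect(p1, d1, p2, d2):
    """Intersection of 2D lines p1 + t*d1 and p2 + s*d2, or None if parallel."""
    det = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(det) < 1e-12:
        return None
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

def short_axis_length(points):
    """Estimate the polygon's short axis as twice the minor principal
    extent of its vertices (smaller eigenvalue of the 2x2 covariance)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam_min = tr / 2 - ((tr / 2) ** 2 - det) ** 0.5
    return 2 * lam_min ** 0.5
```

For an axis-aligned rectangle the estimate recovers the shorter side exactly; for a general polygon it is an approximation of the minimal width.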

More
24-05-2018 publication date

SYSTEMS AND METHODS FOR AUTHENTICATING A USER BASED ON A BIOMETRIC MODEL ASSOCIATED WITH THE USER

Number: US20180145975A1
Assignee:

Systems and methods as provided herein may create a biometric model associated with a user. The created biometric model may be used to generate challenges that are presented to the user for authentication purposes. A user response to the challenge may be compared to an expected response, and if the user response matches within a predetermined error of the expected response, the user may be authenticated. The systems and methods may further generate challenges that are adaptively designed to address weaknesses or errors in the created model such that the model is more closely associated with a user and the user is more likely to be the only person capable of successfully responding to the generated challenges.

1. (canceled)

2. A device, comprising: a non-transitory memory; one or more sensors configured to detect biometric responses; and one or more hardware processors coupled to the non-transitory memory and configured to read instructions from the non-transitory memory to cause the device to perform operations comprising: providing a challenge to a user, the challenge generated by a biometric model application configured to create a biometric model tailored to the device and the user; detecting, from the one or more sensors, a response from the user to the provided challenge; fitting the response to the biometric model to result in a fitted biometric model; determining whether the fitted biometric model is within a predetermined degree of accuracy to identify the user; and storing, in response to the fitted biometric model being within the predetermined degree of accuracy, the fitted biometric model.

3. The device of claim 2, wherein the operations further comprise: determining an error in response to determining that the fitted biometric model is not within the predetermined degree of accuracy; and providing a subsequent challenge to the user, the subsequent challenge being provided to address the error.

4. The device of claim 3, wherein the operations ...
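The fit-then-check loop can be sketched as below, under the simplifying assumption that the biometric model is a per-feature running average of challenge responses; the function names, the update rate, and the max-error accuracy test are all illustrative:

```python
def fit_response(model, response, rate=0.5):
    """Fold a new challenge response (a feature vector) into the
    biometric model, here modeled as a running per-feature average."""
    if model is None:
        return list(response)
    return [m + rate * (r - m) for m, r in zip(model, response)]

def within_accuracy(model, response, tolerance):
    """Check whether the fitted model identifies the user: every feature
    of the response must lie within `tolerance` of the model."""
    return all(abs(m - r) <= tolerance for m, r in zip(model, response))
```

When `within_accuracy` fails, the per-feature errors indicate which features to target with the next, adaptively generated challenge.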

More
31-05-2018 publication date

OPERATING SYSTEM AND METHOD FOR OPERATING AN OPERATING SYSTEM FOR A MOTOR VEHICLE

Number: US20180150141A1
Assignee: Audi AG

A sensing device sensing at least one body part of a user when the body part is arranged in a sensing region of the sensing device is included in an operator control system that also includes a control device controlling a signal apparatus of the operator control system. The sensing device checks whether the body part sensed in the sensing region is in an operator control space that forms a sub-region of the sensing region. The control device actuates the signal apparatus that is used outside the operator control space to output an acknowledgment when the sensing device senses that the body part is inside the operator control space.

1-9. (canceled)

11. An operator control system for a motor vehicle, comprising: a signal apparatus; a sensing device configured to sense at least one body part when the at least one body part is disposed in a sensing region of the sensing device, the sensing region being arranged in an interior of the motor vehicle; and a control device configured to control the signal apparatus of the operator control system, to check whether the at least one body part sensed in the sensing region is in an operator control space forming a sub-region of the sensing region, to actuate the signal apparatus to output an acknowledgment outside the operator control space when the sensing device senses the body part inside the operator control space, and to actuate the signal apparatus to output an identification signal when the sensing device senses the body part inside the sensing region but outside the operator control space.

12. The operator control system as claimed in claim 11, wherein the sensing device is further configured to sense an input gesture in the operator control space and to check whether the input gesture matches at least one prescribed gesture.

13. The operator control system as claimed in claim 12, wherein the control device is further configured to actuate the signal apparatus to output a confirmation signal when the input ...
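The two-region logic of claim 11 can be sketched as a nested containment check; the box representation and function name are illustrative assumptions (a real sensing region need not be axis-aligned):

```python
def classify_hand(point, sensing_box, control_box):
    """Return which signal to output for a sensed body-part position:
    'acknowledgment' inside the operator control space, 'identification'
    inside the sensing region but outside the control space, else None.
    Boxes are ((xmin, ymin, zmin), (xmax, ymax, zmax))."""
    def inside(p, box):
        lo, hi = box
        return all(l <= c <= h for c, l, h in zip(p, lo, hi))

    if inside(point, control_box):
        return "acknowledgment"
    if inside(point, sensing_box):
        return "identification"
    return None
```

The identification signal guides the operator's hand toward the control space before any gesture is accepted there.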

More
16-05-2019 publication date

IMAGE PROCESSING APPARATUS AND METHOD

Number: US20190147225A1
Author: Thodberg Hans Henrik
Assignee:

One embodiment of this invention provides an image processing method for use in locating a landmark in an acquired image. The method comprises a method to sample several features from an image patch, and a decision tree, which performs a regression to the location of the landmark relative to the image patch. The image is scanned by extracting an image patch in many translated locations and for each patch applies the regression decision tree to produce one or more votes for the location of the given target point within the acquired image. The method further accumulates the regression votes for all of the patches in the scan to generate a response image corresponding to the given target point. The method finally performs an estimate of the local maxima of the voting map as the likely locations of the landmark.

1. An image processing method for locating a target within an input image, said method comprising: a) providing a regression decision tree defined by a plurality of nodes, the plurality of nodes including decision nodes and leaf nodes, the leaf nodes being indicative of respective predicted locations of the target, each decision node having associated with it a decision rule, wherein each associated decision rule has associated with it a selected image feature selected from a set of predetermined image features; wherein each selected image feature is chosen from said predetermined set of image features such that an associated decision rule results in an optimal performance measure compared to all other image features of said predetermined set of image features; b) selecting multiple sampling areas within the input image; c) for each sampling area of the multiple sampling areas: computing respective detection scores for one or more of the set of predetermined image features; and using said regression decision tree and said computed detection scores to compute one or more regression votes, each regression vote being indicative of a predicted location of the target within the input image; ...
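The vote-accumulation step described above can be sketched as follows, with the regression tree stubbed out as an offset-predicting callable; the names and the pure-Python response image are illustrative assumptions:

```python
def accumulate_votes(patch_centers, predict_offset, shape):
    """Scan: every patch casts a regression vote at its center plus the
    tree-predicted (dy, dx) offset; returns the accumulated response
    image and its maximum cell (the likely landmark location)."""
    response = [[0] * shape[1] for _ in range(shape[0])]
    for cy, cx in patch_centers:
        dy, dx = predict_offset(cy, cx)
        vy, vx = cy + dy, cx + dx
        if 0 <= vy < shape[0] and 0 <= vx < shape[1]:
            response[vy][vx] += 1
    best = max(((r, c) for r in range(shape[0]) for c in range(shape[1])),
               key=lambda rc: response[rc[0]][rc[1]])
    return response, best
```

A real implementation would weight votes, blur the response image, and take several local maxima rather than the single global one.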

More
16-05-2019 publication date

Human Body Posture Data Acquisition Method and System, and Data Processing Device

Number: US20190147237A1
Author: Wang Su, Yao Yao

Provided are a human body posture data acquisition method and system, and a data processing device. The method comprises: obtaining feature data between pre-calibrated human body feature points; obtaining a rotational angle of the human body feature points; and obtaining human body posture data according to the rotational angle of the human body feature points and the feature data between the human body feature points. In the method, head rotation data is obtained by directly providing a head wearing device on the head. Accordingly, body rotation data is obtained by providing a body wearing device on the human chest, and then human body posture data is obtained via the head rotation data and the body rotation data, thereby alleviating the problem that an error in sensed data is caused with the existing human body posture identification methods due to a poor mobility, sensitivity to environmental impacts and susceptibility to disturbances. 1. A human body posture data acquisition method , applicable to a human body posture data acquisition system , wherein the human body posture data acquisition system comprises a data processing device , and the human body posture data acquisition method comprises:the data processing device obtaining feature data between pre-calibrated human body feature points, wherein the pre-calibrated human body feature points comprise a human head center point, a head rotation center point and a body rotation center point, and the feature data comprises a length of a first line segment between the head center point and the head rotation center point, and a length of a second line segment between the head rotation center point and the body rotation center point;obtaining rotational angles of the human body feature points, wherein the rotational angles comprise a first rotational angle of the first line segment relative to a vertical direction with the first line segment between the head center point and the head rotation center point, and a ...
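The geometry implied by the feature data (two segment lengths, two rotational angles from the vertical) can be worked through in the sagittal plane; this 2D sketch and its names are illustrative assumptions, not the patent's computation:

```python
import math

def head_center_position(l1, l2, angle1, angle2):
    """Position of the head center point relative to the body rotation
    center, given the segment lengths and each segment's rotational
    angle from the vertical (radians). l2/angle2: body rotation center
    to head rotation center; l1/angle1: head rotation center to head
    center."""
    x = l2 * math.sin(angle2) + l1 * math.sin(angle1)
    y = l2 * math.cos(angle2) + l1 * math.cos(angle1)
    return x, y
```

With both angles zero the head center sits directly above the body rotation center at height l1 + l2, which is the upright posture.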

More
31-05-2018 publication date

BIOMETRIC AUTHENTICATION APPARATUS, BIOMETRIC AUTHENTICATION METHOD, AND COMPUTER-READABLE STORAGE MEDIUM

Number: US20180150687A1
Assignee: FUJITSU LIMITED

A biometric authentication apparatus acquires biometric information of a user, extracts a boundary candidate where a state of the biometric information changes, to extract a region in a vicinity of the boundary candidate and having a threshold area or greater, extracts a state feature quantity having a value that changes according to a change in the state of the biometric information, from the extracted region, and judges the state of the biometric information using the state feature quantity of the extracted region.

1. A biometric authentication apparatus comprising: a memory configured to store a program; and a processor configured to execute the program to perform a process including: acquiring biometric information of a user; extracting a boundary candidate where a state of the biometric information changes, to extract a region in a vicinity of the boundary candidate and having a threshold area or greater; extracting a state feature quantity having a value that changes according to a change in the state of the biometric information, from the extracted region; and judging the state of the biometric information using the state feature quantity of the extracted region.

2. The biometric authentication apparatus as claimed in claim 1, wherein the extracting the boundary candidate extracts a region surrounded by a plurality of boundary candidates when the plurality of boundary candidates are extracted.

3. The biometric authentication apparatus as claimed in claim 2, wherein the biometric information is a palm image of a palm of a hand of the user that is captured, and the biometric state is an open or closed state of the palm.

4. The biometric authentication apparatus as claimed in claim 3, wherein the state feature quantity includes information indicating differences in biometric features of skin at the palm and at a back of fingers of the hand.

5. The biometric authentication apparatus as claimed in claim 4, wherein the judging judges ...

More
07-05-2020 publication date

GESTURE JUDGMENT DEVICE, GESTURE OPERATION DEVICE, AND GESTURE JUDGMENT METHOD

Number: US20200143150A1
Assignee: Mitsubishi Electric Corporation

A gesture judgment device includes a reference part detection unit that outputs reference part information indicating a reference part region, a movement extraction unit that outputs movement information indicating a movement region, a reference part disappearance judgment unit that generates reference part disappearance information, a timing judgment unit that judges whether first timing indicated by the reference part disappearance information and second timing of occurrence of a frame in which the movement region overlaps with the reference part region indicated by the reference part information are synchronized with each other or not and outputs a timing judgment result as the result of the judging, and an operation judgment unit that judges contents of a gesture operation performed by an operator based on the timing judgment result and the movement information.

1. A gesture judgment device for judging contents of a gesture operation performed by an operator, comprising: a reference part detection unit to detect a reference part in a plurality of frame images successively acquired as captured images and to output reference part information indicating a reference part region where the reference part exists in regard to each of the plurality of frame images; a movement extraction unit to extract movement between frame images in the plurality of frame images and to output movement information indicating a movement region where the movement occurred; a reference part disappearance judgment unit to generate reference part disappearance information, indicating first timing of occurrence of a frame image in which the reference part is not detected, based on a result of the detecting indicated by the reference part information; a timing judgment unit to judge whether the first timing indicated by the reference part disappearance information and second timing of occurrence of a frame in which the movement region indicated by the movement information and the reference part ...
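The synchronization judgment at the heart of the device can be sketched as a frame-index comparison; the function name and the frame tolerance are illustrative assumptions:

```python
def judge_timing(disappear_frame, overlap_frame, tolerance=2):
    """Judge whether the reference part's disappearance (first timing)
    is synchronized with the movement region overlapping the reference
    part region (second timing): the two frame indices must lie within
    `tolerance` frames of each other."""
    if disappear_frame is None or overlap_frame is None:
        return False
    return abs(disappear_frame - overlap_frame) <= tolerance
```

When the two timings coincide, the disappearance is attributed to the operator's moving hand covering the reference part, which is what the operation judgment unit then interprets as a gesture.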

More
17-06-2021 publication date

Systems and Methods of Tracking Moving Hands and Recognizing Gestural Interactions

Number: US20210181859A1
Assignee: Ultrahaptics IP Two Limited

The technology disclosed relates to providing command input to a machine under control. It further relates to gesturally interacting with the machine. The technology disclosed also relates to providing monitoring information about a process under control. The technology disclosed further relates to providing biometric information about an individual. The technology disclosed yet further relates to providing abstract features information (pose, grab strength, pinch strength, confidence, and so forth) about an individual.

1. A method of determining command input to a machine responsive to gestures in three dimensional (3D) sensory space, the method comprising: determining a variance between a point on a set of observation information based on an image captured at time t1 and a corresponding point on at least one of a set of 3D capsules fitted to another set of observation information based on an image captured at time t0, by: pairing point sets from points on a surface of the observation information with points on the 3D capsules, wherein normal vectors to points on the set of observation information are parallel to normal vectors to points on the 3D capsules; and determining the variance comprising a reduced root mean squared deviation (RMSD) of distances between paired point sets; responsive to the variance, adjusting the 3D capsules; determining a gesture performed by the at least a portion of a hand based on the 3D capsules as adjusted; and interpreting the gesture as providing command input to a machine.

2. The method of claim 1, wherein adjusting the 3D capsules further includes improving conformance of the 3D capsules to at least one of length, width, orientation, and arrangement of portions of the observation information.

3. The method of claim 1, further including: determining span modes of the hand, wherein the span modes include at least a finger width span mode and a palm width span ...
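The RMSD over paired point sets in claim 1 is a standard computation and can be written directly; the pairing itself (matching observation-surface points to capsule-surface points with parallel normals) is assumed to have happened already:

```python
import math

def rmsd(pairs):
    """Root mean squared deviation of distances between paired points,
    each pair being (observation-surface point, capsule-surface point)."""
    sq_dists = [sum((a - b) ** 2 for a, b in zip(p, q)) for p, q in pairs]
    return math.sqrt(sum(sq_dists) / len(sq_dists))
```

The capsule parameters are then adjusted to reduce this variance, and the adjusted capsules are what the gesture determination operates on.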

More
11-06-2015 publication date

MULTIPLE LAYER BLOCK MATCHING METHOD AND SYSTEM FOR IMAGE DENOISING

Number: US20150161436A1
Author: Fan Zhigang
Assignee: XEROX CORPORATION

This disclosure provides a method, system and computer program product for denoising an image by extending a Block Matching and 3D Filtering algorithm to include decomposition of high contrast image blocks into multiple layers that are collaboratively filtered. According to an exemplary method, the high contrast image blocks are decomposed into a top layer, a bottom layer and a mask layer. 1. A computer implemented MLBM3D (Multiple Layer Block Matching 3D Filtering) method of denoising an image including high contrast regions comprising:a) receiving a noisy image;b) generating a plurality of blocks covering the noisy image;c) selecting one of the plurality of blocks as a reference block and grouping other blocks including similar image representations with the selected reference block to form a block cluster;d) collaboratively filtering the block cluster to form a single-layer estimate of the block cluster;e) classifying the block cluster as one of a single-layer block cluster and a multi-layer block cluster based on the single-layer estimate;f) for a multi-layer block cluster, decompose each of the blocks associated with the multi-layer block cluster into multiple layers;g) collaboratively filtering at least two of the multiple layers to form at least two, respective, filtered block layers;h) assembling the filtered block layers to form a multi-layer estimate of the block cluster;i) performing steps c)-h) for a plurality of the reference blocks; andj) aggregating the single-layer estimates of the classified single-layer block clusters and the multiple-layer estimates of the classified multi-layer block clusters to form a denoised image representation of the noisy image.2. The computer-implemented BM3D method of denoising an image according to claim 1 , wherein step i) performs steps c)-h) for all of the plurality of the reference blocks.3. The computer-implemented MLBM3D method of denoising an image according to claim 1 , wherein the received noisy image is ...
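The block-matching step in c) above (grouping blocks with similar image representations around a reference block) is typically a sum-of-squared-differences test; this sketch uses flat pixel lists and an illustrative threshold, not the patent's exact similarity measure:

```python
def group_similar_blocks(reference, blocks, threshold):
    """Group blocks whose sum of squared differences from the reference
    block falls at or below `threshold` (the matching step that forms a
    block cluster before collaborative filtering). Blocks are flat
    lists of pixel values of equal length."""
    def ssd(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [i for i, block in enumerate(blocks)
            if ssd(reference, block) <= threshold]
```

The resulting cluster is then filtered collaboratively; in the multi-layer extension, high-contrast clusters are first decomposed into top, bottom, and mask layers and each layer is filtered separately.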

More
11-06-2015 publication date

IMAGE PROCESSOR COMPRISING GESTURE RECOGNITION SYSTEM WITH COMPUTATIONALLY-EFFICIENT STATIC HAND POSE RECOGNITION

Number: US20150161437A1
Assignee:

An image processing system comprises an image processor having image processing circuitry and an associated memory. The image processor is configured to implement a gesture recognition system comprising a static pose recognition module. The static pose recognition module is configured to identify a hand region of interest in at least one image, to perform a skeletonization operation on the hand region of interest, to determine a main direction of the hand region of interest utilizing a result of the skeletonization operation, to perform a scanning operation on the hand region of interest utilizing the determined main direction to estimate a plurality of hand features that are substantially invariant to hand orientation, and to recognize a static pose of the hand region of interest based on the estimated hand features.

1. A method comprising steps of: identifying a hand region of interest in at least one image; performing a skeletonization operation on the hand region of interest; determining a main direction of the hand region of interest utilizing a result of the skeletonization operation; performing a scanning operation on the hand region of interest utilizing the determined main direction to estimate a plurality of hand features that are substantially invariant to hand orientation; and recognizing a static pose of the hand region of interest based on the estimated hand features; wherein the steps are implemented in an image processor comprising a processor coupled to a memory.

2. The method of wherein the steps are implemented in a static pose recognition module of a gesture recognition system of the image processor.

3. The method of wherein the static pose recognition module operates at a lower frame rate than at least one other recognition module of the gesture recognition system.

4. The method of wherein identifying a hand region of interest comprises generating a hand image comprising a binary region of interest mask in which pixels within the hand region of interest ...
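A common way to estimate the main direction of a binary region of interest is from its second-order central moments; the sketch below applies that to the mask directly, as an illustrative stand-in for the patent's skeletonization-based estimate:

```python
import math

def main_direction(mask):
    """Estimate the main direction (radians, measured from the x-axis)
    of a binary region-of-interest mask from its second-order central
    moments. `mask` is a list of rows of 0/1 values."""
    pts = [(x, y) for y, row in enumerate(mask)
           for x, v in enumerate(row) if v]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    mu20 = sum((x - mx) ** 2 for x, _ in pts)
    mu02 = sum((y - my) ** 2 for _, y in pts)
    mu11 = sum((x - mx) * (y - my) for x, y in pts)
    return 0.5 * math.atan2(2 * mu11, mu20 - mu02)
```

Scanning the region of interest along (and perpendicular to) this direction is what makes the estimated hand features substantially invariant to hand orientation.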

More