Total found: 4418. Displayed: 100.
18-04-2013 publication date

Method and Apparatus for Projective Volume Monitoring

Number: US20130094705A1
Assignee: Omron Corporation

According to one aspect of the teachings presented herein, a projective volume monitoring apparatus is configured to detect objects intruding into a monitoring zone. The projective volume monitoring apparatus is configured to detect the intrusion of objects of a minimum object size relative to a protection boundary, based on an advantageous processing technique that represents range pixels obtained from stereo correlation processing in spherical coordinates and maps those range pixels to a two-dimensional histogram that is defined over the projective coordinate space associated with capturing the stereo images used in correlation processing. The histogram quantizes the horizontal and vertical solid angle ranges of the projective coordinate space into a grid of cells. The apparatus flags range pixels that are within the protection boundary and accumulates them into corresponding cells of the histogram, and then performs clustering on the histogram cells to detect object intrusions.

1. A method of detecting objects intruding into a monitoring zone, said method performed by a projective volume monitoring apparatus and comprising:
capturing a stereo image from a pair of image sensors;
correlating the stereo image to obtain a depth map comprising range pixels represented in three-dimensional Cartesian coordinates;
converting the range pixels into spherical coordinates, so that each range pixel is represented as a radial distance along a respective pixel ray and a corresponding pair of solid angle values within the horizontal and vertical fields of view associated with capturing the stereo image;
obtaining a set of flagged pixels by flagging those range pixels that fall within a protection boundary defined for the monitoring zone;
accumulating the flagged pixels into corresponding cells of a two-dimensional histogram that quantizes the solid angle ranges of the horizontal and vertical fields of view; and
clustering cells in the histogram to detect intrusions of objects within
...
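The Cartesian-to-spherical conversion and histogram accumulation described in the claim can be sketched roughly as follows. This is a minimal illustration, not the patented implementation: the boundary test `inside_boundary`, the field-of-view limits, and the cell counts are invented parameters.

```python
import numpy as np

def intrusion_histogram(points_xyz, inside_boundary, n_az=64, n_el=48,
                        h_fov=(-0.5, 0.5), v_fov=(-0.4, 0.4)):
    """Convert Cartesian range pixels to spherical coordinates and
    accumulate the flagged ones into a 2D solid-angle histogram."""
    x, y, z = points_xyz.T
    r = np.sqrt(x**2 + y**2 + z**2)       # radial distance along the pixel ray
    az = np.arctan2(x, z)                 # horizontal solid-angle value
    el = np.arctan2(y, z)                 # vertical solid-angle value
    flagged = inside_boundary(r, az, el)  # protection-boundary test
    hist, _, _ = np.histogram2d(az[flagged], el[flagged],
                                bins=(n_az, n_el), range=(h_fov, v_fov))
    return hist
```

Cells whose counts exceed a minimum-object-size threshold could then be clustered, as the last claim step describes, to declare an intrusion.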

18-04-2013 publication date

POSITIONAL LOCATING SYSTEM AND METHOD

Number: US20130094710A1
Assignee: AVIDASPORTS, LLC

A method and system are disclosed for locating or otherwise generating positional information for an object, such as but not limited to generating positional coordinates for an object attached to an athlete engaging in an athletic event. The positional coordinates may be processed with other telemetry and biometrical information to provide real-time performance metrics while the athlete engages in the athletic event.

1. A method of positionally identifying people and objects within a defined space, the method comprising:
associating an identification generated for each person or object with a device to be worn or mounted while the person or object moves within the defined space;
controlling a beacon included within each device to emit a signal at an interval specified within a beacon transmission schedule;
controlling an instrument to record images representative of at least a portion of the defined space, each image plotting recorded signals within a two-dimensional field defined by a viewing angle of the camera;
calculating image-based positional coordinates for each signal appearing within each of the captured images, the image-based positional coordinates defining spatial positioning of the beacons emitting the signals relative to the two-dimensional field of each image;
reducing the image-based positional coordinates to defined space-based positional coordinates, the defined space-based positional coordinates defining spatial positioning of the beacons emitting the signals within at least a two-dimensional coordinate system defined relative to at least a length and width of the defined space; and
for each of the images, identifying the person or object at each of the defined space-based positional coordinates based on the identification of the person or object scheduled to emit signals at the time the image was captured.

2. The method of further generating the beacon transmission schedule such that each beacon emits the signal only at the intervals during which no other ...
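A toy sketch of the coordinate-reduction and identification steps, assuming a single overhead camera whose two-dimensional field spans the entire defined space (so the reduction is a pure scaling) and a schedule keyed by capture time. All names and data shapes here are invented for illustration; a real system would need camera calibration.

```python
def to_space_coords(px, py, img_w, img_h, space_len, space_wid):
    """Reduce image-based coordinates to defined-space coordinates by
    scaling the pixel position to the length/width of the space."""
    return px / img_w * space_len, py / img_h * space_wid

def identify(schedule, capture_time):
    """Look up which beacon was scheduled to emit at the capture time."""
    return schedule.get(capture_time)
```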

18-04-2013 publication date

METHOD AND SYSTEM OF AUTOMATIC DETERMINATION OF GEOMETRIC ELEMENTS FROM A 3D MEDICAL IMAGE OF A BONE

Number: US20130094732A1
Assignee: A2 SURGICAL

The invention relates to an automated method for precise determination of the head center and radius and the neck axis of an articulated bone from an acquired 3D medical image of an articulation, comprising the following steps: i) determining, from a 3D image of the bone, an approximate sphere (SFO) of the head of the bone that substantially fits the spherical portion of the head of the bone; ii) constructing, from the 3D image and the approximate sphere (SFO), a 3D surface model (S) of the bone; iii) determining, from the 3D surface model (S) and from the approximate sphere (SFO), an approximate neck axis (AXO) of the neck of the bone; iv) determining, from the 3D surface model (S) and the approximate sphere (SFO), a precise sphere (SF); v) determining, from the 3D surface model (S), the precise sphere (SF) and the approximate neck axis (AXO), a precise neck axis (AX).

1. An automated method for precise determination of the head center and radius and the neck axis of an articulated bone from an acquired 3D medical image of an articulation, the articulation comprising two bones, one of which is said bone with a head and a neck, the method comprising the following steps:
i) determining automatically, from a 3D image of the bone having a head and a neck, an approximate sphere of the head of the bone, defined by an approximate head center and an approximate radius, that substantially fits the spherical portion of the head of the bone;
ii) constructing automatically, from the 3D image and from the approximate sphere of the head of the bone, a 3D surface model of the bone;
iii) determining automatically, from the 3D surface model of the bone and from the approximate sphere of the head of the bone, an approximate neck axis of the neck of the bone;
iv) determining automatically, from the 3D surface model and from the approximate sphere of the head of the bone, a precise sphere defined by a precise head center and a precise radius of the head of the bone;
v) determining automatically,
...
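Step iv) ultimately comes down to fitting a sphere (center and radius) to surface points of the bone head. A common least-squares formulation, shown here as a sketch and not necessarily the method used by the patent, linearizes the sphere equation |p|² = 2c·p + (R² − |c|²) and solves it in one shot:

```python
import numpy as np

def fit_sphere(pts):
    """Least-squares sphere fit to an (n, 3) array of surface points.
    Solves |p|^2 = 2 c.p + (R^2 - |c|^2) for center c and radius R."""
    A = np.c_[2 * pts, np.ones(len(pts))]   # unknowns: (cx, cy, cz, R^2 - |c|^2)
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius
```

In practice only points on the spherical portion of the head would be fed in, which is why the method first needs the approximate sphere to select them.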

18-04-2013 publication date

METHOD AND DEVICE FOR EVALUATING EVOLUTION OF TUMOURAL LESIONS

Number: US20130094743A1
Assignee:

A new method for evaluating evolution of tumoural lesions includes:
- providing a first image of the tumoural lesions, the first image being made at a first time instant;
- providing a second image of the tumoural lesions, the second image being made at a second time instant that is later than the first time instant;
- delineating a border of the tumoural lesions in the first image and the second image;
- registration of the tumoural lesions in the first image and the second image;
- segmenting the tumoural lesions in the first image and the second image into concentric areas;
- quantifying changes of at least one functional parameter between the concentric areas in the first image and respective corresponding concentric areas in the second image; and
- visualizing the changes in a two-dimensional or three-dimensional model of the tumoural lesions.

1. A method for evaluating evolution of tumoural lesions, said method comprising the steps of:
providing a first image of said tumoural lesions, said first image being made at a first time instant;
providing a second image of said tumoural lesions, said second image being made at a second time instant that is later than said first time instant;
delineating a border of said tumoural lesions in said first image and said second image;
registration of said tumoural lesions in said first image and said second image;
segmenting said tumoural lesions in said first image and said second image into concentric areas;
quantifying changes of at least one functional parameter between said concentric areas in said first image and respective corresponding concentric areas in said second image; and
visualizing said changes in a two-dimensional or three-dimensional model of said tumoural lesions.

2. A method for evaluating evolution of tumoural lesions according to claim 1, wherein quantifying changes of at least one functional parameter comprises voxel-by-voxel hot spot analysis.

3. A method for evaluating evolution of tumoural ...
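The segmentation into concentric areas can be illustrated with a crude centroid-distance binning. This is an assumption made for illustration only; the excerpt does not specify how the patent constructs the areas (a border-distance transform would be a closer match).

```python
import numpy as np

def concentric_areas(mask, n_areas):
    """Label each lesion voxel with a concentric-area index
    (0 = core, n_areas - 1 = rim), using normalized distance from the
    lesion centroid as a simple proxy for 'concentric'."""
    idx = np.argwhere(mask)
    centroid = idx.mean(axis=0)
    d = np.linalg.norm(idx - centroid, axis=1)
    bins = np.minimum((d / (d.max() + 1e-9) * n_areas).astype(int),
                      n_areas - 1)
    labels = np.full(mask.shape, -1)     # -1 marks background
    labels[tuple(idx.T)] = bins
    return labels
```

Quantifying change would then be, per area index, comparing the mean of the functional parameter between the registered first and second images.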

25-04-2013 publication date

Method and device for locating persons in a prescribed area

Number: US20130101165A1
Assignee: ROBERT BOSCH GMBH

The invention relates to a method and device for locating persons ( 12, 14 ) in a prescribed area ( 10 ) monitored by at least one image acquisition device ( 3 ), wherein the image acquisition device ( 3 ) continuously generates images of the prescribed monitored area ( 10 ), said images being analyzed and evaluated by means of at least one image-processing method and/or image analysis method, and to a computer program product and data processing program. According to the invention, the generated images of the prescribed area ( 10 ) are analyzed and evaluated for detecting and locating persons ( 12, 14 ), wherein detected and located persons ( 12, 14 ) are classified and associated with at least one prescribed group, wherein the association with a group is performed depending on prescribed clothing features.

25-04-2013 publication date

CORONARY ARTERY MOTION MODELING

Number: US20130101187A1
Author: GAO Yang, Sundar Hari
Assignee: Siemens Corporation

A method for tracking coronary artery motion includes constructing a centerline model of a vascular structure in a base phase image in a sequence of 2D images of coronary arteries acquired over a cardiac phase; computing, for each pixel in a region-of-interest in each subsequent image, a velocity vector that represents a change in position between the subsequent image and the base phase image; calculating positions of control points in each phase using the velocity vectors; and applying PCA to a P×2N data matrix X constructed from position vectors (x, y) of N centerline control points for P phases to identify d eigenvectors corresponding to the largest eigenvalues of XᵀX to obtain a d-dimensional linear motion model α̂, in which a centerline model for a new image at phase p+1 is estimated by adding α̂ to each centerline control point of a previous frame at phase p.

2. The method of claim 1, wherein said centerline model is parametrically represented by a set of vessel segments connected by a set of control points, wherein each vessel segment is approximated by a 2D B-spline curve parameterized by chord length.

4. The method of claim 1, wherein constructing said centerline model comprises processing said base phase image with a Hessian-based vessel enhancement filter, and computing centerlines by numerically integrating a directional vector field obtained from the Hessian-based vessel enhancement filter.

5. The method of claim 4, further comprising receiving manual adjustments to the centerline model.

7. The method of claim 6, wherein the isotropy-compensated orientation tensor T is calculated by stacking all images of the sequence of 2D images onto each other to form a 3D image f, and defining T as T = AAᵀ + γbbᵀ − λI, wherein I is the identity matrix, λ is the smallest eigenvalue of T̃ = AAᵀ + γbbᵀ, and A and b are found by fitting the image intensity f(x ...
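The PCA step can be sketched as follows: stack the 2N control-point coordinates of each of the P phases into the rows of X and keep the top-d right singular vectors, which are the eigenvectors of XᵀX with the largest eigenvalues. Centering X first is standard PCA practice and is an assumption here; the excerpt does not say whether the patent centers the matrix.

```python
import numpy as np

def motion_pca(X, d):
    """PCA on a (P, 2N) matrix X of centerline control-point coordinates.
    Returns the top-d right singular vectors of the centered matrix
    (eigenvectors of X^T X) and the mean row."""
    Xc = X - X.mean(axis=0)                        # center over phases
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d], X.mean(axis=0)
```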

25-04-2013 publication date

ATTRIBUTE DETERMINING METHOD, ATTRIBUTE DETERMINING APPARATUS, PROGRAM, RECORDING MEDIUM, AND ATTRIBUTE DETERMINING SYSTEM

Number: US20130101224A1
Author: Ueki Kazuya
Assignee: NEC SOFT, LTD.

The present invention provides an attribute determining method, an attribute determining apparatus, a program, a recording medium, and an attribute determining system of high detection accuracy, with which an attribute of a person can be determined even when the person is not facing nearly front-on.

1. An attribute determining method comprising:
an image acquiring step of acquiring an image to be determined;
a head region detecting step of detecting a head region from the image to be determined; and
an attribute determining step of determining an attribute based on an image of the head.

2. The method according to claim 1, further comprising:
an alignment step of aligning the head region detected in the head region detecting step.

3. The method according to claim 1, wherein in the head region detecting step, the head region is detected from the image to be determined by referring to at least one of a head detection model acquired preliminarily and a head determination rule; and in the attribute determining step, the attribute is determined based on the image of the head by referring to at least one of an attribute determination model acquired preliminarily and an attribute determination rule.

4. The method according to claim 1, wherein the attribute determining step comprises:
a whole attribute determining step of determining the attribute based on the whole head region;
a partial detecting step of detecting a part of the head region;
a partial attribute determining step of determining the attribute based on the part of the head region; and
a combining step of combining determination results obtained in the whole attribute determining step and the partial attribute determining step.

5. The method according to claim 4, wherein in the partial attribute determining step, the attribute of the part of the head region is determined by referring to at least one of a partial attribute determination model acquired preliminarily and a partial attribute determination rule. ...

23-05-2013 publication date

Body scan

Number: US20130129169A1
Assignee: Microsoft Corp

A depth image of a scene may be received, observed, or captured by a device. The depth image may then be analyzed to determine whether the depth image includes a human target. For example, the depth image may include one or more targets including a human target and non-human targets. Each of the targets may be flood filled and compared to a pattern to determine whether the target may be a human target. If one or more of the targets in the depth image includes a human target, the human target may be scanned. A skeletal model of the human target may then be generated based on the scan.
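The flood-fill step can be sketched as a breadth-first fill over pixels whose depth stays close to the seed pixel's depth; each fill yields one candidate target that can then be compared against a pattern. The tolerance and the 4-connectivity here are illustrative choices, not values from the patent.

```python
from collections import deque
import numpy as np

def flood_fill(depth, seed, tol=50):
    """Return a boolean mask of the connected region around `seed` whose
    depth values stay within `tol` of the seed's depth."""
    h, w = depth.shape
    seen = np.zeros((h, w), dtype=bool)
    seen[seed] = True
    base = int(depth[seed])
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < h and 0 <= nc < w and not seen[nr, nc]
                    and abs(int(depth[nr, nc]) - base) <= tol):
                seen[nr, nc] = True
                q.append((nr, nc))
    return seen
```

Each filled region would then be matched against a body pattern to decide whether it is a human target before scanning it for a skeletal model.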

23-05-2013 publication date

IMAGE RECONSTRUCTION USING DATA ORDERING

Number: US20130129176A1
Assignee: KONINKLIJKE PHILIPS ELECTRONICS N.V.

Methods, systems and apparatuses for processing data associated with nuclear medical imaging techniques are provided. Data is ordered in LUTs and memory structures. Articles of manufacture are provided for causing computers to carry out aspects of the invention. Data elements are ordered into a plurality of ordered data groups according to a spatial index order, and fetched and processed in the spatial index order. The data elements include sensitivity matrix elements, PET annihilation event data, and system and image matrix elements, the data grouped in orders corresponding to their processing. In one aspect, geometric symmetry of a PET scanner FOV is used in ordering the data and processing. In one aspect, a system matrix LUT comprises a total number of system matrix elements equal to a total number of image matrix elements divided by a total number of possible third index values.

1. A system, comprising:
a memory containing an ordered LUT comprising a plurality of data elements ordered relative to a geometric index in a spatial index order; and
a processor;
wherein the processor is configured to fetch each of a plurality of data elements from the LUT and process each of the plurality of data elements in the spatial index order.

2. The system of claim 1, wherein the plurality of data elements is a plurality of system matrix elements;
wherein the processor is a reconstructor; and
wherein the reconstructor is configured to fetch each of a plurality of system matrix elements from the LUT and generate volumetric data responsive to data indicative of emission events in an object under examination by processing each of an ordered group of image matrix elements with the system matrix element in an image matrix update order, the image matrix update order common with an order of storage of the image matrix elements in the memory.

3. The system of wherein the LUT comprises a total number of system matrix elements, the total number of system matrix elements equal to a total ...

30-05-2013 publication date

APPARATUS AND METHOD FOR CONTROLLING PRESENTATION OF INFORMATION TOWARD HUMAN OBJECT

Number: US20130136304A1
Assignee: CANON KABUSHIKI KAISHA

A human object recognition unit recognizes a human object included in captured image data. A degree-of-interest estimation unit estimates a degree of interest of the human object in acquiring information, based on a recognition result obtained by the human object recognition unit. An information acquisition unit acquires information as a target to be presented to the human object. An information editing unit generates information to be presented to the human object from the information acquired by the information acquisition unit, based on the degree of interest estimated by the degree-of-interest estimation unit. An information display unit outputs the information generated by the information editing unit.

1. An information processing apparatus comprising:
a recognition unit configured to recognize a human object included in captured image data;
an estimation unit configured to estimate a degree of interest of the human object in acquiring information, based on a recognition result obtained by the recognition unit;
an acquisition unit configured to acquire information as a target to be presented to the human object;
a generation unit configured to generate information to be presented to the human object from the information acquired by the acquisition unit, based on the degree of interest estimated by the estimation unit; and
a control unit configured to cause an output unit to output the information generated by the generation unit.

2. The information processing apparatus according to claim 1, wherein the recognition unit is configured to recognize an orientation of the human object included in the image data.

3. The information processing apparatus according to claim 1, wherein the recognition unit is configured to recognize a facial expression of the human object included in the image data.

4.
The information processing apparatus according to claim 1 , wherein the recognition unit is configured to recognize at least one of a size of a pupil and a size of an eye of ...

30-05-2013 publication date

MEDICAL IMAGE PROCESSING APPARATUS, MEDICAL IMAGE PROCESSING METHOD, AND PROGRAM

Number: US20130136326A1
Assignee: CANON KABUSHIKI KAISHA

A medical image processing apparatus includes a unit configured to analyze a target medical image, a unit configured to register information representing an aptitude of each doctor with respect to interpretation of a specific lesion and a modality used by each doctor, and a unit configured to, when the analysis result includes information associated with a lesion, decide an assigned doctor based on information representing the aptitude of each doctor with respect to interpretation of the specific lesion, and, when the analysis result includes no information associated with a lesion, decide an assigned doctor based on the modality.

1.-8. (canceled)

9. A medical image processing apparatus comprising:
a processor; and
a memory storing instructions that, when executed by the processor, cause the apparatus to:
analyze a medical image obtained by imaging an object; and
decide a doctor to interpret the medical image of a plurality of doctors based on a result of the analysis,
wherein the instructions, when executed by the processor, further cause the apparatus to change information to be used for the decision in accordance with whether or not the result of the analysis indicates that the medical image includes a lesion.

10. A medical image processing apparatus comprising:
a processor; and
a memory storing instructions that, when executed by the processor, cause the apparatus to:
analyze a medical image obtained by imaging an object; and
decide a doctor to interpret the medical image of a plurality of doctors based on a result of the analysis.

11. The medical image processing apparatus according to claim 10, wherein the result of the analysis includes information indicating the presence or absence of the lesion and information indicating identification difficulty of the lesion.

12. The medical image processing apparatus according to claim 10, wherein the result of the analysis includes information indicating the presence or absence of the lesion and information ...
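The assignment rule in the abstract reads as a two-branch decision: if the analysis found a lesion, pick the doctor with the highest registered aptitude for that lesion type; otherwise fall back to the modality. A minimal sketch, where the dictionary shapes are invented for illustration:

```python
def assign_doctor(analysis, doctors):
    """analysis: {'lesion': str or None, 'modality': str}
    doctors: list of {'name': str,
                      'aptitude': {lesion_type: score},
                      'modalities': set of modality names}"""
    if analysis.get("lesion"):
        # Lesion present: rank readers by aptitude for this lesion type.
        return max(doctors,
                   key=lambda d: d["aptitude"].get(analysis["lesion"], 0))
    # No lesion information: assign by the modality the doctor reads.
    return next(d for d in doctors if analysis["modality"] in d["modalities"])
```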

20-06-2013 publication date

Method for Administering a Drug Program to Determine Whether an Animal Has Been Given a Drug

Number: US20130156272A1
Assignee: JBS USA LLC

Systems and methods are described that provide a fast and simple way of administering a drug program related to an animal. Specifically, systems are provided that can receive, compile and analyze information regarding the condition of an organ in a form that is readily readable, transferable to others, and associated with, or linked to, other information such as the presence or absence of an administered drug, combination of drugs, or drug program.

27-06-2013 publication date

APPARATUS FOR MANAGING MEDICAL IMAGE DATA USING REFERENCE COORDINATES AND METHOD THEREOF

Number: US20130163835A1
Author: Park Seung Chul
Assignee: INFINITT HEALTHCARE CO. LTD.

Disclosed herein is an apparatus and method for managing medical image data using reference coordinates. The medical image data management method includes receiving medical image data including a plurality of medical image slices, causing the received medical image data to correspond to a preset reference coordinate system for a human body, and generating relative coordinates corresponding to the reference coordinate system for at least part of the plurality of medical image slices. The medical image data management method further includes storing the generated relative coordinates and at least part of the plurality of medical image slices so that the relative coordinates match the at least part of the medical image slices. Accordingly, the present invention can easily manage the medical image data of an examinee and easily perform matching between the slices of different pieces of medical image data or the display of the slices.

1. A method of managing medical image data, comprising:
a) receiving, by a processor, medical image data including a plurality of medical image slices;
b) causing, by the processor, the received medical image data to correspond to a preset reference coordinate system for a human body; and
c) generating, by the processor, relative coordinates corresponding to the reference coordinate system for at least part of the plurality of medical image slices.

2. The method of claim 1, wherein a) is configured to receive unique information of an examinee together with the medical image data, and comprises:
selecting, by the processor, a human body reference coordinate model corresponding to the unique information of the examinee from among a plurality of pre-stored human body reference coordinate models, based on the unique information; and
comparing, by the processor, the selected human body reference coordinate model with the medical image data.
The method of claim 2 , wherein the human body reference coordinate models are generated in consideration of ...
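Generating a relative coordinate for a slice could, in the simplest case, amount to normalizing its scanner position between two landmark slices matched to the reference model. This is a simplifying assumption for illustration; the patent's reference coordinate system for the human body is richer than a single axis, and all names here are invented.

```python
def to_reference_coord(z, z_head, z_feet, ref_head=0.0, ref_feet=100.0):
    """Map a slice's scanner z-position to a body-relative coordinate,
    given the scanner positions of two anatomical landmark slices."""
    t = (z - z_head) / (z_feet - z_head)     # 0 at head landmark, 1 at feet
    return ref_head + t * (ref_feet - ref_head)
```

Two slices from different studies that map to the same reference coordinate could then be matched and displayed together, as the abstract describes.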

04-07-2013 publication date

METHOD AND APPARATUS FOR ESTIMATING ORGAN DEFORMATION MODEL AND MEDICAL IMAGE SYSTEM

Number: US20130170725A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

A method of estimating an organ deformation model includes generating at least one 3D organ shape model of an organ of a subject based on at least one non-real time medical image representing a deformation state of the organ of the subject; generating a deformation space for the organ of the subject based on the at least one 3D organ shape model and prior knowledge regarding the organ; and estimating a 3D organ deformation model of the organ of the subject based on a real-time medical image of the organ of the subject and the deformation space.

1. A method of estimating an organ deformation model, the method comprising:
generating at least one 3D organ shape model of an organ of a subject based on at least one non-real time medical image representing a deformation state of the organ of the subject;
generating a deformation space for the organ of the subject based on the at least one 3D organ shape model and prior knowledge regarding the organ; and
estimating a 3D organ deformation model of the organ of the subject based on a real-time medical image of the organ of the subject and the deformation space.

2. The method of claim 1, wherein the generating of the at least one 3D organ shape model comprises generating at least two 3D organ shape models all having a same topology.

3. The method of claim 2, wherein connections between vertexes constituting the at least two 3D organ shape models all having the same topology and edges connecting the vertexes are the same in each of the at least two 3D organ shape models all having the same topology.

4. The method of claim 1, wherein the generating of the deformation space comprises generating the deformation space by performing an interpolation operation on a deformation space defined based on the at least one 3D organ shape model using the prior knowledge as a limitation condition.

5.
The method of claim 1 , wherein the estimating of the 3D organ deformation model comprises estimating the 3D organ deformation model of the organ ...

11-07-2013 publication date

Exterior environment recognition device and exterior environment recognition method

Number: US20130177205A1
Author: Seisuke Kasaoki
Assignee: Fuji Jukogyo KK

There are provided an environment recognition device and an environment recognition method. An exterior environment recognition device obtains an image in a detection area, generates a block group by grouping, based on a first relative relationship between blocks, multiple blocks in an area extending from a plane corresponding to a road surface to a predetermined height in the obtained image, divides the block group into two in a horizontal direction of the image, and determines, based on a second relative relationship between two divided block groups, whether the block group is a first person candidate which is a candidate of a person.
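The division-and-comparison test can be illustrated by mirroring one half of the candidate block group onto the other and measuring agreement, on the intuition that a standing person is roughly left/right symmetric. The score and any threshold on it are invented here; the excerpt does not define the "second relative relationship".

```python
import numpy as np

def symmetry_score(block_mask):
    """Divide a candidate block group down the middle (horizontal image
    direction) and return the fraction of cells on which the left half
    agrees with the mirrored right half."""
    h, w = block_mask.shape
    left = block_mask[:, :w // 2]
    right = block_mask[:, w - w // 2:][:, ::-1]   # mirror the right half
    return (left == right).mean()
```

A block group scoring above some threshold would be kept as a first person candidate; one scoring low (e.g. a wall edge occupying only one half) would be rejected.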

11-07-2013 publication date

SYSTEMS, METHODS AND COMPUTER READABLE STORAGE MEDIA STORING INSTRUCTIONS FOR GENERATING AN IMAGE SERIES

Number: US20130177222A1
Assignee:

Systems, methods, and computer-readable storage media relate to generating an image series that includes a patient image and a medical image. The patient image and the medical image may be associated based on identification information.

1. A method for generating an image series, comprising:
receiving at least one patient image of a patient;
receiving identification information;
receiving at least one medical image of a patient;
associating the patient image and the medical image based on the identification information; and
generating an image series that includes the patient image and the medical image.

2. The method according to claim 1, wherein the identification information includes at least one of an identifier for a recording device that records the medical image, an identifier for a medical imaging device or system that generates the medical image, acquisition time, image information, image practitioner information, or patient information.

3. The method according to claim 1, wherein the identification information is included in a DICOM object.

4. The method according to claim 1, further comprising receiving a plurality of different medical images based on the identification information.

5. The method according to claim 4, wherein the plurality of different medical images have an acquisition time within a specific time period.

6. The method according to claim 4, further comprising: comparing the plurality of different medical images to the patient image.

7. The method according to claim 6, further comprising: receiving identification information for each of the plurality of medical images.

8. The method according to claim 7, further comprising determining the medical image that corresponds to the patient image based on the recording device identifier and acquisition time.

9. The method according to claim 1, further comprising confirming that the image series is properly associated with at least one of a medical record, ...

11-07-2013 publication date

FETUS MODELING METHOD AND IMAGE PROCESSING APPARATUS THEREFOR

Number: US20130177223A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

An image processing apparatus includes: an image receiver which receives a predetermined image obtained by photographing a fetus; and a controller which detects a head region and a torso region of the fetus from the predetermined image, and which models a shape of the fetus by using at least one of a first contoured shape corresponding to the detected head region, a second contoured shape corresponding to the detected torso region, a first axis that is the central axis of the detected head region, and a second axis that is the central axis of the detected torso region, to model the fetus so that biometric data of the fetus can be easily measured.

1. An image processing apparatus comprising:
an image receiver which receives a predetermined image obtained by photographing a fetus; and
a controller which detects a head region and a torso region of the fetus from the predetermined image, and which models a shape of the fetus by using at least one of a first contoured shape which corresponds to the detected head region, a second contoured shape which corresponds to the detected torso region, a first axis that is a central axis of the detected head region, and a second axis that is a central axis of the detected torso region.

2. The image processing apparatus of claim 1, wherein the controller comprises:
a region detector which detects the head region and the torso region of the fetus from the predetermined image; and
a fetus modeler which sets each of the first axis and the second axis and which models a shape of the fetus by using at least one of the first axis, the second axis, the first contoured shape, and the second contoured shape.

3. The image processing apparatus of claim 1, wherein the first contoured shape includes a circle, and the second contoured shape includes an oval.
The image processing apparatus of claim 2 , wherein the region detector acquires edge information from the predetermined image and detects the head region and the torso region of the ...
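The claims leave open how a region's central axis is computed; taking the dominant principal component of the region's pixel coordinates is one standard choice. A pure-Python sketch under that assumption (the `principal_axis` helper and the sample region are illustrative, not from the patent):

```python
import math

def principal_axis(points):
    """Estimate a region's central axis as the dominant eigenvector
    of the 2x2 covariance of its pixel coordinates."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    # Covariance matrix entries.
    sxx = sum((p[0] - cx) ** 2 for p in points) / n
    syy = sum((p[1] - cy) ** 2 for p in points) / n
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points) / n
    # Orientation of the dominant eigenvector of [[sxx, sxy], [sxy, syy]].
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return (cx, cy), theta  # axis passes through the centroid at angle theta

# A region stretched along the x-axis -> axis angle near 0.
region = [(x, 0.1 * (x % 2)) for x in range(20)]
(center, angle) = principal_axis(region)
```

The same axis-through-centroid construction would apply to either the head or the torso region once detected.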

Publication date: 18-07-2013

SYSTEM AND METHOD FOR BUILDING AUTOMATION USING VIDEO CONTENT ANALYSIS WITH DEPTH SENSING

Number: US20130182905A1
Assignee: OBJECTVIDEO, INC.

A method and system for monitoring buildings (including houses and office buildings) by performing video content analysis based on two-dimensional image data and depth data are disclosed. Occupation and use of such buildings may be monitored with higher accuracy to provide more energy-efficient usage, to control operation of components therein, and/or to provide better security. Height data may be obtained from depth data to provide greater reliability in object detection, object classification and/or event detection. 1. A method of monitoring a building comprising: taking a video within a location in the building with a video sensor, the video comprising a plurality of frames, each frame including image data; for each frame, receiving depth data associated with the image data, the depth data corresponding to one or more distances from the video sensor to features represented by the image data; analyzing the image data and depth data to detect and classify one or more objects depicted in the video, classification of the one or more objects comprising determining whether at least some of the one or more objects are people; counting a number of people based on the analyzing of the image data and the depth data; and controlling a system of the building in response to the number of people counted. 2. The method of claim 1, wherein controlling a system comprises at least one of turning on lights and turning off lights of the building. 3. The method of claim 1, wherein controlling a system comprises setting a thermostat temperature of a heating system or a cooling system of the building. 4. The method of claim 1, wherein determining whether at least some of the one or more objects are people comprises determining a height of the one or more objects. 5. The method of claim 5, further comprising providing an alert that at least one of a room or area is utilized when a number of people counted exceeds a predetermined value. 6.
The method of claim 5 , comprising providing an alert ...
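The control loop in claims 1-4 (classify objects as people via a height cue from depth, count them, switch a building system) can be sketched as follows; the 1.0 m height threshold and the dictionary field names are illustrative assumptions:

```python
def classify_people(objects, min_height_m=1.0):
    """Classify detected objects as people by height derived from depth
    data (height is one classification cue the claims mention)."""
    return [o for o in objects if o["height_m"] >= min_height_m]

def control_lights(people_count):
    """Turn the lights on while the area is occupied, off when empty."""
    return "lights_on" if people_count > 0 else "lights_off"

detections = [
    {"label": "obj1", "height_m": 1.7},  # likely a person
    {"label": "obj2", "height_m": 0.4},  # e.g. a pet or a bag
    {"label": "obj3", "height_m": 1.6},
]
people = classify_people(detections)
action = control_lights(len(people))
```

A thermostat setpoint (claim 3) or an over-occupancy alert (claim 5) would hang off the same `len(people)` count.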

Publication date: 18-07-2013

APPARATUS AND METHOD FOR ANALYZING BODY PART ASSOCIATION

Number: US20130182958A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

An apparatus and method for analyzing body part association. The apparatus and method may recognize at least one body part from a user image extracted from an observed image, select at least one candidate body part based on association of the at least one body part, and output a user pose skeleton related to the user image based on the selected at least one candidate body part. 1. A body part association analysis apparatus , comprising:an image extraction unit to extract a user image from an observed image;a body recognition unit to recognize at least one body part from the extracted user image;a body selection unit to select at least one candidate body part from the extracted user image based on the recognized at least one body part; anda pose determination unit to output a user pose skeleton related to the extracted user image based on the selected at least one candidate body part.2. The body part association analysis apparatus of claim 1 , wherein the pose determination unit comprises:a connection structure recognition unit to extract a body part recognition result based on a sequential connection structure of a human body using the selected at least one candidate body part; anda candidate bone generation unit to generate at least one candidate bone using the body part recognition result.3. The body part association analysis apparatus of claim 1 , wherein the user image comprises a difference image showing a difference between a background of the observed image and a current image.4. The body part association analysis apparatus of claim 2 , wherein the candidate bone generation unit generates the at least one candidate bone by defining at least one association group of correlated candidate body parts among the at least one candidate body part.5. The body part association analysis apparatus of claim 4 , wherein the candidate bone generation unit generates the candidate bone by connecting adjacent candidate body parts with respect to a body structure claim 4 , ...

Publication date: 01-08-2013

APPARATUS AND METHOD FOR ESTIMATING JOINT STRUCTURE OF HUMAN BODY

Number: US20130195330A1

Disclosed herein is an apparatus and method for estimating the joint structure of a human body. The apparatus includes a multi-view image acquisition unit for receiving multi-view images acquired by capturing a human body. A human body foreground separation unit extracts a foreground region corresponding to the human body from the acquired multi-view images. A human body shape restoration unit restores voxels indicating geometric space occupation information of the human body using the foreground region corresponding to the human body, thus generating voxel-based three-dimensional (3D) shape information of the human body. A skeleton information extraction unit generates 3D skeleton information from the generated voxel-based 3D shape information of the human body. A skeletal structure estimation unit estimates positions of respective joints from a skeletal structure of the human body using both the generated 3D skeleton information and anthropometric information. 1. An apparatus for estimating a joint structure of a human body, comprising: a multi-view image acquisition unit for receiving multi-view images acquired by capturing a human body; a human body foreground separation unit for extracting a foreground region corresponding to the human body from the acquired multi-view images; a human body shape restoration unit for restoring voxels indicating geometric space occupation information of the human body using the foreground region corresponding to the human body, thus generating voxel-based three-dimensional (3D) shape information of the human body; a skeleton information extraction unit for generating 3D skeleton information from the generated voxel-based 3D shape information of the human body; and a skeletal structure estimation unit for estimating positions of respective joints from a skeletal structure of the human body using both the generated 3D skeleton information and anthropometric information. 2. The apparatus of claim 1, wherein the skeleton information ...
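Claim 1 combines 3D skeleton information with anthropometric information to place joints; one simple reading is to position joints along the skeleton's vertical axis at fixed fractions of stature. The ratio table below is hypothetical, for illustration only:

```python
# Hypothetical anthropometric ratios: joint height as a fraction of stature.
# These numbers are illustrative, not taken from the patent.
RATIOS = {"ankle": 0.04, "knee": 0.28, "hip": 0.53, "shoulder": 0.82, "head_top": 1.00}

def estimate_joints(stature_m, ratios=RATIOS):
    """Place joints along the vertical skeleton axis using stature ratios."""
    return {name: round(stature_m * r, 3) for name, r in ratios.items()}

joints = estimate_joints(1.80)  # stature recovered from the voxel model
```

In the patented pipeline the stature and the axis itself would come from the voxel-based 3D skeleton rather than being given directly.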

Publication date: 08-08-2013

IMAGE PROCESSING APPARATUS AND METHOD

Number: US20130202175A1
Author: LEE KWANG-HEE
Assignee: SAMSUNG MEDISON CO., LTD.

An image processing apparatus and method. The image processing apparatus includes: a data acquisition device for acquiring image data of a subject including a target bone; and a data processor for acquiring binary image data by performing thresholding based on the image data, segmenting the binary image data into a plurality of segments by labeling, determining one of the plurality of segments as a target image based on image characteristics of the target bone, and measuring a length of the target bone based on the target image. 1. An image processing apparatus comprising: a data acquisition device for acquiring image data of a subject including a target bone; and a data processor for acquiring binary image data by performing thresholding based on the image data, segmenting the binary image data into a plurality of segments by labeling, determining one of the plurality of segments as a target image based on image characteristics of the target bone, and measuring a length of the target bone based on the target image. 2. The image processing apparatus of claim 1, wherein the image data is volume data, and the data processor analyzes shapes of the plurality of segments, acquires one or more remaining segments from the plurality of segments based on the analyzed shapes, and determines one of the one or more remaining segments as the target image based on luminance values. 3. The image processing apparatus of claim 2, wherein the data processor acquires a magnitude of a first principal component, a magnitude of a second principal component, and a magnitude of a third principal component for each of the plurality of segments by performing principal component analysis (PCA) on each of the plurality of segments to analyze the shapes of the plurality of segments and acquires the one or more remaining segments based on the magnitudes of the first to third principal components. 4. The image processing apparatus of claim 3, wherein the data processor ...
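The pipeline of claims 1-3 (threshold, label into segments, pick the target segment, measure the bone) can be sketched in pure Python. Selecting the largest bright segment and measuring length as the largest pixel-to-pixel distance are simplifying assumptions; the patent selects the target via shape (PCA) and luminance analysis:

```python
from collections import deque

def threshold(image, t):
    """Binarize: 1 where the intensity reaches the threshold."""
    return [[1 if v >= t else 0 for v in row] for row in image]

def label(binary):
    """4-connected component labeling via BFS; returns a list of
    segments, each a list of (row, col) pixels."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    segments = []
    for r in range(h):
        for c in range(w):
            if binary[r][c] and not seen[r][c]:
                comp, q = [], deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                segments.append(comp)
    return segments

def bone_length(segment):
    """Approximate length as the largest pixel-to-pixel distance."""
    return max(((y1 - y2) ** 2 + (x1 - x2) ** 2) ** 0.5
               for y1, x1 in segment for y2, x2 in segment)

image = [
    [0, 200, 210, 220, 230, 0],  # a bright elongated "bone"
    [0,   0,   0,   0,   0, 0],
    [0,  90,   0, 180,   0, 0],  # dim noise and a small bright blob
]
segs = label(threshold(image, 150))
target = max(segs, key=len)      # crude stand-in for the shape/luminance test
length = bone_length(target)
```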

Publication date: 22-08-2013

SYSTEM FOR UNIQUELY IDENTIFYING SUBJECTS FROM A TARGET POPULATION

Number: US20130216104A1
Author: Paul A. Skvorc, II

The system for uniquely identifying subjects from a target population operates to acquire, process and analyze images to create data which contains indicia sufficient to uniquely identify an individual in a population of interest. This system implements an automated, image-based process that captures data indicative of a selected set of external characteristics for subjects that are members of a target population of a predetermined species. 1. A system configured to uniquely identify individual subjects from a group of subjects, the system comprising: electronic memory for storing a plurality of records that correspond to a plurality of individual subjects from a plurality of species, wherein the records include features of one or more characteristics of the corresponding subjects, and wherein the plurality of records comprise: a first record that corresponds to a first subject, wherein the first subject is from a first species, and wherein the first record includes a first identifier that uniquely identifies the first subject from the other subjects, and features of a first external characteristic of the first subject that distinguish the first subject from other members of the first species; and a second record that corresponds to a second subject, wherein the second subject is from a second species that is different from the first species, and wherein the second record includes a second identifier that uniquely identifies the second subject from the other subjects, and features of a second external characteristic of the second subject that distinguish the second subject from other members of the second species, wherein the second external characteristic is different from the first external characteristic; and a processor for obtaining a digital image of a current subject that captures a characteristic of the current subject, determining a species of the current subject, and determining an identity of the current subject based on a comparison of
features ...
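A minimal sketch of the record structure and matching step of claim 1: records carry a species tag plus feature vectors of a species-specific external characteristic, and identification compares features only within the determined species. The field names, IDs, and the nearest-neighbour comparison are illustrative assumptions:

```python
def identify(registry, species, features):
    """Match a new subject against stored records of the same species
    by nearest feature vector (squared Euclidean distance)."""
    candidates = [r for r in registry if r["species"] == species]
    if not candidates:
        return None  # no records for this species yet
    best = min(candidates,
               key=lambda r: sum((a - b) ** 2 for a, b in zip(r["features"], features)))
    return best["id"]

registry = [
    {"id": "whale-001", "species": "humpback", "features": [0.9, 0.1, 0.4]},
    {"id": "whale-002", "species": "humpback", "features": [0.2, 0.8, 0.5]},
    {"id": "bear-001",  "species": "grizzly",  "features": [0.6, 0.6, 0.6]},
]
who = identify(registry, "humpback", [0.85, 0.15, 0.45])
```

Note how the characteristic compared differs per species, which is the point of the first/second record distinction in the claim.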

Publication date: 22-08-2013

FINGER IDENTIFICATION APPARATUS

Number: US20130216105A1
Assignee: Hitachi, Ltd.

An identification apparatus keeps the conditions for imaging uniform among successive identifications, and requires a user to perform only a series of simple maneuvers. The apparatus comprises a guide member, a light source, and an imaging unit. The guide member includes a pattern or structure for a user to position his/her finger thereon or to approach his/her specific finger region thereto. A contact member is located in the guide member where a fingertip is positioned. An optical opening is formed at a position coincident with where a finger to be imaged should be placed. The light source radiates near-infrared light through the portion of the finger to be imaged. The imaging means acquires an image of the finger, and the apparatus compares the image to previously registered images. The apparatus may also include dual light sources, power-saving functionality, and means for limiting the interference of external light sources. 1. A vessel pattern imaging apparatus comprising: a case having a guide member to set a finger, a light source irradiating light to the finger, an opening formed on the guide member through which the light passing through the finger is transmitted, an imaging unit imaging the light passed to the opening, wherein a part of the case has a projected form so that the upper part of the guide member is open, the light source is set at a slant upper position of the projected part of the case, and a surface of the projected part exposed outside prevents invasion of extraneous light. 2.
A vessel pattern imaging apparatus comprising: a case having a place to set a finger, a light source irradiating light to the finger, an opening formed on the place through which the light passing through the finger is transmitted, an imaging unit imaging the light passed through the opening, wherein a part of the case has a convex form so that the upper part of the guide member is open, the light source is set at a slant upper position of the convex part of the case, and wherein a surface of the convex part exposed outside prevents invasion of ...

Publication date: 29-08-2013

SYSTEMS AND METHODS FOR EVALUATING PHYSICAL PERFORMANCE

Number: US20130223707A1
Assignee: MOVEMENT TRAINING SYSTEMS LLC

Systems and methods are provided for evaluating and correcting physical performance of an activity by a human. A user performing one or more physical activities may be evaluated based on criteria relating to their movement, such as strength and technique. The user's performance in relation to these criteria is then rated, and the values for the criteria are combined to provide an overall performance score. The performance score is used to determine a user's overall readiness and ability to perform the physical activity which was evaluated or an overall ability to perform physical activities. Performance scores for more than one physical activity may be combined to provide an overall performance-ready score that captures the person's overall physical ability. Comparisons of performance scores over time may provide information as to whether a user is improving, and could be applied to evaluating physical rehabilitation from injuries. 1. A method of assessing performance readiness of a human, comprising: receiving at least one image of a user performing a physical activity; evaluating a technique of the user's performance and determining a technique score based on the evaluation; determining a strength score based on the user's measured strength during the physical activity; combining the technique score and the strength score to generate a performance ready score; and displaying the performance ready score on a display. 2. The method of claim 1, further comprising determining the technique score by calculating an angle of the human body and finding a difference between the calculated angle and a desired angle. 3. The method of claim 1, wherein the performance ready score is generated by averaging the technique score and the strength score. 4. The method of claim 3, wherein at least one of the technique score and the strength score is weighted before averaging. 5.
The method of claim 1 , further comprising comparing the performance ready score with a previously calculated ...
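Claims 3-4 generate the performance-ready score by averaging the technique and strength scores, optionally with weights. A direct sketch (the weight values are illustrative):

```python
def performance_ready_score(technique, strength, w_technique=0.5, w_strength=0.5):
    """Combine a technique score and a strength score into one
    performance-ready score via an (optionally weighted) average."""
    total = w_technique + w_strength
    return (technique * w_technique + strength * w_strength) / total

score = performance_ready_score(80, 60)               # plain average
weighted = performance_ready_score(80, 60, 0.7, 0.3)  # technique-weighted
```

Tracking `score` across sessions gives the over-time comparison the abstract mentions.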

Publication date: 05-09-2013

IDENTIFYING COMPONENTS OF A HUMANOID FORM IN THREE-DIMENSIONAL SCENES

Number: US20130230215A1
Assignee: PRIMESENSE LTD.

A method for processing data includes receiving a depth map of a scene containing a humanoid form. The depth map is processed so as to identify three-dimensional (3D) connected components in the scene, each connected component including a set of the pixels that are mutually adjacent and have mutually-adjacent depth values. Separate, first and second connected components are identified as both belonging to the humanoid form, and a representation of the humanoid form is generated including both of the first and second connected components. 1.-13. (canceled) 14. A method for processing data, comprising: receiving a depth map of a scene containing a humanoid form, the depth map comprising a matrix of pixels, at least some of which have respective pixel depth values and correspond to respective locations in the scene; using a digital processor, processing the depth map so as to identify three-dimensional (3D) connected components in the scene, each connected component comprising a set of the pixels that are mutually adjacent and have mutually-adjacent depth values; identifying separate, first and second connected components as both belonging to the humanoid form; and generating a representation of the humanoid form comprising both of the first and second connected components. 15. The method according to claim 14, wherein processing the depth map comprises constructing a background model of the scene based on the depth map, removing the background model from the depth map in order to generate a foreground map, and identifying the 3D connected components in the foreground map. 16. The method according to claim 14, wherein processing the depth map comprises locating edges in the depth map and blocks of pixels between the edges, and clustering adjacent blocks of the pixels in three dimensions in order to identify the 3D connected components. 17.
The method according to claim 14 , wherein receiving the depth map comprises receiving a temporal sequence ...
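Claim 14's notion of a 3D connected component (pixels that are mutually adjacent and have mutually-adjacent depth values) can be sketched as a flood fill whose adjacency test also bounds the depth difference; the 0.05 m step and the toy depth map are illustrative assumptions:

```python
from collections import deque

def depth_components(depth, max_step=0.05):
    """Group pixels into 3D connected components: neighbors belong
    together only if their depth values are also mutually adjacent
    (difference at most max_step). None marks pixels with no depth."""
    h, w = len(depth), len(depth[0])
    labels = [[None] * w for _ in range(h)]
    next_label = 0
    for r in range(h):
        for c in range(w):
            if depth[r][c] is None or labels[r][c] is not None:
                continue
            labels[r][c] = next_label
            q = deque([(r, c)])
            while q:
                y, x = q.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and depth[ny][nx] is not None
                            and labels[ny][nx] is None
                            and abs(depth[ny][nx] - depth[y][x]) <= max_step):
                        labels[ny][nx] = next_label
                        q.append((ny, nx))
            next_label += 1
    return labels, next_label

# A torso at ~2.0 m next to a wall at ~3.0 m: the depth jump splits them
# into two components even though the pixels touch in the image plane.
depth_map = [
    [2.00, 2.02, 3.00],
    [2.01, 2.03, 3.01],
]
labels, count = depth_components(depth_map)
```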

Publication date: 19-09-2013

GUIDANCE SYSTEM, DETECTION DEVICE, AND POSITION ASSESSMENT DEVICE

Number: US20130242074A1
Assignee: NIKON CORPORATION

To appropriately guide a subject person while protecting the privacy of the subject person despite use of a captured image of the subject person, a guidance system includes: an image capturing unit capable of capturing an image containing a subject person from a first direction; a detection unit that detects a size of an image of the subject person from the image captured by the image capturing unit; and a guidance unit that provides guidance for the subject person based on a detection result of the detection unit. 1. A guidance system comprising: an image capture capable of capturing an image containing a subject person from a first direction; a first detector that detects a size of an image of the subject person from the image captured by the image capture; and a guidance that guides the subject person based on a detection result of the first detector. 2. The guidance system according to claim 1, wherein the first detector detects positional information of the subject person from the image captured by the image capture, and the guidance system further comprises a first determiner that determines whether the subject person moves according to the guidance based on the positional information of the subject person. 3. The guidance system according to claim 1, wherein the first detector detects positional information of the subject person from the image captured by the image capture; the guidance calculates a distance from a reference position based on the positional information of the subject person detected by the first detector, and guides the subject person based on the distance. 4. The guidance system according to claim 1, wherein the first detector detects a size of an image corresponding to a head of the subject person. 5.
The guidance system according to claim 4, wherein the guidance holds a height of the subject person, a size of the head of the subject person, and personal identification information as data, acquires postural information of the subject person based on ...

Publication date: 19-09-2013

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, PROGRAM, AND ELECTRONIC DEVICE

Number: US20130242075A1
Assignee: SONY CORPORATION

An image processing device for detecting a skin region representing a skin of a subject from a pickup image obtained by imaging said subject, the image processing device includes: a first irradiating section; a second irradiating section; an image pickup section; an adjusting section; and a skin detecting section. 1. An image processing device for detecting a skin region representing a skin of a subject from a pickup image obtained by imaging the subject, the image processing device comprising: a first irradiating device configured to irradiate the subject with light of a first wavelength; a second irradiating device configured to irradiate the subject with light of a second wavelength different from the first wavelength; an image pickup section configured to capture a first pickup image obtained by imaging the subject when the subject is irradiated with light of the first wavelength, and capture a second pickup image obtained by imaging the subject when the subject is irradiated with light of the second wavelength; a calculating section configured to determine a difference image by determining a difference between luminance values of corresponding pixels of the first pickup image and the second pickup image; an adjusting section configured to adjust the intensity of irradiation light of the first wavelength from the first irradiating means for the first pickup image and adjust the intensity of irradiation light of the second wavelength from the second irradiating means for the second pickup image so that pixel values of a skin region in the difference image are at least greater than a skin detection enabling value; and a skin detecting section configured to detect the skin region in at least one of the first pickup image and the second pickup image on a basis of the adjusted first pickup image and the adjusted second pickup image. 2. The image processing device according to claim 1, further comprising a binarizing section configured to binarize the difference ...
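The calculating and skin-detecting sections of claim 1 reduce to a per-pixel luminance difference followed by a threshold at the skin-detection-enabling value. The wavelengths, sample pixel values, and threshold below are illustrative assumptions:

```python
def difference_image(img_a, img_b):
    """Per-pixel luminance difference between the two wavelength captures."""
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(img_a, img_b)]

def skin_mask(diff, enable_value=10):
    """Binarize: skin reflects noticeably more at the first wavelength,
    so its difference values exceed the detection-enabling value."""
    return [[1 if d > enable_value else 0 for d in row] for row in diff]

# Toy 2x3 captures: skin pixels bright at the first wavelength,
# background nearly identical at both wavelengths.
img_first_wavelength  = [[120, 118, 40], [119, 121, 42]]
img_second_wavelength = [[ 95,  94, 39], [ 96,  95, 41]]
mask = skin_mask(difference_image(img_first_wavelength, img_second_wavelength))
```

The adjusting section of the claim corresponds to tuning the illumination so that the skin differences reliably clear `enable_value`.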

Publication date: 19-09-2013

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM

Number: US20130243281A1
Assignee: SONY CORPORATION

There is provided an image processing device including a body hair detection unit that detects a body hair region corresponding to body hair from a process target image that includes skin, a texture structure estimation unit that estimates a structure of skin texture in the process target image, and an interpolation unit that interpolates the body hair region detected by the body hair detection unit based on the structure of the skin texture estimated by the texture structure estimation unit. 1. An image processing device comprising:a body hair detection unit that detects a body hair region corresponding to body hair from a process target image that includes skin;a texture structure estimation unit that estimates a structure of skin texture in the process target image; andan interpolation unit that interpolates the body hair region detected by the body hair detection unit based on the structure of the skin texture estimated by the texture structure estimation unit.2. The image processing device according to claim 1 , wherein the body hair detection unit generates color distribution information of the process target image claim 1 , decides a first threshold value for distinguishing a body hair pixel from a skin pixel based on the color distribution information claim 1 , compares each pixel value of the process target image to the first threshold value claim 1 , and performs detection of the body hair region.3. 
The image processing device according to claim 2 , wherein the body hair detection unit decides a color conversion coefficient that makes color separation of the body hair region from another region easy claim 2 , performs color conversion of the process target image using the color conversion coefficient claim 2 , generates the color distribution information from the process target image that has undergone color conversion claim 2 , and compares each pixel value of the process target image that has undergone the color conversion to the first threshold value.4. ...
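Claim 2 derives the first threshold from the color distribution of the process target image without fixing the method; Otsu's between-class-variance criterion is one standard way to pick such a threshold (the patent does not name it):

```python
def otsu_threshold(values, levels=256):
    """Pick the threshold maximizing between-class variance of the
    value histogram (Otsu's method)."""
    hist = [0] * levels
    for v in values:
        hist[v] += 1
    total = len(values)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0                      # mean of the dark class
        mu1 = (total_sum - sum0) / w1        # mean of the bright class
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Dark body-hair pixels around 30, brighter skin pixels around 200.
pixels = [28, 30, 32, 29, 31] * 4 + [198, 200, 202, 199, 201] * 6
t = otsu_threshold(pixels)
hair = [p for p in pixels if p <= t]  # pixels classified as body hair
```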

Publication date: 19-09-2013

Systems and Methods for Using Curvatures to Analyze Facial and Body Features

Number: US20130243338A1
Author: Francis R. Palmer
Assignee: FRANCIS R. PALMER II MD INC.

Systems and methods of providing an attractiveness analysis are disclosed. In some embodiments, an electronic analysis platform is configured to obtain image data and curvature data to provide an attractiveness analysis to a user via a physical interface. Curvature data could comprise any data indicative of a curvature of a physical feature or a depiction thereof, including shadow data and pixilation data. 1. A system for providing an attractiveness score, comprising: a first physical user interface functionally coupled to an electronic analysis platform; wherein the analysis platform is configured to receive image data comprising a depiction of a first physical feature of a person; obtain a first curvature data of the first physical feature from the image data; and provide to the user, through a second user interface, an attractiveness score based at least in part on the curvature data. 2. The system of claim 1, wherein the curvature data is measured using a shadow data. 3. The system of claim 2, wherein the shadow data comprises a degree of shadowing. 4. The system of claim 1, wherein the first physical feature is at least one of an eyebrow, a pair of eyebrows, an eye, a pair of eyes, a nose, a mouth, a lip, a cheek, a forehead, an ear, and a pair of ears. 5. The system of claim 1, wherein the first physical feature is at least one of a waist, a chest, a pair of arms, a pair of shoulders, a back, a buttocks, and a pair of legs. 6. The system of claim 1, wherein the first user interface is different from the second user interface. 7.
The system of claim 1 , wherein the first analysis platform is further configured to receive image data comprising a depiction of a second physical feature of the person claim 1 , and wherein the analysis platform is further configured to obtain a second curvature data of the second ...
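Curvature of a depicted feature can be estimated from sampled contour points; the circumcircle through three points gives a discrete curvature κ = 4·Area/(abc). This estimator is an illustrative assumption, since the patent measures curvature via shadow or pixilation data rather than prescribing a formula:

```python
import math

def curvature(p1, p2, p3):
    """Curvature (1/R) of the circle through three contour points;
    returns 0.0 for collinear points."""
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    # Twice the signed triangle area via the cross product.
    area2 = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    if area2 == 0:
        return 0.0
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    return abs(2 * area2) / (a * b * c)  # kappa = 4 * Area / (a * b * c)

k_unit = curvature((1, 0), (0, 1), (-1, 0))  # points on a unit circle
k_flat = curvature((0, 0), (1, 0), (2, 0))   # collinear -> zero curvature
```

Sampling triples along an eyebrow or jawline contour would yield the per-feature curvature data the score is based on.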

Publication date: 19-09-2013

METHOD AND DEVICE FOR PEOPLE GROUP DETECTION

Number: US20130243343A1
Assignee: NEC (China) Co., Ltd.

A method and a device for people group detection, in the field of image processing, include: acquiring at least one corner and foreground region in video data; computing, according to said corner and foreground region, to obtain at least one cluster; and constructing at least one region of people group according to said cluster. The device includes an acquisition module, a clustering module, and a construction module. Automatic people group detection is realized that is independent of the location detection of individual people and can be applied to any scene, including complicated ones, so that user demands can be better satisfied. 1. A method for people group detection, comprising: acquiring at least one corner and foreground region in video data; computing, according to said corner and foreground region, to obtain at least one cluster; and constructing at least one region of people group according to said cluster. 2. The method according to claim 1, wherein said computing according to said corner and foreground region to obtain at least one cluster comprises: acquiring an intersection of pixels in said corner and said foreground region to obtain a corner set; and performing a clustering operation on said corner set by using a clustering algorithm to obtain the at least one cluster. 3. The method according to claim 1, wherein constructing at least one region of people group according to said cluster comprises: constructing a region of people group for each obtained cluster by taking a cluster center as a center, and said region is a region containing the corners in said cluster. 4. The method according to claim 3, wherein the region of people group constructed for each cluster is specifically a minimal region containing all the corners in said cluster, said corner being a pixel with local structural characteristics in an image. 5. The method according to claim 1, wherein after constructing at least one region of people group according to said cluster, said ...
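Claims 2-4 describe intersecting corners with the foreground, clustering the resulting corner set, and taking the minimal region containing each cluster's corners. A sketch using simple single-linkage distance clustering (the 3-pixel radius, the toy mask, and the clustering choice are illustrative assumptions; the patent only requires "a clustering algorithm"):

```python
def cluster_corners(corners, foreground, radius=3.0):
    """Keep corners inside the foreground mask, then single-linkage
    cluster them: corners closer than `radius` share a cluster."""
    pts = [p for p in corners if p in foreground]
    clusters = []
    for p in pts:
        # Find existing clusters this corner links to, and merge them.
        merged = [c for c in clusters
                  if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= radius ** 2
                         for q in c)]
        for c in merged:
            clusters.remove(c)
        clusters.append(sum(merged, []) + [p])
    return clusters

def group_region(cluster):
    """Minimal axis-aligned region containing all corners in the cluster."""
    xs = [x for x, _ in cluster]
    ys = [y for _, y in cluster]
    return (min(xs), min(ys), max(xs), max(ys))

foreground = {(0, 0), (1, 1), (2, 0), (20, 20), (21, 21), (50, 5)}
corners = [(0, 0), (1, 1), (2, 0), (20, 20), (21, 21), (50, 5), (99, 99)]
clusters = cluster_corners(corners, foreground)  # (99, 99) is background
regions = [group_region(c) for c in clusters]
```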

Publication date: 26-09-2013

PERSON DETECTION DEVICE AND PERSON DETECTION METHOD

Number: US20130251203A1
Assignee: Panasonic Corporation

Provided is a person detection device with which it is possible to estimate a state of a part of a person from an image. A person detection device comprises: an evaluation unit which acquires a prescribed outline of a person from an evaluation image; and a shoulder position calculation unit and an orientation estimation unit which estimate a state of a prescribed part of a person which is included in the evaluation image from the prescribed outline of the person which is acquired from the evaluation image, on the basis of an estimation model which denotes a relation between the prescribed outline and the state of the prescribed part of the person. 1.-10. (canceled) 11. A person detection apparatus, comprising: an evaluation section that acquires an omega shape as an entire outline of a head and shoulders of a person from an evaluation image; and an estimation section that estimates, on a basis of an estimation model that shows a relationship between the predetermined outline of a person and a state of a predetermined part of the person, a state of the predetermined part of a person included in the evaluation image based on the predetermined outline acquired from the evaluation image, wherein: the omega shape is defined by a positional relationship between a plurality of points forming the omega shape; and the plurality of points include at least a point at an end portion of each of a left shoulder and a right shoulder and a point on each of a left side and a right side of a neck. 12. The person detection apparatus according to claim 11, wherein the positional relationship between the plurality of points is represented based on a distance or an angle with respect to a reference point that is set by a user. 13.
The person detection apparatus according to claim 12 , further comprising:a feature generation section that acquires the omega shape of a person included in a sample image and a position of the shoulders of the person; andthe estimation model generation ...

Publication date: 24-10-2013

SYSTEM AND METHOD FOR VEHICLE OCCUPANCY DETECTION USING SMART ILLUMINATION

Number: US20130278768A1
Author: Abu Islam, Peter Paul
Assignee: XEROX CORPORATION

A multi-view imaging system for Vehicle Occupancy Detection (VOD) including a gantry-mounted camera and illuminator to view the front seat of vehicles, and a roadside-mounted camera and illuminator to view the rear seat of vehicles. The system controls the illuminator units to preserve/maximize bulb life, thus reducing the service cost of the system. In one embodiment, a target vehicle's license plate is read. If the vehicle is on a pre-approved list to use the HOV lane, then no further interrogation of the vehicle is performed. If the vehicle is not on the pre-approved list, then the front seats are interrogated by a camera and illuminator located on an overhead gantry as the vehicle continues down the highway. If the front seat analysis indicates that the passenger seat is not occupied, then the system interrogates the rear seats using a separate camera and illuminator located on the roadside. 1. A system for vehicle occupancy detection comprising: at least one vehicle identification scanner for determining the identification of a vehicle; at least one imaging unit for capturing image data used to determine whether a passenger is present in the vehicle; and a central processing unit in communication with the at least one vehicle identification scanner and the at least one imaging unit; wherein the at least one vehicle identification scanner is adapted to transmit information relating to a scanned vehicle to the central processing unit; wherein the central processing unit is configured to compare the information received from the vehicle identification scanner to a list of pre-approved vehicles and prevent operation of the at least one imaging unit if a match is found and, if no match is found, enables the trigger of the at least one imaging unit to capture image data pertaining to passenger areas of the vehicle to be used to determine whether a passenger is present in the vehicle. 2.
A system as set forth in claim 1 , wherein the at least one vehicle identification ...
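The staged interrogation described above (plate check, then gantry front-seat imaging, then roadside rear-seat imaging) can be sketched as simple control flow; the function name, return format, and predicates are illustrative, not from the patent:

```python
# Sketch of the staged HOV interrogation logic. All names are hypothetical.

def interrogate_vehicle(plate, preapproved, front_seat_occupied, rear_seat_occupied):
    """Return which imaging stages fire and the final occupancy verdict."""
    if plate in preapproved:
        # Pre-approved vehicles are never imaged; illuminator bulb life is preserved.
        return {"front_imaged": False, "rear_imaged": False, "violation": False}
    if front_seat_occupied():
        # Front-seat passenger found: no need for roadside (rear) imaging.
        return {"front_imaged": True, "rear_imaged": False, "violation": False}
    # Front seat empty: fall back to the roadside camera for the rear seats.
    return {"front_imaged": True, "rear_imaged": True,
            "violation": not rear_seat_occupied()}

result = interrogate_vehicle("ABC123", {"XYZ999"},
                             front_seat_occupied=lambda: False,
                             rear_seat_occupied=lambda: True)
```

Here a vehicle not on the list, with an empty front seat but an occupied rear seat, triggers both imaging stages and no violation.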

31-10-2013 publication date

BIOMETRIC AUTHENTICATION DEVICE, BIOMETRIC AUTHENTICATION METHOD, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM

Number: US20130287268A1
Assignee:

A biometric authentication device includes: a storage unit configured to store a three-dimensional shape of a posture of a body of a user; a three-dimensional shape calculation unit configured to calculate a three-dimensional shape of a body from biometric information of the user detected by a biometric sensor; a posture calculation unit configured to calculate a posture of the body from the biometric information detected by the biometric sensor; a synthesis unit configured to synthesize a three-dimensional shape from the three-dimensional shape stored in the storage unit in accordance with the posture calculated by the posture calculation unit; and a comparison unit configured to compare the three-dimensional shape calculated by the three-dimensional shape calculation unit with the three-dimensional shape synthesized by the synthesis unit. 1. A biometric authentication device comprising:a storage unit configured to store a three-dimensional shape of a posture of a body of a user;a three-dimensional shape calculation unit configured to calculate a three-dimensional shape of a body from biometric information of the user detected by a biometric sensor;a posture calculation unit configured to calculate a posture of the body from the biometric information detected by the biometric sensor;a synthesis unit configured to synthesize a three-dimensional shape from the three-dimensional shape stored in the storage unit in accordance with the posture calculated by the posture calculation unit; anda comparison unit configured to compare the three-dimensional shape calculated by the three-dimensional shape calculation unit with the three-dimensional shape synthesized by the synthesis unit.2. 
The biometric authentication device as claimed in claim 1 , wherein:the storage unit is configured to store a difference between a three-dimensional structure that is a specific three-dimensional shape obtained from the three-dimensional shape of the posture and the three-dimensional shape ...

14-11-2013 publication date

APPARATUS AND METHOD FOR DETECTING BODY PARTS

Number: US20130301911A1
Assignee: Samsung Electronics Co., Ltd

Provided is an apparatus and method for detecting body parts, the method including identifying a group of sub-images relevant to a body part in an image to be detected, assigning a reliability coefficient for the body part to the sub-images in the group of sub-images based on a basic vision feature of the sub-images and an extension feature of the sub-images to neighboring regions, and detecting a location of the body part by overlaying sub-images having reliability coefficients higher than a threshold value. 1. A body part detecting method comprising:identifying a group of sub-images relevant to a body part in an image to be detected;assigning, by way of a processor, a reliability coefficient for the body part to one or more of the sub-images in the group of sub-images based on a basic vision feature of the sub-images and an extension feature of the sub-images to neighboring regions; anddetecting a location of the body part by overlaying sub-images having reliability coefficients higher than a threshold value.2. The method of claim 1 , wherein the assigning of the reliability coefficient for the body part to the sub-images belonging to the group of sub-images comprises:training a sample image to acquire a multi-part context descriptor for the body part;defining a multi-part context descriptor for the sub-images, the multi-part context descriptor including a basic descriptor corresponding to the basic vision feature and an extension descriptor corresponding to the extension feature; andassigning the reliability coefficient to the sub-images based on a similarity between the multi-part context descriptor for the sub-images and the trained multi-part context descriptor for the body part.3. 
The method of claim 2 , wherein the basic descriptor describes the basic vision feature of the body part in the sub-images and the extension descriptor describes a spatial structural relationship between the body part in the sub-images and the neighboring regions of the body part.4. ...
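The overlay step described above — accumulating only sub-images whose reliability coefficient exceeds the threshold, then taking the best-supported location — might look like the following sketch; the grid size, score values, and voting scheme are assumptions:

```python
# Minimal sketch of "overlay sub-images above a threshold" for body-part
# detection. Sub-image positions and reliabilities are invented.
import numpy as np

def detect_body_part(shape, sub_images, threshold):
    """sub_images: list of (row, col, h, w, reliability). Returns peak cell."""
    votes = np.zeros(shape)
    for r, c, h, w, rel in sub_images:
        if rel > threshold:              # keep only confident sub-images
            votes[r:r+h, c:c+w] += rel   # overlay (accumulate) their support
    return np.unravel_index(np.argmax(votes), votes.shape)

loc = detect_body_part((8, 8),
                       [(1, 1, 3, 3, 0.9), (2, 2, 3, 3, 0.8), (5, 5, 2, 2, 0.4)],
                       threshold=0.5)
```

The two confident sub-images overlap, so the detected location falls in their overlap region; the low-reliability sub-image is discarded.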

14-11-2013 publication date

IMAGE PROCESSING DEVICE, DISPLAY CONTROL METHOD AND PROGRAM

Number: US20130301925A1
Assignee: SONY CORPORATION

There is provided an image processing device including an input image acquisition portion that acquires an input image, a past image acquisition portion that acquires a past image of a photographic subject in the input image, a mode selection portion that selects one of modes, using the input image, from among a plurality of modes including a first mode in which the photographic subject in the past image is overlapped with the photographic subject in the input image and a second mode in which the photographic subject in the past image is arranged side by side with the photographic subject in the input image, and a display control portion that superimposes the past image on the input image in accordance with the mode selected by the mode selection portion. 1. An image processing device comprising:an input image acquisition portion that acquires an input image;a past image acquisition portion that acquires a past image of a photographic subject in the input image;a mode selection portion that selects one of modes, using the input image, from among a plurality of modes including a first mode in which the photographic subject in the past image is overlapped with the photographic subject in the input image and a second mode in which the photographic subject in the past image is arranged side by side with the photographic subject in the input image; anda display control portion that superimposes the past image on the input image in accordance with the mode selected by the mode selection portion.2. The image processing device according to claim 1 ,wherein the mode selection portion selects one of the modes in response to a motion of the photographic subject in the input image.3. The image processing device according to claim 2 ,wherein the mode selection portion selects the first mode when a motion is detected in which the photographic subject moves closer to the past image that is being displayed in accordance with a mode different from the first mode.4. The image ...

28-11-2013 publication date

System And Process For Detecting, Tracking And Counting Human Objects of Interest

Number: US20130314505A1
Assignee: ShopperTrak RCT LLC

A system is disclosed that includes: at least one image capturing device at the entrance to obtain images; a reader device; and a processor for extracting objects of interest from the images and generating tracks for each object of interest, and for matching objects of interest with objects associated with RFID tags, and for counting the number of objects of interest associated with, and not associated with, particular RFID tags.

12-12-2013 publication date

APPARATUS AND METHODS FOR MASKING A PORTION OF A MOVING IMAGE STREAM

Number: US20130329030A1
Assignee: SYNC-RX, LTD.

Apparatus and methods are described for imaging a portion of a body of a subject that undergoes a motion cycle, including acquiring a plurality of image frames of the portion of the subject's body. A given feature is identified in at least some of the image frames. At least some image frames are image tracked with respect to the feature, and the image frames that have been image tracked with respect to the given feature are displayed as a stream of image frames. Visibility of a periphery of the displayed stream of image frames is at least partially reduced, by applying a mask to the displayed stream of image frames. Other applications are also described. 1-16. (canceled) 17. A method for imaging a portion of a body of a subject that undergoes a motion cycle , the method comprising:acquiring a plurality of image frames of the portion of the subject's body;identifying a given feature in at least some of the image frames;image tracking the at least some image frames with respect to the feature, by aligning the at least some image frames with respect to the feature;displaying, as a stream of image frames, the image frames that have been image tracked with respect to the first given feature; andat least partially reducing visibility of a periphery of the displayed stream of image frames.18. The method according to claim 17 , wherein identifying the feature comprises identifying an anatomical feature of the subject.19. The method according to claim 17 , wherein reducing the visibility of the periphery of the displayed stream of image frames comprises applying a mask to the displayed stream of image frames.20. The method according to claim 17 , wherein displaying the stream of image frames comprises skipping at least one image frame in which the given feature was not identified.21. The method according to claim 20 , wherein displaying the stream of image frames comprises blending into each other image frames that are adjacent to the skipped image frame.22. The method ...
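The claimed mask that reduces visibility of the periphery could, for illustration, be a radial binary mask applied to each displayed frame; the circular shape and hard cutoff are assumptions, not the patent's method:

```python
# Sketch of masking the periphery of a displayed frame: pixels outside a
# central radius are zeroed out. The radius is an illustrative parameter.
import numpy as np

def mask_periphery(frame, keep_radius):
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Distance of every pixel from the frame centre.
    r = np.hypot(yy - (h - 1) / 2, xx - (w - 1) / 2)
    mask = (r <= keep_radius).astype(frame.dtype)  # 1 inside, 0 outside
    return frame * mask

frame = np.ones((5, 5))
masked = mask_periphery(frame, keep_radius=2.0)
```

On a 5x5 frame of ones, the corners are suppressed while the central disc survives.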

12-12-2013 publication date

PERSON TRACKING DEVICE, PERSON TRACKING METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PERSON TRACKING PROGRAM

Number: US20130329958A1
Assignee: NEC Corporation

A person region information extraction unit detects a person region where a person appearing in a video belongs, and generates person region information describing information of the person region. An accompanying person determination unit identifies at least one accompanying person accompanying a tracking target person among persons included in the person region information based on the person region information and information specifying a tracking target person, and generates accompanying person information describing the accompanying person. A distinctive person selection unit selects a distinctive person having a salient feature using the person region information among the accompanying person specified by the accompanying person information, and generates distinctive person information describing the distinctive person. A person tracking unit calculates a tracking result for the distinctive person based on the person region information and the distinctive person information. 1.
A person tracking device comprising:a person region information extraction unit that detects a person region where a person appearing in a video belongs, and generates person region information describing information of the person region;an accompanying person determination unit that identifies at least one accompanying person accompanying a tracking target person among persons included in the person region information based on the person region information and information specifying a tracking target person, and generates accompanying person information describing the accompanying person;a distinctive person selection unit that selects a distinctive person having a salient feature using the person region information among the accompanying person specified by the accompanying person information, and generates distinctive person information describing the distinctive person; anda person tracking unit that calculates a distinctive person tracking result being a tracking result ...

12-12-2013 publication date

OBJECT DETECTION DEVICE

Number: US20130329959A1
Author: NOSAKA Kenichiro
Assignee: Panasonic Corporation

An object detection device includes an acquisition unit configured to acquire information indicating a temperature distribution, a storage unit configured to store background information indicating a temperature distribution when no target object exists, a detection unit configured to detect existence or absence of a target object, and an update unit configured to repeatedly update the background information. The update unit performs, with respect to a non-detection region, a first background updating process for the update of the background information based on the acquired information and performs, with respect to a detection region, a second background updating process for the update of the background information using a correction value. 1. An object detection device , comprising:an acquisition unit configured to acquire information indicating a temperature distribution within a detection range;a storage unit configured to store background information indicating a temperature distribution within the detection range when no target object exists in the detection range;a detection unit configured to detect existence or absence of a target object in the detection range, based on a change of the acquired information with respect to the background information; andan update unit configured to repeatedly update the background information stored in the storage unit,wherein the update unit divides the background information into a detection region including a region where the target object is detected by the detection unit and a non-detection region including a region other than the detection region, andthe update unit performs, with respect to the non-detection region, a first background updating process for the update of the background information based on the acquired information and performs, with respect to the detection region, a second background updating process for the update of the background information using a correction value found from a variation of a ...
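A minimal sketch of the two background-updating processes described above, assuming a blending rule for the non-detection region and a mean-drift correction value for the detection region (both concrete rules are illustrative assumptions):

```python
# Illustrative region-dependent background update for a thermal detector.
import numpy as np

def update_background(background, frame, detected, alpha=0.5):
    bg = background.copy()
    non_det = ~detected
    # First process: blend the acquired temperatures into the background.
    bg[non_det] = (1 - alpha) * bg[non_det] + alpha * frame[non_det]
    # Second process: shift the detection region by a correction value
    # derived from the non-detection region's drift (assumption).
    correction = np.mean(frame[non_det] - background[non_det])
    bg[detected] = bg[detected] + correction
    return bg

bg0 = np.zeros((2, 2))
frame = np.array([[2.0, 2.0], [2.0, 30.0]])    # 30 = a warm target object
detected = np.array([[False, False], [False, True]])
bg1 = update_background(bg0, frame, detected)
```

The warm cell is never blended into the background (which would erase the target); it only receives the ambient drift estimated from the rest of the scene.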

12-12-2013 publication date

IMAGE-PROCESSING DEVICE AND IMAGE-PROCESSING PROGRAM

Number: US20130329964A1
Author: Nishi Takeshi
Assignee: NIKON CORPORATION

There are provided a face detection unit that detects a face of an animal in an image; a candidate area setting unit that sets an animal body candidate area for a body of the animal in the image based upon face detection results provided by the face detection unit; a reference image acquisition unit that obtains a reference image; a similarity calculation unit that divides the animal body candidate area having been set by the candidate area setting unit into a plurality of small areas and calculates a level of similarity between an image in each of the plurality of small areas and the reference image; and a body area estimating unit that estimates an animal body area corresponding to the body of the animal from the animal body candidate area based upon levels of similarity having been calculated for the plurality of small areas by the similarity calculation unit. 1. An image-processing device , comprising:a face detection unit that detects a face of an animal in an image;a candidate area setting unit that sets an animal body candidate area for a body of the animal in the image based upon face detection results provided by the face detection unit;a reference image acquisition unit that obtains a reference image;a similarity calculation unit that divides the animal body candidate area having been set by the candidate area setting unit into a plurality of small areas and calculates a level of similarity between an image in each of the plurality of small areas and the reference image; anda body area estimating unit that estimates an animal body area corresponding to the body of the animal from the animal body candidate area based upon levels of similarity having been calculated for the plurality of small areas by the similarity calculation unit.2. 
An image-processing device according to claim 1 , wherein:the candidate area setting unit sets the animal body candidate area in the image in correspondence to a size and a tilt of the face of the animal having been detected ...
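The small-area similarity step could be sketched as tile-wise scoring of the candidate area against the reference image; the similarity measure (inverse mean absolute difference) and the threshold are assumptions:

```python
# Sketch: split the animal-body candidate area into tiles and keep those
# similar to a reference patch. The measure and data are illustrative.
import numpy as np

def estimate_body_tiles(candidate, reference, tile, threshold):
    th, tw = tile
    keep = []
    for r in range(0, candidate.shape[0] - th + 1, th):
        for c in range(0, candidate.shape[1] - tw + 1, tw):
            patch = candidate[r:r+th, c:c+tw]
            sim = 1.0 / (1.0 + np.abs(patch - reference).mean())
            if sim >= threshold:
                keep.append((r, c))      # tile likely belongs to the body
    return keep

ref = np.full((2, 2), 5.0)
cand = np.zeros((4, 4))
cand[0:2, 0:2] = 5.0                     # only the top-left tile matches
tiles = estimate_body_tiles(cand, ref, (2, 2), threshold=0.5)
```

The union of the kept tiles would then serve as the estimated animal body area.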

12-12-2013 publication date

MEDIA PREFERENCES

Number: US20130329966A1
Author: HILDRETH Evan
Assignee: QUALCOMM INCORPORATED

An electronic media device may be controlled based on personalized media preferences of users experiencing content using the electronic media device. Users experiencing content using the electronic media device may be automatically identified and the electronic media device may be automatically controlled based on media preferences associated with the identified users. 1. A computer-implemented method comprising:determining an identity of a user detected within an image of an area proximate to an electronic media device;accessing personalized media settings associated with the user based on the determined identity; andcontrolling the electronic media device based on the accessed personalized media settings.2-28. (canceled) This application claims the benefit of U.S. Provisional Patent Application No. 60/989,787, filed Nov. 21, 2007, and U.S. Provisional Patent Application No. 61/080,475, filed Jul. 14, 2008, each of which is incorporated herein by reference in its entirety.The present disclosure generally relates to controlling electronic devices based on media preferences.An electronic device may permit a user to change settings used to control the electronic device. Changing settings may allow the user to personalize the user's experience using the electronic device. However, changing settings and personalizing the user's experience may be limited and may be difficult for the user to control.According to a general implementation, a method includes determining an identity of a user detected within an image of an area proximate to an electronic media device. The method also includes accessing personalized media settings associated with the user based on the determined identity, and controlling the electronic media device based on the accessed personalized media settings.Implementations may include one or more of the following features.
For example, the method also may include receiving, at the electronic media device, a user input command, and accessing one or more ...

19-12-2013 publication date

INFORMATION PROCESSING APPARATUS AND RECORDING MEDIUM

Number: US20130336544A1
Assignee:

There is provided an information processing apparatus including a first managing unit that manages specific person information, the specific person information being information regarding a specific person, a second managing unit that manages attention object information, the attention object information being information regarding an attention object candidate, a searching unit that searches for the specific person information, using the attention object information as a search key, an evaluating unit that evaluates an attention object, on the basis of a search result obtained by the searching unit, and a storage control unit that stores the specific person, the attention object, and an evaluation value obtained by the evaluating unit in association with each other. 1. An information processing apparatus comprising:a first managing unit that manages specific person information, the specific person information being information regarding a specific person;a second managing unit that manages attention object information, the attention object information being information regarding an attention object candidate;a searching unit that searches for the specific person information, using the attention object information as a search key;an evaluating unit that evaluates an attention object, on the basis of a search result obtained by the searching unit; anda storage control unit that stores the specific person, the attention object, and an evaluation value obtained by the evaluating unit in association with each other.2. The information processing apparatus according to claim 1 ,wherein the evaluating unit performs weighting according to the specific person information to evaluate the attention object.3. The information processing apparatus according to claim 1 ,wherein the evaluating unit evaluates the attention object, on the basis of how many times the attention object information is searched for in the specific person information by the searching unit.4. The ...

09-01-2014 publication date

EXTRACTION OF SKELETONS FROM 3D MAPS

Number: US20140010425A1
Author: Gurman Amiad
Assignee: PRIMESENSE LTD.

A method for processing data includes receiving a temporal sequence of depth maps of a scene containing a humanoid form having a head. The depth maps include a matrix of pixels having respective pixel depth values. A digital processor processes at least one of the depth maps so as to find a location of the head and estimates dimensions of the humanoid form based on the location. The processor tracks movements of the humanoid form over the sequence using the estimated dimensions. 1. A method for processing data , comprising:receiving a temporal sequence of depth maps of a scene containing a humanoid form, the depth maps comprising a matrix of pixels having respective pixel depth values;using a digital processor, processing at least one of the depth maps so as to find three-dimensional (3D) medial axes of limbs of the humanoid form;identifying joints in the limbs based on the 3D medial axes; andtracking movements of the humanoid form over the sequence using the medial axes and the joints of the limbs.2. The method according to claim 1 , wherein identifying the joints comprises finding intersection points of the 3D medial axes of adjoining limb segments.3. The method according to claim 2 , wherein finding the intersection points comprises locating an elbow at an intersection of the 3D medial axes of an upper arm and a forearm of the humanoid form.4. The method according to claim 1 , wherein processing the at least one of the depth maps comprises identifying claim 1 , using the identified joints and limbs claim 1 , left and right arms of the humanoid form claim 1 , and searching to find a head of the humanoid form between the arms.5. The method according to claim 4 , wherein identifying the left and right arms comprises capturing the at least of the depth maps while the humanoid form stands in a calibration pose claim 4 , in which the left and right arms are raised.6. 
The method according to claim 5 , wherein the left and right arms are raised above a shoulder level of ...
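Claim 2's idea of locating a joint at the intersection of the 3D medial axes of adjoining limb segments can be illustrated with a closest-approach computation for two 3D lines; using the midpoint of closest approach is an assumption, since real medial axes rarely intersect exactly:

```python
# Sketch: estimate an elbow as the (near-)intersection of the upper-arm
# and forearm medial axes, each given as a point and a direction.
import numpy as np

def joint_from_axes(p1, d1, p2, d2):
    """Return the midpoint of closest approach of two 3D lines."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Minimise |(p1 + t1*d1) - (p2 + t2*d2)| over t1, t2.
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = p1 - p2
    denom = a * c - b * b                 # nonzero for non-parallel axes
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return ((p1 + t1 * d1) + (p2 + t2 * d2)) / 2

elbow = joint_from_axes(np.array([0.0, 0, 0]), np.array([1.0, 0, 0]),
                        np.array([1.0, 1, 0]), np.array([0.0, 1, 0]))
```

For these two perpendicular axes the estimate lands exactly on their intersection point (1, 0, 0).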

16-01-2014 publication date

BIOLOGICAL INFORMATION OBTAINING APPARATUS AND BIOLOGICAL INFORMATION COLLATING APPARATUS

Number: US20140016834A1
Assignee: FUJITSU LIMITED

An obtaining unit of a biological information obtaining apparatus obtains biological information for authentication, and an extracting unit extracts first feature information from biological information for authentication. A generating unit of a biological information collating apparatus generates encrypted position correction information, and encrypted position correction information is transmitted to the biological information obtaining apparatus. A correcting unit decrypts encrypted position correction information to obtain position correction information, and by correcting first feature information, performs alignment between first feature information and second feature information extracted from biological information for registration. A transforming unit transforms corrected first feature information, and transmits transformed first feature information to the biological information collating apparatus. A collating unit collates transformed first feature information and transformed second feature information stored in a storing unit, and transmits a collation result to biological information obtaining apparatus. 1. 
A non-transitory computer-readable recording medium having stored therein a program for causing a computer to execute a process comprising:obtaining biological information for authentication;extracting first feature information from the biological information for authentication;receiving encrypted position correction information and obtaining position correction information by decrypting the position correction information;performing position alignment between the first feature information and second feature information extracted from biological information for registration by correcting the first feature information based on obtained position correction information;transforming corrected first feature information;transmitting transformed first feature information to a biological information collating apparatus that stores transformed second feature ...

23-01-2014 publication date

METHOD OF CONTROLLING A FUNCTION OF A DEVICE AND SYSTEM FOR DETECTING THE PRESENCE OF A LIVING BEING

Number: US20140023235A1
Assignee: KONINKLIJKE PHILIPS N.V.

A method of controlling a function of a device includes obtaining a sequence of digital images taken at consecutive points in time. At least one measurement zone including a plurality of image points is selected. For at least one measurement zone, a signal representative of at least variations in a time-varying value of a combination of pixel values at at least a number of the image points is obtained and at least one characteristic of the signal within at least a range of interest of its spectrum relative to comparison data is determined. The determination comprises at least one of: 1. Method of controlling a function of a device , including:receiving a sequence of digital images taken at successive points in time;selecting at least one measurement zone including a plurality of image points;for at least one measurement zone, obtaining a signal representative of at least variations in a time-varying value of a combination of pixel values at at least a number of the image points, and determining at least one characteristic of the signal within at least a range of interest of its spectrum relative to comparison data, the determination comprising at least one of (i) determining whether the signal has a spectrum with a local maximum at a frequency matching a comparison frequency to a certain accuracy and (ii) determining whether at least a certain frequency component of the signal is in phase with a comparison signal to a certain accuracy; andcontrolling the function in dependence on the determination.2. Method according to claim 1 , wherein the comparison data are based on at least one signal representative of at least variations in a time-varying value of a combination of pixel values of at least a number of image points of a further selected measurement zone.3. Method according to claim 2 , wherein each further selected measurement zone is one of a number of measurement zones in a grid laid over the images.4.
Method according to claim 2 , wherein the ...
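Test (i) of claim 1 — a spectral local maximum at a frequency matching the comparison frequency — can be sketched with an FFT of the measurement-zone signal; the frame rate, tolerance, and synthetic signal below are illustrative assumptions:

```python
# Sketch: does the measurement-zone signal peak at the comparison frequency?
import numpy as np

def has_peak_at(signal, fs, f_cmp, tol_hz=0.2):
    # Remove the DC level, then find the dominant spectral frequency.
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    peak_f = freqs[np.argmax(spectrum)]
    return abs(peak_f - f_cmp) <= tol_hz

fs = 30.0                                  # assumed camera frame rate, frames/s
t = np.arange(0, 10, 1 / fs)
pulse = np.sin(2 * np.pi * 1.2 * t)        # synthetic ~72 bpm pulse-like signal
alive = has_peak_at(pulse, fs, f_cmp=1.2)
```

A living-being presence check would then gate the controlled function on this boolean.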

30-01-2014 publication date

METHOD FOR LOCATING ANIMAL TEATS

Number: US20140029797A1
Author: Eriksson Andreas
Assignee: DELAVAL HOLDING AB

A method and apparatus for locating teats of an animal uses an automated three-dimensional image capturing device and includes automatically obtaining and storing a three-dimensional numerical image of the animal that includes a teat region of the animal; making the image available for review by an operator; receiving manually input data designating a location of the teats in the image; from the designated location of the teats, creating a teat position data file containing the location co-ordinates of each defined teat from within the image; updating an animal data folder with the teat position data file. The method references the teat position data file containing the location co-ordinates of each defined teat during an animal related operation involving connecting a milking or cleaning apparatus to the teats of an animal. 1-15. (canceled) 16. A method for locating teats of an animal, comprising the steps of: using an automated three-dimensional numeric image capturing device (10) and, under control of a control device (30) with a data processing unit, automatically making a three-dimensional image of a teat region of the animal, the teat region including or expected to include the teats of the animal; under control of the control device, displaying the image on a display with a graphical user interface (GUI) for review of the image by an operator; receiving, from the operator, manually input data designating a location of an identified teat in the image displayed on the display; from the manually input data designating the location of the identified teat, creating a teat position data file containing location co-ordinates of the identified teat in the image displayed on the display; and updating an animal data folder, of the animal, with the teat position data file.
17. The method of claim 16, wherein said step of making the three-dimensional image of the teat region of the animal includes storing the image in a memory of the control device. 18. ...

30-01-2014 publication date

Body Condition Score Determination for an Animal

Number: US20140029808A1
Author: Lee Ken
Assignee: CLICRWEIGHT, LLC

Described are methods and systems for determining a body condition score (BCS) for an animal. An imaging device captures at least one image of an animal, where each image includes a body region of the animal, and transmits the image to a computing device. The computing device identifies the body region contained in the image and crops a portion of the image associated with the identified body region from the image. The computing device compares the cropped portion of the image with one or more fitting models corresponding to the body region and determines a body condition score for the body region based upon the comparing step. 1. A method for determining a body condition score (BCS) for an animal , the method comprising:capturing, by an imaging device, at least one image of an animal, wherein each image includes a body region of the animal, and transmitting the image to a computing device;identifying, by a computing device, the body region contained in the image;cropping, by the computing device, a portion of the image associated with the identified body region from the image;comparing, by the computing device, the cropped portion of the image with one or more fitting models corresponding to the body region; anddetermining, by the computing device, a body condition score for the body region based upon the comparing step.2. The method of claim 1 , further comprising validating claim 1 , by the computing device claim 1 , a posture of the animal in the image.3. The method of claim 2 , the validating step further comprising:generating a 3D point cloud based upon the image;performing one or more edge analysis tests on the 3D point cloud; anddetermining whether the posture is valid based upon the one or more edge analysis tests.4. 
The method of claim 3 , wherein performing one or more edge analysis tests includes:generating a cubic polynomial curve based upon the topmost points of the 3D point cloud; andanalyzing the inflection point and concavity of the cubic polynomial ...
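The edge analysis of claims 3-4 — fitting a cubic polynomial to the topmost points of the 3D point cloud and inspecting its inflection point and concavity — might be sketched as follows; the synthetic back line and the concavity probe point are assumptions:

```python
# Sketch: fit a cubic to the animal's back-line points, then locate the
# inflection point and test concavity just left of it.
import numpy as np

def backline_inflection(xs, ys):
    a, b, c, d = np.polyfit(xs, ys, 3)     # y = a*x^3 + b*x^2 + c*x + d
    x_inf = -b / (3 * a)                   # where y'' = 6*a*x + 2*b = 0
    # Probe the second derivative one unit left of the inflection point.
    concave_down_left = (6 * a * (x_inf - 1) + 2 * b) < 0
    return x_inf, concave_down_left

xs = np.linspace(0, 4, 20)
ys = (xs - 2) ** 3 - 3 * (xs - 2)          # synthetic back line, inflection at x=2
x_inf, concave = backline_inflection(xs, ys)
```

A posture validator could then accept the pose only when the inflection point and concavity fall in an expected range.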

06-02-2014 publication date

INFORMATION PROCESSING DEVICE

Number: US20140037150A1
Assignee: NEC Corporation

An information processing device of the present invention includes: a recognition result acquiring means for acquiring respective recognition result information outputted by a plurality of recognition engines and executing different recognition processes on recognition target data; and an integration recognition result outputting means for outputting a new recognition result obtained by integrating the respective recognition result information acquired from the plurality of recognition engines. The recognition result acquiring means is configured to acquire the respective recognition result information in a data format common to the plurality of recognition engines, from the plurality of recognition engines. The integration recognition result outputting means is configured to integrate the respective recognition result information based on the respective recognition result information, and output as the new recognition result. 1. An information processing device comprising:a recognition result acquiring unit for acquiring respective recognition result information outputted by a plurality of recognition engines executing different recognition processes on recognition target data; andan integration recognition result outputting unit for outputting a new recognition result obtained by integrating the respective recognition result information acquired from the plurality of recognition engines, wherein:the recognition result acquiring unit is configured to acquire the respective recognition result information in a data format common to the plurality of recognition engines, from the plurality of recognition engines; andthe integration recognition result outputting unit is configured to integrate the respective recognition result information based on the respective recognition result information, and output as the new recognition result.2. 
The information processing device according to claim 1 , wherein the integration recognition result outputting unit is configured to ...

06-02-2014 publication date

LEARNING-BASED POSE ESTIMATION FROM DEPTH MAPS

Number: US20140037191A1
Author: Litvak Shai
Assignee: PRIMESENSE LTD.

A method for processing data includes receiving a depth map of a scene containing a humanoid form. Respective descriptors are extracted from the depth map based on the depth values in a plurality of patches distributed in respective positions over the humanoid form. The extracted descriptors are matched to previously-stored descriptors in a database. A pose of the humanoid form is estimated based on stored information associated with the matched descriptors. 1. A method for processing data , comprising:receiving a depth map of a scene containing a humanoid form, the depth map comprising a matrix of pixels having respective pixel depth values;extracting from the depth map respective descriptors based on the depth values in a plurality of patches distributed in respective positions over the humanoid form;matching the extracted descriptors to previously-stored descriptors in a database; andestimating a pose of the humanoid form based on stored information associated with the matched descriptors.2. The method according to claim 1 , wherein extracting the respective descriptors comprises dividing each patch into an array of spatial bins claim 1 , and computing a vector of descriptor values corresponding to the pixel depth values in each of the spatial bins.3. The method according to claim 2 , wherein each patch has a center point claim 2 , and wherein the spatial bins that are adjacent to the center point have smaller respective areas than the spatial bins at a periphery of the patch.4. The method according to claim 2 , wherein each patch has a center point claim 2 , and wherein the spatial bins are arranged radially around the center point.5. The method according to claim 2 , wherein the descriptor values are indicative of a distribution of at least one type of depth feature in each bin claim 2 , selected from the group of depth features consisting of depth edges and depth ridges.6. 
The method according to claim 1 , wherein matching the extracted descriptors comprises ...
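The patch descriptor described above can be illustrated with a simple radial-binning sketch: pixels are assigned to concentric rings around the patch center and the descriptor is the vector of per-ring mean depths. The ring geometry and the mean statistic are illustrative assumptions; the claims also cover center-weighted bin areas and distributions of depth edges/ridges.

```python
import math

def radial_depth_descriptor(patch, num_bins=4):
    """Descriptor sketch: divide a depth patch into radial bins around
    its center and return the mean depth value per bin."""
    h, w = len(patch), len(patch[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = math.hypot(cy, cx) + 1e-9          # largest center-to-pixel distance
    sums = [0.0] * num_bins
    counts = [0] * num_bins
    for y in range(h):
        for x in range(w):
            r = math.hypot(y - cy, x - cx) / max_r   # normalized radius in [0, 1)
            b = min(int(r * num_bins), num_bins - 1)
            sums[b] += patch[y][x]
            counts[b] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```

Matching such vectors against a stored database (as in the claims) would then reduce to nearest-neighbor search in descriptor space.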

13-02-2014 publication date

HUMAN TRACKING SYSTEM

Number: US20140044309A1
Assignee: MICROSOFT CORPORATION

An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may be determined and a model may be adjusted based on the location or position of the one or more extremities. 1. A computer-implemented method for tracking a physical target, the method comprising: receiving a depth image that captures the physical target in a physical space; determining a first point that corresponds to a point where an extremity of the physical target meets a torso of the physical target; determining a second point that corresponds to a joint of the extremity of the physical target; determining a direction to search based on a line formed between the first point and the second point; and determining a third point of the extremity based on the direction to search and depth values of the depth image. 2. The method of claim 1, wherein the first point comprises a shoulder of the physical target, wherein the second point comprises an elbow of the physical target, and wherein the third point comprises a hand of the physical target. 3. The method of claim 1, wherein the first point comprises a hip of the physical target, wherein the second point comprises a knee of the physical target, and wherein the third point comprises a foot of the physical target. 4. The method of claim 1, wherein the direction to search corresponds to moving along the line away from the torso of the physical target. 5. The method of claim 1, wherein determining the third point of the extremity based on the direction to search and depth values of the depth image comprises: determining the third point of the extremity ...
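The claimed search direction can be sketched as a walk along the shoulder-to-elbow (or hip-to-knee) line, away from the torso, until the depth values leave the target's silhouette. The grid representation and the zero-depth membership test are simplifying assumptions for this sketch.

```python
def find_extremity(depth, first, second, max_steps=50):
    """Walk from the joint (second point) along the direction defined by
    first -> second; the extremity tip is the last position whose depth
    value still belongs to the target (non-zero in this sketch)."""
    (y0, x0), (y1, x1) = first, second
    dy, dx = y1 - y0, x1 - x0
    n = max(abs(dy), abs(dx)) or 1
    dy, dx = dy / n, dx / n          # unit step along the search line
    y, x, tip = float(y1), float(x1), second
    for _ in range(max_steps):
        y, x = y + dy, x + dx
        iy, ix = int(round(y)), int(round(x))
        if not (0 <= iy < len(depth) and 0 <= ix < len(depth[0])):
            break                    # left the image
        if depth[iy][ix] == 0:
            break                    # left the target's silhouette
        tip = (iy, ix)
    return tip
```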

20-03-2014 publication date

METHOD FOR DETERMINING AXIAL DIRECTION OF BORE OF BONE FIXATOR

Number: US20140079301A1
Assignee: National Central University

A method for determining an axial direction of a bore of a bone fixator includes the following steps: obtaining X-ray images of the bore, calculating perpendicular bisectors, determining spatial planes, and obtaining the axial direction of the bore. After X-ray images of the bore are taken from two X-ray source positions, two overlapping images of the bore are obtained for calculating two perpendicular bisectors therein respectively. Each perpendicular bisector and its corresponding X-ray source position define one spatial plane. The intersection of the two spatial planes defines the axial direction of the bore. Now that the axial direction of the bore can be derived from only two X-ray images of the bore taken during an orthopedic surgery, radiation exposure of the patient and of the medical personnel involved can be significantly reduced. 1. A method for determining an axial direction of a bore of a bone fixator , executable in a computer system , the method comprising the steps of:obtaining images of the bore, wherein images of the bore are taken from a first position to obtain a first overlapping image of a first bore surface and a second bore surface of the bore, and from a second position to obtain a second overlapping image of the first bore surface and the second bore surface;calculating perpendicular bisectors, wherein the first overlapping image and the second overlapping image are separately processed by a processing unit so as to obtain through calculation a first perpendicular bisector in the first overlapping image and a second perpendicular bisector in the second overlapping image;determining spatial planes, wherein the first position and the first perpendicular bisector define a first plane, and the second position and the second perpendicular bisector define a second plane; andobtaining the axial direction of the bore, wherein the processing unit calculates an intersection of the first plane and the second plane, and the intersection defines the ...
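The geometry above reduces to two cross products: each X-ray view contributes a plane (spanned, for example, by the ray from the source to the perpendicular bisector and the bisector's direction), and the bore axis is the intersection line of the two planes, i.e. the cross product of their normals. A minimal sketch, assuming the spanning directions are already expressed in a common world frame (the patent derives them from the two calibrated source positions):

```python
def cross(a, b):
    """3D cross product of two direction vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def bore_axis(u1, v1, u2, v2):
    """u_i, v_i: two directions spanning plane i. The bore axis lies in
    both planes, so its direction is the cross product of the normals."""
    n1 = cross(u1, v1)               # normal of the first plane
    n2 = cross(u2, v2)               # normal of the second plane
    axis = cross(n1, n2)             # intersection line of the planes
    norm = sum(c * c for c in axis) ** 0.5
    return tuple(c / norm for c in axis)
```

With only two images needed, the intersection computation is what keeps the intra-operative radiation exposure low.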

27-03-2014 publication date

Method and System for Optimizing Accuracy-Specificity Trade-offs in Large Scale Visual Recognition

Номер: US20140086497A1

As visual recognition scales up to ever larger numbers of categories, maintaining high accuracy is increasingly difficult. Embodiments of the present invention include methods for optimizing accuracy-specificity trade-offs in large scale recognition where object categories form a semantic hierarchy consisting of many levels of abstraction. 1. A computerized method for classifying images, comprising: receiving an image hierarchy; receiving a first image of interest wherein the image includes at least one feature; classifying at least one feature of the image according to the image hierarchy; generating a measure of uncertainty associated with the classification of the at least one feature; optimizing the classification of the at least one feature of the image using the measure of uncertainty; and generating at least one optimized classification of the at least one feature of the image for the first image of interest. 2. The method of claim 1, wherein optimizing the classification is performed by trading off specificity in the classification for accuracy of the classification. 3. The method of claim 2, wherein the measure of uncertainty exceeds a predetermined level. 4. The method of claim 1, wherein the optimization is performed by implementing a reward algorithm for preferred classifications. 5. The method of claim 4, wherein the reward is implemented to produce classifications with increased specificity. 6. The method of claim 4, wherein the reward includes a measure of information gain. 7. The method of claim 4, wherein the reward includes a measure of decrease in uncertainty. 8. The method of claim 4, wherein the reward algorithm allocates predetermined values in the image hierarchy. 9. The method of claim 1, wherein generating at least one optimized classification of the at least one feature of the image for the first image of interest includes generating at least two optimized classifications for the image of interest. 10.
The method of claim 1 , wherein optimizing ...
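The trade-off with an information-gain reward can be sketched as follows: given a posterior over leaf categories, answer at the most specific hierarchy node whose accumulated probability mass reaches an accuracy threshold, scoring nodes by the log reduction in the number of remaining leaves. The flat dict encoding of the hierarchy and the example labels are assumptions for illustration.

```python
import math

def hedged_prediction(posterior, hierarchy, threshold):
    """posterior: leaf label -> probability.
    hierarchy: node label -> list of descendant leaf labels (leaves map
    to themselves). Returns the node with the highest information-gain
    reward among nodes whose mass meets the accuracy threshold."""
    best, best_gain = None, -1.0
    n_leaves = len(posterior)
    for node, leaves in hierarchy.items():
        mass = sum(posterior.get(leaf, 0.0) for leaf in leaves)
        if mass >= threshold:
            # reward: information gained by answering at this node
            gain = math.log2(n_leaves / len(leaves))
            if gain > best_gain:
                best, best_gain = node, gain
    return best
```

Raising the threshold trades specificity for accuracy: uncertain images get pushed up the hierarchy toward coarser but more reliable labels.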

03-04-2014 publication date

Systems and methods for monitoring vehicle occupants

Number: US20140093133A1
Assignee: Flir Systems Inc

Various techniques are disclosed for systems and methods using small form factor infrared imaging modules to monitor occupants in an interior compartment of a vehicle. For example, a vehicle-mounted system may include one or more infrared imaging modules, a processor, a memory, alarm sirens, and a communication module. The vehicle-mounted system may be mounted on, installed in, or otherwise integrated into a vehicle that has an interior compartment. The infrared imaging modules may be configured to capture thermal images of desired portions of the interior compartments. Various thermal image processing and analytics may be performed on the captured thermal images to determine the presence and various attributes of one or more occupants. Based on the determination of the presence and various attributes, occupant detection information and/or control signals may be generated. Occupant detection information may be used to perform various monitoring operations, and control signals may adjust various vehicle components.

10-04-2014 publication date

OCCUPANT SENSING DEVICE

Number: US20140098232A1
Author: Koike Toshihiro
Assignee: HONDA MOTOR CO., LTD.

An occupant sensing device that accurately detects the state of an occupant regardless of the influence of extraneous noise and outside light and the influence of a defect in detected data when lighting of auxiliary light is delayed. When a specific part detection possibility/impossibility determination unit determines that the detection of the position of a specific part, such as a head, in the latest image is impossible, on the basis of past data stored as data corresponding to images outputted from a camera earlier than the latest image in a storage unit, the position of the head is predicted and detected. 1. An occupant detecting device comprising: an image capturing unit disposed in a cabin of a vehicle for capturing images of a given area including a seat in the cabin continuously or intermittently at predetermined time intervals and successively outputting the captured images; a position detecting unit for detecting a position of a particular body region of an occupant seated on the seat in the images output from the image capturing unit; an occupant state detecting unit for detecting a state of the occupant based on the position of the particular body region, which is detected by the position detecting unit; a memory unit for successively storing data depending on the images that are successively output during a predetermined period from the image capturing unit; and a detectability determining unit for judging whether or not the position detecting unit is capable of detecting the position of the particular body region, wherein if the detectability determining unit judges that the position detecting unit is capable of detecting the position of the particular body region in a latest image as a presently output image of the images successively output from the image capturing unit, the position detecting unit detects the position of the particular body region based on the latest image, and if the detectability determining unit judges that the position detecting
unit ...

04-01-2018 publication date

Biological information detection device using second light from target onto which dots formed by first light are projected

Number: US20180000359A1
Author: Hisashi Watanabe

A biological information detection device includes a light source, an image capturing device, and one or more arithmetic circuits. The light source projects dots formed by light onto a target including a living body. The image capturing device includes photodetector cells and generates an image signal representing an image of the target onto which the dots are projected. The one or more arithmetic circuits detect a portion corresponding to at least a part of the living body in the image by using the image signal and calculate biological information of the living body by using the image signal of the portion.

07-01-2021 publication date

Information processing apparatus, program, and information processing system

Number: US20210001808A1
Assignee: KONICA MINOLTA INC

There is provided an information processing apparatus installed in a mobile object, the information processing apparatus including: a hardware processor that detects entry/exit of a user into/from the mobile object; and controls a security level in the mobile object, in accordance with entry/exit of a user into/from the mobile object.

02-01-2020 publication date

CONTROL METHOD AND CONTROL DEVICE FOR IVI

Number: US20200001812A1
Author: CHO Changwoo

Disclosed are a control method and a control apparatus of vehicle infotainment, which use a camera monitoring a vehicle's interior, and a display providing a GUI of infotainment apparatuses. A processor obtains a monitoring image through the camera, displays a first GUI on the display in response to a first object motion detected in a first interest region which has been set beforehand in the monitoring image, and displays a second GUI on the display in response to a second object motion detected in a second interest region which has been set beforehand in the monitoring image and which is separated from the first interest region. One or more of an autonomous vehicle, a user terminal and a server of the present invention can be associated with artificial intelligence modules, drones (unmanned aerial vehicles (UAVs)), robots, augmented reality (AR) devices, virtual reality (VR) devices, devices related to G service, etc. 1. A control method of vehicle infotainment using a camera which monitors a vehicle's interior, and a display providing a GUI of infotainment apparatuses, the control method comprising: obtaining a monitoring image through the camera; by a processor, displaying a first GUI on the display in response to a first object motion detected in a first interest region which has been set beforehand in the monitoring image; and by a processor, displaying a second GUI on the display in response to a second object motion detected in a second interest region which has been set beforehand in the monitoring image and which is separated from the first interest region. 2. The control method of claim 1, wherein the obtaining of the monitoring image is taking a picture of occupants of at least two seats. 3. The control method of claim 1, further comprising: extracting an occupant object from the monitoring image; and checking the occupant information matching with the occupant object. 4. The control method of claim 3, wherein the displaying of the first GUI is performed ...

02-01-2020 publication date

Autonomous driving method and apparatus

Number: US20200001874A1
Author: Kun TANG

The present disclosure provides an autonomous driving method and an apparatus. The method includes: receiving a currently collected image transmitted by an unmanned vehicle, where the currently collected image is an image collected in a target scenario; acquiring current driving data according to the currently collected image and a pre-trained autonomous driving model, where the autonomous driving model is used to indicate a relationship between an image and driving data in at least two scenarios, and the at least two scenarios include the target scenario; and sending the current driving data to the unmanned vehicle. Robustness of the unmanned driving method is improved.

06-01-2022 publication date

REMOTE STATE FOLLOWING DEVICES

Number: US20220004743A1

A system and method for a remote state following device that includes an electronic device with a controllable operating state; an imaging device; and a control system that, when targeted at a control interface, interprets a visual state from the control interface and modifies the operating state in coordination with the visual state. 1. A system comprising: an electronic device with a controllable operating state; an imaging device with an imaging resolution that isolates the visual state of a targeted control interface in a field of view of the imaging device; and a control system communicatively coupled to the imaging device and the electronic device, the control system comprising a visually monitored interface control mode with configuration to: interpret a visual state of the control interface, and modulate the operating state in coordination with the visual state. 2. The system of claim 1, wherein the imaging device is a camera, and wherein the control system is further configured to: detect a control interface in the field of view, and wherein configuration to modulate the operating state in coordination with the visual state is restricted to modulate the operating state during detection of the control interface. 3. The system of claim 2, wherein there are at least two types of detectable control interfaces, and wherein a first type of control interface has a first set of visual states that are mapped to two distinct operating states and a second type of control interface that has a second set of visual states that are mapped to a range of operating states. 4. The system of claim 2, wherein the imaging device captures image data of an environment, wherein the control system comprises configuration to configure a sub-region of image data as a location of the control interface. 5. The system of claim 4, wherein the sub-region of the image data is automatically configured as the location of the control interface. 6.
The system of claim 4 , wherein ...

06-01-2022 publication date

MULTI-TARGET PEDESTRIAN TRACKING METHOD, MULTI-TARGET PEDESTRIAN TRACKING APPARATUS AND MULTI-TARGET PEDESTRIAN TRACKING DEVICE

Number: US20220004747A1
Author: YANG Jinglin
Assignee: BOE Technology Group Co., Ltd.

A multi-target pedestrian tracking method, a multi-target pedestrian tracking apparatus and a multi-target pedestrian tracking device are provided, related to the field of image processing technologies. The multi-target pedestrian tracking method includes: detecting a plurality of candidate pedestrian detection boxes in a current frame of image to be detected, where a temporary tracking identification and a tracking counter are set for each of the plurality of candidate pedestrian detection boxes; and determining whether each of the plurality of candidate pedestrian detection boxes matches an existing tracking box, updating a value of the tracking counter according to a determination result, and continuing to detect a next frame of image to be detected. When the value of the tracking counter reaches a first preset threshold, the updating the value of the tracking counter is stopped, and the temporary tracking identification is converted to a confirmed tracking identification. 1. A multi-target pedestrian tracking method , comprising:detecting a plurality of candidate pedestrian detection boxes in a current frame of image to be detected, wherein a temporary tracking identification and a tracking counter are set for each of the plurality of candidate pedestrian detection boxes; anddetermining whether each of the plurality of candidate pedestrian detection boxes matches an existing tracking box, updating a value of the tracking counter according to a determination result, and continuing to detect a next frame of image to be detected, wherein in a case that the value of the tracking counter reaches a first preset threshold, the updating the value of the tracking counter is stopped, and the temporary tracking identification is converted to a confirmed tracking identification.2. The multi-target pedestrian tracking method according to claim 1 , wherein the updating the value of the tracking counter according to the determination result comprises:in a case that the ...
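The counter-based confirmation described above can be sketched as a small per-box state machine: each candidate detection box gets a temporary tracking identification and a counter; consecutive matches increment the counter, and once it reaches the first preset threshold the counter stops and the temporary identification becomes a confirmed one. Box matching itself (e.g. IoU against existing tracking boxes) is outside this sketch, and all names are illustrative assumptions.

```python
FIRST_PRESET_THRESHOLD = 3  # frames of consecutive matches before confirmation

class TrackedBox:
    """Per-detection-box tracking state with a temporary id and counter."""
    _next_id = 0

    def __init__(self):
        self.temp_id = f"tmp-{TrackedBox._next_id}"   # temporary tracking identification
        TrackedBox._next_id += 1
        self.counter = 0
        self.confirmed_id = None

    def update(self, matched):
        """Report whether this box matched an existing tracking box in
        the current frame; returns the currently valid identification."""
        if self.confirmed_id is not None:
            return self.confirmed_id                  # counting has stopped
        if matched:
            self.counter += 1
            if self.counter >= FIRST_PRESET_THRESHOLD:
                # stop updating the counter; convert the temporary id
                self.confirmed_id = self.temp_id.replace("tmp", "trk")
        else:
            self.counter = 0                          # a miss resets the streak
        return self.confirmed_id or self.temp_id
```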

06-01-2022 publication date

DIGITAL IMAGING SYSTEMS AND METHODS OF ANALYZING PIXEL DATA OF AN IMAGE OF A USER'S BODY FOR DETERMINING A HAIR DENSITY VALUE OF A USER'S HAIR

Number: US20220005177A1

Artificial intelligence based systems and methods are described for analyzing pixel data of an image of a user's body for determining a hair density value of the user's hair. An example method includes aggregating a plurality of training images of a plurality of users' bodies, and training a hair density model operable to output a hair density value. The example method may further include receiving an image of a user comprising pixel data of a portion of the user's body or body area, and analyzing, by the hair density model, the image to determine a user-specific hair density value of the user's hair. The example method may further include generating a product recommendation, wherein the product is designed to address a feature identifiable within the pixel data of the user's body or body area. 1. A digital imaging method of analyzing pixel data of at least one image of a user's body for determining a user-specific hair density value of the user's hair , the digital imaging method comprising the steps of:a. aggregating, at one or more processors communicatively coupled to one or more memories, a plurality of training images of a plurality of users, each of the training images comprising pixel data of a respective user's body or body area;b. training, by the one or more processors with the pixel data of the plurality of training images, a hair density model comprising a hair density scale and operable to output, across a range of the hair density scale, hair density values associated with a degree of hair density ranging from least dense to most dense;c. receiving, at the one or more processors, at least one image of a user, the at least one image captured by a digital camera, and the at least one image comprising pixel data of the user's body or body area;d. analyzing, by the hair density model executing on the one or more processors, the at least one image captured by the digital camera to determine a user-specific hair density value of the user's hair; ande. ...

06-01-2022 publication date

DIGITAL IMAGING SYSTEMS AND METHODS OF ANALYZING PIXEL DATA OF AN IMAGE OF A USER'S BODY FOR DETERMINING A USER-SPECIFIC SKIN IRRITATION VALUE OF THE USER'S SKIN AFTER REMOVING HAIR

Number: US20220005195A1

Digital imaging systems and methods are described for determining a user-specific skin irritation value of a user's skin after removing hair. An example method may be performed by one or more processors and may include aggregating training images comprising pixel data of skin of individuals after removing hair. A skin irritation model may be trained using the training images to output skin irritation values associated with a degree of skin irritation from least to most irritation. The method may include receiving an image of a user including pixel data of the user's skin after hair is removed from the skin, analyzing the image using the skin irritation model to determine a user-specific skin irritation value, generating a user-specific recommendation designed to address a feature identifiable within the pixel data of the user's skin, and rendering the recommendation on a display screen of a user computing device. 1. A digital imaging method of analyzing pixel data of an image of a user's body for determining a skin irritation value of the user's skin after removing hair , the digital imaging method comprising the steps of:a. aggregating, at one or more processors communicatively coupled to one or more memories, a plurality of training images from a plurality of individuals, each of the training images comprising pixel data of skin of a respective individual after removing hair;b. training, by the one or more processors with the pixel data of the plurality of training images, a skin irritation model comprising a skin irritation scale and operable to output, across a range of the skin irritation scale, skin irritation values associated with a degree of skin irritation ranging from least irritation to most irritation;c. receiving, at the one or more processors, at least one image of a user, the at least one image captured by a digital camera, and the at least one image comprising pixel data of at least a portion of the user's skin after hair is removed from the at ...

06-01-2022 publication date

DIGITAL IMAGING SYSTEMS AND METHODS OF ANALYZING PIXEL DATA OF AN IMAGE OF A USER'S BODY FOR DETERMINING A HAIR GROWTH DIRECTION VALUE OF THE USER'S HAIR

Number: US20220005218A1

Artificial intelligence based systems and methods are described for analyzing pixel data of an image of a user's body for determining a hair growth direction value of the user's hair. An example method includes aggregating a plurality of training images of a plurality of users' bodies, and training a hair growth direction model operable to output a hair growth direction value. The example method may further include receiving an image of a user comprising pixel data of a portion of the user's body or body area, and analyzing, by the hair growth direction model, the image to determine a user-specific hair growth direction value of the user's hair. The example method may further include generating a product recommendation, wherein the product is designed to address a feature identifiable within the pixel data of the user's body or body area. 1. A digital imaging method of analyzing pixel data of at least one image of a user's body for determining a user-specific hair growth direction value of the user's hair , the digital imaging method comprising the steps of:a. aggregating, at one or more processors communicatively coupled to one or more memories, a plurality of training images of a plurality of users, each of the training images comprising pixel data of a respective user's body or body area;b. training, by the one or more processors with the pixel data of the plurality of training images, a hair growth direction model comprising a hair growth direction map and operable to output, across a range of the hair growth direction map, hair growth direction values associated with a hair growth direction ranging from upward to downward;c. receiving, at the one or more processors, at least one image of a user, the at least one image captured by a digital camera, and the at least one image comprising pixel data of the user's body or body area;d. 
analyzing, by the hair growth direction model executing on the one or more processors, the at least one image captured by the digital ...

01-01-2015 publication date

MOTION INFORMATION PROCESSING APPARATUS

Number: US20150003687A1

A motion information processing apparatus according to an embodiment includes an obtaining unit, a designating operation receiving unit, an analyzing unit, and a display controlling unit. The obtaining unit obtains motion information of a subject who performs a predetermined motion. The designating operation receiving unit receives an operation to designate a site of the subject. The analyzing unit calculates an analysis value related to a movement of the designated site by analyzing the motion information. The display controlling unit displays display information based on the analysis value related to the movement of the designated site. 1. A motion information processing apparatus comprising:an obtaining unit configured to obtain motion information of a subject who performs a predetermined motion;a designating operation receiving unit configured to receive an operation to designate a site of the subject;an analyzing unit configured to calculate an analysis value related to a movement of the designated site by analyzing the motion information; anda display controlling unit configured to display display information based on the analysis value related to the movement of the designated site.2. The motion information processing apparatus according to claim 1 , wherein the display controlling unit exercises control so as to display claim 1 , as information used for causing the site of the subject to be designated claim 1 , information in which a designating part is arranged with at least one of image information of the subject and human body model information.3. 
The motion information processing apparatus according to claim 2 , further comprising: a receiving unit configured to receive an operation to arrange the designating part in an arbitrary position of the image information of the subject and the human body model information claim 2 , whereinthe display controlling unit exercises control so as to display the information in which the designating part is arranged in ...

06-01-2022 publication date

Joint Use of Face, Motion, and Upper-body Detection in Group Framing

Number: US20220006974A1

A videoconferencing endpoint is described that uses a combination of face detection, motion detection, and upper body detection for selecting participants of a videoconference for group framing. Motion detection is used to remove fake faces as well as to detect motion in regions around detected faces during postprocessing. Upper body detection is used in conjunction with the motion detection in postprocessing to allow saving faces that have been initially detected by face detection for group framing even if the participant has turned away from the camera, allowing the endpoint to keep tracking the participant's region better than would be possible based only on an unstable result coming from face detection. 1. A method of framing a group of participants in a videoconference, comprising: receiving video data from a camera of a videoconferencing endpoint; performing face detection on the video data; saving detected faces for a first threshold time period; postprocessing the saved detected faces during the first threshold time period, wherein postprocessing the saved detected faces comprises: performing a first type of motion detection on regions around the saved detected faces; performing upper body detection on regions around the saved detected faces responsive to not detecting motion; and discarding saved detected faces responsive to the first type of motion detection and upper body detection detecting neither motion nor an upper body in the regions around the saved detected faces; and framing the group of participants based on the saved detected faces. 2. The method of claim 1, further comprising: performing a second type of motion detection on the detected faces; and eliminating faces from the detected faces responsive to detecting no motion in the detected faces. 3. The method of claim 2, wherein the second type of motion detection is stricter than the first type of motion detection. 4.
The method of claim 1 , wherein the first type of motion detection ...
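The postprocessing step described above amounts to a keep/discard rule per saved face: keep the face if motion is detected around it, otherwise fall back to upper-body detection, and discard only when both checks fail. A minimal sketch with the two detectors passed in as callables (an assumption; the endpoint would run dedicated motion and upper-body detectors on the camera's video data):

```python
def postprocess_saved_faces(saved_faces, motion_in_region, upper_body_in_region):
    """Keep a saved face if motion OR an upper body is detected in the
    region around it; discard it when neither is found."""
    kept = []
    for face in saved_faces:
        if motion_in_region(face):
            kept.append(face)          # motion found: keep, skip upper-body check
        elif upper_body_in_region(face):
            kept.append(face)          # participant turned away, body still present
        # neither motion nor upper body: discard the saved face
    return kept
```

The group frame is then computed from the kept regions, which stays stable even when face detection briefly loses a turned-away participant.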

05-01-2017 publication date

MARINE ANIMAL DATA CAPTURE AND AGGREGATION DEVICE

Number: US20170003160A1

Disclosed is an apparatus incorporating a data capture method involving capturing an image and weighing marine animals during or after harvest. Data captured may be compared against a set of preconfigured rules, the results of which may aid in optimizing a fisherman's workflow by alerting the fisherman of the presence of characteristics of the marine animals that violate any of the set of preconfigured rules, such as bycatch, endangered, or breeding animals. As such, the apparatus may improve operational efficiency, reduce overhead costs, promote transparent fishing practices, and provide catch-to-plate data to various stakeholders. The apparatus also enables an economic model that incentivizes fishermen to use the apparatus by facilitating the remittance of micropayments in exchange for data captured through the apparatus. The apparatus offers a consistent, reliable means of collecting and aggregating complete data from marine animals while at the same time aiding fishermen compliance with regulations. 1. An apparatus, comprising: a processor; a memory; one or more sensors; wherein the processor is configured to execute a set of instructions stored in the memory, the set of instructions causing the apparatus to perform a data capture session in which the apparatus: detects a weight differential through the one or more sensors; upon detecting the weight differential, measures a weight and captures an image through the one or more sensors; and based on the captured image, compares one or more characteristics derived from the image or the weight to one or more preconfigured rules of the processor. 2. The apparatus of claim 1, wherein the data capture session further comprises detecting a violation of any of the one or more preconfigured rules. 3. The apparatus of claim 2, wherein the data capture session further comprises: upon detecting a violation, communicating a notification through one or more output devices communicatively coupled to the processor. 4.
...
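The rule-checking step this entry describes can be sketched as a simple predicate scan over a capture record. The rule names, thresholds, and record fields below are illustrative assumptions, not details from the application:

```python
# Hypothetical sketch of the preconfigured-rule check: each capture
# (species label from the image, weight from the scale) is tested
# against every rule, and the names of violated rules are returned
# so the fisherman can be alerted. All rule content is made up.

RULES = [
    ("bycatch", lambda c: c["species"] in {"dolphin", "turtle"}),
    ("undersized", lambda c: c["weight_kg"] < 0.5),
]

def check_capture(capture):
    """Return the names of all rules the capture violates."""
    return [name for name, violated in RULES if violated(capture)]

violations = check_capture({"species": "turtle", "weight_kg": 120.0})
```

A real deployment would derive the species label from an image classifier rather than carry it in the record; the scan itself stays the same.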

13-01-2022 publication date

Detecting the Presence of Pests Using Networked Systems

Number: US20220007630A1
Assignee: Spotta Limited

A system for detecting the presence of pests, the system comprising: at least one remote camera system, each of the at least one remote camera system comprising an image capture device coupled to a processor and a wireless transmitter, each of the at least one remote camera system having a pest detection surface and being configured to: capture one or more image of the pest detection surface; process the captured one or more image to recognise the potential presence of a target pest on the pest detection surface; and transmit data from the captured one or more image in response to recognising the potential presence of one or more target pests; and a server configured to: receive the transmitted data; process the transmitted data to verify the potential presence of the target pest on the pest detection surface; and provide an output indicating the verification. 1.-26. (canceled) 27. A system for detecting the presence of pests, the system comprising: at least one remote camera system, each of the at least one remote camera system comprising an image capture device coupled to a processor and a wireless transmitter, each of the at least one remote camera system having a pest detection surface and being configured to: capture one or more images of the pest detection surface; process the one or more captured images to recognise the potential presence of a target pest on the pest detection surface; and transmit data associated with the one or more captured images in response to recognising the potential presence of the target pest; and a server configured to: receive the transmitted data; process the transmitted data to verify the potential presence of the target pest on the pest detection surface; and provide an output indicating the verification. 28. The system of claim 27, wherein the at least one remote camera system is further configured to process the one or more captured images to reduce a quantity of data to be transmitted by selecting a data segment ...
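The edge-then-server pipeline claimed here can be sketched as two thresholded filters: the camera node flags anything that might be a pest (cheap, permissive) and transmits only those images, then the server verifies with a stricter test. The scores and thresholds below are illustrative assumptions:

```python
# Two-stage detection sketch: a low-power camera node with a
# permissive threshold (don't miss pests) feeds a server with a
# strict threshold (suppress false alarms). Only flagged images
# are ever transmitted, saving wireless bandwidth.

EDGE_THRESHOLD = 0.3    # assumed: permissive edge cutoff
SERVER_THRESHOLD = 0.8  # assumed: strict server cutoff

def edge_node(images, score_fn):
    """Transmit only images whose cheap score passes the edge threshold."""
    return [img for img in images if score_fn(img) >= EDGE_THRESHOLD]

def server_verify(transmitted, score_fn):
    """Verify transmitted images with a stricter score test."""
    return [img for img in transmitted if score_fn(img) >= SERVER_THRESHOLD]

# toy "images" carrying a precomputed pest-likeness score
images = [{"id": 1, "score": 0.1}, {"id": 2, "score": 0.5}, {"id": 3, "score": 0.9}]
score = lambda img: img["score"]
sent = edge_node(images, score)        # images 2 and 3 pass the edge
verified = server_verify(sent, score)  # only image 3 is verified
```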

04-01-2018 publication date

Methods and Systems for Opening of a Vehicle Access Point Using Audio or Video Data Associated with a User

Number: US20180002972A1
Assignee:

Methods and systems for opening an access point of a vehicle. A system and a method may involve receiving wirelessly a signal from a remote controller carried by a user. The system and the method may further involve receiving audio or video data indicating the user approaching the vehicle. The system and the method may also involve determining an intention of the user to access an interior of the vehicle based on the audio or video data. The system and the method may also involve opening an access point of the vehicle responsive to the determining of the intention of the user to access the interior of the vehicle. 1. A method comprising: receiving, by a system on a vehicle, audio or video data indicating a user approaching the vehicle; determining, by the system, an intention of the user to access an interior of the vehicle based on the audio or video data indicating a predefined movement of a body part of the user; and opening, by the system, an access point of the vehicle responsive to the determining. 2. The method of claim 1, wherein the determining of the intention of the user based on the audio or video data indicating the predefined movement of the body part of the user comprises detecting an upward movement of shoulders or eyebrows of the user. 3. The method of claim 1, wherein the receiving of the audio or video data indicating the user approaching the vehicle comprises receiving, from a camera, a video indicating the user approaching the vehicle. 4. The method of claim 3, wherein the determining of the intention of the user comprises analyzing the video to determine whether the user lacks a free hand or is unable to manually open a door at the access point. 5. The method of claim 4, wherein the video indicates the user approaching the vehicle with a cart, with a stroller, with a walker, or with a wheelchair. 6. The method of claim 1, wherein the determining of the intention of the user further comprises detecting a ...

02-01-2020 publication date

CONTROLLING AN AUTONOMOUS VEHICLE BASED ON PASSENGER BEHAVIOR

Number: US20200003570A1
Assignee:

An occupant sensor system is configured to collect physiological data associated with occupants of a vehicle and then use that data to generate driving decisions. The occupant sensor system includes physiologic sensors and processing systems configured to estimate the cognitive and/or emotional load on the vehicle occupants at any given time. When the cognitive and/or emotional load of a given occupant meets specific criteria, the occupant sensor system generates modifications to the navigation of the vehicle. In this manner, under circumstances where a human occupant of an autonomous vehicle recognizes specific events or attributes of the environment with which the autonomous vehicle may be unfamiliar, the autonomous vehicle is nonetheless capable of making driving decisions based on those events and/or attributes. 1. A computer-implemented method for operating an autonomous vehicle, the method comprising: determining a first physiological response of a first occupant of an autonomous vehicle based on first sensor data; determining that the first physiological response is related to a first event outside of the autonomous vehicle; and modifying at least one operating characteristic of the autonomous vehicle based on second sensor data that corresponds to the first event. 2. The computer-implemented method of claim 1, wherein capturing the first sensor data comprises recording at least one of a body position, a body orientation, a head position, a head orientation, a gaze direction, a gaze depth, a skin conductivity reading, and a neural activity measurement. 3. The computer-implemented method of claim 1, wherein the first physiological response comprises at least one of an increase in cognitive load and an increase in emotional load. 4. The computer-implemented method of claim 1, wherein determining that the first physiological response is related to the first event comprises: determining a first position ...
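The control loop in this entry — detect a physiological spike, then modify an operating characteristic — can be sketched minimally as follows. The spike threshold and the 20% speed reduction are illustrative assumptions, not the patented policy:

```python
# Sketch of the occupant-response loop: when the measured cognitive/
# emotional load rises far enough above the occupant's baseline, the
# vehicle applies a conservative speed modification. Numbers assumed.

LOAD_SPIKE = 0.4  # assumed spike over baseline that counts as a response

def adjust_speed(current_speed, baseline_load, measured_load):
    """Reduce speed by 20% when a physiological spike is detected."""
    if measured_load - baseline_load > LOAD_SPIKE:
        return round(current_speed * 0.8, 1)
    return current_speed

speed = adjust_speed(50.0, baseline_load=0.2, measured_load=0.9)
```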

02-01-2020 publication date

PEOPLE FLOW ESTIMATION SYSTEM AND THE FAILURE PROCESSING METHOD THEREOF

Number: US20200004232A1
Author: Fang Hui, Jia Zhen, Li Xiangbao
Assignee:

A human flow estimation system comprises: a sensor network comprising a plurality of sensors arranged in a to-be-estimated region for detecting the human flow; a model building module configured to build a human flow state model based on arrangement positions of the sensors, and build a sensor network model based on data of the sensors; and a human flow estimation module configured to estimate the human flow and provide a data weight of the estimated human flow based on the human flow state model and the sensor network model. The human flow estimation system further comprises a failure detection module configured to detect whether each sensor in the sensor network is abnormal, and the model building module is further configured to adjust the human flow state model and the sensor network model when an exception exists on the sensor. 1. A human flow estimation system , characterized by comprising:a sensor network comprising a plurality of sensors arranged in a to-be-estimated region for detecting the human flow;a model building module configured to build a human flow state model based on arrangement positions of the sensors, and build a sensor network model based on data of the sensors; anda human flow estimation module configured to estimate the human flow and provide a data weight of the estimated human flow based on the human flow state model and the sensor network model,wherein the human flow estimation system further comprises a failure detection module configured to detect whether each sensor in the sensor network is abnormal, and the model building module is further configured to adjust the human flow state model and the sensor network model when an exception exists on the sensor.2. 
The human flow estimation system according to claim 1 , characterized in that the model building module is configured to reduce a data weight involving a specific sensor in the human flow state model and the sensor network model when the failure detection module determines that an ...
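The weight-adjustment idea in this entry — combine per-sensor readings, but down-weight a sensor the failure detector has flagged — can be sketched as a renormalised weighted average. Reducing a failed sensor's weight all the way to zero is an assumption; the patent only requires that it be reduced:

```python
# Sketch of failure-aware fusion: the flow estimate is a weighted
# combination of sensor readings, and sensors flagged as abnormal
# have their weight zeroed before the weights are renormalised.

def estimate_flow(readings, weights, failed):
    """readings/weights: per-sensor lists; failed: set of sensor indices."""
    adj = [0.0 if i in failed else w for i, w in enumerate(weights)]
    total = sum(adj)
    if total == 0:
        raise ValueError("all sensors failed")
    return sum(r * w for r, w in zip(readings, adj)) / total

# sensor 2 is stuck low; with it flagged, the estimate ignores it
flow = estimate_flow([100, 102, 40], [1.0, 1.0, 1.0], failed={2})
```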

13-01-2022 publication date

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND MOBILE OBJECT

Number: US20220011152A1
Assignee:

An information processing apparatus includes an information acquisition interface configured to acquire a measurement result from one or more weight sensors configured to measure a weight of a user, and a controller configured to detect an increase in the weight of the user based on the measurement result. 1. An information processing apparatus comprising: an information acquisition interface configured to acquire a measurement result from one or more weight sensors configured to measure a weight of a user; and a controller configured to detect an increase in the weight of the user based on the measurement result. 2. The information processing apparatus according to claim 1, wherein the weight of the user includes a body weight of the user and a weight of luggage of the user. 3. The information processing apparatus according to claim 1, wherein the controller is configured to detect an increase of a predetermined value or greater in the weight of luggage of the user. 4. The information processing apparatus according to claim 1, wherein the one or more weight sensors include a first weight sensor and a second weight sensor that are installed at different positions on a road surface, and the controller is configured to extract the measurement result for the weight of the user from respective measurement results of the first weight sensor and the second weight sensor, to detect the increase in the weight of the user. 5. The information processing apparatus according to claim 1, wherein the information acquisition interface is configured to acquire an image of the user from a camera configured to capture the user, and the controller is configured to extract the measurement result for the weight of the user from measurement results of the one or more weight sensors based on the image of the user. 6. The information processing apparatus according to claim 5, further comprising a memory configured to store the measurement results of the one or more weight sensors, wherein the ...

07-01-2016 publication date

IMAGE PROCESSOR WITH EVALUATION LAYER IMPLEMENTING SOFTWARE AND HARDWARE ALGORITHMS OF DIFFERENT PRECISION

Number: US20160004919A1
Assignee:

An image processor comprises image processing circuitry implementing a plurality of processing layers including at least an evaluation layer and a recognition layer. The evaluation layer comprises a software-implemented portion and a hardware-implemented portion, with the software-implemented portion of the evaluation layer being configured to generate first object data of a first precision level using a software algorithm, and the hardware-implemented portion of the evaluation layer being configured to generate second object data of a second precision level lower than the first precision level using a hardware algorithm. The evaluation layer further comprises a signal combiner configured to combine the first and second object data to generate output object data for delivery to the recognition layer. By way of example only, the evaluation layer may be implemented in the form of an evaluation subsystem of a gesture recognition system of the image processor. 1. An image processor comprising: image processing circuitry implementing a plurality of processing layers including at least an evaluation layer and a recognition layer; the evaluation layer comprising a software-implemented portion and a hardware-implemented portion; the software-implemented portion of the evaluation layer being configured to generate first object data of a first precision level using a software algorithm; the hardware-implemented portion of the evaluation layer being configured to generate second object data of a second precision level lower than the first precision level using a hardware algorithm; wherein the evaluation layer further comprises a signal combiner configured to combine the first and second object data to generate output object data for delivery to the recognition layer. 2. The image processor of claim 1, wherein the evaluation layer comprises an evaluation subsystem of a gesture recognition system. 3. The image processor of claim 1, wherein the plurality of processing layers further comprises a ...
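One plausible behaviour for the signal combiner in this entry is to prefer the precise (software) detections when they are fresh enough and fall back to the coarse (hardware) ones otherwise. The staleness bound and the result-record fields are assumptions for illustration:

```python
# Hedged sketch of a software/hardware signal combiner: the slow
# software path yields precise detections, the fast hardware path
# coarse ones; the combiner delivers the precise result unless it
# has gone stale relative to the current frame.

MAX_SW_AGE_FRAMES = 2  # assumed staleness bound for the software result

def combine(sw_result, hw_result, frame_now):
    """Return (source, objects) chosen for delivery to the recognition layer."""
    if sw_result and frame_now - sw_result["frame"] <= MAX_SW_AGE_FRAMES:
        return ("software", sw_result["objects"])
    return ("hardware", hw_result["objects"])

src, objs = combine({"frame": 9, "objects": ["hand@(12,34)"]},
                    {"frame": 10, "objects": ["blob@(10,30)"]},
                    frame_now=10)
```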

07-01-2016 publication date

IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD WHICH LEARN DICTIONARY

Number: US20160004935A1
Assignee:

An image processing apparatus includes a plurality of dictionaries configured to store a feature of an object and information on an imaging direction in a scene for each kind of imaged scene, a detecting unit configured to detect an object with reference to at least one of the plurality of dictionaries in the scene in which the object has been imaged and which is to be learned, an estimating unit configured to estimate the imaging direction of the detected object, a selecting unit configured to select one dictionary from the plurality of dictionaries based on the imaging direction estimated by the estimating unit and the information on the imaging direction in each of the plurality of dictionaries, and a learning unit configured to learn the dictionary selected by the selecting unit, based on a detection result produced by the detecting unit. 1. An image processing apparatus comprising: a plurality of dictionaries configured to store information of a feature and an imaging direction of an object in scenes of imaging, per each kind of the scenes; a detecting unit configured to detect the object from the scene in which the object is imaged and is subjected to a learning, by reference to at least one of the plurality of dictionaries; an estimating unit configured to estimate the imaging direction of the object detected; a selecting unit configured to select a dictionary from the plurality of dictionaries, based on the imaging direction estimated by the estimating unit and the information of the imaging direction stored in each of the plurality of dictionaries; and a learning unit configured to perform a learning of the selected dictionary based on a result of the detection by the detecting unit. 2. The image processing apparatus according to claim 1, wherein the selecting unit derives a direction adaptability of each of the plurality of dictionaries based on the imaging direction estimated by the estimating unit and the information of the imaging direction stored in each of the ...
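A simple reading of the "direction adaptability" in claim 2 is angular closeness: pick the dictionary whose stored imaging direction is nearest the estimated direction of the detected object. Representing directions as single angles in degrees, and the wrap-around handling, are assumptions for illustration:

```python
# Sketch of direction-based dictionary selection: each dictionary
# stores the imaging direction (degrees) of its scene kind, and the
# one closest to the estimated direction is selected for learning.

def angular_diff(a, b):
    """Smallest absolute difference between two angles in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def select_dictionary(dictionaries, estimated_dir):
    """dictionaries: {name: stored_direction_deg}; returns the closest name."""
    return min(dictionaries,
               key=lambda name: angular_diff(dictionaries[name], estimated_dir))

chosen = select_dictionary({"overhead": 90, "oblique": 45, "lateral": 0},
                           estimated_dir=50)
```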

07-01-2016 publication date

Information Processing Method And Device

Number: US20160005142A1
Author: SHEN Hao
Assignee: Lenovo (Beijing) Co., Ltd.

An information processing method and device are disclosed. The information processing method is applied to an information processing device in which a 3D map and a spatial topological structure management-based feature library created in advance for a certain environment are contained, and different users in the certain environment can determine their location. The method includes acquiring a first image taken by a first user; extracting one or more first feature points in the first image to obtain first feature descriptors; obtaining 3D locations of the first feature points based on 3D location of the first user, the first image, and the feature library; determining feature descriptors to be updated based on 3D location of the first user, the 3D locations of the first feature points, the first feature descriptors corresponding to the first feature points, and existing feature descriptors in the feature library; and updating the feature library. 1. An information processing method applied to an information processing device in which a 3D map and a spatial topological structure management-based feature library created in advance for a certain environment is contained , and different users in the certain environment are able to determine their location in accordance with images taken by themselves and the feature library , the method comprising:acquiring a first image taken by a first user;extracting one or more first feature points in the first image to obtain first feature descriptors for characterizing the first feature points;obtaining 3D locations of the first feature points based on a 3D location of the first user, the first image, and the feature library;determining feature descriptors to be updated based on the 3D location of the first user, the 3D locations of the first feature points, the first feature descriptors corresponding to the first feature points, and existing feature descriptors in the feature library; andupdating the feature library based on the 
...

13-01-2022 publication date

MULTI-USER INTELLIGENT ASSISTANCE

Number: US20220012470A1
Assignee: Microsoft Technology Licensing, LLC

An intelligent assistant records speech spoken by a first user and determines a self-selection score for the first user. The intelligent assistant sends the self-selection score to another intelligent assistant, and receives a remote-selection score for the first user from the other intelligent assistant. The intelligent assistant compares the self-selection score to the remote-selection score. If the self-selection score is greater than the remote-selection score, the intelligent assistant responds to the first user and blocks subsequent responses to all other users until a disengagement metric of the first user exceeds a blocking threshold. If the self-selection score is less than the remote-selection score, the intelligent assistant does not respond to the first user. 1. An intelligent assistant computer , comprising:a logic machine; anda storage machine holding instructions executable by the logic machine to:recognize another intelligent assistant computer;record speech spoken by a first user;determine a self-selection score for the first user based on the speech spoken by the first user;receive a remote-selection score for the first user from the other intelligent assistant computer;if the self-selection score is greater than the remote-selection score, respond to the first user, determine a disengagement metric of the first user based on recorded speech spoken by the first user, and block subsequent responses to all other users until the disengagement metric of the first user exceeds a blocking threshold;if the self-selection score is less than the remote-selection score, do not respond to the first user; andstop blocking subsequent responses to another user responsive to a new self-selection score for the first user being less than a new remote-selection score for the first user.2. 
The intelligent assistant computer of claim 1 , wherein the self-selection score is determined based further on a signal-to-noise ratio of recorded speech spoken by the first user. ...
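The arbitration and blocking logic this entry describes can be sketched as a small state machine: an assistant responds only when its self-selection score beats the remote-selection score, and while engaged it blocks other users until the disengagement metric crosses a threshold. The threshold value and method shapes are assumptions:

```python
# Sketch of multi-assistant arbitration: score comparison decides
# which assistant responds, and the winner blocks other users while
# the engaged speaker's disengagement stays below a threshold.

BLOCKING_THRESHOLD = 0.7  # assumed disengagement cutoff

class Assistant:
    def __init__(self):
        self.engaged_user = None

    def arbitrate(self, user, self_score, remote_score):
        """Respond (and engage) only if this assistant scored the user highest."""
        if self_score > remote_score:
            self.engaged_user = user
            return True
        return False

    def should_respond(self, user, disengagement):
        """Block other users until the engaged user has disengaged."""
        if self.engaged_user and user != self.engaged_user:
            if disengagement < BLOCKING_THRESHOLD:
                return False  # still blocking subsequent responses
            self.engaged_user = None  # engaged user has moved on
        return True

a = Assistant()
won = a.arbitrate("alice", self_score=0.9, remote_score=0.4)
blocked = not a.should_respond("bob", disengagement=0.2)
```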

13-01-2022 publication date

FISH BIOMASS, SHAPE, AND SIZE DETERMINATION

Number: US20220012479A1
Assignee:

Methods, systems, and apparatuses, including computer programs encoded on a computer-readable storage medium for estimating the shape, size, and mass of fish are described. A pair of stereo cameras may be utilized to obtain right and left images of fish in a defined area. The right and left images may be processed, enhanced, and combined. Object detection may be used to detect and track a fish in images. A pose estimator may be used to determine key points and features of the detected fish. Based on the key points, a three-dimensional (3-D) model of the fish is generated that provides an estimate of the size and shape of the fish. A regression model or neural network model can be applied to the 3-D model to determine a likely weight of the fish. 1. (canceled) 2. A computer-implemented method comprising: determining, by a camera system, that an image that was generated by the camera system is of a fish that is at least partially occluded; determining, by the camera system, a respective position of each of one or more key points of the fish that are visible in the image; determining, by the camera system, a respective position of each of one or more key points of the fish that are occluded within the image based at least on the determined respective position of each of the one or more key points of the fish that are visible in the image; generating, by the camera system, a weight estimate of the fish that is at least partially occluded based on the determined respective position of each of the one or more key points of the fish that are occluded within the image; and providing, by the camera system, the weight estimate of the fish for output. 3. The method of claim 2, wherein the camera system is an underwater stereo camera system. 4. The method of claim 2, wherein the one or more key points of the fish that are occluded within the image include an eye, a nostril, or an operculum. 5. The method of claim 2, wherein the respective position of each of the ...
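As a stand-in for the regression model this entry applies to its 3-D reconstruction, the standard fisheries length-weight relation W = a * L**b illustrates the final step: key points give a body length, and the allometric formula maps length to weight. The coefficients below are illustrative, not species-calibrated, and the two-key-point length is a simplification of the 3-D model:

```python
# Sketch of the weight-regression step: body length from two key
# points, then the allometric length-weight relation W = a * L**b.
# Coefficients A, B are assumed (weight in grams, length in cm).

import math

A, B = 0.01, 3.0

def body_length_cm(snout, tail):
    """Euclidean distance between two (x, y) key points, in cm."""
    return math.dist(snout, tail)

def estimate_weight_g(snout, tail):
    return A * body_length_cm(snout, tail) ** B

w = estimate_weight_g((0.0, 0.0), (30.0, 40.0))  # 50 cm body length
```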

13-01-2022 publication date

DEVICE AND METHOD FOR GENERATING SUMMARY VIDEO

Number: US20220012500A1
Assignee:

A method for generating a summary video includes generating a user emotion graph of a user watching a first video. The method also includes obtaining a character emotion graph for a second video, by analyzing an emotion of a character in the second video that is a target of summarization. The method further includes obtaining an object emotion graph for an object in the second video, based on an object appearing in the second video. Additionally, the method includes obtaining an image emotion graph for the second video, based on the character emotion graph and the object emotion graph. The method also includes selecting at least one first scene in the second video by comparing the user emotion graph with the image emotion graph. The method further includes generating the summary video of the second video, based on the at least one first scene. 1. A method, performed by a device, of generating a summary video, the method comprising: obtaining a user image in which a user watching a first video is photographed, during playback of the first video; generating a user emotion graph of the user watching the first video, by analyzing an emotion of the user in the obtained user image; obtaining a character emotion graph for a second video, by analyzing an emotion of a character in the second video that is a target of summarization; obtaining an object emotion graph for an object in the second video, based on the object appearing in the second video; obtaining an image emotion graph for the second video, based on the character emotion graph and the object emotion graph; selecting at least one first scene in the second video by comparing the user emotion graph of the user that watched the first video with the image emotion graph for the second video; and generating the summary video of the second video, based on the at least one first scene. 2. The method of claim 1, further comprising: selecting at least one second scene in the second video, based on emotion scores in the image ...
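The graph-comparison step can be sketched by treating both emotion graphs as per-scene score sequences and keeping scenes where the video's emotional score is high and the user's measured response agrees with it. The similarity rule and thresholds are assumptions for illustration:

```python
# Sketch of emotion-graph scene selection: keep scene i when the
# image emotion score is high AND the user's emotion score tracks
# it closely. Both graphs are per-scene scores in [0, 1].

MATCH_THRESHOLD = 0.2  # assumed max gap between the two scores
MIN_IMAGE_SCORE = 0.5  # assumed floor for an emotionally salient scene

def select_scenes(user_graph, image_graph):
    return [i for i, (u, v) in enumerate(zip(user_graph, image_graph))
            if abs(u - v) <= MATCH_THRESHOLD and v >= MIN_IMAGE_SCORE]

scenes = select_scenes(user_graph=[0.9, 0.1, 0.6],
                       image_graph=[0.8, 0.7, 0.5])
```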

07-01-2021 publication date

QUANTIZED TRANSITION CHANGE DETECTION FOR ACTIVITY RECOGNITION

Number: US20210004575A1
Assignee:

A system for recognizing human activity from a video stream includes a classifier for classifying an image frame of the video stream in one or more classes and generating a class probability vector for the image frame based on the classification. The system further includes a data filtering and binarization module for filtering and binarizing each probability value of the class probability vector based on a pre-defined probability threshold value. The system furthermore includes a compressed word composition module for determining one or more transitions of one or more classes in consecutive image frames of the video stream and generating a sequence of compressed words based on the determined one or more transitions. The system furthermore includes a sequence dependent classifier for extracting one or more user actions by analyzing the sequence of compressed words and recognizing human activity therefrom. 1. A system for recognizing human activity from a video stream captured by an imaging device, the system comprising: a memory to store one or more instructions; a classifier communicatively coupled to the imaging device, and configured to: classify an image frame of the video stream in one or more classes of a set of pre-defined classes, wherein the image frame is classified based on user action in a region of interest of the image frame; and generate a class probability vector for the image frame based on the classification, wherein the class probability vector includes a set of probabilities of classification of the image frame in each pre-defined class; a data filtering and binarization module configured to filter and binarize each probability value of the class probability vector based on a pre-defined probability threshold value; a compressed word composition module configured to: determine one or more transitions of one or more classes in one or more consecutive image frames of the video stream, based on corresponding binarized probability
...
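The binarize-and-compress stages described here can be sketched directly: per-frame class probabilities are thresholded to bit patterns, and a "compressed word" is emitted only when the pattern changes between consecutive frames, so a long video collapses into a short transition sequence. The word encoding is an assumption:

```python
# Sketch of quantized transition detection: threshold each class
# probability vector to bits, then keep only the frames where the
# bit pattern changes (the class transitions).

THRESHOLD = 0.5  # pre-defined probability threshold

def binarize(prob_vector):
    return tuple(1 if p >= THRESHOLD else 0 for p in prob_vector)

def compressed_words(prob_vectors):
    """Emit one word per change of the binarized class pattern."""
    words, prev = [], None
    for vec in prob_vectors:
        bits = binarize(vec)
        if bits != prev:        # a class transition occurred
            words.append(bits)
            prev = bits
    return words

seq = compressed_words([(0.9, 0.1), (0.8, 0.2), (0.3, 0.7), (0.2, 0.9)])
```

Four frames reduce to two words here; a sequence-dependent classifier would then consume the word sequence rather than every frame.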

07-01-2021 publication date

Method and device for the characterization of living specimens from a distance

Number: US20210004577A1
Author: Ivan Amat Roldan
Assignee: Touchless Animal Metrics SL

A method and a device for the characterization of living specimens from a distance are disclosed. The method comprises: acquiring an image of a living specimen and segmenting the image, providing a segmented image; measuring a distance to several parts of said image, providing several distance measurements, and selecting a subset of those contained in the segmented image. The method also processes the segmented image and the distance measurements referred to different positions contained within the segmented image by characterizing the shape and the depth of the living specimen and by comparing a shape analysis map and a depth profile analysis map. If a result of the comparison falls within a given range, parameters of the living specimen are further determined, including posture parameters, location or correction of anatomical reference points and/or body size parameters, and/or a body map of the living specimen is represented.

07-01-2021 publication date

Method and apparatus for determining (raw) video materials for news

Number: US20210004603A1
Author: Daming Lu, Hao Tian
Assignee: Baidu USA LLC

The present disclosure discloses a method and apparatus for determining video material of news. The method for determining video material of news comprises: recognizing a person name in a news text; searching a video based on the person name, to obtain a to-be-selected video; extracting a key frame in the to-be-selected video; recognizing a person in the key frame to obtain identity information of the person; and determining the to-be-selected video as video material of news, in response to the identity information of the person conforming to the person name. The present disclosure improves the consistency between the video material of the news and the news text.

07-01-2021 publication date

INFORMATION OUTPUT METHOD, INFORMATION OUTPUT DEVICE, AND PROGRAM

Number: US20210004621A1
Assignee:

An information output method in an information output device acquires first information on an operation history of one or more devices operated by one or more users, acquires second information that identifies a user detected in the vicinity of one or more output devices, acquires third information on behavior of the user, identifies a device whose state is changed or whose state is changeable within a predetermined time among the one or more devices and an operator who performs operation relating to the change in the state based on the first information, determines an output mode and content of notification information on the identified device to the detected user based on information on the identified device and operator, the second information, and the third information, and outputs, in the determined output mode, notification information having the determined content to one or more output devices that detect the detected user. 1. An information output method in an information output device that outputs information to one or more output devices used by one or more users , the information output method comprising:acquiring first information on an operation history of one or more devices operated by the one or more users;performing processing of acquiring second information that identifies a user detected in a vicinity of the one or more output devices;acquiring third information on behavior of the user detected in the vicinity of the one or more output devices;identifying a device whose state is changed or whose state is changeable within a predetermined time among the one or more devices and an operator who performs operation relating to the change in the state based on the first information;determining an output mode and content of notification information on the identified device to the detected user based on information on the identified device and operator, the second information, and the third information; andoutputting, in the determined output mode, ...

04-01-2018 publication date

Tile Image Based Scanning for Head Position for Eye and Gaze Tracking

Number: US20180005010A1
Assignee:

An eye tracking method comprising: capturing image data by an image sensor; determining a region of interest as a subarea or disconnected subareas of said sensor which is to be read out from said sensor to perform an eye tracking based on the read out image data; wherein said determining said region of interest comprises: a) initially reading out only a part of the area of said sensor; b) searching the image data of said initially read out part for one or more features representing the eye position and/or the head position of a subject to be tracked; c) if said search for one or more features has been successful, determining the region of interest based on the location of the successfully searched one or more features, and d) if said search for one or more features has not been successful, reading out a further part of said sensor to perform a search for one or more features representing the eye position and/or the head position based on said further part. 1. A method comprising:retrieving first image data corresponding to a first portion of an image sensor;searching the first image data for one or more tracking features representing the eye position and/or the head position of a subject to be tracked;determining, based on searching the first image data, that the first image data lacks the one or more tracking features;in response to determining that the first image data lacks the one or more tracking features, retrieving second image data corresponding to a second portion of the image sensor, wherein the second portion is different than the first portion; andsearching at least the second image data for the one or more tracking features.2. The method of claim 1 , wherein the first portion of the image sensor is selected based on a location of one or more previously detected tracking features.3. The method of claim 1 , wherein the first portion of the image sensor is selected independent of locations of previously detected tracking features.4. 
The method of claim 1, ...
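The staged readout of claims 1-4 — read one partial sensor area, search it for a tracking feature, and fall back to further areas on a miss — can be sketched in Python. The `sensor`/`find_feature` interfaces and the 32x32 ROI size are assumptions for illustration, not part of the patent:

```python
def locate_roi(sensor, find_feature, tiles):
    """Read sensor tiles one at a time until a tracking feature is found.

    sensor: maps a tile (x, y, w, h) to that tile's image data.
    find_feature: returns the feature location within the tile, or None.
    tiles: ordered candidate sub-areas, e.g. the last known ROI first.
    """
    for tile in tiles:
        data = sensor(tile)
        hit = find_feature(data)
        if hit is not None:
            x, y = hit
            # Centre the next region of interest on the detected feature.
            return (max(0, tile[0] + x - 16), max(0, tile[1] + y - 16), 32, 32)
    return None  # caller falls back to a full-frame readout
```

Ordering the candidate tiles by the location of a previously detected feature (claim 2) versus a fixed scan order (claim 3) is then just a matter of how `tiles` is built.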

04-01-2018 publication date

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM FOR DETECTING OBJECT FROM IMAGE

Number: US20180005016A1
Author: NAKASHIMA Daisuke
Assignee:

An image processing apparatus includes: an input unit configured to input image data; a detection unit configured to execute a detection process that detects a plurality of objects from the input image data; an integration unit configured to, after the detection process ends, integrate the plurality of detected objects on the basis of respective positions of the plurality of detected objects in the image data; an estimation unit configured to, before the detection process ends, estimate an integration time required for the integration unit to integrate the plurality of detected objects; and a termination unit configured to terminate the detection process by the detection unit on the basis of the estimated integration time and an elapsed time of the detection process by the detection unit. 1. An image processing apparatus, comprising: an input unit configured to input image data; a detection unit configured to execute a detection process that detects a plurality of objects from the input image data; an integration unit configured to, after the detection process ends, integrate the plurality of detected objects on the basis of respective positions of the plurality of objects in the image data; an estimation unit configured to, before the detection process ends, estimate an integration time required for the integration unit to integrate the plurality of detected objects; and a termination unit configured to terminate the detection process by the detection unit on the basis of the estimated integration time and an elapsed time of the detection process by the detection unit. 2. The image processing apparatus according to claim 1, wherein the termination unit terminates the detection process by the detection unit in a case where a total time obtained by adding the integration time and the elapsed time is equal to or longer than a predetermined time. 3. The image processing apparatus according to claim 1, wherein the estimation unit estimates, at a point of time ...
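The termination criterion of claim 2 (stop detecting once elapsed time plus the predicted integration time reaches the time budget) reduces to a simple predicate. The linear integration-cost model below is an assumption for illustration, not taken from the patent:

```python
def estimate_integration_time(num_detected, cost_per_object_s):
    """Assumed model: integration cost grows linearly with objects found so far."""
    return num_detected * cost_per_object_s

def should_terminate(elapsed_s, estimated_integration_s, budget_s):
    """Claim 2: terminate the detection process when elapsed time plus the
    estimated integration time is equal to or longer than the predetermined time."""
    return elapsed_s + estimated_integration_s >= budget_s
```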

04-01-2018 publication date

SYSTEM AND METHOD FOR PARTIALLY OCCLUDED OBJECT DETECTION

Number: US20180005025A1
Assignee:

A method for partially occluded object detection includes obtaining a response map for a detection window of an input image, the response map based on a trained model and including a root layer and a parts layer. The method includes determining visibility flags for each root cell of the root layer and each part of the parts layer. The visibility flag is one of visible or occluded. The method includes determining an occlusion penalty for each root cell with a visibility flag of occluded and for each part with a visibility flag of occluded. The occlusion penalty is based on a location of the root cell or the part with respect to the detection window. The method determines a detection score for the detection window based on the visibility flags and the occlusion penalties and generates an estimated visibility map for object detection based on the detection score. 1. A computer-implemented method for partially occluded object detection , comprising:obtaining a response map for a detection window of an input image, wherein the response map is based on a trained model and the response map includes a root layer and a parts layer;determining visibility flags for each root cell of the root layer and each part of the parts layer based on the response map, wherein the visibility flag is one of visible or occluded;determining an occlusion penalty for each root cell with a visibility flag of occluded and for each part with a visibility flag of occluded, wherein the occlusion penalty is based on a location of the root cell or the part with respect to the detection window;determining a detection score for the detection window based on the visibility flags and the occlusion penalties; andgenerating an estimated visibility map for object detection based on the detection score.2. The computer-implemented method of claim 1 , wherein the trained model is a deformable parts model.3. 
The computer-implemented method of claim 1, wherein obtaining the response map comprises determining a ...
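The scoring rule of claim 1 — visible root cells and parts contribute their response, occluded ones contribute a location-dependent penalty — can be sketched as follows. The zero visibility threshold and the sign convention are assumptions, not values from the patent:

```python
def detection_score(cell_scores, penalties, threshold=0.0):
    """Flag each root cell/part visible when its response meets the threshold;
    occluded cells subtract their occlusion penalty instead of adding a response.

    cell_scores: filter response per cell; penalties: occlusion penalty per cell.
    Returns (score, visibility flags).
    """
    flags = [s >= threshold for s in cell_scores]
    score = sum(s if v else -p for s, v, p in zip(cell_scores, flags, penalties))
    return score, flags
```

The returned flags are exactly the estimated visibility map the detection window would report.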

04-01-2018 publication date

Systems and Methods of Updating User Identifiers in an Image-Sharing Environment

Number: US20180005062A1
Author: Aguera-Arcas Blaise
Assignee:

Computer-implemented methods and systems of updating user identifiers in an image-sharing environment include features for facilitating blocking, permitting, sharing and/or modifying content such as images and videos. User identification vectors providing data representative of a user and information about one or more facial characteristics of the user are broadcasted by a modular computing device. Information about one or more additional characteristics of the user (e.g., body characteristics and/or contextual characteristics) as determined from images of the user obtained by one or more image capture devices are received. An updated user identification vector including the information about one or more additional characteristics of the user is stored at and subsequently broadcasted by the modular computing device. 1. A computer-implemented method of updating user identifiers in an image-sharing environment, comprising: broadcasting, by at least one modular computing device, a user identification vector providing data representative of a user and information about one or more facial characteristics of the user; receiving, by the at least one modular computing device, information about one or more additional characteristics of the user, wherein the information about one or more additional characteristics of the user is determined from images of the user obtained by one or more image capture devices; storing, by the at least one modular computing device, an updated user identification vector including the information about one or more additional characteristics of the user; and broadcasting, by the at least one modular computing device, the updated user identification vector. 2. The computer-implemented method of claim 1, wherein the information about one or more additional characteristics of the user comprises data identifying body characteristics of the user including one or more of a user's current clothes, accessories, body shape, body ...

02-01-2020 publication date

APPARATUS AND METHOD FOR DETECTING PROXIMITY OF USER

Number: US20200005030A1
Author: LEE Kye Chul
Assignee: LG ELECTRONICS INC.

According to an embodiment of the present disclosure, a user proximity detection device includes a receiver configured to receive a signal in a target frequency band, an inference-purpose data generator configured to measure an intensity of a signal received through the receiver and generate inference-purpose data based on the measured intensity of the signal, and a proximity detector configured to input the inference-purpose data into a human body proximity inference machine learning model to determine whether a human body is in proximity. The target frequency band is selected from a broadcast frequency band. At least one of an autonomous vehicle, a user terminal or a server according to an embodiment of the present disclosure may be linked or converged with an artificial intelligence module, an Unmanned Aerial Vehicle (UAV), a robot, an Augmented Reality (AR) device, Virtual Reality (VR), and devices related to 5G service. 1. A user proximity detection device comprising: a receiver configured to receive a signal in a target frequency band; an inference-purpose data generator configured to measure an intensity of a signal received through the receiver and generate inference-purpose data based on the measured intensity of the signal; and a proximity detector configured to input the inference-purpose data into a human body proximity inference machine learning model to determine whether a human body is in proximity, wherein the target frequency band is selected from a broadcast frequency band. 2. The user proximity detection device of claim 1, further comprising a first target frequency selector configured to obtain data through frequency hopping, perform Forward Discrete Fourier Transform on the obtained data to obtain a signal intensity value, and select the target frequency band based on the obtained signal intensity value. 3.
The user proximity detection device of claim 2, wherein, as local information is obtained, the first target frequency ...
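After hopping across candidate broadcast bands and measuring per-band intensity, claim 2's selector picks the target band from those measurements. Which extreme the patent prefers is not stated in this excerpt; the sketch below assumes the strongest band, on the reasoning that a strong carrier shows body-induced perturbations most clearly:

```python
def select_target_band(band_intensity):
    """band_intensity: {band_hz: measured signal intensity} gathered by
    frequency hopping. Returns the band with the highest measured intensity
    (assumed selection rule, not confirmed by the patent excerpt)."""
    return max(band_intensity, key=band_intensity.get)
```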

02-01-2020 publication date

LEFT OBJECT DETECTING SYSTEM

Number: US20200005044A1
Author: Nakamura Kohta
Assignee:

A system according to an embodiment includes an analyzing device that includes a first database to store image analysis information identifying a person and an object, and determines that the object has been left behind, by using the image analysis information and performing image analysis on video footage captured by cameras installed in locations, associating the identified person with an object carried by the person, and comparing, at timings, the video footage captured by the cameras; and a communication device that includes a second database to store, in association with each other, usage information and ID information of each of users, and transmit, when the analyzing device has determined that the object has been left behind, an alert to the user or a predetermined destination of notification associated with the object left behind. 1. A left object detecting system, comprising: an image analyzing device that includes a first database configured to store at least image analysis information identifying a person and an object in an image, and determines that the object has been left behind, by using the image analysis information and performing image analysis on video footage captured by cameras installed in a plurality of locations, associating the identified person with an object carried by the person, and comparing, at a plurality of timings, the video footage captured by the cameras; and a communication device that includes a second database configured to store, in association with each other, usage information and ID information of each of a plurality of users, and transmit, when the image analyzing device has determined that the object has been left behind, an alert to the user or a predetermined destination of notification associated with the object left behind. 2.
The left object detecting system according to claim 1, wherein the image analyzing device further comprises: a video footage information gathering unit configured to gather a plurality of the ...
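The core analysis step — associate each object with the person carrying it, then flag the object when later footage keeps showing it without that person — might look like this single-pass sketch. The `(time, person_id, object_id)` sighting tuples are an assumed stand-in for the image-analysis output:

```python
def left_behind(sightings, timeout):
    """sightings: chronological (time, person_id, object_id) tuples; either
    id may be None when only one of the pair appears in a frame. An object
    whose owner was last seen more than `timeout` before the object's own
    last sighting is flagged as left behind."""
    last_person = {}   # person_id -> time last seen
    last_object = {}   # object_id -> (time last seen, associated owner)
    for t, person, obj in sightings:
        if person is not None:
            last_person[person] = t
        if obj is not None:
            # Keep the previously associated owner when the object is seen alone.
            owner = person if person is not None else last_object.get(obj, (t, None))[1]
            last_object[obj] = (t, owner)
    alerts = []
    for obj, (t_obj, owner) in last_object.items():
        t_owner = last_person.get(owner, t_obj)
        if t_obj - t_owner > timeout:
            alerts.append(obj)
    return alerts
```

The alert list is what the communication device would match against the second database to notify the associated user.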

02-01-2020 publication date

VISION-2-VISION CONTROL SYSTEM

Number: US20200005076A1
Author: DILL DON K.
Assignee:

A method for controlling an object space having an associated object environment includes the steps of defining a target set of coordinates in the object space, recognizing the presence of a predetermined object in the object space, and determining a coordinate location of the recognized predetermined object in the object space. The method further includes determining the spatial relationship between the recognized predetermined object and the target set of coordinates, comparing the spatial relationship with predetermined spatial relationship criteria, and, if the determined spatial relationship criteria falls within the predetermined spatial relationship criteria, modifying the object space environment. 1.-20. (canceled) 21. A method for controlling an environment using an audio stream, the method comprising: receiving at least one audio stream; determining audio characteristics of the at least one audio stream; creating one or more audio attributes based on the audio characteristics of the at least one audio stream; mapping the one or more audio attributes to at least one control device; and controlling the at least one control device based on the one or more audio attributes to modify an object space environment. 22. The method of claim 21, wherein the at least one audio stream is received over a data network. 23. The method of claim 21, wherein the at least one audio stream is received from at least one audio source. 24. The method of claim 23, further comprising receiving one or more labels corresponding to the at least one audio source or corresponding to a type of the at least one audio source. 25. The method of claim 24, wherein the one or more labels are used in creating the one or more audio attributes and mapping the one or more audio attributes to the at least one control device. 26.
The method of claim 21, wherein the one or more audio attributes include at least one of beat, pitch or frequency, key, time, volume ...
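The mapping step of claim 21 — audio attributes routed to control devices — is essentially a dispatch table. The attribute and device names below are illustrative only, not the patent's vocabulary:

```python
def map_attributes(attrs, mappings):
    """attrs: audio attributes extracted from the stream, e.g.
    {'beat': 120, 'volume': 0.8}. mappings: {attribute: (device, transform)}.
    Returns the (device, command value) pairs to issue."""
    commands = []
    for name, value in attrs.items():
        if name in mappings:
            device, transform = mappings[name]
            commands.append((device, transform(value)))
    return commands
```

Controlling the devices on each new attribute batch then modifies the object space environment in time with the audio.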

02-01-2020 publication date

Systems and Methods of Person Recognition in Video Streams

Number: US20200005079A1
Assignee:

The various implementations described herein include systems and methods for recognizing persons in video streams. In one aspect, a method includes: (1) obtaining a live video stream; (2) detecting person(s) in the stream; and (3) determining, from analysis of the live video stream, first information of the detected person(s); (4) determining, based on the first information, that the first person is not known to the computing system; (5) in accordance with the determination that the first person is not known: (a) storing the first information; and (b) requesting a user to classify the first person; and (6) in accordance with a determination that a response was received classifying the first person as a stranger, deleting the stored first information. 1. A method comprising: at a computing system having one or more processors and memory: obtaining a live video stream; detecting a first person in the live video stream; determining, from analysis of the live video stream, first information that identifies an attribute of the first person; determining, based on at least some of the first information, that the first person is not a known person to the computing system; in accordance with the determination that the first person is not a known person: storing at least some of the first information; and requesting a user to classify the first person; and in accordance with (i) a determination that a response was not received from the user, or (ii) a determination that a response was received from the user classifying the first person as a stranger, deleting the stored first information. 2. The method of claim 1, wherein determining the first information comprises: selecting one or more images of the first person from the live video stream; and characterizing a plurality of features of the first person based on the one or more images. 3.
The method of claim 2, further comprising: identifying a pose of the first person in each of the one or more images; and for ...
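The store/ask/delete flow of claim 1 can be sketched as follows; the dict-backed store and the `ask_user` callback (returning `None` to model "no response") are assumptions for illustration:

```python
def handle_unknown_person(info, store, ask_user):
    """Persist the unknown person's first information, request a user
    classification, and delete the stored record when no answer arrives
    or the answer is 'stranger'. Returns the stored key when retained."""
    key = store['next_id']
    store['next_id'] += 1
    store[key] = info
    label = ask_user()                 # None models "no response received"
    if label is None or label == 'stranger':
        del store[key]                 # privacy step: discard stranger data
        return None
    return key                         # retained as a newly classified person
```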

07-01-2021 publication date

SYSTEM AND METHOD FOR INTERVIEW TRAINING WITH TIME-MATCHED FEEDBACK

Number: US20210004768A1
Assignee:

The present disclosure generally relates to interview training and providing interview feedback. An exemplary method comprises: at an electronic device that is in communication with a display and one or more input devices: receiving, via the one or more input devices, media data corresponding to a user's responses to a plurality of prompts; analyzing the media data; and while displaying, on the display, a media representation of the media data, displaying a plurality of analysis representations overlaid on the media representation, wherein each of the plurality of analysis representations is associated with an analysis of content located at a given time in the media representation and is displayed in coordination with the given time in the media representation. 1.-15. (canceled) 16. A computer-implemented method, comprising: recording data comprising speech by a user; analyzing the recorded data to calculate a metric related to the speech; determining that the calculated metric exceeds a predefined threshold; and rendering a user interface comprising: an interactive representation of the recorded data for providing a playback of the recorded data; the calculated metric; and an indication of whether the calculated metric exceeds the predefined threshold. 17. The method of claim 16, further comprising: rendering an overall analysis score, wherein the overall analysis score is at least partially based on the calculated metric. 18. The method of claim 16, wherein the recorded data comprises audio data, image data, video data, or any combination thereof. 19. The method of claim 16, wherein the metric related to the speech comprises a number of words per minute. 20. The method of claim 16, wherein the indication of whether the calculated metric exceeds the predefined threshold comprises an indication of whether the speech is of an appropriate talking speed. 21.
The method of claim 16, wherein the indication of whether the calculated ...
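Claims 19-20 name a concrete metric, words per minute, checked against a predefined threshold. A minimal sketch (the 160 wpm default is an assumed value, not from the patent):

```python
def talking_speed_feedback(transcript, duration_s, max_wpm=160):
    """Compute words per minute from a transcript and recording duration,
    and flag whether it exceeds the predefined threshold."""
    wpm = len(transcript.split()) / (duration_s / 60.0)
    return wpm, wpm > max_wpm
```

The boolean is what the user interface would render as the "appropriate talking speed" indication, time-matched to the playback position.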

02-01-2020 publication date

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM

Number: US20200005101A1
Author: TSUJI Ryosuke
Assignee:

An image processing apparatus comprises a detection circuit that, by referencing dictionary data acquired by machine learning corresponding to a target photographic subject to be detected in an obtained image, detects the target photographic subject; a selection unit that selects one of a plurality of dictionary data items corresponding to the target photographic subject; and a control circuit that, in a case where a detection evaluation value obtained when the photographic subject is detected by using the dictionary data selected by the selection unit is lower than a predetermined value, controls the detection circuit to detect the target photographic subject by using the selected dictionary data and dictionary data different to the selected dictionary data. 1. An image processing apparatus comprising: at least one processor or circuit configured to function as the following units: an analysis unit configured to select one among a plurality of dictionary data items and, by using the selected dictionary data, to perform analysis of an obtained image, wherein the plurality of dictionary data items includes at least first dictionary data and second dictionary data, and the analysis unit, even in a case where a detection score for the photographic subject obtained using the first dictionary data is lower than a threshold or the photographic subject cannot be detected using the first dictionary data, performs image analysis again by using the first dictionary data, and, in a case where a detection score for the photographic subject obtained using the second dictionary data is lower than a threshold or the photographic subject cannot be detected using the second dictionary data, performs image analysis again by using dictionary data different to the second dictionary data. 2. The image processing apparatus according to claim 1, wherein the dictionary data different to the second dictionary data is the first dictionary data. 3.
The image processing apparatus according to claim 1 ...
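A simplified sketch of the fallback analysis: try the selected dictionary first and, when its detection score falls below the threshold, retry with the remaining dictionaries and keep the best result. The `detect(image, dictionary) -> score` interface is an assumption, and the patent's per-dictionary retry rules are collapsed into one loop here:

```python
def detect_with_fallback(image, dictionaries, detect, threshold):
    """Return (dictionary, score) for the best detection, preferring the
    selected (first) dictionary when it already clears the threshold."""
    best_dict, best = dictionaries[0], detect(image, dictionaries[0])
    if best >= threshold:
        return best_dict, best
    # Low score: retry detection with the other dictionary data as well.
    for d in dictionaries[1:]:
        s = detect(image, d)
        if s > best:
            best_dict, best = d, s
    return best_dict, best
```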

02-01-2020 publication date

VACANCY MANAGEMENT SYSTEM, VACANCY MANAGEMENT METHOD, AND PROGRAM

Number: US20200005199A1
Author: SUGAYA Shunji
Assignee:

An object of the present invention is to provide a vacancy management system, a vacancy management method, and a program that improve convenience while suppressing costs. A vacancy management system for managing a vacant seat by performing image analysis on a captured image captured by a camera acquires the captured image, performs image analysis on the acquired captured image, detects how many chairs exist at a table based on an analysis result of the image analysis, detects whether a person is sitting in the chairs based on the analysis result of the image analysis, and displays vacant seat information based on the number of detected chairs and the presence or absence of the detected person. 1. A vacancy management system that manages a vacant seat by performing image analysis on a captured image captured by a camera, the vacancy management system comprising: a captured image acquiring unit that acquires the captured image; an image analysis unit that performs image analysis on the acquired captured image; a first detecting unit that detects how many chairs exist at a table based on an analysis result of the image analysis; a second detecting unit that detects whether a person is sitting in the chairs based on the analysis result of the image analysis; a third detecting unit that detects attribute information of the person sitting in the chairs based on the analysis result of the image analysis, the attribute information including order details and a staying time; an estimating unit that estimates a remaining staying time of a customer from the number of persons at one table, a detecting result of the order details and the staying time, and a detecting result of the gender of the person sitting in the chairs, by referring to data regarding a staying time according to a number of customers, a staying time according to order details, and a staying time according to gender of a customer, for customers up to now; and a vacant seat ...
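The first two detecting units reduce to per-table counting: vacant seats are detected chairs minus detected occupants. A minimal sketch of the figures the vacant seat information display unit would be fed:

```python
def vacancy(detections):
    """detections: per-table (chairs_detected, occupants_detected) pairs
    from the image-analysis step. Returns (vacant seats per table, total)."""
    per_table = [max(chairs - people, 0) for chairs, people in detections]
    return per_table, sum(per_table)
```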

04-01-2018 publication date

Image data detection for micro-expression analysis and targeted data services

Number: US20180005272A1
Assignee: PayPal Inc

There are provided systems and methods for image data detection for micro-expression analysis and targeted data services. A user may utilize a communication device, where during use of the communication device, one or more images of the user are captured by the communication device. The image(s) may be analyzed to identify at least one micro-expression of the user during use of the communication device, for example, at a time of performing online purchasing or messaging with other merchants or users. The micro-expression may then be used to determine a user state for the user at the time of use of the communication device, where an action or process may be executed with the communication device in response to the user state. In various embodiments, a service provider may be utilized to determine the user's state based on the micro-expression and execute an action based on the user's state.

07-01-2021 publication date

METHOD AND DEVICE FOR THREE-DIMENSIONAL RECONSTRUCTION

Number: US20210004942A1
Author: Zhang Long, Zhou Wei, Zhou Wen
Assignee: ArcSoft Corporation Limited

The disclosure provides a method and device for three-dimensional reconstruction, applied to the field of image processing. The method includes: obtaining a first depth map, which is photographed by a first photographic device, and obtaining a second depth map, which is photographed by a second photographic device; merging the first depth map with a first three-dimensional model according to a position of the first photographic device to obtain a second three-dimensional model; and merging the second depth map with the second three-dimensional model according to a position of the second photographic device to obtain a third three-dimensional model. 1. A method for measurement, comprising: obtaining a three-dimensional model of a measured object; fitting a pre-stored measured three-dimensional model to the three-dimensional model of the measured object; and measuring the three-dimensional model of the measured object according to the pre-stored measured three-dimensional model and the fitting. 2. The method according to claim 1, wherein: the pre-stored measured three-dimensional model comprises feature measurement markers; and measuring the three-dimensional model of the measured object according to the pre-stored measured three-dimensional model and the fitting process comprises measuring the three-dimensional model of the measured object according to the feature measurement markers and the fitting. 3. The method according to claim 2, wherein: the measured object is a human body; the feature measurement markers are marking points of the pre-stored measured three-dimensional model; one or more feature measurement markers are located on a body circumference of the pre-stored measured three-dimensional model; and calculating fitted heights of the one or more feature measurement markers after the fitting according to heights of the feature measurement markers on the pre-stored measured three-dimensional model and the fitting; obtaining an envelope curve located on the ...

07-01-2021 publication date

Devices and methods for preventing automotive collisions with wildlife

Number: US20210005086A1
Author: Cohen Jessica
Assignee: Lake of Bays Semiconductor Inc.

Described herein is an autonomous vehicle collision avoidance system designed to detect and deter animals from the road. An automotive ‘deer whistle’ is integrated into an autonomous sensor suite. The sensor suite detects animals using cameras, identifies them and their hearing range using machine learning algorithms, and deters them by emitting animal-specific sound pulses. The system also notifies the driver, may trigger a braking or honking sequence, adjusts subsequent noise emissions based on animal feedback, and collects data on collisions or near collisions for analysis at a central repository. Also described herein is a business model to encourage consumer adoption of the device, wherein the device is distributed to consumers as an insurance policy incentive. 1. An automotive collision avoidance system comprised of: a. an ultrasonic pulse emitting device, which is further comprised of a siren waveform generator, an output driver, an electro-acoustic transducer, an amplifier, a processor mounted on a printed circuit board, a controller, networking hardware, and a power source, and which emits a range of sound frequencies between 10-100 kHz in a plurality of pulse patterns, b. an impact and debris resistant shell, c. machine learning algorithms, d. sensors, which may include infrared cameras, RGB (visible light) cameras, LiDAR sensors, radar sensors, or a combination thereof, e. a connection to a local computer with a display, such as a driver's smartphone, or the vehicle's onboard computer, f. a connection to a remote database, from said device or from the local computer, whereby the sensors feed image data from the environment to the processor, whereby the machine learning algorithms detect and classify an animal or animals in the device's field of view; whereby upon receipt of instructions from the machine learning algorithm regarding the animal's presence and hearing range, the device initiates a biologically-appropriate ultrasound pulse or pulses, may also initiate an ...

02-01-2020 publication date

NORMALIZED METADATA GENERATION DEVICE, OBJECT OCCLUSION DETECTION DEVICE AND METHOD

Number: US20200005490A1
Assignee:

Disclosed are a normalized metadata generation device, and an object occlusion detection device and method. A normalized metadata generation method includes generating a multi-ellipsoid based three-dimensional human model using perspective features of a plurality of two-dimensional images obtained by the multiple cameras, performing scene calibration based on the three-dimensional human model to normalize object information of the object included in the two-dimensional images, and generating normalized metadata of the object from the two-dimensional images on which the scene calibration is performed. 1. A normalized metadata generation method, the method being performed by a metadata generation device of a multi-camera-based video surveillance system including different kinds of cameras, the method comprising: generating a multi-ellipsoid based three-dimensional (3D) human model using perspective features of a plurality of two-dimensional (2D) images obtained by the multiple cameras; performing scene calibration based on the three-dimensional human model to normalize object information of the object included in the two-dimensional images; and generating normalized metadata of the object from the two-dimensional images on which the scene calibration is performed. 2. The normalized metadata generation method of claim 1, wherein the generating of the three-dimensional human model generates a human model having a height from a foot position using three ellipsoids including a head, a body, and a leg in 3D world coordinates. 3. The normalized metadata generation method of claim 2, wherein the ellipsoid is back-projected onto a two-dimensional space to match an actual object to perform shape matching. 4. The normalized metadata generation method of claim 3, wherein a moving object region is detected by background modeling using a Gaussian mixture model (GMM) and a detected shape is normalized, to perform the shape matching. 5. The normalized metadata ...

04-01-2018 publication date

METHOD FOR COLLECTING AND SHARING LIVE VIDEO FEEDS OF EMPLOYEES WITHIN A DISTRIBUTED WORKFORCE

Number: US20180005500A1
Assignee:

One variation of a method for collecting and sharing substantially real-time video feeds of employees within a distributed workforce includes: distributing a first subset of employee video feeds to a first instance of an employee portal; distributing a second subset of employee video feeds to a second instance of the employee portal; distributing the manager video feed to the first instance and the second instance of the employee portal; distributing the set of employee video feeds to an instance of the manager portal; in response to initiation of a recess for the first employee: replacing the first employee video feed with a recess icon in the second instance of the employee portal and the instance of the manager portal; initiating a timer for the recess; and in response to expiration of the timer, reactivating the first employee video feed. 1. A method comprising:accessing a set of employee video feeds from a set of cameras coupled to employee computing devices executing instances of an employee portal;accessing a manager video feed from a manager camera coupled to a manager computing device executing an instance of a manager portal;distributing a first subset of employee video feeds to a first instance of the employee portal executing on a first employee computing device associated with a first employee, the first subset of employee video feeds comprising a second employee video feed of a second employee and a third employee video feed of a third employee;distributing a second subset of employee video feeds to a second instance of the employee portal executing on a second employee computing device associated with a second employee, the second subset of employee video feeds comprising a first employee video feed of the first employee and the third employee video feed of the third employee;distributing the manager video feed to the first instance of the employee portal and the second instance of the employee portal;distributing the set of employee video feeds to 
...
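The recess flow described above — swap the employee's feed for a recess icon, start a timer, reactivate the feed on expiry — can be modeled with a small state object. This is a polled sketch; the real portal would presumably be event-driven:

```python
import time

class FeedState:
    """Display state of one employee video feed in the portal."""

    def __init__(self):
        self.display = 'video'
        self._recess_until = None

    def start_recess(self, duration_s, now=None):
        """Replace the feed with a recess icon and start the recess timer."""
        now = time.monotonic() if now is None else now
        self.display = 'recess_icon'
        self._recess_until = now + duration_s

    def poll(self, now=None):
        """Reactivate the video feed once the recess timer has expired."""
        now = time.monotonic() if now is None else now
        if self._recess_until is not None and now >= self._recess_until:
            self.display = 'video'
            self._recess_until = None
        return self.display
```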

03-01-2019 publication date

WIRELESS NETWORK WITH AWARENESS OF HUMAN PRESENCE

Number: US20190005317A1
Author: UHLEMANN Stefan
Assignee:

Network devices (e.g., a modem, router, wireless user device, laptop, personal digital assistant or other similar wireless network devices) can be configured to monitor and detect a biological presence. In response to determining a biological presence (e.g., a human being or other similar being), a network device can alter parameters related to the generation of radio frequency (RF) energy in order to further ensure or guarantee safety from potential radiation as the number and power of network devices within a certain premises or vicinity increases. 1. An apparatus employed in a modem device, comprising: one or more processors configured to: determine whether a biological presence is within a proximity based on a set of predetermined criteria; modify a set of modem parameters in response to the biological presence being detected; and reduce an amount of transmitted radio frequency (RF) energy below a predetermined threshold in response to a modification of the set of modem parameters; and a radio frequency interface configured to receive or transmit data over a radio interface. 2. The apparatus of claim 1, wherein the set of predetermined criteria comprise at least one of: an optic differential, a motion differential, an audio property, a received power signal, a received communication message, an awake message, a user activity, a temperature differential, or a distance of the biological presence associated with the optic differential, the motion differential, the audio property, or the temperature differential. 3. The apparatus of claim 1, wherein the set of modem parameters comprise at least one of: a transmission property of a physical layer, a frequency band, an amount of airtime, a directivity pattern, a physical connection, a medium of communication or a standard of communication. 4. The apparatus of claim 1, ...
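The processors' three configured steps amount to capping transmit power whenever a presence is detected. A one-function sketch (the 10 dBm cap is an assumed placeholder, not a value from the patent or any regulation):

```python
def adjust_tx_power(current_dbm, presence_detected, cap_dbm=10.0):
    """Return the transmit power to use: when a biological presence is
    detected, the modem parameter is modified so radiated RF power stays
    at or below the cap; otherwise the current setting is kept."""
    if presence_detected:
        return min(current_dbm, cap_dbm)
    return current_dbm
```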

03-01-2019 publication date

LIVING BODY DETECTION DEVICE, LIVING BODY DETECTION METHOD, AND RECORDING MEDIUM

Number: US20190005318A1
Assignee:

A living body detection device () includes an image acquisition unit (), a determination unit (), and a detection unit (). The image acquisition unit () acquires a first image in which a subject irradiated by light in a first wavelength range is imaged, and a second image in which the subject irradiated by light in a second wavelength range is imaged, the second wavelength range being different from the first. The determination unit () determines whether the relation between the luminance of the subject imaged in the first image and the luminance of the subject imaged in the second image is a relation exhibited by a living body. The detection unit () detects that the subject is a living body in a case where the determination unit () has determined that the relation is one exhibited by a living body.

1-13. (canceled)

14. A living body detection device comprising: an illumination unit that comprises multiple light sources which irradiate infrared light in different wavelength ranges, the illumination unit illuminating a subject and an indicator by infrared light; an imaging unit that captures the subject illuminated by the infrared light; a memory storing instructions; and at least one processor configured to process the instructions to: acquire the captured images of the subject illuminated by the infrared light; and determine whether the subject in the captured images is a living body or not based on a relation between the luminance of the subject illuminated with the infrared light in different wavelength ranges, wherein the at least one processor obtains the luminance of the infrared light detected by the indicator, and determines the wavelength range of the infrared light illuminating the subject based on the luminance of the infrared light.

15. The living body detection device according to claim 14, wherein the at least one processor is further configured to: give notice in a case where it has been determined that the subject is not a living body.

16. The living ...
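The core determination step, deciding liveness from the luminance relation between two infrared wavelength ranges, can be sketched as a simple ratio test. This is only an illustrative assumption: the patent states that the relation must be one exhibited by a living body but does not publish concrete bounds, so `LIVE_RATIO_MIN` and `LIVE_RATIO_MAX` here are hypothetical placeholders.

```python
# Hypothetical reflectance-ratio bounds for living skin at the two
# infrared wavelengths; the claims do not specify numeric values.
LIVE_RATIO_MIN = 1.1
LIVE_RATIO_MAX = 1.6

def is_living_body(luma_wavelength_1: float, luma_wavelength_2: float) -> bool:
    """Determine whether the luminance relation between the subject imaged
    under the first and second infrared wavelength ranges matches the
    relation exhibited by a living body."""
    if luma_wavelength_2 <= 0:
        return False  # no usable signal in the second image
    ratio = luma_wavelength_1 / luma_wavelength_2
    return LIVE_RATIO_MIN <= ratio <= LIVE_RATIO_MAX
```

A photograph or display replaying a face typically reflects both wavelengths near-identically, so its ratio falls outside the living-body band and the check fails.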

03-01-2019 publication date

ELECTRONIC DEVICE AND RELEASING METHOD OF IMAGE CAPTURING MODULE THEREOF

Number: US20190005319A1
Assignee: COMPAL ELECTRONICS, INC.

A releasing method of an image capturing module of an electronic device, wherein the electronic device includes a first machine body and an image capturing module pivotally connected to the first machine body, and the image capturing module is restrained by the first machine body so as to approach the first machine body. A posture estimation procedure is performed, which includes sensing a first included angle between the first machine body and the gravity direction to determine whether the electronic device is in a tent position. When the electronic device is determined to be in the tent position, it is determined whether a release instruction is received, so as to decide whether to perform a release procedure; when the release procedure is performed, the first machine body releases the image capturing module, so that the image capturing module is turned up relative to the first machine body.

1. A releasing method of an image capturing module of an electronic device, comprising: providing an electronic device, the electronic device comprising a first machine body and an image capturing module pivotally connected to the first machine body, the image capturing module being restrained by the first machine body so as to approach the first machine body; executing a posture estimation procedure, wherein the posture estimation procedure comprises sensing a first included angle between the first machine body and a gravity direction for determining whether the electronic device is in a tent position; and determining whether the electronic device receives a release instruction when the electronic device is determined to be in the tent position, so as to decide whether to perform a release procedure, wherein the image capturing module is released when the release procedure is performed, so that the image capturing module is turned up relative to the first machine body.

2. The releasing method of an image capturing module of an electronic device according ...
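The two-stage decision in the claim (posture estimation, then release only on instruction) can be sketched as below. The angle window defining the "tent position" is a hypothetical assumption; the claim only requires comparing the first included angle against some criterion, without stating the numbers.

```python
# Hypothetical angle window for the "tent position"; the claim does not
# publish concrete bounds for the first included angle.
TENT_ANGLE_MIN_DEG = 30.0
TENT_ANGLE_MAX_DEG = 60.0

def in_tent_position(first_included_angle_deg: float) -> bool:
    """Posture estimation: test the sensed included angle between the
    first machine body and the gravity direction."""
    return TENT_ANGLE_MIN_DEG <= first_included_angle_deg <= TENT_ANGLE_MAX_DEG

def should_release(first_included_angle_deg: float,
                   release_instruction_received: bool) -> bool:
    """Perform the release procedure only when the device is in the tent
    position AND a release instruction has been received."""
    return (in_tent_position(first_included_angle_deg)
            and release_instruction_received)
```

Note the ordering matters for the claim: the release instruction is only consulted after the posture estimation has confirmed the tent position.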
