Total found: 1005. Displayed: 199.

23-02-2006 publication date

METHOD AND APPARATUS FOR MACHINE-VISION

Number: CA0002573728A1
Assignee:

12-06-2013 publication date

Posture estimation device and posture estimation method

Number: CN103155003A
Assignee:

The present invention is a posture estimation device for estimating a wide variety of 3-dimensional postures by using a skeletal model. The posture estimation device (200) has: a skeletal backbone estimation unit (230) for estimating the position of a feature location of a person within an acquired image; a location extraction unit (240) which generates a likelihood map indicating the certainty that a location other than the feature location of the person exists in the acquired image on the basis of the position of the feature location of the person; and a skeletal model evaluation unit (250) for evaluating, on the basis of the likelihood map, a candidate group which includes a plurality of 2-dimensional skeletal models as candidates and such that each 2-dimensional skeletal model is configured from a line group representing each location and a point group representing coupling between each location and corresponds to one posture of the person.
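As an illustration of the evaluation step described in this abstract, the sketch below scores a set of 2D skeletal-model candidates against per-part likelihood maps and keeps the best one. The map layout, the candidate representation and the helper names are assumptions made for the example, not the patent's actual implementation.

```python
import numpy as np

def limb_score(likelihood_map, p0, p1, samples=20):
    """Mean likelihood sampled along the 2D line segment p0 -> p1 (x, y pixels)."""
    t = np.linspace(0.0, 1.0, samples)[:, None]
    pts = (1.0 - t) * np.asarray(p0, float) + t * np.asarray(p1, float)
    rows = np.clip(pts[:, 1].astype(int), 0, likelihood_map.shape[0] - 1)
    cols = np.clip(pts[:, 0].astype(int), 0, likelihood_map.shape[1] - 1)
    return float(likelihood_map[rows, cols].mean())

def best_skeleton(likelihood_maps, candidates):
    """candidates: list of skeletons, each a dict part_name -> (p0, p1) limb segment.
    Returns the index of the candidate whose limbs best match the likelihood maps."""
    scores = [sum(limb_score(likelihood_maps[part], p0, p1)
                  for part, (p0, p1) in skel.items())
              for skel in candidates]
    return int(np.argmax(scores))
```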

15-05-2013 publication date

Face posture estimating device, face posture estimating method

Number: CN101952853B
Assignee:

11-11-2005 publication date

METHOD FOR MEASURING THE POSITION AND/OR ORIENTATION OF OBJECTS BY MEANS OF AN IMAGE PROCESSING METHOD

Number: FR0002869983A1
Assignee:

The invention relates to a method for measuring the position and/or orientation of objects by means of an image processing method. To this end, the self-shadow produced by a defined non-planar calibration body is first captured in a calibration image, and shadow-correction parameters are then determined from the detected self-shadow. To measure the position and/or orientation, at least one object to be measured is placed in the field of a camera (1) and in the area lit by an illumination source (2) in order to capture an image of the object. The image of the object is then corrected using the determined parameters, and the position and/or orientation of the object is derived from it. Based on the self-shadow of a known calibration object (3), the invention makes it possible to measure the position and/or orientation of objects in space simply and reliably using a single image. The boundary lines of the deformed shadow of the object to be measured caused ...

24-02-2012 publication date

PORTABLE DEVICE FOR 3D OBJECT DISPLAY CAPABLE OF ACCURATELY DISPLAYING A 3D OBJECT CORRESPONDING TO THE EYES OF A USER AND A METHOD THEREOF

Number: KR1020120016386A
Author: LIM, JONG U
Assignee:

PURPOSE: A portable device capable of 3D object display and a method thereof are provided to adaptively change and show a stereoscopic feeling of a 3D object along with the eyes of a user. CONSTITUTION: A first writing unit(140) writes face proportion data of a user from outputted face data. A first storage unit(150) stores face proportion data of the user by a direction. A first controller(160) compares the stored face proportion data with data for photographing the user. The first controller monitors the eyes of the user. The first controller generates a 3D object whose loss point is changed according to the eyes of the user. COPYRIGHT KIPO 2012 ...

25-01-2007 publication date

JOINT OBJECT POSITION AND POSTURE ESTIMATING DEVICE, ITS METHOD, AND PROGRAM

Number: WO2007010893A1
Author: IKEDA, Hiroo
Assignee:

A joint object position and posture estimating device that reduces the cost of the model-fitting computation used to estimate position and posture and improves the estimation speed. A posture model storage section (2) stores information on a posture model having low-dimensional parameters under a movement constraint. The information is determined by principal component analysis of the time-series postures of the joint object frame model corresponding to a predetermined limited movement of a joint object such as a human body. A human body position and posture estimating device (101) has a human model image creating section (4), which creates images of the postures of the joint object frame model within the range of postures that the posture model can take on, and a position and posture estimating section (10), which estimates a posture by matching with the joint object image to be estimated.
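The low-dimensional posture model mentioned above can be approximated with a plain principal component analysis over time-series pose vectors; the sketch below, with assumed array shapes and function names, is only meant to illustrate the idea.

```python
import numpy as np

def build_posture_model(pose_sequence, n_components=5):
    """pose_sequence: (T, D) array of flattened joint configurations over time.
    Returns the mean pose and the leading principal directions, i.e. a
    low-dimensional posture model for the restricted movement."""
    mean = pose_sequence.mean(axis=0)
    _, _, vt = np.linalg.svd(pose_sequence - mean, full_matrices=False)
    return mean, vt[:n_components]

def synthesize_pose(mean, components, params):
    """Reconstruct a full pose vector from low-dimensional model parameters."""
    return mean + np.asarray(params) @ components
```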

07-07-2011 publication date

THREE-DIMENSIONAL OBJECT RECOGNITION AND POSTURE ESTIMATION SYSTEM BASED ON PROBABILISTIC MULTIPLE INTERPRETATION AND METHOD THEREOF

Number: WO2011081459A2
Assignee:

The present invention relates to a three-dimensional object recognition and posture estimation system based on generation of probabilistic multiple interpretation and a method thereof, and more specifically, to a three-dimensional object recognition and posture estimation system based on generation of probabilistic multiple interpretation and a method thereof, which can more exactly recognize an object and estimate postures thereof by adding an integrated probability proof to posture candidates of an object generated through a weak recognizer. The invention provides a three-dimensional object recognition and posture estimation system based on generation of probabilistic multiple interpretation and a method thereof, in which posture candidates of an object are generated on the basis of an uncertain proof obtained by using a weak recognizer, and in which the object is more exactly recognized and postures are estimated by adding a supplementary probability proof thereto.

15-09-2016 publication date

METHODS AND APPARATUS FOR MODELING DEFORMATIONS OF AN OBJECT

Number: WO2016145406A1
Assignee:

Embodiments can be used to synthesize physically plausible animations of target objects responding to new, previously unseen forces. Knowledge of scene geometry or target material properties is not required, and a basis set for creating realistic synthesized motions can be developed using only input video of the target object. Embodiments can enable new animation and video production techniques.

04-09-2014 publication date

AUTOMATIC METHOD OF PREDICTIVE DETERMINATION OF THE POSITION OF THE SKIN

Number: WO2014132008A1
Assignee:

Automatic method of predictive determination of the position of the skin The subject of the present invention is an automatic method of predictive determination of the position and of the displacements of the skin of a subject in a zone of interest, said subject breathing freely or in an assisted manner, said method consisting, in a prior manner, in acquiring several configurations of the profile of the skin in axial planes, doing so at given successive instants, in different respiratory positions. Method characterized in that it consists, during a prior phase, for each axial plane considered, in constructing at least one deformable numerical model on the basis of the various skin profiles, thereafter logging, in a repetitive manner, the actual position of a point of the skin of the subject at the level of each of the aforesaid axial planes, the position of which is modified in a significant manner during the inhaling and exhaling phases, and in providing, substantially in real time, a ...

15-09-2011 publication date

INTERPRETATION OF CONSTRAINED OBJECTS IN AUGMENTED REALITY

Number: WO2011112326A3
Author: LEUNG, Henry
Assignee:

Technologies are generally described for interpretation of constrained objects in augmented reality. An example system may comprise a processor, a memory arranged in communication with the processor, and a display arranged in communication with the processor. An example system may further comprise a sensor arranged in communication with the processor. The sensor may be effective to detect measurement data regarding a constrained object. The sensor may be configured to send the measurement data to the processor. The processor may be effective to receive the measurement data, determine a model for the object, and process the measurement data to produce weighted measurement data. The processor may also be effective to apply a filter to the model and to the weighted measurement data to produce position information regarding the object, which may be utilized to generate an image based on the position information. The display may be effective to display the image.
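The "filter applied to the model and to the weighted measurement data" could, for instance, be a Kalman-style predict/update cycle. The following sketch assumes linear models F and H and noise covariances Q and R; these are illustrative choices, not details taken from the patent.

```python
import numpy as np

def filter_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a Kalman-style filter: the constrained-object
    model (F) predicts the state, the weighted measurement (z) corrects it."""
    # Predict using the object/constraint model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Correct with the weighted sensor measurement.
    innovation = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ innovation
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```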

20-07-2011 publication date

Number: JP0004728432B2
Author:
Assignee:

14-10-2010 publication date

METHOD FOR ESTIMATING 3D POSE OF SPECULAR OBJECT

Number: JP2010231780A
Author: CHANG JU YOUNG
Assignee:

PROBLEM TO BE SOLVED: To estimate the 3D pose of a specular object in a more detailed manner. SOLUTION: A method estimates the 3D pose of a 3D specular object in an environment. In a preprocessing step, a set of pairs of 2D reference images is generated using a 3D model of the object and a set of poses of the object, wherein each pair of reference images is associated with one of the poses. Then, a pair of 2D input images of the object is acquired. A rough 3D pose of the object is estimated by comparing features in the pair of 2D input images with the features in each pair of 2D reference images using a rough cost function. The rough estimate is refined using a fine cost function. COPYRIGHT: (C)2011,JPO&INPIT ...
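A rough-then-fine search of this kind can be sketched as below: a nearest-reference lookup under a rough cost, followed by a greedy local refinement under a caller-supplied fine cost. The feature format, pose parameterization and refinement scheme are assumptions made for the example.

```python
import numpy as np

def rough_pose(input_features, reference_features, reference_poses):
    """Rough cost: squared distance between the input-image-pair features and the
    pre-computed features of each reference pair; pick the closest pose."""
    costs = [np.sum((np.asarray(input_features) - np.asarray(rf)) ** 2)
             for rf in reference_features]
    return np.asarray(reference_poses[int(np.argmin(costs))], dtype=float)

def refine_pose(pose, fine_cost, step=0.05, iters=50):
    """Greedy local refinement of the rough estimate under a finer cost function."""
    best = fine_cost(pose)
    for _ in range(iters):
        improved = False
        for i in range(pose.size):
            for d in (step, -step):
                trial = pose.copy()
                trial[i] += d
                c = fine_cost(trial)
                if c < best:
                    pose, best, improved = trial, c, True
        if not improved:
            step *= 0.5
    return pose
```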

08-12-2005 publication date

METHOD AND SYSTEM FOR DETECTING AND EVALUATING 3D CHANGES FROM IMAGES AND A 3D REFERENCE MODEL

Number: CA0002563380A1
Assignee:

In a method and system for aligning first and second images with a 3D reference model, the first image is gathered from a first viewpoint, the second image is gathered from a second viewpoint and the first and second images are aligned with the 3D reference model. The image alignment comprises computing prediction error information using the first and second images and the 3D reference model, and minimizing the prediction error. A method and system for detecting and localizing 3D changes in a scene use the above method and system for aligning first and second images with a 3D reference model, determine, in response to the prediction error information and for a model feature of the 3D reference model, whether the prediction error is greater than a selected threshold, and identify the model feature as a 3D change when the prediction error is greater than the selected threshold. Finally, in a method and system for evaluating detected 3D changes, the above method and system for detecting and ...

24-12-2008 publication date

Self-position identifying method and device, and three-dimensional shape gauging method and device

Number: CN0101331379A
Assignee:

A step S1 of inputting coordinate values on a three-dimensional shape in a new measurement position, a step S4 of dividing a spatial region where the three-dimensional shape is present into voxels composed of cubes, and making an environment model storing the voxel positions, a matching step S5 of setting/storing a representative point in the voxel corresponding to each coordinate value and the error distribution, and a fine aligning step S7 of rotating and translating the new measured data and the error distribution of the environment model corresponding to the previously measured position or rotating and translating the environment model corresponding to the new measured position, and performing alignment, if any data is present at the previous measurement position, so that the total of the distances between error distributions near to each other may take on a minimum value. From the amount of rotation and the amount of translation at the fine aligning step, the self-position is identified ...
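The fine-aligning step, which rotates and translates the new measurement so that nearby distributions come as close as possible, behaves like an ICP-style registration. Below is a minimal point-to-point sketch, assuming plain Nx3 point arrays rather than the patent's voxel and error-distribution representation.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (SVD/Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def fine_align(new_points, model_points, iters=20):
    """Iteratively rotate/translate the new measurement so that distances to the
    nearest stored representative points are minimized (ICP-style alignment)."""
    R_acc, t_acc = np.eye(3), np.zeros(3)
    pts = np.asarray(new_points, float).copy()
    model = np.asarray(model_points, float)
    for _ in range(iters):
        d = np.linalg.norm(pts[:, None, :] - model[None, :, :], axis=2)
        nearest = model[d.argmin(axis=1)]
        R, t = best_rigid_transform(pts, nearest)
        pts = pts @ R.T + t
        R_acc, t_acc = R @ R_acc, R @ t_acc + t
    return R_acc, t_acc
```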

22-10-2014 publication date

Method for taking out a workpiece

Number: CN102164718B
Author:
Assignee:

25-04-2012 publication date

Digital imaging system for assays in well plates, gels and blots

Number: CN0001609593B
Assignee:

An electronic imaging system is disclosed, for assessing the intensity of colorimetric, fluorescent or luminescent signal in a matrix consisting of wells, microwells, hybridization dot blots on membranes, gels, or other specimens. The system includes a very sensitive area CCD detector (18), a fast, telecentric lens (22) with epi-illumination (44), a reflective/transmissive illumination system, anillumination wavelength selection device (34), and a light-tight chamber (24). A computer and image analysis software are used to control the hardware, correct and calibrate the images, and detect and quantify targets within the images.

16-11-2012 publication date

METHOD AND SYSTEM FOR ULTRASOUND IMAGING WITH CUT-PLANE IMAGES

Number: FR0002975205A1
Assignee: GENERAL ELECTRIC COMPANY

An ultrasound imaging system (100) includes a probe (105) adapted to scan a volume of interest, a display device (118), and a processor (116) in electronic communication with the probe (105) and the display device (118). The processor (116) is configured to identify a first contour in a first cut-plane image and a second contour in a second cut-plane image. The processor (116) is configured to automatically configure acquisition parameters as a function of at least one of the first and second contours. The processor (116) is configured to apply the acquisition parameters to acquire data, generate an image from the data, and display the image on the display device (118).

07-02-2013 publication date

System for warning of a danger related to the driver's viewing direction, method thereof, and vehicle using the same

Number: KR0101231510B1
Author:
Assignee:

09-09-2016 publication date

METHODS AND APPARATUS FOR MAKING ENVIRONMENTAL MEASUREMENTS AND/OR USING SUCH MEASUREMENTS IN 3D IMAGE RENDERING

Number: WO2016140934A2
Assignee:

Methods and apparatus for making and using environmental measurements are described. Environmental information captured using a variety of devices is processed and combined to generate an environmental model which is communicated to customer playback devices. A UV map which is used for applying, e.g., wrapping, images onto the environmental model is also provided to the playback devices. A playback device uses the environmental model and UV map to render images which are then displayed to a viewer as part of providing a 3D viewing experience. In some embodiments updated environmental model is generated based on more recent environmental measurements, e.g., performed during the event. The updated environmental model and/or difference information for updating the existing model, optionally along with updated UV map(s), is communicated to the playback devices for use in rendering and playback of subsequently received image content. By communicating updated environmental information improved ...

15-02-2007 publication date

METHOD AND DEVICE FOR DETERMINING THE ARRANGEMENT OF A VIDEO CAPTURING MEANS IN THE CAPTURE MARK OF AT LEAST ONE THREE-DIMENSIONAL VIRTUAL OBJECT MODELLING AT LEAST ONE REAL OBJECT

Number: WO2007017597A2
Assignee:

The invention relates to a method for determining the arrangement of a video capturing means in the capture mark of at least one virtual object in three dimensions, said at least one virtual object being a modelling corresponding to at least one real object present in images of the video image flows. The inventive method is characterised in that it comprises the following steps: a video image flow is received from the video capturing means; the video image flow received and at least one virtual object flow are displayed; points of said at least one virtual object are paired up, in real-time, with corresponding points in the at least one real object present in images of the video image flows; and the arrangement of said video capturing means is determined according to the points of the at least one virtual object and the paired point thereof in the at least one real object present in the images of the video image flows.

24-01-2013 publication date

POSTURE ESTIMATION DEVICE, POSTURE ESTIMATION METHOD, AND POSTURE ESTIMATION PROGRAM

Number: WO2013011644A1
Assignee:

A posture estimation device, which is able to estimate the posture of a humanoid articulated body with high precision, has: a head estimation unit (120) that estimates the position of a person's head from image information for an image that contains a person; a foot estimation unit (130) that, from the image information, estimates the position of the person's foot for which the sole of the foot is parallel to the floor surface; and a posture estimation unit (140) that estimates the posture of the person on the basis of the relative positional relationship between the estimated head position and the estimated foot position. For example, the posture estimation unit (140) estimates the side on which the foot is located, with respect to the head, as the front side of the person.
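The final rule, taking the side on which the foot lies relative to the head as the person's front side, can be written in a few lines; the coordinate convention and the extra upright/lying hint below are assumptions made for the example, not the patent's procedure.

```python
def front_side_and_posture(head_xy, foot_xy):
    """The side on which the foot lies relative to the head (image x axis) is taken
    as the person's front side; the head-foot offset also hints at the posture."""
    dx = foot_xy[0] - head_xy[0]
    dy = foot_xy[1] - head_xy[1]
    front = "right" if dx > 0 else "left"
    posture = "upright" if abs(dy) > abs(dx) else "lying"
    return front, posture
```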

04-06-2009 publication date

IMAGE PROCESSING MODULE FOR ESTIMATING AN OBJECT POSITION OF A SURVEILLANCE OBJECT, METHOD FOR DETECTING A SURVEILLANCE OBJECT AND COMPUTER PROGRAM

Number: WO2009068336A2
Author: HEIGL, Stephan
Assignee:

Video surveillance systems are used to observe oftentimes large-area, broken or complex surveillance areas. The image data streams recorded with the surveillance camera are usually brought together in a surveillance center or the like and are either automated there or controlled by surveillance personnel. The invention relates to an image processing module (3) for estimating an object position of a surveillance object or subareas thereof in a surveillance area for a surveillance system (1) for surveilling at least said surveillance area by means of at least one surveillance camera (2). Said image processing module comprises a model input interface (6) for receiving a model or partial model of the surveillance area (referred to in the following as model), a camera input interface (7) for receiving a camera model of the surveillance camera (2), an object input interface for receiving an object point of the surveillance object, said object points being determined based on a surveillance image ...

31-07-2014 publication date

METHOD AND APPARATUS FOR CALCULATING THE CONTACT POSITION OF AN ULTRASOUND PROBE ON A HEAD

Number: WO2014114327A1
Assignee:

A data processing method for calculating the contact position of a medical ultrasound transceiver on the head of a patient, comprising the steps of: a) acquiring ROI data which represent a region of interest (ROI) corresponding to at least a part of a vessel in a vascular structure; b) acquiring contact region data which represent a contact region for the ultrasound transceiver on the head, wherein the contact region corresponds to one or more acoustic windows; c) determining at least one target point in the region of interest; d) determining at least two entry points on the contact region; e) calculating a set of lines which comprises the lines between the two points of each respective possible pair consisting of one entry point and one target point; f) eliminating lines which pass through a bony structure other than the bone immediately beneath the contact region; g) calculating a score for each of the remaining lines; and h) selecting the entry point of the line with the highest score ...
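Steps d) to h) amount to enumerating entry-target lines, filtering them, scoring them and keeping the best. A minimal sketch follows, with assumed callbacks crosses_bone and line_score standing in for the bone test and the scoring rule, which the abstract does not spell out.

```python
def select_entry_point(entry_points, target_points, crosses_bone, line_score):
    """Enumerate every (entry point, target point) line, discard lines that pass
    through bone other than directly beneath the contact region, score the rest,
    and return the entry point of the highest-scoring line."""
    best_entry, best_score = None, float("-inf")
    for entry in entry_points:
        for target in target_points:
            if crosses_bone(entry, target):
                continue
            score = line_score(entry, target)
            if score > best_score:
                best_entry, best_score = entry, score
    return best_entry
```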

21-01-2010 publication date

METHOD AND APPARATUS FOR IMAGING OF FEATURES ON A SUBSTRATE

Number: WO2010006727A1
Assignee:

A method for imaging features on a substrate, comprising scanning the substrate and producing an image thereof, overlaying a grid model on the image, fitting the grid model to the locations of at least some of the features on the image, and extracting images of the features.

18-02-2010 publication date

METHOD AND APPARATUS FOR ESTIMATING BODY SHAPE

Number: WO2010019925A1
Assignee:

A system and method of estimating the body shape of an individual from input data such as images or range maps. The body may appear in one or more poses captured at different times and a consistent body shape is computed for all poses. The body may appear in minimal tight-fitting clothing or in normal clothing wherein the described method produces an estimate of the body shape under the clothing. Clothed or bare regions of the body are detected via image classification and the fitting method is adapted to treat each region differently. Body shapes are represented parametrically and are matched to other bodies based on shape similarity and other features. Standard measurements are extracted using parametric or non-parametric functions of body shape. The system components support many applications in body scanning, advertising, social networking, collaborative filtering and Internet clothing shopping.

23-01-2013 publication date

Method of modelling buildings from a georeferenced image

Number: EP2549434A2
Assignee:

The present invention relates to a method for modeling real objects represented in an image of the Earth's surface, in particular buildings, from a geographically referenced image, the image being produced by an airborne or spaceborne sensor associated with a physical imaging model, the method comprising at least the following steps: choosing a parametric model of the external surface of said real object (101); for several parameter sets of said model: projecting (103) the parameterized model into the image by applying the physical imaging model; evaluating the fit (104) between the projected model and the radiometric characteristics of the image; determining the model parameters for which the fit is best (106), so as to model said object with these parameters. The invention applies in particular to remote sensing, digital geography, and the creation or updating of urban 3D databases.

30-09-2009 publication date

FACIAL IMAGE PROCESSING SYSTEM

Number: EP1320830B1
Assignee: Seeing Machines Pty Ltd

02-11-2011 publication date

Method for estimating 3D pose of specular objects

Number: EP2234064B1
Author: Chang, Ju Young
Assignee: Mitsubishi Electric Corporation

16-05-2012 publication date

OBJECT LOCALIZATION IN X-RAY IMAGES

Number: EP2271264B1
Assignee: Koninklijke Philips Electronics N.V.

14-05-2003 publication date

THREE-DIMENSIONAL POSITION AND POSTURE DECISION METHOD OF DETECTION TARGET OBJECT AND VISUAL SENSOR OF ROBOT

Number: JP2003136465A
Assignee:

PROBLEM TO BE SOLVED: To eliminate the need to develop a detection algorithm for each target object when calculating the three-dimensional position and posture of the target object. SOLUTION: In the target object, a distinctive part of relatively simple shape, such as a cylinder, is captured and its central axis is determined by a three-dimensional sensor. A camera mounted on the wrist of a manipulator is guided along the central axis to pick up a two-dimensional image of the target object. The target object in this two-dimensional image is compared with a model to detect the rotation angle, and the three-dimensional position of the detection target object is calculated from the rotation angle and the operation information of the manipulator. COPYRIGHT: (C)2003,JPO ...

14-05-2014 publication date

Number: JP0005488548B2
Author:
Assignee:

10-02-2003 publication date

POSE ESTIMATION METHOD AND APPARATUS

Number: CA0002397237A1
Author: ISHIYAMA, RUI
Assignee:

A three-dimensional image data is formulated and saved in a memory for indicating a three-dimensional shape of an object and reflectivity or color at every point of the object. For each of multiple pose candidates, an image space is created for representing brightness values of a set of two- dimensional images of the object which is placed in the same position and orientation as the each pose candidate. The brightness values are those which would be obtained if the object is illuminated under varying lighting conditions. For each pose candidate, an image candidate is detected within the image space using the 3D model data and a distance from the image candidate to an input image is determined. Corresponding to the image candidate whose distance is smallest, one of the pose candidates is selected. The image space is preferably created from each of a set of pose variants of each pose candidate.

20-08-2014 publication date

Method and device for estimating a pose

Number: CN103999126A
Assignee:

The invention relates to a real time-capable analysis of a sequence of electronic images for estimating the pose of a movable object captured by means of the images. The invention further relates to implementing the invention in software and, in this connection, to a computer-readable medium that stores commands, the execution of which causes the method according to the invention to be carried out. The invention proceeds from a skeleton model, which is described by a small number of nodes in 3D space and permits a good data compression of the image information when the co-ordinates of the nodes describe at any time the position of predetermined parts of the moving object. The skeleton model simultaneously represents previous knowledge of the object, by defining e.g. node pairs and optionally also node triplets in the skeleton model that describe cohesive object parts or optionally object surfaces, which are contained in the measured 2 1/2 -D image information, i.e. are visible to the camera ...

25-01-2013 publication date

Method for modeling building represented in geographically-referenced image of terrestrial surface for e.g. teledetection, involves determining parameters of model, for which adequacy is best, from optimal parameters for modeling object

Number: FR0002978276A1
Assignee: THALES

The present invention relates to a method for modeling real objects represented in an image of the Earth's surface, in particular buildings, from a geographically referenced image, the image being produced by an airborne or spaceborne sensor associated with a physical imaging model, the method comprising at least the following steps: choosing a parametric model of the external surface of said real object (101); for several parameter sets of said model: projecting (103) the parameterized model into the image by applying the physical imaging model; evaluating the fit (104) between the projected model and the radiometric characteristics of the image; determining the model parameters for which the fit is best (106), so as to model said object with these parameters. The invention applies in particular to remote sensing, digital geography, and the creation or updating of urban 3D databases.

13-03-2013 publication date

METHOD AND AN APPARATUS FOR PRODUCING A MEDICAL IMAGE USING A PARTIAL MEDICAL IMAGE

Number: KR1020130026041A
Assignee:

PURPOSE: A method and an apparatus for producing a medical image using a partial medical image are provided to easily find the position of an organ. CONSTITUTION: An organ image system includes an image detecting device(10), an image registration apparatus(20) and an image display device(30). A probe(11) is mounted on the image detecting device. A source signal generated from the probe is delivered to the specific part of the patient body. An image detecting device detects a three dimensional image by using ultrasound. COPYRIGHT KIPO 2013 [Reference numerals] (AA) Patient ...

23-03-2011 publication date

REGISTRATION OF STREET-LEVEL IMAGERY TO 3D BUILDING MODELS

Number: KR1020110030641A
Author:
Assignee:

19-08-2014 publication date

METHOD AND APPARATUS FOR ESTIMATING A POSE

Number: KR1020140101439A
Author:
Assignee:

04-11-2010 publication date

ULTRASOUND SYSTEM AND METHOD FOR ALIGNING ULTRASOUND IMAGES, CAPABLE OF ALIGNING THREE-DIMENSIONAL ULTRASOUND IMAGES IN A PRE-SET POSITION

Number: KR1020100117698A
Assignee:

PURPOSE: An ultrasound system and method for aligning ultrasound images are provided to improve user convenience by presenting three-dimensional ultrasound images aligned to a pre-set position regardless of changes in the position of the object. CONSTITUTION: A user instruction for selecting an object of interest is input to a user input part (110). Information related to a reference position corresponding to a plurality of objects of interest is stored in a storage part (120). An ultrasound data acquiring part (130) acquires the ultrasound data of the object. A volume data forming part (140) forms volume data using the ultrasound data. A processor (150) forms a three-dimensional ultrasound image using the volume data. The processor extracts the reference position information from the storage part and aligns the three-dimensional ultrasound image. COPYRIGHT KIPO 2011 ...

19-12-2013 publication date

ACCELERATED GEOMETRIC SHAPE DETECTION AND ACCURATE POSE TRACKING

Number: WO2013188309A1
Assignee:

A reference in an unknown environment is generated on the fly for positioning and tracking. The reference is produced in a top down process by capturing an image of a planar object with a predefined geometric shape, detecting edge pixels of the planar object, then detecting a plurality of line segments from the edge pixels. The plurality of line segments may then be used to detect the planar object in the image based on the predefined geometric shape. An initial pose of the camera with respect to the planar object is determined and tracked using the edges of the planar object.

19-01-2012 publication date

METHOD AND SYSTEM FOR DETERMINING AN IMAGING DIRECTION AND CALIBRATION OF AN IMAGING APPARATUS

Number: WO2012007036A1
Author: FEILKAS, Thomas
Assignee:

The present invention relates to a method for determining an imaging direction of an imaging apparatus (10), such as an x-ray apparatus, with a radiation source or an imaging source (12) that emits an imaging beam (14) to an imaging detector (16) along a beam path, comprising the steps of imaging an object (18) from a first direction to obtain a first 2D image; providing 3D reference data, for example a generic or statistical 3D model or an earlier obtained 3D data set, of the imaged object (18); performing a 2D/3D matching of the first 2D image with the 3D reference data to determine a position of an imaging plane (20, 22, 24) of the first 2D image relative to the 3D reference data; and determining the imaging direction of the imaging apparatus (10) relative to the object (18) based on the position of the imaging plane (20, 22, 24) relative to the 3D reference data, as well as to a navigation system for computer-assisted surgery comprising the imaging system of the preceding claim; a tracking ...

05-04-2001 publication date

A SYSTEM AND METHOD FOR ESTIMATING THE ORIENTATION OF AN OBJECT

Number: WO0000122872A3
Author: TOYAMA, Kentaro, WU, Ying
Assignee:

The present invention is embodied in a system and method for automatically estimating the orientation or pose of an object (230), such as a human head (416), from any viewpoint and includes training and pose estimation modules (210, 212). The training module (210) uses known head poses (220) for generating observations (222) of the different types of head poses and the pose estimation module (212) receives actual head poses (232) of a subject and uses the training observations (222) to estimate (236) the actual head pose. Namely, the training module (210) receives training data (220) and extracts unique features (222) of the data, projects (224) the features onto corresponding points of a model and determines a probability density function estimation (226) for each model point to produce a trained model. The pose estimation module (212) receives the trained model (210) and an input features (230) and extracts unique input features (232) of the input object (230), projects the input features ...

03-04-2014 publication date

IMAGE PROCESSING METHOD, PARTICULARLY USED IN A VISION-BASED LOCALIZATION OF A DEVICE

Number: WO2014048590A1
Author: MEIER, Peter
Assignee:

An image processing method comprises the steps of providing at least one image of at least one object or part of the at least one object, and providing a coordinate system in relation to the image, providing at least one degree of freedom in the coordinate system or at least one sensor data in the coordinate system, and computing image data of the at least one image or at least one part of the at least one image constrained or aligned by the at least one degree of freedom or the at least one sensor data.

20-02-2003 publication date

HIERARCHICAL IMAGE MODEL ADAPTATION

Number: WO0003015010A1
Assignee:

The invention relates to a method for processing digitized image data by adapting image adaptation models. Said method involves: furnishing a hierarchical structure graph with nodes respectively representing at least one parametrized image adaptation model; a predetermined amount of superimposed planes, wherein at least one node is located on each plane; edges connecting pairwise predetermined nodes of different planes and defining a father node for each node pair as the node in the lower plane and a son node as the node in the upper plane; applying the structure graphs on the image data by processing at least one node beginning with the lowest plane, wherein processing of a node involves the following steps: adapting its at least one image adaptation model to the image data by varying model parameters; determining a degree of adaptation for every parameter variation as a measure of the quality of image adaptation and determining an evaluation for each parameter variation taking into account ...

16-02-2016 publication date

Point-of-gaze detection device, point-of-gaze detecting method, personal parameter calculating device, personal parameter calculating method, program, and computer-readable storage medium

Number: US0009262680B2
Assignee: JAPAN SCIENCE AND TECHNOLOGY AGENCY

A point-of-gaze detection device according to the present invention detects a point-of-gaze of a subject toward a surrounding environment. The device includes: an eyeball image obtaining means configured to obtain an eyeball image of the subject; a reflection point estimating means configured to estimate a first reflection point, at which incoming light in an optical axis direction of an eyeball of the subject is reflected, from the eyeball image; a corrected reflection point calculating means configured to calculate a corrected reflection point as a corrected first reflection point by correcting the first reflection point on the basis of a personal parameter indicative of a difference between a gaze direction of the subject and the optical axis direction of the eyeball; and a point-of-gaze detecting means configured to detect the point-of-gaze on the basis of light at the corrected reflection point and light in the surrounding environment.

26-12-2002 publication date

Apparatus and method for labeling rows and columns in an irregular array

Number: US20020198677A1
Assignee:

The apparatus and method of the invention provide for assigning coordinates to samples in an array. The method is based on a hierarchical pattern matching to a local lattice structure that is used as a template. Starting from the best local match, the pattern is expanded hierarchically to encompass the entire array.

19-11-2008 publication date

Method and system for determining angular position of an object

Number: EP1992908A2
Assignee:

A method and system for determining the angular position of an object are disclosed. As one example, a method for determining the angular position of an object is disclosed, which includes the steps of obtaining data representing an image of a surface area of the object, the image including a plurality of dots, determining a set of coordinates for each dot of the plurality of dots, selecting a predetermined number of dots of the plurality of dots, comparing the predetermined number of dots with a plurality of dots stored in a dot map, responsive to the comparing step, if the predetermined number of dots substantially match the plurality of dots stored in the dot map, selecting the predetermined number of dots, and forming a coordinate transformation matrix representing a transformation from a coordinate frame associated with a position of the predetermined number of dots at the surface of the object to a coordinate frame associated with a position of the image of the surface area of the ...

30-09-2009 publication date

Method of radiographic imaging for three-dimensional reconstruction, device and computer program for carrying out said method

Number: EP1788525A3
Assignee:

A radiographic imaging method for three-dimensional reconstruction in which the three-dimensional shape of a model representing the object is computed from a geometric model of the object known a priori, obtained from a bounding volume of the object estimated from a geometric pattern visible in two images, and from the position of the source. A geometric model is used that includes information making it possible to establish, from an estimator of the object, a geometric characteristic for the model representing the object.

25-12-2013 publication date

Number: JP0005378374B2
Author:
Assignee:

25-01-2002 publication date

OBJECT MOVEMENT TRACKING TECHNIQUE AND RECORDING MEDIUM

Number: JP2002024807A
Assignee:

PROBLEM TO BE SOLVED: To provide a technique for tracking, at high speed by using an object model, three-dimensional rigid body motion of a free curved-surface object formed using a smooth curved-surface by using a stereo camera system as a sensor. SOLUTION: A stereo image is inputted (S1), and a tracking point, corresponding to a contour line (silhouette) of the curved-surface object observed by the stereo image, is selected by using a three-dimensional geometric model, based on information on the present position of the object (S2). Corresponding points on the contour line, corresponding to each tracking point are extracted from the stereo image (S3), and three-dimensional coordinates thereof, are measured (S4). Then, position attitude of the object and an error are calculated from pairs of the three-dimensional coordinates between each tracking point and each corresponding point (S5). The error is discriminated (S6), and when the error is not sufficiently small, the detected position ...

26-03-2014 publication date

Number: JP0005450619B2
Author:
Assignee:

19-08-2010 publication date

IMAGE PROCESSING APPARATUS FOR DETECTING COORDINATE POSITION OF CHARACTERISTIC PART OF FACE

Number: JP2010182150A
Assignee:

PROBLEM TO BE SOLVED: To improve the efficiency the processing of detecting the position of the characteristic section of the face included in an image, and also accelerate the speed thereof. SOLUTION: The image processor for detecting the coordinate position of the characteristic section of the face included in an attentional image includes a facial area detection section for detecting an image area including at least a section of a face image from the attentional image as a facial area; a setting section for setting a characteristic point for detecting the coordinate position of the characteristic section to the attentional image, on the basis of the facial area; a selection section for selecting a characteristic amount to be used for correcting the setting position of the characteristic point from the plurality of characteristic amounts calculated, on the basis of a plurality of sample images including the face image for which the coordinate position of the characteristic section is ...

29-08-2008 publication date

Three-dimensional object e.g. human face, position determining method for creating key frame, involves determining position of object in image from position information associated to selected two-dimensional representation

Number: FR0002913128A1
Assignee:

The invention relates to a method and a device for determining the pose of a three-dimensional object in an image, characterized in that it comprises the following steps: acquiring a generic three-dimensional model of the object; projecting the generic three-dimensional model into at least one two-dimensional representation and associating with each two-dimensional representation pose information of the three-dimensional object; selecting and positioning a two-dimensional representation on the object in said image; and determining the three-dimensional pose of the object in the image from at least the pose information associated with the selected two-dimensional representation.

12-09-2014 publication date

ENDOSCOPE SYSTEM AND ENDOSCOPE SYSTEM OPERATION METHOD

Number: WO2014136579A1
Assignee:

An endoscope system (1) comprises: an insertion part (2b) which is inserted in a subject; an object optical viewport (11a) which is disposed on the leading end side of the insertion part (2) and which receives light from the subject; an image capture element (11) which captures an image of the interior of the subject; a location-direction detection unit (25) which acquires location information of the object optical viewport (11a); and a memory (22) which associates the image of the interior of the subject which is acquired by the image capture element (11) with the location information of the object optical viewport (11a) and records same. On the basis of a degree of change of the information of the image of the interior of the subject in the interior of the subject, etc., the endoscope system (1) aligns the location information of the object optical viewport (11a) with the location in a coordinate system of a prescribed organ model image in the interior of the subject, and generates an ...

05-05-2011 publication date

METHOD FOR DETERMINING THE ORIENTATION OF AN UPPER PART OF A STACK OF PIECE GOODS

Number: WO2011050383A2
Author: LUSTIG, Stefan
Assignee:

The invention relates to a method for determining the orientation of an upper part (1) of a stack (2) of piece goods, in particular of a stack of metal sheets. An image of the upper cover layer of the stack (2) of piece goods is acquired by an image acquisition device (6), and a relative orientation of the upper cover layer is determined therefrom by an image evaluation module. One image (15, 16) each of a first (7) and a second (8) position of the stack (2) of piece goods is acquired, and an image analysis is performed for each image (15, 16) by an image analysis and comparison module (13). A contour (17) is extracted and a first (18) and second (19) piece of position information are determined. A distance (21) is determined from the first (18) and second (19) pieces of position information, wherein a part model (22) is created from the extracted contours (17) and the distance (21) by the image analysis and comparison module (13). The part model is compared with a stored reference model ...

15-05-2014 publication date

METHOD FOR DETECTING IMAGE-BASED INDOOR POSITION, AND MOBILE TERMINAL USING SAME

Number: WO2014073841A1
Assignee:

The present invention relates to a method for detecting an image-based indoor position and to a mobile terminal using same, and in particular, to a method for detecting an image-based indoor position and to a mobile terminal using same which are capable of detecting an indoor position of a user only using an image and an indoor map without a separate image database. To this end, the method for detecting an image-based indoor position according to the present invention, includes the steps of: (a) obtaining images for one or more directions of the user using a camera embedded in a user terminal; (b) extracting features inside a building from the images corresponding to the obtained user directions; (d) matching the user directions and the extracted features of the image with indoor map information about the building; (e) estimating a position of the user terminal through the image and map matching process.

01-09-2011 publication date

THREE-DIMENSIONAL MEASUREMENT APPARATUS, MODEL GENERATION APPARATUS, PROCESSING METHOD THEREOF, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

Number: WO2011105616A1
Assignee:

A three-dimensional measurement apparatus generates a plurality of view-point images obtained by observing a measurement object from a plurality of different view-points using a three-dimensional geometric model, detects edges of the measurement object from the plurality of view-point images as second edges, calculates respective reliabilities of first edges of the three-dimensional geometric model based on a result obtained when the second edges are associated with the first edges, weights each of the first edges based on the respective reliabilities, associates third edges detected from a captured image with the weighted first edges, and calculates a position and an orientation of the measurement object based on the association result.

22-12-2011 publication date

A TARGET LOCATING METHOD AND A TARGET LOCATING SYSTEM

Number: WO2011159206A1
Assignee:

The present invention relates to a target locating method and a target locating system. The invention solves the problem to determine coordinates of targets at long distances with high accuracy. According to the invention this is solved by recording (20) images of a target area by means of recording devices carried by a vehicle, matching (23) the recorded images of the target area with a corresponding three dimensional area of a three dimensional map (22) comprising transferring a target indicator, such as a reticle, from the recorded images of the target area to the three dimensional map of the corresponding target area (25), reading the coordinates (26) of the target indicator position in the three dimensional map, and making (27) the read coordinates of the target indicator position available for position requiring equipment (28).

22-07-2010 publication date

POSITION ESTIMATION REFINEMENT

Number: WO2010082933A1
Assignee:

Methods, systems, and apparatus, including computer program products, for aligning images are disclosed. In one aspect, a method includes receiving an inaccurate three dimensional (3D) position of a physical camera, where the physical camera captured a photographic image; basing an initial 3D position of a virtual camera in a 3D virtual environment on the inaccurate 3D position of the physical camera; correlating one or more markers in the photographic image with one or more markers in the 3D virtual environment that appear in the virtual camera's field of view; and adjusting the initial 3D position of the virtual camera in the 3D virtual environment based on a disparity between the one or more markers' 3D positions in the photographic image as compared to the one or more markers' 3D positions in the virtual camera's field of view.
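The adjustment based on marker disparity can be illustrated with a single translation-only correction step; a real system would iterate and also correct orientation. The array shapes and the averaging rule below are assumptions made for the example.

```python
import numpy as np

def refine_camera_position(initial_position, photo_markers, virtual_markers):
    """Shift the virtual camera by the mean disparity between corresponding marker
    positions observed in the photograph and in the virtual camera's view."""
    disparity = np.asarray(photo_markers, float) - np.asarray(virtual_markers, float)
    return np.asarray(initial_position, float) + disparity.mean(axis=0)
```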

11-01-2007 publication date

METHOD OF CONTROLLING A SYSTEM

Number: WO000002007004134A3
Assignee:

The invention describes a method of controlling a system (1) comprising one or more components (C1, C2, ..., Cn), which method comprises the steps of aiming a pointing device (2) comprising a camera (3) in the direction of one or more of the components (C1, C2, ..., Cn), generating image data (4) of a target area A() aimed at by the pointing device (2), encompassing at least part of one or more of the components (C1, C2, ..., Cn), and analysing the image data (4) to determine position information (P) pertaining to the position of the user (5) relative to one or more of the components (C1, C2, ..., Cn) at which the pointing device (2) is being aimed and/or to relative positions of the components. The system (1) is subsequently controlled according to the position information (P). Furthermore, the invention describes a corresponding control system (10), a home entertainment system, and a lighting system. The invention further describes a method of acquiring dimensional data (D) for use in ...

03-03-2016 publication date

DEVICE AND METHOD FOR THE 3D VIDEO MONITORING OF OBJECTS OF INTEREST

Number: US20160065904A1
Author: Yoann DHOME, Patrick SAYD
Assignee:

A device and a method for assisting security in the 3D tracking of objects of interest are provided. A proposed risk propagation module makes it possible to create kinship links between the analyzed tracks, during interactions or during disappearance/reappearance of tracks, thus making it possible to diffuse the highest risks to each track concerned. 1. A method for assisting security in the 3D tracking of objects of interest , the method comprising the following steps:calibrating with respect to a common reference an assembly of detectors of a tracking system for a space to be monitored;defining, for the monitored space, at least one competence zone; generating, for the competence zone, a list of tracks, each track comprising, for a tracked object of interest, a label identifying the tracked object of interest and a series of positions provided by the detectors;detecting interactions between the tracks over the competence zone; andcreating a kinship label on the basis of the labels identifying each object of interest tracked for the tracks detected during interaction.2. The method as claimed in claim 1 , in which the competence zone is associated with an entrance-exit zone of the monitored space and with one or more overlap zones.3. The method as claimed in claim 1 , moreover comprising a step of defining the interactions between tracks.4. The method as claimed in claim 3 , in which the interactions between tracks are defined by a proximity parameter.5. The method as claimed in claim 3 , in which the interactions between tracks are defined by a masking parameter.6. The method as claimed in claim 1 , in which the step of detecting interactions consists in determining the repeated presence of a track in the ground projection ellipse of another track and the repeated masking between these same tracks.7. The method as claimed in claim 1 , in which the detectors are of the radiometric sensor claim 1 , metallic detector claim 1 , explosive trace detector claim 1 , ...
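The risk-propagation idea, diffusing the highest risk across tracks linked by an interaction, can be sketched as follows; the data structures and the max rule are assumptions made for illustration.

```python
def propagate_risk(track_risk, interactions):
    """track_risk: dict label -> risk score; interactions: iterable of label pairs
    detected as interacting. A kinship link is recorded for each interaction and
    the higher of the two risks is diffused to both tracks."""
    kinship = []
    for a, b in interactions:
        kinship.append((a, b))
        shared = max(track_risk[a], track_risk[b])
        track_risk[a] = track_risk[b] = shared
    return track_risk, kinship
```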

13-08-2015 publication date

MANUFACTURING LINE MONITORING

Number: US20150228078A1
Author: Brannon Zahand
Assignee: Microsoft Corporation

Systems and method for monitoring a workstation region of a manufacturing line are provided. In one example, depth image data is received from one or more depth cameras trained on the workstation region, with the data comprising a temporal sequence of images of an operator. Using the depth image data, a series of movements of the operator is tracked in 3D space of the workstation region. Operational status data is received from the manufacturing line indicating the manufacturing line is operating. Using the series of movements, the operator is determined to be within a predetermined distance of a hazard. In response, a command is issued to the manufacturing line to cease operating.

31-05-2006 publication date

Object picking system

Number: EP0001589483A3
Assignee:

An object picking system for picking up, one by one, a plurality of objects. The system includes a detecting section detecting, as an image, an object to be picked, among a plurality of objects placed in a manner as to be at least partially superimposed on each other; a storage section storing appearance information of a predetermined portion of a reference object having an outward appearance identical to an outward appearance of the object to be picked; a determining section determining, in the image of the object to be picked detected by the detecting section, whether an inspected portion of the object to be picked, corresponding to the predetermined portion of the reference object, is concealed by another object, based on the appearance information of the reference object stored in the storage section; a control section deciding a picking motion for the object to be picked and outputting a control signal of the picking motion, based on a determination result of the determining section ...

09-12-2009 publication date

Object picking system

Number: EP1589483B1
Assignee: FANUC LTD

09-06-2009 publication date

POSE ESTIMATION METHOD AND APPARATUS

Number: CA0002397237C
Author: ISHIYAMA, RUI
Assignee: NEC CORPORATION

A three-dimensional image data is formulated and saved in a memory for indicating a three-dimensional shape of an object and reflectivity or color at every point of the object. For each of multiple pose candidates, an image space is created for representing brightness values of a set of two- dimensional images of the object which is placed in the same position and orientation as the each pose candidate. The brightness values are those which would be obtained if the object is illuminated under varying lighting conditions. For each pose candidate, an image candidate is detected within the image space using the 3D model data and a distance from the image candidate to an input image is determined. Corresponding to the image candidate whose distance is smallest, one of the pose candidates is selected. The image space is preferably created from each of a set of pose variants of each pose candidate.

22-06-2011 publication date

Control method, control device and entertainment system and lighting system including control device

Number: CN0101213506B
Assignee:

The invention describes a method of controlling a system (1) comprising one or more components (C1, C2, ..., Cn), which method comprises the steps of aiming a pointing device (2) comprising a camera (3) in the direction of one or more of the components (C1, C2, ..., Cn), generating image data (4) of a target area A() aimed at by the pointing device (2), encompassing at least part of one or more of the components (C1, C2, ..., Cn), and analyzing the image data (4) to determine position information (P) pertaining to the position of the user (5) relative to one or more of the components (C1, C2,..., Cn) at which the pointing device (2) is being aimed and/or to relative positions of the components. The system (1) is subsequently controlled according to the position information. Furthermore, the invention describes a corresponding control system (10), a home entertainment system, and a lighting system. The invention further describes a method of acquiring dimensional data (D) for use in the ...

04-06-2014 publication date

Method and apparatus for imaging of features on a substrate

Number: CN102138160B
Assignee:

23-08-2013 publication date

METHOD FOR MODELING BUILDINGS FROM A GEOREFERENCED IMAGE

Number: FR0002978276B1
Assignee: THALES

30-12-1988 publication date

COMPUTER-INTEGRATED CALIBRATION SYSTEM

Number: FR0002617306A
Assignee:

The system operates by comparing three-dimensional models of inspection gauges, built from computer-aided design (CAD) data for a manufactured part and from standardized geometric dimensioning and tolerancing callouts, against three-dimensional models built from inspection data obtained from the manufactured part 17. The comparison is performed graphically and mathematically. Parts are judged to be within or out of tolerance. If they are out of tolerance, they can be judged reworkable or to be scrapped. In addition, the system is able to determine syntax correctness for the tolerancing standards, to define the sequence of steps for a specific job before it is carried out, to perform tolerance-conformance analyses of an individual part and statistical part-tolerance analyses over several parts, tolerance analyses for mating parts, and the generation ...

Подробнее
11-06-2008 дата публикации

SYSTEM AND A METHOD FOR ACCURATELY ANALYZING THE ACTION OF A TARGET OBJECT FROM MOVING PICTURE INFORMATION OF THE TARGET OBJECT

Номер: KR1020080051956A
Принадлежит:

PURPOSE: A system and a method for analyzing the action of a target object on the basis of silhouettes in a real-time moving picture are provided, to classify specific poses of the target object more accurately from moving-picture information of the target object and to analyze the action corresponding to the classified poses more accurately. CONSTITUTION: A foreground detecting unit (130) detects a moving foreground object, excluding the background image, from an input image. A contour extracting unit (150) extracts an external silhouette contour for the detected foreground object. A model generating unit (180) generates an average-value histogram model which serves as the reference for discriminating the action of an object from a silhouette image input in real time. A corner histogram generating unit (160) generates a hierarchical multi-band corner histogram of the corner points of the extracted contour signal. A similarity measuring unit (170) calculates the similarity of the generated corner histogram of the ...
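
A minimal Python sketch of the similarity-measuring step only, with histogram intersection standing in for the (unspecified) similarity measure; the corner histogram of the current silhouette and the per-action average-value histogram models are assumed to be precomputed.

import numpy as np

def histogram_intersection(h1, h2):
    # Normalize both histograms, then sum the bin-wise minima (1.0 = identical).
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    h1 = h1 / (h1.sum() + 1e-9)
    h2 = h2 / (h2.sum() + 1e-9)
    return float(np.minimum(h1, h2).sum())

def classify_action(corner_histogram, model_histograms):
    # model_histograms: dict mapping an action label to its average-value
    # histogram model (assumed precomputed by the model generating unit).
    scores = {label: histogram_intersection(corner_histogram, h)
              for label, h in model_histograms.items()}
    return max(scores, key=scores.get)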

Подробнее
22-08-2012 дата публикации

HUMAN TRACKING SYSTEM

Номер: KR1020120093197A
Автор:
Принадлежит:

Подробнее
05-06-2008 дата публикации

APPARATUS FOR DETERMINING A POSITION OF A FIRST OBJECT WITHIN A SECOND OBJECT

Номер: WO2008065581A2
Принадлежит:

The present invention relates to an apparatus for determining a position of a first object (14) within a second object (13), wherein the first object (14) contacts the second object (13) at a contact region. The apparatus (1) comprises a provision unit (2) for providing a three-dimensional model (20) of the second object (13). A projection unit (5, 6, 7, 21) generates a two-dimensional projection image (26) of the first object (14) and of the second object (13), and a registration unit (8) registers the three-dimensional model (20) with the two-dimensional projection image (26). A determination unit (9) determines the position of the contact region from the position of the first object (14) on the two-dimensional projection image (26) and the registered three-dimensional model (20) of the second object (13), wherein the position of the contact region is the position of the first object (14) within the second object (13).

Подробнее
08-01-2015 дата публикации

METHOD AND APPARATUS FOR DETERMINING POSE OF OBJECT IN SCENE

Номер: WO2015002114A1
Принадлежит:

A method for determining a pose of an object in a scene by determining a set of scene features from data acquired of the scene and matching the scene features to model features to generate weighted candidate poses when a scene feature matches one of the model features, wherein the weight of the candidate pose is proportional to the model weight. Then, the pose of the object is determined from the candidate poses based on the weights.
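
A minimal Python sketch of the weighted voting described above; the feature descriptors, the learned per-feature model weights, the helper pose_from_match (deriving a candidate pose from one scene/model correspondence), and the pose-equivalence test same_pose are all hypothetical stand-ins.

import numpy as np

def vote_for_poses(scene_features, model_features, pose_from_match, match_tol=0.1):
    votes = []  # (candidate pose, weight) pairs
    for sf in scene_features:
        for mf in model_features:
            if np.linalg.norm(sf["descriptor"] - mf["descriptor"]) < match_tol:
                # The weight of the candidate pose is proportional to the model weight.
                votes.append((pose_from_match(sf, mf), mf["weight"]))
    return votes

def best_pose(votes, same_pose):
    # Cluster candidate poses and return the representative of the cluster
    # with the largest accumulated weight.
    clusters = []  # [representative pose, accumulated weight]
    for pose, w in votes:
        for cluster in clusters:
            if same_pose(cluster[0], pose):
                cluster[1] += w
                break
        else:
            clusters.append([pose, w])
    return max(clusters, key=lambda c: c[1])[0] if clusters else None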

Подробнее
19-06-2008 дата публикации

METHOD AND SYSTEM FOR GAZE ESTIMATION

Номер: WO2008073563A1
Принадлежит:

A method and system, the method including capturing a video sequence of images with an image capturing system, designating at least one landmark in a region of interest of the captured video sequence, fitting a model of the region of interest to the region of interest in the captured video sequence, and determining a pose parameter for the model fitted to the region of interest.

Подробнее
21-11-2013 дата публикации

COMPUTER BASED PARAMETRIC NAVIGATION METHOD RELATED TO CIRCULAR FIXATOR APPLICATION

Номер: WO2013172800A1
Автор: ŞEHMUZ, Işin
Принадлежит:

This invention relates to a computer-based external circular fixator application that acquires radioscopic images of the fixator parameters and of the bone parameters that are to be corrected, calculates with software the rod lengths required for the coordinates at which the rings are placed, and thereby allows the bones to be brought into the desired shape.

Подробнее
08-08-2013 дата публикации

IMAGING APPARATUS FOR IMAGING AN OBJECT

Номер: WO2013114257A3
Принадлежит:

The invention relates to an imaging apparatus for imaging an object. A geometric relation determination unit (10) determines a geometric relation between first and second images of the object, wherein a marker determination unit (14) determines corresponding marker locations in the first and second images and marker appearances based on the geometric relation such that the marker appearances of a first marker to be located at a first location in the first image and of a second marker to be located at a second corresponding location in the second image are indicative of the geometric relation. The images with the markers at the respective corresponding locations are shown on a display unit (16). Since the marker appearances are indicative of the geometric relation between the images, a comparative reviewing of the images can be facilitated, in particular, if they correspond to different viewing geometries.

Подробнее
22-03-2007 дата публикации

FRAME AND PIXEL BASED MATCHING OF MODEL-GENERATED GRAPHICS IMAGES TO CAMERA FRAMES

Номер: WO000002007031947A3
Автор: TAPANG, Carlos
Принадлежит:

This invention allows for triangulation of the camera position without the usual scene analysis and feature recognition. It utilizes an a priori, accurate model of the world within the field of vision. The 3D model is rendered onto a graphics surface using the latest graphics processing units. Each frame coming from the camera is then searched for a best match in a number of candidate renderings on the graphics surface. The count of rendered images to compare to is made small by computing the change in camera position and angle of view from one frame to another, and then using the results of such computations to limit the next possible positions and angles of view to render the a priori world model. The main advantage of this invention over prior art is the mapping of the real world onto a world model.
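
A minimal Python sketch of the matching loop described above; render (a stand-in for GPU rendering of the a priori world model), neighbor_poses (enumerating a few candidate poses around a motion-predicted pose), and the vector encoding of a camera pose are all hypothetical assumptions.

import numpy as np

def track_camera(frames, initial_pose, render, neighbor_poses):
    # frames: iterable of grayscale camera frames (2D arrays).
    # initial_pose: numpy vector encoding camera position and view angles.
    pose, velocity = initial_pose, None
    for frame in frames:
        # Predict the next pose from the previous frame-to-frame motion, then
        # render the world model only for a few candidates near the prediction.
        predicted = pose if velocity is None else pose + velocity
        candidates = neighbor_poses(predicted)
        errors = [np.abs(render(p).astype(float) - frame.astype(float)).mean()
                  for p in candidates]
        new_pose = candidates[int(np.argmin(errors))]
        velocity, pose = new_pose - pose, new_pose
        yield pose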

Подробнее
04-08-2011 дата публикации

IMAGE-BASED GLOBAL REGISTRATION SYSTEM AND METHOD APPLICABLE TO BRONCHOSCOPY GUIDANCE

Номер: WO2011094518A3
Принадлежит:

A global registration system and method identifies bronchoscope position without the need for significant bronchoscope maneuvers, technician intervention, or electromagnetic sensors. Virtual bronchoscopy (VB) renderings of a 3D airway tree are obtained including VB views of branch positions within the airway tree. At least one real bronchoscopic (RB) video frame is received from a bronchoscope inserted into the airway tree. An algorithm according to the invention is executed on a computer to identify the several most likely branch positions having a VB view closest to the received RB view, and the 3D position of the bronchoscope within the airway tree is determined in accordance with the branch position identified in the VB view. The preferred embodiment involves a fast local registration search over all the branches in a global airway-bifurcation search space, with the weighted normalized sum of squares distance metric used for finding the best match.
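
A minimal Python sketch of the matching metric and branch search, assuming vb_views maps each candidate airway-branch position to a pre-rendered virtual-bronchoscopy image of the same size as the real frame; the normalization below is one plausible reading of a weighted normalized sum-of-squares distance, not necessarily the exact formulation of the patent.

import numpy as np

def weighted_normalized_ssd(frame, rendering, weights=None):
    f = frame.astype(float).ravel()
    r = rendering.astype(float).ravel()
    w = np.ones_like(f) if weights is None else weights.astype(float).ravel()
    # Normalize each image for brightness and contrast before comparing.
    f = (f - f.mean()) / (f.std() + 1e-9)
    r = (r - r.mean()) / (r.std() + 1e-9)
    return float(np.sum(w * (f - r) ** 2) / np.sum(w))

def most_likely_branches(rb_frame, vb_views, top_k=3):
    # Return the few branch positions whose VB view is closest to the RB frame.
    scored = sorted(vb_views.items(),
                    key=lambda kv: weighted_normalized_ssd(rb_frame, kv[1]))
    return [branch for branch, _ in scored[:top_k]]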

Подробнее
16-05-2013 дата публикации

METHOD AND SYSTEM FOR DETERMINING A RELATION BETWEEN A FIRST SCENE AND A SECOND SCENE

Номер: WO2013070125A1
Принадлежит:

The present invention relates to a system (200) and method for determining a relation between a first scene and a second scene. The method comprises the steps of generating at least one sensor image of a first scene with at least one sensor; accessing information related to at least one second scene, said second scene encompassing said first scene, and matching the sensor image with the second scene to map the sensor image onto the second scene. The step of accessing information related to the at least one second scene comprises accessing a 3D map comprising geocoded 3D coordinate data. The mapping involves associating geocoding information to a plurality of positions in the sensor image based on the coordinate data of the second scene.

Подробнее
30-07-2015 дата публикации

SYSTEM FOR TRACKING CABLE TETHERED FROM MACHINE

Номер: US20150213605A1
Принадлежит: Caterpillar Inc.

A system for locating a cable tethered from a machine along a worksite is disclosed. The system includes a laser scanner and a color camera. A location unit generates a position of the machine. The system includes a processing device disposed on the machine and in communication with the laser scanner, the color camera and the location unit. The processing device determines a location of the cable based on signals from the laser scanner and the color camera. The system further includes a server remotely located with respect to the machine and disposed in communication with the processing device. The server is configured to record locations of the cable at different instances of time and generates a map of the cable based on the locations of the cable.

Подробнее
09-04-2014 дата публикации

METHOD OF DETERMINING THE ORIENTATION OF THE UPPER PART OF A STACK OF WORKPIECES

Номер: EP2497065B1
Автор: LUSTIG, Stefan
Принадлежит: TRUMPF Maschinen Austria GmbH & Co. KG.

Подробнее
15-08-2012 дата публикации

HUMAN TRACKING SYSTEM

Номер: EP2486545A2
Принадлежит:

Подробнее
07-09-2016 дата публикации

METHOD AND APPARATUS FOR ROAD WIDTH ESTIMATION

Номер: EP3063552A1
Автор: MA, Xiang, CHEN, Xin
Принадлежит:

Подробнее
17-02-2003 дата публикации

Номер: JP0003377465B2
Автор:
Принадлежит:

Подробнее
20-11-2013 дата публикации

Номер: JP0005345947B2
Автор:
Принадлежит:

Подробнее
27-06-2012 дата публикации

Номер: JP0004961860B2
Автор:
Принадлежит:

Подробнее
10-09-2014 дата публикации

Номер: JP0005587861B2
Автор:
Принадлежит:

Подробнее
26-04-2013 дата публикации

METHOD FOR LOCALIZING OBJECTS BY RESOLUTION IN THE THREE-DIMENSIONAL SPACE OF THE SCENE

Номер: FR0002981771A1

The invention lies in the field of video surveillance using calibrated cameras. It relates to a method for localizing objects of interest in images. The method according to the invention is characterized in that it uses, on the one hand, an initial presence map p1CP modeling positions i in the scene and comprising, for each position i, a value p1CP(i) representative of the probability that an object of interest is located at the position i under consideration, each value p1CP(i) being obtained from a localization criterion defined in an image space of the image acquisition system, and, on the other hand, atoms Ai predetermined for each position i of the presence map p1CP, the atom Ai of a position i comprising, for each position j, a value Ai(j) representative of the overlap between the projection m'(i) into the image space of a three-dimensional model M'(i) placed at position i and the projection m'(j) into the image space of a three-dimensional model M'(j) placed at ...

Подробнее
15-05-2014 дата публикации

Method of Indoor Position Detection Based on Images and Mobile Device Employing the Method

Номер: KR1020140058861A
Автор:
Принадлежит:

Подробнее
22-07-2015 дата публикации

IMAGE-BASED INDOOR POSITION DETERMINATION

Номер: KR1020150085088A
Принадлежит:

... In one implementation, a method may include determining a topological representation of an indoor portion of a building based at least in part on the positions or the number of lines in an image of the indoor portion of the building, and comparing the topological representation with one or more stored topological representations, for example in a digital map of the building, to determine a potential position of the indoor portion of the building.

Подробнее
19-06-2015 дата публикации

DETERMINATION OF THE POSITION OF A MEDICAL DEVICE WITHIN A BRANCHED ANATOMICAL STRUCTURE

Номер: KR1020150068382A
Принадлежит:

... Information extracted from sequential images captured from the viewpoint of the distal end of a medical device moving through an anatomical structure is compared with corresponding information extracted from a computer model of the anatomical structure. The most probable match between the information extracted from the sequential images and the corresponding information extracted from the computer model is then determined using probabilities associated with a set of potential matches, so as to register the computer model of the anatomical structure to the medical device and thereby determine the lumen of the anatomical structure in which the medical device is currently located. Sensor information may be used to limit the set of potential matches. Feature attributes associated with the sequence of images and with the set of potential matches may be compared quantitatively as part of determining the most probable match.

Подробнее
10-05-2012 дата публикации

ESTIMATING POSITION AND ORIENTATION OF AN UNDERWATER VEHICLE RELATIVE TO UNDERWATER STRUCTURES

Номер: WO2012061134A2
Принадлежит:

A method and system that can be used for scanning underwater structures. For example, the method and system estimate a position and orientation of an underwater vehicle relative to an underwater structure, such as by directing an acoustic sonar wave toward an underwater structure, and processing the acoustic sonar wave reflected by the underwater structure to produce a three dimensional image of the structure. The data points of this three dimensional image are compared to a pre-existing three dimensional model of the underwater structure. Based on the comparison, a position and orientation of an underwater vehicle relative to the underwater structure can be determined.
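
A minimal Python sketch of the comparison step, assuming the pre-existing structure model and the sonar return are both available as 3D point sets and that candidate vehicle poses are given as (R, t) pairs; SciPy's cKDTree is assumed to be available for nearest-neighbor lookup.

import numpy as np
from scipy.spatial import cKDTree  # assumed available

def pose_error(sonar_points, model_tree, R, t):
    # Transform the sonar points from the vehicle frame into the structure frame
    # and measure how closely they land on the pre-existing model.
    transformed = sonar_points @ R.T + t
    dists, _ = model_tree.query(transformed)
    return float(np.mean(dists ** 2))

def estimate_pose(sonar_points, model_points, candidate_poses):
    # candidate_poses: iterable of (R, t) pairs; the best-fitting pose is the
    # one with the smallest mean squared point-to-model distance.
    tree = cKDTree(model_points)
    return min(candidate_poses, key=lambda Rt: pose_error(sonar_points, tree, *Rt))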

Подробнее
03-07-2008 дата публикации

HUMAN POSE ESTIMATION AND TRACKING USING LABEL

Номер: WO2008079541A2
Принадлежит:

A method and apparatus for estimating poses of a subject by grouping data points generated by a depth image into groups representing labeled parts of the subject, and then fitting a model representing the subject to the data points using the grouping of the data points. The grouping of the data points is performed by grouping the data points to segments based on proximity of the data points, and then using constraint conditions to assign the segments to the labeled parts. The model is fitted to the data points by using the grouping of the data points to the labeled parts.
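
A minimal Python sketch of the first grouping step only (clustering depth points into segments by spatial proximity); assigning segments to labeled parts under constraint conditions and fitting the subject model are not shown, and the radius threshold is a hypothetical parameter. SciPy's cKDTree is assumed to be available.

import numpy as np
from scipy.spatial import cKDTree  # assumed available

def group_by_proximity(points, radius=0.05):
    # points: (N, 3) array of 3D points from the depth image.
    tree = cKDTree(points)
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        # Flood-fill over neighbouring points to form one segment.
        labels[i] = current
        stack = [i]
        while stack:
            j = stack.pop()
            for k in tree.query_ball_point(points[j], radius):
                if labels[k] == -1:
                    labels[k] = current
                    stack.append(k)
        current += 1
    return labels  # segment index per point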

Подробнее
10-10-2013 дата публикации

IMAGE REGISTRATION APPARATUS

Номер: US20130266230A1
Принадлежит: KONINKLIJKE PHILIPS ELECTRONICS N.V.

The invention relates to an image registration apparatus for registering a first image and a second image with respect to each other. A model, which has a fixed topology, is adapted to the first image for generating a first adapted model and to the second image for generating a second adapted model,and corresponding image elements () are determined in the first image and in the second image based on spatial positions of first image elements in the first image with respect to the first adapted model and spatial positions of second image elements in the second image with respect to the second adapted model. Since the model has a fixed topology, corresponding image elements can relatively reliably be found based on the adapted models, even if the first and second images show objects having complex properties like a heart, thereby improving the registration quality. 1. An image registration apparatus for registering a first image and a second image with respect to each other , the image registration apparatus comprising:an image providing unit for providing a first image of a first object and a second image of a second object, wherein the first object and the second object are of the same type,a model providing unit for providing a model of the first object and of the second object, wherein a number of model elements of the model and neighboring relations between the model elements are fixed,an adaptation unit for adapting the model to the first image for generating a first adapted model and to the second image for generating a second adapted model, wherein the adaptation unit is adapted to determine an internal energy by registering a shape model with the provided model and by determining deviations between the shape model and the provided model for adapting the provided model to the first image and to the second image, anda corresponding image elements determining unit for determining corresponding image elements in the first image and in the second image based on ...

Подробнее
30-01-2014 дата публикации

POSITION AND ORIENTATION CALIBRATION METHOD AND APPARATUS

Номер: US20140029800A1
Принадлежит: CANON KABUSHIKI KAISHA

A position and orientation measuring apparatus calculates a difference between an image feature of a two-dimensional image of an object and a projected image of a three-dimensional model in a stored position and orientation of the object projected on the two-dimensional image. The position and orientation measuring apparatus further calculates a difference between three-dimensional coordinate information and a three-dimensional model in the stored position and orientation of the object. The position and orientation measuring apparatus then converts a dimension of the first difference and/or the second difference to cause the first difference and the second difference to have an equivalent dimension and corrects the stored position and orientation. 1. An apparatus comprising:an approximate coarse position and orientation obtain unit configured to obtain an approximate position and orientation of an object;a two-dimensional image obtain unit configured to obtain a two-dimensional image of the object;a three-dimensional image obtain unit configured to obtain three-dimensional coordinate information of a surface of the object;a holding unit configured to hold a three-dimensional model of the object;a detecting unit configured to detect an image feature from the two-dimensional image;a first associating unit configured to associate the detected image feature with a feature of the held three-dimensional model based on the approximate position and orientation of the object;a second associating unit configured to associate the three-dimensional coordinate information of a surface of the object with the feature of the held three-dimensional model based on the approximate position and orientation of the object;and a deriving unit configured to derive a position and orientation of the object based on an associated result and the approximate position and orientation of the object.2. The apparatus according to claim 1 , wherein the three-dimensional model is CAD model.3. The ...

Подробнее
06-02-2014 дата публикации

THREE-DIMENSIONAL POINT CLOUD POSITION DATA PROCESSING DEVICE, THREE-DIMENSIONAL POINT CLOUD POSITION DATA PROCESSING SYSTEM, AND THREE-DIMENSIONAL POINT CLOUD POSITION DATA PROCESSING METHOD AND PROGRAM

Номер: US20140037194A1

A technique is provided for efficiently process three-dimensional point cloud position data that are obtained at different viewpoints. A projecting plane is set in a measurement space as a parameter for characterizing a target plane contained in plural planes that form an object. The target plane and other planes are projected on the projecting plane. Then, a distance between each plane and the projecting plane is calculated at each grid point on the projecting plane, and the calculated matrix data is used as a range image that characterizes the target plane. The range image is also formed with respect to the other planes and with respect to planes that are viewed from another viewpoint. The range images of the two viewpoints are compared, and a pair of the planes having the smallest difference between the range images thereof is identified as matching planes between the two viewpoints. 1. A three-dimensional point cloud position data processing device comprising:a three-dimensional point cloud position data obtaining unit for obtaining first and second three-dimensional point cloud position data of an object, and the first and the second three-dimensional point cloud position data including points of planes and obtained at a first viewpoint and at a second viewpoint, respectively;a plane extracting unit for adding identical labels to the points in the same planes and extracting plural first planes and plural second planes, based on each of the first and the second three-dimensional point cloud position data, the first planes forming the object viewed from the first viewpoint, and the second planes forming the object viewed from the second viewpoint;a relative position calculating unit for calculating a relative position between a target plane and each of the other planes at each location with respect to each of the first planes and the second planes; anda matching plane identifying unit for comparing the relative positions of the first planes and the relative ...
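
A minimal Python sketch of the final matching step, assuming the per-plane range images (grids of distances between each other plane and the projecting plane of the target plane) have already been computed for both viewpoints as equally sized 2D arrays.

import numpy as np

def match_planes(range_images_view1, range_images_view2):
    # Each argument: dict mapping a plane id to its 2D range image (same shape
    # for both viewpoints). Returns (plane in view 1, plane in view 2, difference)
    # triples sorted so that the most similar pair comes first.
    matches = []
    for id1, r1 in range_images_view1.items():
        diffs = {id2: float(np.mean(np.abs(r1 - r2)))
                 for id2, r2 in range_images_view2.items()}
        best = min(diffs, key=diffs.get)
        matches.append((id1, best, diffs[best]))
    return sorted(matches, key=lambda m: m[2])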

Подробнее
06-03-2014 дата публикации

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD AND STORAGE MEDIUM

Номер: US20140067126A1
Принадлежит: CANON KABUSHIKI KAISHA

There is provided with an information processing apparatus. An image including a target object is acquired. A coarse position and orientation of the target object is acquired. Information of a plurality of models which indicate a shape of the target object with different accuracy is held. A geometrical feature of the target object in the acquired image is associated with a geometrical feature indicated by at least one of the plurality of models placed at the coarse position and orientation. A position and orientation of the target object is estimated based on the result of association. 1. An information processing apparatus comprising:an image acquisition unit configured to acquire an image including a target object;a unit configured to acquire a coarse position and orientation of the target object;a holding unit configured to hold information of a plurality of models which indicate a shape of the target object with different accuracy;an associating unit configured to associate a geometrical feature of the target object in the acquired image with a geometrical feature indicated by at least one of the plurality of models placed at the coarse position and orientation; andan estimation unit configured to estimate a position and orientation of the target object based on the result of association.2. The information processing apparatus according to claim 1 , wherein the associating unit is further configured to select a model out of the plurality of models in accordance with a predetermined condition claim 1 , and to associate the geometrical feature of the target object and the geometrical feature indicated by the selected model.3. The information processing apparatus according to claim 2 , wherein the associating unit comprises a selection unit configured to select a model out of the plurality of models based on comparison between an index value calculated while repeating the estimation and a threshold claim 2 , andthe estimation unit comprises: 'a unit configured to ...

Подробнее
01-01-2015 дата публикации

APPARATUS AND METHOD FOR DETECTING LESION

Номер: US20150003677A1
Принадлежит: SAMSUNG ELECTRONICS CO., LTD.

An apparatus and method for detecting a lesion, which enables to adaptively determine a parameter value of a lesion detection process using a feature value extracted from a received medical image and a parameter prediction model to improve accuracy in lesion detection and lesion diagnosis. The apparatus and the method include a model generator configured to generate a parameter prediction model based on pre-collected medical images, an extractor configured to extract a feature value from a received medical image, and a determiner configured to determine a parameter value of a lesion detection process using the extracted feature value and the parameter prediction model. 1. An apparatus to detect a lesion , comprising:a model generator configured to generate a parameter prediction model based on pre-collected medical images;an extractor configured to extract a feature value from a received medical image; anda determiner configured to determine a parameter value of a lesion detection process using the extracted feature value and the parameter prediction model.2. The apparatus of claim 1 , wherein the extractor is further configured to extract at least one of a global feature value claim 1 , a local feature value claim 1 , and a meta feature value.3. The apparatus of claim 1 , wherein the model generator is further configured to generate the parameter prediction model using claim 1 , as training data claim 1 , a feature value extracted from each of the pre-collected medical images and a parameter value optimized for each of the pre-collected medical images.4. The apparatus of claim 1 , wherein the lesion detection process employs an energy function.5. The apparatus of claim 1 , wherein the lesion detection process is a level set process.6. The apparatus of claim 1 , further comprising:a detector configured to detect a lesion from the received medical image using the lesion detection process applied with the determined parameter value.7. The apparatus of claim 1 , ...

Подробнее
05-01-2017 дата публикации

IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD

Номер: US20170004385A1
Автор: AOBA Masato
Принадлежит:

In a case where generating a training image of an object to be used to generate a dictionary to be referred to in image recognition processing of detecting the object from an input image, model information of an object to be detected is set, and a luminance image of the object and a range image are input. The luminance distribution of the surface of the object is estimated based on the luminance image and the range image, and the training image of the object is generated based on the model information and the luminance distribution. 119.-. (canceled)20. An image processing apparatus for generating a training image for an object to be used to generate a dictionary to be referred to in image recognition processing of detecting the object from an input image , comprising:a first obtaining unit configured to obtain a luminance image of a plurality of objects and a range image of the object;a determination unit configured to determine, based on the luminance image and the range image, a luminance value for a training image to be generated on the basis of model information of the object;a generation unit configured to generate a training image of the object based on the determined luminance value and the model information,wherein the first obtaining unit, the determining unit, and the generation unit are implemented by using at least one processor.21. The apparatus according to claim 20 , further comprising an estimation unit configured to estimate a relation between a luminance value and a direction of a surface of the object based on the luminance image and the range image claim 20 ,wherein the determination unit determines the luminance value based on the estimated relation.22. The apparatus according to claim 20 , wherein the model information is computer-aided design (CAD) data.23. The apparatus according to claim 22 , wherein the training image is a computer graphics image.24. The apparatus according to claim 20 , wherein the generation unit generates the training ...

Подробнее
07-01-2016 дата публикации

SYSTEM FOR ACCURATE 3D MODELING OF GEMSTONES

Номер: US20160004926A1
Принадлежит:

A computerized system, kit and method for producing an accurate 3D-Model of a gemstone by obtaining an original 3D-model of an external surface of the gemstone; imaging at least one selected junction with only portions of its associated facets and edges disposed adjacent the junction, the location of the junction being determined based on information obtained at least partially by using the original 3D model; analyzing results of the imaging to obtain information regarding details of the gemstone at the junction; and using the information for producing an accurate 3D-model of said external surface of the gemstone, which is more accurate than the original 3-D model. 150-. (canceled)51. A method for producing a 3D-Model of an external surface of a gemstone , said method comprising:a) taking a plurality of images of the gemstone and using the plurality of images for generating an original 3D-model of an external surface of said gemstone including facets, edges abounding said facets, and junctions each constituting an area of meeting of at least three said edges associated with at least two facets;b) using the original 3D model generated in step a) to obtain information, based on which location of one or more selected junctions is determined, and subsequently imaging an area of each such selected junction with only portions of associated facets thereof and edges disposed adjacent said selected junction, said imaging being performed under illumination conditions different from those, at which said plurality of images were taken and providing such contrast between adjacent facets as to allow to distinguish an edge therebetween;c) analyzing results of said imaging to obtain information regarding the area imaged in step b); andd) using the information obtained in step c) for producing an improved 3D-model of said external surface of the gemstone, which is more accurate than the original 3-D model.52. The method according to claim 51 , wherein the gemstone has a planned cut ...

Подробнее
07-01-2016 дата публикации

Service provision program

Номер: US20160005177A1
Автор: Sokichi Fujita
Принадлежит: Fujitsu Ltd

A non-transitory recording medium storing a program that causes a computer to execute a process, the process including: generating a modified image by executing modification processing on an image of a mark affixed to a product; and providing the generated modified image as a determination-use image employable in determination as to whether or not the product affixed with the mark is included in a captured image.

Подробнее
07-01-2016 дата публикации

SYSTEM AND METHOD FOR SEGMENTATION OF LUNG

Номер: US20160005193A1
Принадлежит:

Disclosed are systems, devices, and methods for determining pleura boundaries of a lung, an exemplary method comprising acquiring image data from an imaging device, generating a set of two-dimensional (2D) slice images based on the acquired image data, determining, by a processor, a seed voxel in a first slice image from the set of 2D slice images, applying, by the processor, a region growing process to the first slice image from the set of 2D slice images starting with the seed voxel using a threshold value, generating, by the processor, a set of binarized 2D slice images based on the region grown from the seed voxel, filtering out, by the processor, connected components of the lung in each slice image of the set of binarized 2D slice images, and identifying, by the processor, the pleural boundaries of the lung based on the set of binarized 2D slice images. 1. A segmentation method for determining pleura boundaries of a lung , comprising:acquiring image data from an imaging device;generating a set of two-dimensional (2D) slice images based on the acquired image data;determining, by a processor, a seed voxel in a first slice image from the set of 2D slice images;applying, by the processor, a region growing process to the first slice image from the set of 2D slice images starting with the seed voxel using a threshold value;generating, by the processor, a set of binarized 2D slice images based on the region grown from the seed voxel;filtering out, by the processor, connected components of the lung in each slice image of the set of binarized 2D slice images; andidentifying, by the processor, the pleural boundaries of the lung based on the set of binarized 2D slice images.2. The segmentation method according to claim 1 , wherein the seed voxel is in a portion of the first slice image from the set of binarized 2D slice images corresponding to a trachea of the lung.3. The segmentation method according to claim 1 , wherein the threshold value is greater than or equal to an ...
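
A minimal Python sketch of the region-growing step on a single 2D slice, assuming the slice holds CT intensity values, the seed lies inside the region to be grown, and threshold is the upper intensity bound for inclusion; the later filtering of connected components is not shown.

import numpy as np
from collections import deque

def region_grow_2d(slice_image, seed, threshold):
    # slice_image: 2D array of intensities; seed: (row, col) inside the region.
    h, w = slice_image.shape
    grown = np.zeros((h, w), dtype=bool)
    grown[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not grown[ny, nx]
                    and slice_image[ny, nx] <= threshold):
                grown[ny, nx] = True
                queue.append((ny, nx))
    return grown  # binarized slice: True where the region grew from the seed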

Подробнее
07-01-2016 дата публикации

DYNAMIC 3D LUNG MAP VIEW FOR TOOL NAVIGATION INSIDE THE LUNG

Номер: US20160005220A1
Принадлежит:

A method for implementing a dynamic three-dimensional lung map view for navigating a probe inside a patient's lungs includes loading a navigation plan into a navigation system, the navigation plan including a planned pathway shown in a 3D model generated from a plurality of CT images, inserting the probe into a patient's airways, registering a sensed location of the probe with the planned pathway, selecting a target in the navigation plan, presenting a view of the 3D model showing the planned pathway and indicating the sensed location of the probe, navigating the probe through the airways of the patient's lungs toward the target, iteratively adjusting the presented view of the 3D model showing the planned pathway based on the sensed location of the probe, and updating the presented view by removing at least a part of an object forming part of the 3D model. 1. A method for implementing a dynamic three-dimensional (3D) lung map view for navigating a prove inside a patient's lungs , the method comprising:loading a navigation plan into a navigation system, the navigation plan including a planned pathway shown in a 3D model generated from a plurality of CT images;inserting the probe into a patient's airways, the probe including a location sensor in operative communication with the navigation system;registering a sensed location of the probe with the planned pathway;selecting a target in the navigation plan;presenting a view of the 3D model showing the planned pathway and indicating the sensed location of the probe;navigating the probe through the airways of the patient's lungs toward the target;iteratively adjusting the presented view of the 3D model showing the planned pathway based on the sensed location of the probe; andupdating the presented view by removing at least a part of an object forming part of the 3D model.2. The method according to claim 1 , wherein iteratively adjusting the presented view of the 3D model includes zooming in when the probe approaches the ...

Подробнее
07-01-2016 дата публикации

Photometric optimization with t-splines

Номер: US20160005221A1
Принадлежит: Qualcomm Inc

One example method is disclosed that includes the steps of capturing a plurality of images of a scene, wherein each of the plurality of images of the scene captures a different perspective of a portion of an object; establishing a three-dimensional (“3D”) model of the object using at least some of the plurality of images of the scene; initializing a T-spline based at least in part on the 3D model; determining a first photometric error associated with the 3D model and the T-spline; and optimizing the T-spline based on the first photometric error to create an optimized T-spline.

Подробнее
08-01-2015 дата публикации

Method for Determining Object Poses Using Weighted Features

Номер: US20150010202A1
Принадлежит:

A method for determining a pose of an object in a scene by determining a set of scene features from data acquired of the scene and matching the scene features to model features to generate weighted candidate poses when the scene feature matches one of the model features, wherein the weight of the candidate pose is proportional to the model weight. Then, the pose of the object is determined from the candidate poses based on the weights. 1. A method for determining a pose of an object in a scene , comprising the steps of:determining, from a model of the object, model features and a weight associated with each model feature;determining, from scene data acquired of the scene, scene features;matching the scene features to the model features to obtain a matching scene and matching model features;generating candidate poses from the matching scene and the matching model features, wherein a weight of each candidate pose is proportional to the weight associated with the matching model feature; anddetermining the pose of the object from the candidate poses based on the weights.2. The method of claim 1 , wherein the model features and the weights are learned using training data by maximizing a difference between a number of votes received by a true pose and a number of votes received by an incorrect pose.3. The method of claim 1 , wherein a descriptor is determined for each feature and the matching uses a distance function of two descriptors.4. The method of claim 1 , wherein the features are oriented point pair features.5. The method of claim 1 , wherein the pose is determined by clustering the candidate poses.6. The method of claim 5 , wherein the clustering merges two candidate poses by taking a weighted sum of the candidate poses according to the weights associated with the candidate poses.8. The method of claim 1 , wherein the scene data are a 3D point cloud.9. The method of claim 1 , wherein the model features are stored in a hash table claim 1 , the scene features are ...

Подробнее
12-01-2017 дата публикации

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM

Номер: US20170011523A1
Принадлежит:

An image processing apparatus includes an acquisition unit, a first detection unit, a selection unit, and a correction unit. The acquisition unit acquires an image including a target object having a plurality of parts. The first detection unit detects a candidate region of each of the plurality of parts of the target object included in the acquired image using a previously learned model. The selection unit selects, based on the candidate region detected by the first detection unit, a first part having relatively high reliability and a second part having relatively low reliability from among the plurality of parts. The correction unit corrects the model by changing a position of the second part based on the first part selected by the selection unit. 1. An image processing apparatus comprising:an acquisition unit configured to acquire an image including a target object having a plurality of parts;a first detection unit configured to detect a candidate region of each of the plurality of parts of the target object included in the acquired image using a previously learned model;a selection unit configured to select, based on the candidate region detected by the first detection unit, a first part having relatively high reliability and a second part having relatively low reliability from among the plurality of parts; anda correction unit configured to correct the model by changing a position of the second part based on the first part selected by the selection unit.2. The image processing apparatus according to claim 1 , further comprising a second detection unit configured to detect positions of the plurality of parts of the target object using the model corrected by the correction unit.3. The image processing apparatus according to claim 1 , wherein the selection unit calculates a variance-covariance matrix of the candidate region detected for each of the plurality of parts claim 1 , and selects claim 1 , as the first part claim 1 , a part in which a predetermined element ...

Подробнее
14-01-2016 дата публикации

INFORMATION PROCESSING APPARATUS RECOGNIZING CERTAIN OBJECT IN CAPTURED IMAGE, AND METHOD FOR CONTROLLING THE SAME

Номер: US20160012599A1
Автор: Kuboyama Hideo
Принадлежит:

An information processing apparatus includes an image obtaining unit configured to obtain an input image, an extraction unit configured to extract from the input image one or more regions corresponding to one or more objects included in a foreground of the operation surface in accordance with the reflected positional information and positional information of the operation surface in the space, a region specifying unit configured to specify an isolation region, which is not in contact with a boundary line which defines a predetermined closed region in the input image, from among the one or more regions extracted by the extraction unit, and a recognition unit configured to recognize an adjacency state of a predetermined instruction object relative to the operation surface in accordance with the positional information reflected from the portion corresponding to the isolation region as specified by the region specifying unit. 1. An information processing apparatus comprising:an image obtaining unit configured to obtain an input image on which positional information in a space including an operation surface as a portion of a background is reflected;an extraction unit configured to extract one or more regions corresponding to one or more objects included in a foreground of the operation surface from the input image in accordance with the positional information reflected on the input image obtained by the image obtaining unit and positional information of the operation surface in the space;a region specifying unit configured to specify an isolation region which is not in contact with a boundary line which defines a predetermined closed region in the input image from among the one or more regions extracted by the extraction unit; anda recognition unit configured to recognizes an adjacency state of a predetermined instruction object relative to the operation surface in accordance with the positional information reflected on the isolation region in the input image in a ...

Подробнее
14-01-2016 дата публикации

IMAGE PROCESSING METHOD, IMAGE PROCESSING APPARATUS, PROGRAM, STORAGE MEDIUM, PRODUCTION APPARATUS, AND METHOD OF PRODUCING ASSEMBLY

Номер: US20160012600A1
Автор: Kitajima Hiroshi
Принадлежит:

A tentative local score between a point in a feature image in a template image and a point, in a target object image, at a position corresponding to the point in the feature image is calculated, and a determination is performed as to whether the tentative local score is smaller than 0. In a case where the tentative local score is greater than or equal to 0, the tentative local score is employed as a local score. In a case where the tentative local score is smaller than 0, the tentative local score is multiplied by a coefficient and the result is employed as a degree of local similarity. 1. An image processing method for performing image processing by an image processing apparatus using a first pyramid including a plurality of template images having different first resolutions and hierarchized in layers according to the first resolutions , a second pyramid including a plurality of target object images having different second resolutions from each other but equal to the respective first resolutions of the template images in the first pyramid and hierarchized in layers according to the second resolutions such that an image similar to a feature image included in one of the template images in the first pyramid is searched for from one of the target object images in the second pyramid by evaluating a degree of layer-to-layer similarity between the first and second pyramids in an order of resolution from the lowest to highest , the method comprising:calculating a degree of local similarity between a point in the feature image and a corresponding point in the target object on a point-by-point basis for each of all points in the feature image; andcalculating the degree of the similarity between the feature image and the target object image by determining the sum of the calculated degrees of local similarity and normalizing the sum,the calculating the degree of local similarity includingcalculating a tentative degree of local similarity between a point in the feature image ...
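
A minimal Python sketch of the scoring rule described above, assuming the tentative local scores (one per feature point, e.g. a measure of agreement between template and target at that point) have already been computed, and that penalty_coeff is the coefficient applied to negative tentative scores.

import numpy as np

def overall_similarity(tentative_local_scores, penalty_coeff=0.5):
    # Negative tentative scores are multiplied by the coefficient rather than
    # clipped, so conflicting points still lower the total but cannot dominate it.
    s = np.asarray(tentative_local_scores, dtype=float)
    local = np.where(s >= 0.0, s, penalty_coeff * s)
    return float(local.sum() / len(local))  # normalized by the number of points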

Подробнее
14-01-2016 дата публикации

CT SYSTEM FOR SECURITY CHECK AND METHOD THEREOF

Номер: US20160012647A1
Принадлежит:

A CT system for security check and a method thereof are provided. The method includes: reading inspection data of an inspected object; inserting at least one three-dimensional (3D) Fictional Threat Image (FTI) into a 3D inspection image of the inspected object, which is obtained from the inspection data; receiving a selection of at least one region in the 3D inspection image including the 3D FTI or at least one region in a two-dimensional (2D) inspection image including a 2D FTI corresponding to the 3D FTI, wherein the 2D inspection image is obtained from the 3D inspection image or is obtained from the inspection data; and providing a feedback of the 3D inspection image including at least one 3D FTI in response to the selection. With the above solution, it is convenient for a user to rapidly mark a suspected object in the CT image, and provides a feedback of whether a FTI is included. 1. A method in a Computed Tomography (CT) system for security check , comprising steps of:reading inspection data of an inspected object;inserting at least one three-dimensional (3D) Fictional Threat Image (FTI) into a 3D inspection image of the inspected object, wherein the 3D inspection image is obtained from the inspection data;receiving a selection of at least one region in the 3D inspection image including the 3D FTI or at least one region in a two-dimensional (2D) inspection image including a 2D FTI corresponding to the 3D FTI, wherein the 2D inspection image is obtained from the 3D inspection image or is obtained from the inspection data; andproviding a feedback of the 3D inspection image including at least one 3D FTI in response to the selection.2. The method according to claim 1 , wherein the step of receiving a selection of at least one region in the 3D inspection image including the 3D FTI or at least one region in a 2D inspection image including a 2D FTI corresponding to the 3D FTI comprises:receiving coordinate positions of a part of the 3D inspection image or the 2D ...

Подробнее
15-01-2015 дата публикации

PITCH DETERMINATION SYSTEMS AND METHODS FOR AERIAL ROOF ESTIMATION

Номер: US20150016689A1
Автор: Pershing Chris
Принадлежит:

User interface systems and methods for roof estimation are described. Example embodiments include a roof estimation system that provides a user interface configured to facilitate roof model generation based on one or more aerial images of a building roof. In one embodiment, roof model generation includes image registration, image lean correction, roof section pitch determination, wire frame model construction, and/or roof model review. The described user interface provides user interface controls that may be manipulated by an operator to perform at least some of the functions of roof model generation. In one embodiment, the user interface provides user interface controls that facilitate the determination of pitch of one or more sections of a building roof. This abstract is provided to comply with rules requiring an abstract, and it is submitted with the intention that it will not be used to interpret or limit the scope or meaning of the claims. 1. A computer-implemented process in a roof estimation system comprising:displaying, by the roof estimation system, a graphical user interface including a first aerial image of a roof structure of a building and also at least one first visual marker that is moveable by a user in a same display window as the first aerial image while said first aerial image is displayed within the graphical user interface;moving the first visual marker with respect to the first aerial image of the roof structure to a first location in response to input from the user;storing data in a memory of the computer of the first location to which the first visual marker was moved;displaying a second aerial image of the roof structure of the building, the second aerial image providing a different view of the roof than the first aerial image; anddisplaying a location of a second visual marker on the roof structure of the building in the second aerial image of the roof structure based on an indication received from the stored data in the memory of the first ...

Подробнее
15-01-2015 дата публикации

IMAGING APPARATUS FOR IMAGING AN OBJECT

Номер: US20150016704A1
Принадлежит:

The invention relates to an imaging apparatus for imaging an object. A geometric relation determination unit (10) determines a geometric relation between first and second images of the object, wherein a marker determination unit (14) determines corresponding marker locations in the first and second images and marker appearances based on the geometric relation such that the marker appearances of a first marker to be located at a first location in the first image and of a second marker to be located at a second corresponding location in the second image are indicative of the geometric relation. The images with the markers at the respective corresponding locations are shown on a display unit (16). Since the marker appearances are indicative of the geometric relation between the images, a comparative reviewing of the images can be facilitated, in particular, if they correspond to different viewing geometries. 1. An imaging apparatus for imaging an object, the imaging apparatus (1) comprising: a first image providing unit (7) for providing a first image (25) of the object, a second image providing unit (11) for providing a second image (27) of the object, a geometric relation determination unit (10) for determining a geometric relation between the first image (25) and the second image (27), a marker determination unit (14) for determining corresponding marker locations in the first and second images (25, 27) and marker appearances based on the geometric relation such that a first location in the first image (25) and a second location in the second image (27) show the same part of the object and such that the marker appearances of a first marker (30) to be located at the first location and of a second marker (26) to be located at the second location are indicative of the geometric relation between the first image (25) and the second image (27), a display ...

Подробнее
19-01-2017 дата публикации

METHOD AND SYSTEM FOR DETECTION OF CONTRABAND NARCOTICS IN HUMAN DIGESTIVE TRACT

Номер: US20170017860A1
Принадлежит:

A method for automated detection of illegal substances smuggled inside internal cavities of a passenger, e.g., as capsules. The method provides for an automated detection of narcotics hidden in a passenger's stomach area using pictures produced by an X-ray scanner. According to an exemplary embodiment, throughput of the scanner is increased by an automated detection algorithm, which takes less time than visual analysis by an operator. The operator is only involved in cases when narcotics are detected. The automated detection method has a consistent precision, because the effects of tiredness of the operator are eliminated. Efficiency and costs of the process are improved, since fewer qualified operators can service several scanners. 1. A method for an automated detection of swallowed capsules on X-ray scanner images , the method comprising:(a) acquiring an incoming image of a person passing through the scanner;(b) generating additional images based on the incoming image by performing transformations of the incoming image;(c) determining an upper body area on the incoming image;(d) calculating a position of a stomach area on the upper body area of the incoming image and on the additional images;(e) classifying segmented regions of the stomach area of the incoming image;(f) calculating geometrical and intensity features along with rotationally invariant periodic features for windows on the stomach area;(g) detecting suspected windows on the stomach area;(h) calculating aggregate features for the properties of the suspected windows;(i) using a model for the images that do not contain swallowed capsules classifying the incoming image using a threshold for a dissimilarity function; and(j) informing a user that the incoming image contains the swallowed capsules if the dissimilarity function for aggregate features for the incoming image is larger than or equal to the threshold.2. The method of claim 1 , wherein periodic features is used to calculate the rotationally ...
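
A minimal Python sketch of the final decision step, assuming the aggregate features of the incoming image have been computed and that the model of capsule-free images is summarized by a mean vector and covariance matrix; the Mahalanobis distance stands in for the patent's dissimilarity function and threshold is a tuned constant.

import numpy as np

def contains_capsules(aggregate_features, model, threshold):
    # model: {"mean": mean feature vector, "cov": covariance matrix} describing
    # images without swallowed capsules; both are assumed to be precomputed.
    x = np.asarray(aggregate_features, dtype=float)
    diff = x - model["mean"]
    dissimilarity = float(np.sqrt(diff @ np.linalg.inv(model["cov"]) @ diff))
    return dissimilarity >= threshold  # if True, alert the operator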

Подробнее
19-01-2017 дата публикации

METHOD FOR ASSEMBLING SINGLE IMAGES RECORDED BY A CAMERA SYSTEM FROM DIFFERENT POSITIONS, TO FORM A COMMON IMAGE

Номер: US20170018085A1
Принадлежит:

A method for assembling single images recorded by a camera system from different positions, to form a common image, including providing a first projection surface having a first geometric shape and a second projection surface having a second geometric shape; every point of the first projection surface having an associated point on the second projection surface; acquiring positional information that describes a configuration of objects shown in the single images relative to the camera system; reshaping the first geometric shape of the first projection surface on the basis of the acquired positional information; assigning texture information pertaining to the single images to surface regions of the reshaped first projection surface; transferring texture information from the points of the first projection surface to the respective, associated points of the second projection surface; and producing the common image from a view of the second projection surface. 19-. (canceled)10. A method for assembling single images recorded by a camera system from different positions , to form a common image , the method comprising:providing a first projection surface having a first geometric shape and a second projection surface having a second geometric shape, every point of the first projection surface having an associated point on the second projection surface;acquiring positional information that describes a configuration of objects shown in the single images relative to the camera system;reshaping the first geometric shape of the first projection surface based on the acquired positional information;assigning texture information pertaining to the single images to surface regions of the reshaped first projection surface;transferring texture information from the points of the first projection surface to the respective, associated points of the second projection surface; andproducing the common image from a view of the second projection surface.11. The method of claim 10 , wherein the ...

Подробнее
19-01-2017 дата публикации

Three dimensional content generating apparatus and three dimensional content generating method thereof

Номер: US20170018088A1
Принадлежит: SAMSUNG ELECTRONICS CO LTD

A three dimensional (3D) content generating apparatus includes an inputter configured to receive a plurality of images of an object captured from different locations; a detector configured to identify the object and detect a predetermined feature point of the object from each of the plurality of images; a map former configured to extract 3D location information of the detected feature point, and configured to form at least one depth map with respect to a surface of the object based on the extracted 3D location information of the feature point; and a content generator configured to generate a 3D content of the object using the at least one depth map and the plurality of images.

Подробнее
22-01-2015 дата публикации

APPARATUS FOR RECOGNIZING OBJECTS, APPARATUS FOR LEARNING CLASSIFICATION TREES, AND METHOD FOR OPERATING SAME

Номер: US20150023557A1
Принадлежит: Samsung Electronics Co., Ltd

An object recognition system is provided. The object recognition system for recognizing an object may include an input unit to receive, as an input, a depth image representing an object to be analyzed, and a processing unit to recognize a visible object part and a hidden object part of the object, from the depth image, by using a classification tree. The object recognition system may include a classification tree learning apparatus to generate the classification tree. 1. An object recognition system , comprising:an input unit configured to receive, as an input, a depth image representing an object to be analyzed; anda processing unit configured to recognize a visible object part and a hidden object part of the object, from the depth image, using a classification tree.2. The object recognition system of claim 1 , further comprising:a volume constructing unit configured to construct a volume of the object in a single data space, based on the recognized visible object part and the recognized hidden object part.3. The object recognition system of claim 2 , wherein the processing unit extracts additional information regarding the object claim 2 , based on the volume.4. The object recognition system of claim 3 , wherein the additional information comprises information regarding at least one of a shape claim 3 , a pose claim 3 , a key joint claim 3 , and a structure that are associated with the object.5. The object recognition system of claim 2 , wherein the volume constructing unit constructs the volume claim 2 , based on a relative depth value stored in a leaf node of the classification tree claim 2 , andwherein the relative depth value indicates a difference value between a depth value of the recognized visible object part and a depth value of the recognized hidden object part.6. The object recognition system of claim 1 , wherein the processing unit applies the depth image to the classification tree claim 1 ,wherein, when a current node of the classification tree is a ...

Подробнее
28-01-2016 дата публикации

INTELLIGENT MOBILITY AID DEVICE AND METHOD OF NAVIGATING AND PROVIDING ASSISTANCE TO A USER THEREOF

Номер: US20160025499A1
Принадлежит:

An intelligent navigation device is provided for actively collecting data about a user and the surrounding environment, drawing helpful inferences, and actively aiding the user in navigation, environmental awareness, and social interaction. The intelligent navigation device may include cameras for detecting image data regarding the surrounding environment. The intelligent navigation device may include a GPS unit, an IMU, and a memory for storing previously determined user data. The intelligent navigation device may include a processor for determining a desirable action or event based on the image data, data detected by the GPS unit or the IMU, or a recognized object in the surrounding environment. The processor may further determine a destination and provide navigation assistance to the user for reaching the destination. The intelligent navigation device may convey output data using a display, a speaker, a vibration unit, a mechanical feedback unit, or an electrical stimulation unit. 1. An intelligent guidance device , comprising:a plurality of wheels for travelling on a ground surface;a platform coupled to the plurality of wheels;an inertial measurement unit (IMU) coupled to the platform and configured to detect inertial measurement data corresponding to a positioning, a velocity, or an acceleration of the intelligent navigation device;a global position system (GPS) unit configured to detect location data corresponding to a location of the intelligent navigation device;a plurality of cameras coupled to the platform for detecting image data corresponding to a surrounding environment of the intelligent guidance device;a memory storing object data regarding previously determined objects and storing previously determined user data regarding a user of the intelligent guidance device; proactively recognize an object in the surrounding environment by analyzing the image data based on the stored object data and at least one of the inertial measurement data or the location ...

Подробнее
22-01-2015 дата публикации

ENDOSCOPE SYSTEM AND METHOD FOR OPERATING ENDOSCOPE SYSTEM

Номер: US20150025316A1
Принадлежит: OLYMPUS MEDICAL SYSTEMS CORP.

An endoscope system includes an insertion portion, an objective optical window, an image pickup device, a position/direction detection section that acquires position information of the objective optical window, and a memory that records the subject internal image acquired by the image pickup device in association with the position information of the objective optical window. The endoscope system aligns the position information of the objective optical window with a reference position of a predetermined organ in the subject in a coordinate system of a three-dimensional model image based on an amount of change or the like of the subject internal image information in the subject and generates an image with the subject internal image pasted onto the two-dimensional model image of the predetermined organ which is the three-dimensional model image two-dimensionally developed in which the position of the objective optical window is associated with the position in the coordinate system. 1. An endoscope system comprising:an insertion portion that is inserted into a subject;an objective optical window that is provided on a distal end side of the insertion portion and receives light from the subject;an image pickup section that picks up an image of an inside of the subject from the light entering from the objective optical window;a position information acquiring section that acquires position information of the objective optical window;an alignment section that aligns the position of the objective optical window acquired from the position information acquiring section with a reference position of a predetermined organ in the subject in a coordinate system of a three-dimensional model image based on an amount of change of subject internal image information inside the subject, predetermined operation input or the position information with respect to a preset reference plane; andan image generating section that generates an image with the subject internal image pasted onto a two- ...

Подробнее
28-01-2016 дата публикации

System and Method for Probabilistic Object Tracking Over Time

Номер: US20160026245A1
Принадлежит:

A system and method are provided for object tracking in a scene over time. The method comprises obtaining tracking data from a tracking device, the tracking data comprising information associated with at least one point of interest being tracked; obtaining position data from a scene information provider, the scene being associated with a plurality of targets, the position data corresponding to targets in the scene; applying a probabilistic graphical model to the tracking data and the target data to predict a target of interest associated with an entity being tracked; and performing at least one of: using the target of interest to determine a refined point of interest; and outputting at least one of the refined point of interest and the target of interest. 1. A method of object tracking in a scene over time , the method comprising:obtaining tracking data from a tracking device, the tracking data comprising at least one point of interest computed by the tracking device;obtaining target data from a scene information provider, the scene comprising a plurality of targets, the target data corresponding to targets in the scene and each target being represented by one or more points in the scene;applying a probabilistic graphical model to the tracking data and the target data to predict, for each point of interest being tracked, an associated target of interest; and using the associated target of interest to refine the at least one point of interest; and', 'outputting at least one of a refined point of interest and the associated target of interest., 'performing at least one of2. The method of claim 1 , further comprising utilizing the refined point of interest to enhance tracking accuracy.3. The method of claim 2 , wherein the utilizing comprises one or more of:using the refined point of interest as input to a system receiving tracked signal data; andsending the refined point of interest to the tracking device, to assist in determining a true tracked signal.4. The method ...
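At its core the method assigns a noisy tracked point to one of several known targets probabilistically and then refines the point toward the chosen target. The sketch below uses a simple Bayesian assignment (Gaussian likelihood around each target, uniform prior by default) purely as an illustration of that idea; it is not the probabilistic graphical model of the patent, and the noise value and blend factor are assumptions.

```python
import numpy as np

def predict_target(tracked_pt, targets, noise_sigma=1.0, priors=None):
    """Posterior over targets for one noisy tracked point (e.g. a gaze point)."""
    targets = np.asarray(targets, float)
    if priors is None:
        priors = np.full(len(targets), 1.0 / len(targets))
    d2 = np.sum((targets - np.asarray(tracked_pt, float)) ** 2, axis=1)
    likelihood = np.exp(-d2 / (2.0 * noise_sigma ** 2))
    posterior = likelihood * priors
    return posterior / posterior.sum()

def refine_point(tracked_pt, targets, posterior, blend=0.5):
    """Pull the raw point of interest toward the most probable target."""
    best = np.asarray(targets, float)[np.argmax(posterior)]
    return (1.0 - blend) * np.asarray(tracked_pt, float) + blend * best

targets = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
post = predict_target((4.2, 0.3), targets)
refined = refine_point((4.2, 0.3), targets, post)
```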

Подробнее
26-01-2017 дата публикации

OBTAINING 3D MODELING DATA USING UAVS FOR CELL SITES

Номер: US20170024929A1
Автор: PRIEST Lee
Принадлежит: ETAK Systems, LLC

Systems and methods using an Unmanned Aerial Vehicle (UAV) to perform physical functions on a cell tower at a cell site include flying the UAV at or near the cell site, wherein the UAV comprises one or more manipulable members; moving the one or more manipulable members when proximate to a location at the cell tower where the physical functions are performed to effectuate the physical functions; and utilizing one or more counterbalancing techniques during the moving ensuring a weight distribution of the UAV remains substantially the same. 1. A method using an Unmanned Aerial Vehicle (UAV) to obtain data capture at a cell site for developing a three dimensional (3D) thereof , the method comprising:causing the UAV to fly a given flight path about a cell tower at the cell site;obtaining data capture during the flight path about the cell tower, wherein the data capture comprises a plurality of photos or video, wherein the flight path is subjected to a plurality of constraints for the obtaining, and wherein the data capture comprises one or more location identifiers; andsubsequent to the obtaining, processing the data capture to define a three dimensional (3D) model of the cell site based on one or more objects of interest in the data capture.2. The method of claim 1 , further comprising:remotely performing a site survey of the cell site utilizing a Graphical User Interface (GUI) of the 3D model to collect and obtain information about the cell site, the cell tower, one or more buildings, and interiors thereof.3. The method of claim 1 , wherein a launch location and launch orientation is defined for the UAV to take off and land at the cell site such that each flight at the cell site has the same launch location and launch orientation.4. The method of claim 1 , wherein the one or more location identifiers comprise at least two location identifiers comprising Global Positioning Satellite (GPS) and GLObal NAvigation Satellite System (GLONASS).5. The method of claim 1 , ...

Подробнее
29-01-2015 дата публикации

Light Source Detection From Synthesized Objects

Номер: US20150029192A1
Автор: Free Robert Mikio
Принадлежит:

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining a location relative to an object and a type of a light source that illuminated the object when the image was captured, are described. A method performed by a process executing on a computer system includes identifying an object of interest in a digital image. The method further includes projecting at least a portion of the digital image corresponding to the object of interest onto a three dimensional (3D) model that includes a polygon-mesh corresponding to the object's shape. The method further includes determining one or more properties of a light source that illuminated the object in the digital image at an instant that the image was captured based at least in part on a characteristic of one or more polygons in the 3D model onto which the digital image portion was projected. 1. A method performed by a process executing on a computer system , the method comprising:identifying an object of interest in a digital image;projecting at least a portion of the digital image corresponding to the object of interest onto a three dimensional (3D) model, the projected portion having a color map;determining a location of a light source relative to the object of interest, wherein the light source illuminated the object of interest in the digital image at a time that the digital image was captured; anddetermining an orientation of the object of interest relative to the determined location of the light source.2. The method of claim 1 , wherein the light source is the sun claim 1 , and wherein determining the orientation of the object of interest is based on a time of day that the digital image was captured and a latitude indicating where the digital image was captured.3. The method of claim 1 , wherein the act of determining a location of a light source comprises back-tracing rays that travel from a viewing location relative to the 3D model to portions of a surface of ...
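One common way to infer where the light was relative to the projected object, used here only as an illustration, is a Lambertian fit: the brightness of each lit polygon is roughly the dot product of its normal with the light direction, so the light direction can be recovered from polygon normals and brightness values by least squares. A numpy sketch under that assumption follows; the normals and intensities are toy values, not data from the patent.

```python
import numpy as np

def estimate_light_direction(normals, intensities):
    """Least-squares Lambertian fit: intensities ~ normals @ light_direction.

    normals:     (N, 3) unit normals of lit polygons in the 3D model
    intensities: (N,) average brightness of the image projected onto them
    Returns a unit vector pointing from the surface toward the light.
    """
    normals = np.asarray(normals, float)
    intensities = np.asarray(intensities, float)
    l, *_ = np.linalg.lstsq(normals, intensities, rcond=None)
    return l / np.linalg.norm(l)

# Toy example: a light roughly overhead and slightly to the +x side.
n = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.7071, 0.0, 0.7071]])
i = np.array([0.9, 0.3, 0.85])
print(estimate_light_direction(n, i))
```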

Подробнее
02-02-2017 дата публикации

SIGHT TRACKING METHOD AND DEVICE

Номер: US20170031437A1
Автор: Qian Chenfei, ZHAO Kening
Принадлежит: BOE Technology Group Co., Ltd.

Embodiments of the present disclosure relate to a sight tracking method and a device, the sight tracking method comprises: determining an observation region where an iris center of a to-be-tested iris image is located according to a target model; modifying a prediction region by using the observation region, to obtain a target region, the prediction region being a region where the iris center of the to-be-tested iris image is located determined by a kalman filtering method; and determining a position of fixation point of human eyes on a screen according to the target region. 1. A sight tracking method , comprising:determining an observation region where an iris center of a to-be-tested iris image is located according to a target model;modifying a prediction region by using the observation region, to obtain a target region, the prediction region being a region where the iris center of the to-be-tested iris image is located determined by a kalman filtering method; anddetermining a position of fixation point of human eyes on a screen according to the target region.2. The method according to claim 1 , before determining an observation region where an iris center of a to-be-tested iris image is located according to a target model claim 1 , the method further comprises:obtaining an visual feature parameter of each iris image in n iris images corresponding to a same vision region in a preset reference image, to obtain n visual feature parameters;determining a target parameter of an Extreme Learning Machine neural network, by inputting the n visual feature parameters to the Extreme Learning Machine neural network;determining the target model according to the target parameter and the Extreme Learning Machine neural network,the target model being a module obtained according to the target parameter and the Extreme Learning Machine neural network, the target parameter being a parameter obtained after the n visual feature parameters are input to the Extreme Learning Machine ...
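The correction of the Kalman-predicted region by the model-based observation region follows the standard Kalman update. The sketch below shows that generic correction step with numpy for an iris-centre state; the state layout, noise covariances and numbers are placeholders, not parameters from the disclosure.

```python
import numpy as np

def kalman_correct(x_pred, P_pred, z_obs, R, H=None):
    """Standard Kalman correction of a predicted iris-centre state.

    x_pred: predicted state (e.g. [cx, cy, vx, vy])
    P_pred: predicted covariance
    z_obs:  iris centre observed via the appearance model (e.g. [cx, cy])
    R:      measurement noise covariance
    """
    if H is None:                        # observe position components only
        H = np.eye(len(z_obs), len(x_pred))
    y = z_obs - H @ x_pred               # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.array([100.0, 80.0, 1.0, 0.0]), np.eye(4) * 4.0
x, P = kalman_correct(x, P, np.array([103.0, 79.0]), np.eye(2) * 2.0)
```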

Подробнее
05-02-2015 дата публикации

POSTURE ESTIMATING APPARATUS, POSTURE ESTIMATING METHOD AND STORING MEDIUM

Номер: US20150036879A1
Принадлежит:

The present invention aims to estimate a more consistent posture in regard to a multi-joint object. A target range image is first input, a human body region is extracted from the input range image, a target joint position candidate is calculated from the input range image, and a joint position is finally determined based on the calculated joint position candidate and a likelihood of each joint to estimate the posture. At this time, joint position permissible range information concerning inter-joint distance and angle of a human body model previously set by learning is obtained from a human body model storing unit, consistency is evaluated for a relation between the joint position candidates of a certain joint and other joint based on the obtained information, and thus the posture corresponding to the best combination of the joint positions is determined. 1. A posture estimating apparatus comprising:an inputting unit configured to input a range image including a multi-joint object;a deriving unit configured to derive at least one joint position candidate for each of a plurality of joint positions of the object, from the range image input by the inputting unit;a storing unit configured to store information including a positional relation between a joint in a multi-joint object model corresponding to the object and other joint; andan estimating unit configured to estimate a posture of the object on the basis of the information including the positional relation and the derived joint position candidate.2. The posture estimating apparatus according to claim 1 , whereinthe storing unit stores information including at least one of a distance and an angle between the joint in the multi-joint object model corresponding to the object and the other joint, andthe estimating unit estimates the posture of the object by combining the derived joint position candidates on the basis of the information including at least one of the distance and the angle.3. The posture estimating ...

Подробнее
11-02-2016 дата публикации

Sessionless pointing user interface

Номер: US20160041623A1
Принадлежит:

A method, including receiving, by a computer, a sequence of three-dimensional maps containing at least a hand of a user of the computer, and identifying, in the maps, a device coupled to the computer. The maps are analyzed to detect a gesture performed by the user toward the device, and the device is actuated responsively to the gesture. 1. A method for interaction with a computer , the method comprising:receiving, by a computer, a sequence of three-dimensional maps containing at least an arm, including an elbow and a hand, of a user of the computer;identifying, in the maps, a controlled device that is coupled to the computer;analyzing the maps to detect a gesture performed by the arm toward the controlled device; andactuating the controlled device responsively to the gesture on condition that the elbow is extended in the gesture at an angle no less than a predefined angular threshold.2. The method according to claim 1 , wherein the gesture is selected from a list consisting of a pointing gesture claim 1 , a grab gesture and a release gesture.3. The method according to claim 1 , wherein analyzing the maps comprises defining a pyramid shaped region having an apex meeting the user and a base encompassing the device claim 1 , and defining an interaction region within the pyramid shaped region claim 1 ,wherein the detected gesture comprises the user positioning the hand within the interaction region and moving the hand toward the controlled device.4. The method according to claim 1 , wherein the controlled device is actuated only when the elbow has been extended in the gesture for at least a predefined minimum time period.5. The method according to claim 1 , and comprising receiving a vocal command from the user claim 1 , wherein the controlled device is actuated in response to the gesture and the vocal command.6. The method according to claim 1 , and comprising detecting claim 1 , in the maps claim 1 , a gaze direction of the user claim 1 , wherein the controlled ...
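The actuation condition in claim 1 is that the elbow is extended at no less than a threshold angle while the hand is directed toward the device. A small geometric sketch of that test from three 3D joint positions follows; joint extraction from the 3D maps is assumed to have happened already, and the 150 and 20 degree thresholds are arbitrary example values.

```python
import numpy as np

def elbow_angle(shoulder, elbow, hand):
    """Angle at the elbow, in degrees, from three 3D joint positions."""
    a = np.asarray(shoulder, float) - np.asarray(elbow, float)
    b = np.asarray(hand, float) - np.asarray(elbow, float)
    cos_t = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

def should_actuate(shoulder, elbow, hand, device,
                   min_angle_deg=150.0, max_aim_deg=20.0):
    """Actuate only if the arm is extended and the forearm points at the device."""
    if elbow_angle(shoulder, elbow, hand) < min_angle_deg:
        return False
    forearm = np.asarray(hand, float) - np.asarray(elbow, float)
    to_device = np.asarray(device, float) - np.asarray(hand, float)
    cos_aim = np.dot(forearm, to_device) / (
        np.linalg.norm(forearm) * np.linalg.norm(to_device))
    return np.degrees(np.arccos(np.clip(cos_aim, -1.0, 1.0))) <= max_aim_deg

print(should_actuate((0, 0, 0), (0.3, 0, 0), (0.6, 0, 0.02), (2.0, 0, 0.1)))
```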

Подробнее
09-02-2017 дата публикации

OBJECT LEARNING AND RECOGNITION METHOD AND SYSTEM

Номер: US20170039720A1
Принадлежит: SAMSUNG ELECTRONICS CO., LTD.

An object recognition system is provided. The object recognition system for recognizing an object may include an input unit to receive, as an input, a depth image representing an object to be analyzed, and a processing unit to recognize a visible object part and a hidden object part of the object, from the depth image, by using a classification tree. The object recognition system may include a classification tree learning apparatus to generate the classification tree. 1. A computer-implemented object recognition system , comprising: receive a depth image representing an object,', 'use a classification tree to recognize, from the depth image of the object, a visible object part and a hidden object part, and', 'construct a volume of the object in a single data space, based on the recognized visible object part and the recognized hidden object part,', 'wherein the constructing of the volume is based on a relative depth value stored in a leaf node of the classification tree., 'a processor configured to'}2. The computer-implemented object recognition system of claim 1 , wherein the processor is further configured to:adjust at least one size of a width and a height of an object model of the object.3. The computer-implemented object recognition system of claim 1 , wherein the classification tree comprises a probability value of the visible object part claim 1 , and a probability value of the hidden object part.4. The computer-implemented object recognition system of claim 1 , wherein the classification tree comprises a relative depth value associated with the visible object part and the hidden object part.5. The computer-implemented object recognition system of claim 1 , wherein the classification tree is represented by using at least a portion of the hidden object part as a plurality of layers.6. The computer-implemented object recognition system of claim 1 , wherein the relative depth value indicates a difference value between a depth value of the recognized visible ...

Подробнее
09-02-2017 дата публикации

Generating an avatar from real time image data

Номер: US20170039752A1
Принадлежит: Microsoft Technology Licensing LLC

Technology is disclosed for automatically generating a facial avatar resembling a user in a defined art style. One or more processors generate a user 3D head model for the user based on captured 3D image data from a communicatively coupled 3D image capture device. A set of user transferable head features from the user 3D head model are automatically represented by the one or more processors in the facial avatar in accordance with rules governing transferable user 3D head features. In some embodiments, a base or reference head model of the avatar is remapped to include the set of user head features. In other embodiments, an avatar head shape model is selected based on the user 3D head model, and the transferable user 3D head features are represented in the avatar head shape model.

Подробнее
09-02-2017 дата публикации

INTERFACE FOR PLANNING FLIGHT PATH

Номер: US20170039764A1
Автор: Hu Botao, Zhang Jiajie
Принадлежит:

A flight path of a physical aircraft vehicle is planned. A virtual three-dimensional model of a physical environment is provided. A placement indicator is tracked within the virtual three-dimensional model of the physical environment. Tracking the placement indicator includes tracking a location and an orientation of the placement indicator within the virtual three-dimensional model. A viewfinder display window that displays a simulated image captured from a simulated camera of a simulated vehicle located at the location of the placement indicator and oriented at a direction of the orientation of the placement indicator is provided. For the physical aircraft vehicle, at least a flight path and a camera image capture are planned using the placement indicator and the viewfinder display window within the virtual three-dimensional model. 1. A system for planning a flight path of a physical aircraft vehicle , comprising: provide a virtual three-dimensional model of a physical environment;', 'track a placement indicator within the virtual three-dimensional model of the physical environment, wherein tracking the placement indicator includes tracking a location and an orientation of the placement indicator within the virtual three-dimensional model;', 'provide a viewfinder display window that displays a simulated image captured from a simulated camera of a simulated vehicle located at the location of the placement indicator and oriented at a direction of the orientation of the placement indicator; and', 'plan for the physical aircraft vehicle at least the flight path and a camera image capture using the placement indicator and the viewfinder display window within the virtual three-dimensional model; and, 'a processor configured toa memory coupled to the processor and configured to provide the processor with instructions.2. The system of claim 1 , wherein the virtual three-dimensional model is provided via a virtual reality headset.3. The system of claim 1 , wherein the ...

Подробнее
09-02-2017 дата публикации

SYSTEM AND METHOD FOR REAL-TIME OVERLAY OF MAP FEATURES ONTO A VIDEO FEED

Номер: US20170039765A1
Принадлежит:

A method is provided for augmenting video feed obtained by a camera of a aerial vehicle to a user interface. The method can include obtaining a sequence of video images with or without corresponding sensor metadata from the aerial vehicle; obtaining supplemental data based on the sequence of video images and the sensor metadata; correcting an error in the sensor metadata using a reconstruction error minimization technique; creating a geographically-referenced scene model based on a virtual sensor coordinate system that is registered to the sequence of video images; overlaying the supplemental information onto the geographically-referenced scene model by rendering geo-registered data from a 3D perspective that matches a corrected camera model; creating a video stream of a virtual representation from the scene from the perspective of the camera based on the overlaying; and providing the video stream to a UI to be render onto a display. 1. A method for providing an augmented video feed obtained by a camera of a manned or unmanned aerial vehicle (“UAV”) to a user interface (“UI”) , the method comprising:obtaining a sequence of video images with or without corresponding sensor metadata from the aerial vehicle;obtaining supplemental data based on the sequence of video images and the sensor metadata;correcting, by a processor, an error in the sensor metadata using a reconstruction error minimization technique;creating, by a processor, a geographically-referenced scene model based on a virtual sensor coordinate system that is registered to the sequence of video images;overlaying the supplemental information onto the geographically-referenced scene model by rendering geo-registered data from a 3D perspective that matches a corrected camera model;creating a video stream of a virtual representation from the scene from the perspective of the camera based on the overlaying; andproviding the video stream to a UI to be render onto a display.2. The method of claim 1 , wherein the ...

Подробнее
08-02-2018 дата публикации

GENERATING A MODEL FOR AN OBJECT ENCOUNTERED BY A ROBOT

Номер: US20180039848A1
Принадлежит:

Methods and apparatus related to generating a model for an object encountered by a robot in its environment, where the object is one that the robot is unable to recognize utilizing existing models associated with the robot. The model is generated based on vision sensor data that captures the object from multiple vantages and that is captured by a vision sensor associated with the robot, such as a vision sensor coupled to the robot. The model may be provided for use by the robot in detecting the object and/or for use in estimating the pose of the object. 1. A method , comprising:receiving vision sensor data generated by a vision sensor associated with a robot, the vision sensor data capturing an object in an environment of the robot;generating an object model of the object based on the vision sensor data; 'rendering a first image that renders the object model and that includes first additional content and rendering a second image that renders the object model and that includes second additional content that is distinct from the first additional content;', 'generating a plurality of rendered images based on the object model, wherein the rendered images capture the object model at a plurality of different poses relative to viewpoints of the rendered images, and wherein generating the rendered images based on the object model comprisesgenerating training examples that each include a corresponding one of the rendered images as training example input and that each include an indication of the object as training example output; andtraining a machine learning model based on the training examples.2. The method of claim 1 , wherein rendering the first image with the first additional content comprises rendering the object model onto a first background claim 1 , and wherein rendering the second image with second additional content comprises rendering the object model onto a second background that is distinct from the first background.3. The method of claim 2 , further ...

Подробнее
12-02-2015 дата публикации

Automatic Planning For Medical Imaging

Номер: US20150043774A1
Принадлежит:

Disclosed herein is a framework for facilitating automatic planning for medical imaging. In accordance with one aspect, the framework receives first image data of a subject. One or more imaging parameters may then be derived using a geometric model and at least one reference anatomical primitive detected in the first image data. The geometric model defines a geometric relationship between the detected reference anatomical primitive and the one or more imaging parameters. The one or more imaging parameters may be presented, via a user interface, for use in acquisition, reconstruction or processing of second image data of the subject. 1. A non-transitory computer-readable medium embodying a program of instructions executable by machine to perform steps for medical imaging planning , the steps comprising:(i) learning hierarchical detectors based on training image data;(ii) detecting reference anatomical primitives in first image data of a subject by applying the learned hierarchical detectors;(iii) deriving one or more imaging parameters based on a geometric model, wherein the geometric model defines a geometric relationship between at least one of the detected reference anatomical primitives and the one or more imaging parameters; and(iv) presenting, via a user interface, the one or more imaging parameters for use in acquisition, reconstruction or processing of second image data of the subject.2. The non-transitory computer-readable medium of claim 1 , wherein the program of instructions is further executable by the machine to learn the hierarchical detectors by learning a bone detector claim 1 , andlearning a spatial relationship model that captures a geometric relationship between a bone structure and a vessel structure or landmark.3. A computer-implemented method of medical imaging planning claim 1 , comprising:(i) receiving first image data of a subject;(ii) automatically deriving, by a processor, one or more imaging parameters by using a geometric model and at ...

Подробнее
16-02-2017 дата публикации

ENABLING USE OF THREE-DIMENSIONAL LOCATIONS OF FEATURES WITH TWO-DIMENSIONAL IMAGES

Номер: US20170046844A1
Принадлежит:

The disclosed embodiments provide a system that facilitates use of an image. During operation, the system uses a set of images from a camera on a device to obtain a set of features in proximity to the device, wherein the set of images comprises the image. Next, the system uses the set of images and inertial data from one or more inertial sensors on the device to obtain a set of three-dimensional (3D) locations of the features. Finally, the system enables use of the set of 3D locations with the image. 1. A computer-implemented method for facilitating use of an image , comprising:obtaining an image comprising a set of features, wherein the image is captured by a camera in a device during a proximity of the device to the set of features;obtaining, from metadata for the image, a set of three-dimensional (3D) locations of the features and a set of two-dimensional (2D) locations of the features, wherein the 3D locations and the 2D locations form a subset of a 3D model generated using the image and inertial data from one or more inertial sensors on the device;using the set of 3D locations and the set of 2D locations to calculate, by a computer system, a measurement of an attribute associated with one or more points in the image; anddisplaying the measurement in a user interface of the computer system.2. The computer-implemented method of claim 1 , further comprising:displaying, in the user interface, the 2D locations of the features in the image.3. The computer-implemented method of claim 2 , wherein displaying the 2D locations of the features in the image comprises:displaying graphical objects identifying the features over the 2D locations of the features in the image.4. The computer-implemented method of claim 1 , further comprising:obtaining, through the user interface, a selection of the one or more points in the image prior to calculating the attribute.5. The computer-implemented method of claim 4 , wherein the selection of the one or more points in the image is ...

Подробнее
22-02-2018 дата публикации

Vision System with Teat Detection

Номер: US20180049389A1
Принадлежит:

A system that includes a laser configured to generate a profile signal of at least a portion of a dairy livestock, a memory operable to store a teat detection rule set, and a processor. The processor is configured to obtain the profile signal and detect one or more edge pair candidates in the profile signal, compare the complementary distance gradients of each of the one or more edge pair candidates to a minimum distance gradient length to be considered an edge pair, and identify one or more edge pairs from among the one or more edge pair candidates based on the comparison. The processor is further configured to apply the teat detection rule set to the one or more edge pairs to identify one or more teat candidates from among the one or more edge pairs and determine position information for the one or more teat candidates. 1. A vision system comprising: information associated with a relative distance between the dairy livestock and the laser;', 'one or more rising distance gradients indicating an increase in the distance between the dairy livestock and the laser; and', 'one or more falling distance gradients indicating a decrease in the distance between the dairy livestock and the laser;, 'a laser configured to generate a profile signal of at least a portion of a dairy livestock, wherein the profile signal comprisesa memory operable to store a teat detection rule set; and obtain the profile signal;', 'detect one or more edge pair candidates in the profile signal, wherein each of the one or more edge pair candidates comprises complementary distance gradients comprising a rising distance gradient and a falling distance gradient;', 'compare the complementary distance gradients of each of the one or more edge pair candidates to a minimum distance gradient length to be considered an edge pair;', 'identify one or more edge pairs from among the one or more edge pair candidates based on the comparison, wherein each of the one or more edge pairs comprises complementary ...
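The claims describe scanning a one-dimensional distance profile for complementary falling and rising distance gradients and keeping only pairs whose gradients exceed a minimum value as teat candidates. The sketch below does this on a plain numpy profile with a simple difference operator and pairs each falling edge with the next rising edge (a closer object makes the distance fall, then rise); the thresholds and profile values are illustrative assumptions.

```python
import numpy as np

def find_edge_pairs(profile, min_gradient=0.05):
    """Return (start, end) index pairs whose falling and rising distance
    gradients both exceed min_gradient (a toy minimum gradient length)."""
    diff = np.diff(profile)
    falling = [i for i, d in enumerate(diff) if d <= -min_gradient]
    rising = [i for i, d in enumerate(diff) if d >= min_gradient]
    pairs = []
    for f in falling:
        nxt = [r for r in rising if r > f]
        if nxt:
            pairs.append((f, nxt[0]))   # distance falls, then rises again
    return pairs

# Distance-to-laser profile with two dips (closer objects, e.g. teat candidates).
profile = np.array([1.0, 1.0, 0.8, 0.8, 1.0, 1.0, 0.75, 0.78, 1.0, 1.0])
print(find_edge_pairs(profile))   # -> [(1, 3), (5, 7)]
```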

Подробнее
22-02-2018 дата публикации

METHOD OF USING SOFT POINT FEATURES TO PREDICT BREATHING CYCLES AND IMPROVE END REGISTRATION

Номер: US20180049808A1
Автор: Krimsky William S.
Принадлежит:

A method of registering an area of interest luminal network to images of the area of interest luminal network comprising. The method includes generating a model of the area of interest based on images of the area of interest, determining a location of a soft point in the area of interest, tracking a location of the location sensor while the location sensor is navigated within the area of interest, comparing the tracked locations of the location sensor within the area of interest , navigating the location sensor to the soft point, confirming the location sensor is located at the soft point, and updating the registration of the model with the area of interest based on the tracked locations of the location sensor at the soft point. 1. A method of registering an area of interest to images of the area of interest comprising:generating a model of the area of interest based on images of the area of interest;determining a location of a soft point in the area of interest;tracking a location of the location sensor while the location sensor is navigated to the area of interest;comparing the tracked locations of the location sensor within the area of interest;navigating the location sensor to the soft point;confirming the location sensor is located at the soft point; andupdating the registration of the model with the area of interest based on the tracked locations of the location sensor at the soft point.2. The method of claim 1 , further comprising:displaying guidance for navigating a location sensor within the area of interest.3. The method of claim 1 , further comprising claim 1 ,generating an electromagnetic field about the area of interest; andinserting the location sensor into the electromagnetic field,wherein the location sensor includes magnetic field sensors configured to sense the magnetic field and to generate position signals in response to the sensed magnetic field.4. The method of claim 1 , wherein confirming the location sensor is located at the soft point ...

Подробнее
26-02-2015 дата публикации

ACCURACY COMPENSATION METHOD, SYSTEM, AND DEVICE

Номер: US20150055852A1
Принадлежит:

A method for applying accuracy compensation to a computer numerically controlled (CNC) machine can compensate control program that controls the CNC machine. The method recognizes an actual outline of the product using an image of product produced by the CNC machine controlled by the control program, and further obtains an ideal outline of the product. The method obtains compensation values by computing coordinate differences between points of the actual outline and points on the ideal outline, and compensates the control program using the compensation values. 1. An accuracy compensation method executable by at least one processor of a computing device , the method comprising:obtaining an image of a product;recognizing an actual outline of the product according to the image of the product, and determining coordinates of points of the actual outline;obtaining an ideal outline of the product;aligning the actual outline and the ideal outline of the product, and computing minimum distances from each of the points of the actual outline to the ideal outline;determining points on the ideal outline, each of which corresponds to one of the points on the actual outline, according to the minimum distances, and obtaining coordinates of the points on the ideal outline;computing coordinate differences between each of the points of the actual outline and each of the corresponding points on the ideal outline;assigning compensation values of the points on the ideal outline based on the coordinate differences;obtaining a control program that controls a computer numerically controlled (CNC) machine to produce the product, and obtaining a measurement path of a cutting tool of the CNC machine according to the control program;determining points on the measurement path corresponding to the points on the ideal outline;generating a compensated measurement path by compensating the points on the measurement path using the compensation values; andcorrecting the control program according to the ...
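The compensation step reduces to: for each measured (actual) outline point, find the closest point of the ideal outline, take the coordinate difference as the compensation value, and shift the corresponding point of the measurement path by it. A compact numpy sketch of that nearest-point and difference computation is shown below; it skips the alignment step and uses sampled outline points only, so treat it as an illustration rather than the patented procedure.

```python
import numpy as np

def compensation_values(actual, ideal):
    """For each actual outline point, the offset to its nearest ideal point."""
    actual = np.asarray(actual, float)
    ideal = np.asarray(ideal, float)
    # Pairwise distances, shape (num_actual, num_ideal).
    d = np.linalg.norm(actual[:, None, :] - ideal[None, :, :], axis=2)
    nearest = np.argmin(d, axis=1)
    return ideal[nearest] - actual        # adding this moves actual onto ideal

def compensate_path(path, comp):
    """Shift the measurement path by the per-point compensation values."""
    return np.asarray(path, float) + comp

ideal = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
actual = np.array([[0.02, 0.05], [1.01, -0.04], [1.98, 0.03]])
comp = compensation_values(actual, ideal)
corrected = compensate_path(actual, comp)
```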

Подробнее
23-02-2017 дата публикации

ARRANGEMENT DETECTION APPARATUS AND PICKUP APPARATUS

Номер: US20170053410A1
Принадлежит: KABUSHIKI KAISHA TOSHIBA

According to an embodiment, an arrangement detection apparatus includes a measuring unit, an extractor, a generator, a first calculator. The measuring unit measures surfaces of polyhedrons, the polyhedrons being identical in shape and arranged in contact with each other. The extractor extracts a surface region from the surfaces, the surface region having a maximal area and being closest to the measuring unit. The generator generates outlines of at least one of desired surfaces of the polyhedrons included in the surface region. The first calculator calculates position information on the polyhedrons included in the surface region, utilizing the outlines. 1. An arrangement detection apparatus comprising:a measuring unit that measures surfaces of polyhedrons, the polyhedrons being identical in shape and arranged in contact with each other;an extractor that extracts a surface region from the surfaces, the surface region having a maximal area and being closest to the measuring unit;a generator that generates outlines of at least one of desired surfaces of the polyhedrons included in the surface region; anda first calculator that calculates position information on the polyhedrons included in the surface region, utilizing the outlines.2. The apparatus according to claim 1 , further comprising a second calculator that calculates a center of gravity and a normal line of the surface region claim 1 , whereinthe generator acquires the outlines from the surface region when viewed from a point on the normal line extending from the center of gravity.3. The apparatus according to claim 1 , whereinthe measuring unit obtains a set of coordinates of any points on the surfaces.4. The apparatus according to claim 1 , whereinthe generator generates first data representing each of the outlines, when surface models are laid on a plane model and an area of the plane model not overlaid with the surface models is minimal, the surface models representing the outlines, the plane model ...

Подробнее
01-03-2018 дата публикации

Systems and methods for providing proximity awareness to pleural boundaries, vascular structures, and other critical intra-thoracic structures during electromagnetic navigation bronchoscopy

Номер: US20180055575A1
Автор: William S. Krimsky
Принадлежит: COVIDIEN LP

Disclosed are systems, devices and methods for providing proximity awareness to an anatomical feature while navigating inside a patient's chest, an exemplary method including receiving image data of the patient's chest, generating a three-dimensional (3D) model of the patient's chest based on the received image data, determining a location of the anatomical feature based on the received image data and the generated 3D model, tracking a position of an electromagnetic sensor included in a tool, iteratively determining a position of the tool inside the patient's chest based on the tracked position of the electromagnetic sensor, and indicating a proximity of the tool relative to the anatomical feature, based on the determined position of the tool inside the patient's chest.

Подробнее
02-03-2017 дата публикации

Method for focusing a high-energy beam on a reference point on the surface of a flying object in flight

Номер: US20170059282A1
Автор: Wolfgang Schlosser
Принадлежит: MBDA Deutschland GmbH

A method for focusing a beam of a high energy radiation source on a reference point on the surface of a flying object, comprising: recording a number of consecutive two-dimensional images of the flying object; determining the trajectory of the flight path; simultaneously determining the line of sight angle between the image acquisition device and the position of the flying object; calculating a three-dimensional model of the flying object; displaying the currently acquired two-dimensional image; marking the reference point on the displayed two-dimensional image of the flying object; calculating the three-dimensional reference point on the surface of the flying object; and focusing the beam of the high energy radiation source on the three-dimensional reference point.

Подробнее
03-03-2016 дата публикации

METHOD AND APPARATUS FOR EYE GAZE TRACKING

Номер: US20160063303A1
Принадлежит:

The invention relates to method and apparatus of an eye gaze tracking system. In particular, the present invention relates to method and apparatus of an eye gaze tracking system using a generic camera under normal environment, featuring low cost and simple operation. The present invention also relates to method and apparatus of an accurate eye gaze tracking system that can tolerate large illumination changes. 1. An eye gaze tracking method implemented using at least one image capturing device and at least one computing processor comprising the steps of:detecting a user's iris and eye corner position associated with at least one eye iris center and at least one eye corner of the user to determine an eye vector associated with the user's gaze direction; andprocessing the eye vector for application of a head pose estimation model arranged to model a head pose of the user so as to devise one or more final gaze points of the user.2. An eye gaze tracking method in accordance with claim 1 , wherein the step of detecting the user's iris and eye corner position includes the steps of:detecting and extracting at least one eye region from at least one captured image of the user; anddetecting and extracting the at least one eye iris center and the corresponding at least one eye corner from the at least one eye region to determine at least one eye vector.3. An eye gaze tracking method in accordance with claim 2 , further comprising the step of: determining at least one initial gaze point of the user for application with the head pose estimation model by mapping the at least one eye vector to at least one gaze target.4. An eye gaze tracking method in accordance with claim 3 , wherein the step of: processing the eye vector with the head pose estimation model includes the step of applying the at least one initial gaze point of the user to the head pose estimation model to devise the at least one corresponding final gaze point of the user.5. The method according to wherein the step ...

Подробнее
03-03-2016 дата публикации

REAL-TIME SUBJECT-DRIVEN FUNCTIONAL CONNECTIVITY ANALYSIS

Номер: US20160063701A1
Автор: Chen Jingyun
Принадлежит:

A method and associated systems for real-time subject-driven functional connectivity analysis. One or more processors receive an fMRI time series of sequentially recorded, masked, parcellated images that each represent the state of a subject's brain at the image's recording time as voxels partitioned into a constant set of three-dimensional regions of interest. The processors derive an average intensity of each region's voxels in each image and organize these intensity values into a set of time courses, where each time course contains a chronologically ordered list of average intensity values of one region. The processors then identify time-based correlations between average intensities of each pair of regions and represent these correlations in a graphical format. As each subsequent fMRI image of the same subject's brain arrives, the processors repeat this process to update the time courses, correlations, and graphical representation in real time or near-real time. 1. A method for real-time subject-driven functional connectivity analysis , the method comprising: wherein the time series comprises a sequence of brain volumes recorded during a first time period,', "wherein each brain volume of the time series represents a same subject's brain as a three-dimensional set of voxels,", 'wherein each brain volume of the time series was recorded at a unique recording time of a set of recording times,', 'wherein the first brain volume was recorded at an earliest recording time of the set of recording times,', 'wherein a parcellation of the first brain volume identifies a set of three-dimensional regions common to each brain volume of the time series; and', "wherein a voxel of the first brain volume is characterized by an intensity that represents a level of activation at a location within the same subject's brain at a time at which the first brain volume was recorded;"], 'a processor of a computer system receiving a first brain volume of a time series,'}the processor further ...
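Concretely, each incoming parcellated volume contributes one new sample to every region's time course, and the connectivity estimate is the region-by-region correlation matrix of those time courses, recomputed as volumes arrive. The numpy sketch below shows that update loop with random data standing in for the fMRI series; region count, volume size and the random generator are assumptions for the example only.

```python
import numpy as np

def region_means(volume, labels, n_regions):
    """Average voxel intensity per region for one parcellated brain volume."""
    return np.array([volume[labels == r].mean() for r in range(n_regions)])

rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=(8, 8, 8))      # 4 regions of interest
time_courses = []                                 # one row per recorded volume

for t in range(20):                               # volumes arriving over time
    volume = rng.normal(size=(8, 8, 8))
    time_courses.append(region_means(volume, labels, 4))
    if len(time_courses) >= 2:
        # np.corrcoef treats rows as variables, so transpose to regions x time.
        connectivity = np.corrcoef(np.array(time_courses).T)

# connectivity[i, j] is the running correlation between regions i and j.
```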

Подробнее
01-03-2018 дата публикации

Machine creation of program with frame analysis method and apparatus

Номер: US20180061107A1
Принадлежит: Intel Corp

Methods, apparatus, and systems to create, output, and use animation programs comprising keyframes, objects, object states, and programming elements. Objects, object states, and programming elements may be created through image analysis of image input. Animation programs may be output as videos, as non-linear interactive experiences, and/or may be used to control electronic actuators in articulated armatures.

Подробнее
01-03-2018 дата публикации

COLLECTIVE NAVIGATION FOR VIRTUAL REALITY DEVICES

Номер: US20180061125A1
Автор: Rao Lei, XIA Yinglong
Принадлежит:

A computer-executed method is disclosed for collective navigation of distributed virtual reality (VR) devices. The method obtains a source vertex and a destination vertex for a VR device. The source vertex and the destination vertex include vertices of a graph model of a navigable space having a plurality of vertices. The vertices represent a point within the navigable space and the plurality of edges represent a path segment between two corresponding vertices. A subset of possible vertices, selected from the plurality of vertices, is determined for a navigable path. A vertex traffic potential is determined for each vertex of the subset of possible vertices. The navigable path, including one or more consecutive path segments selected to minimize both segment path lengths and vertex traffic potentials, is determined from the source vertex to the destination vertex. 1. A computer-executed method for collective navigation for distributed virtual reality (VR) devices , the method comprising:obtaining a source vertex and a destination vertex for a VR device, with the source vertex and the destination vertex comprising vertices of a graph model of a navigable space, the graph model comprising a plurality of vertices and a plurality of edges, with a vertex representing a point within the navigable space and with an edge representing a path segment between two corresponding vertices;determining a subset of possible vertices for a navigable path, with the subset of possible vertices being selected from the plurality of vertices;computing a vertex traffic potential for each vertex of the subset of possible vertices; anddetermining the navigable path from the source vertex to the destination vertex comprising selecting one or more consecutive path segments to minimize segment path lengths and to minimize vertex traffic potentials.2. The method of claim 1 , further comprising providing navigation instructions to the VR device claim 1 , with the navigation instructions based on ...
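The path search combines edge length with a per-vertex traffic potential so that congested vertices are avoided. A standard Dijkstra variant whose step cost is the edge length plus a weighted potential of the entered vertex, written in plain Python, sketches this; the weighting factor and the toy graph are assumptions, not the patent's cost function.

```python
import heapq

def navigable_path(graph, potential, source, dest, alpha=1.0):
    """Dijkstra where cost(u -> v) = edge_length + alpha * potential[v].

    graph:     {vertex: [(neighbour, edge_length), ...]}
    potential: {vertex: vertex traffic potential}
    """
    best = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        cost, u = heapq.heappop(heap)
        if u == dest:
            break
        if cost > best.get(u, float("inf")):
            continue
        for v, length in graph[u]:
            new_cost = cost + length + alpha * potential[v]
            if new_cost < best.get(v, float("inf")):
                best[v] = new_cost
                prev[v] = u
                heapq.heappush(heap, (new_cost, v))
    path, node = [dest], dest
    while node != source:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

graph = {"A": [("B", 1.0), ("C", 1.5)], "B": [("D", 1.0)],
         "C": [("D", 1.0)], "D": []}
potential = {"A": 0.0, "B": 5.0, "C": 0.5, "D": 0.0}   # B is congested
print(navigable_path(graph, potential, "A", "D"))       # -> ['A', 'C', 'D']
```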

Подробнее
02-03-2017 дата публикации

IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD

Номер: US20170061631A1
Принадлежит: FUJITSU LIMITED

An image processing method includes acquiring a three-dimensional model obtained by modeling a plurality of objects included in a work space, acquiring, from a camera which is hold by a user, an image captured by the camera, the user existing in the work space, acquiring, from a sensor which is hold by the user, distance information indicating distances between the sensor and each of the plurality of objects, determining a position of the user in the work space based on the three-dimensional model and the distance information, identifying a predetermined region closest to the position of the user among at least one of predetermined regions defined in the three-dimensional model, generating a display screen displaying the predetermined region and the image, and outputting the display screen to another computer. 1. An image processing method executed by a computer , the image processing method comprising:acquiring a three-dimensional model obtained by modeling a plurality of objects included in a work space;acquiring, from a camera which is hold by a user, an image captured by the camera, the user existing in the work space;acquiring, from a sensor which is hold by the user, distance information indicating distances between the sensor and each of the plurality of objects;determining a position of the user in the work space based on the three-dimensional model and the distance information;identifying a predetermined region closest to the position of the user among at least one of predetermined regions defined in the three-dimensional model;generating a display screen displaying the predetermined region and the image; andoutputting the display screen to another computer.2. The image processing method according to claim 1 , further comprising:identifying a hand region of the user based on the distance information and the three-dimensional model; andidentifying an object to become a work target for the user among the plurality of objects defined in the three-dimensional ...

Подробнее
02-03-2017 дата публикации

PRODUCING THREE-DIMENSIONAL REPRESENTATION BASED ON IMAGES OF A PERSON

Номер: US20170064284A1
Принадлежит:

An example method of generating three-dimensional visual objects representing a person based on two-dimensional images of at least a part of the person's body may include receiving a first polygonal mesh representing a human body part, wherein the first polygonal mesh is compliant with a target application topology. The example method may further include receiving a second polygonal mesh representing the human body part, wherein the second polygonal mesh is derived from a plurality of images of a person. The example method may further include modifying at least one of the first polygonal mesh or the second polygonal mesh to optimize a value of a metric reflecting a difference between the first polygonal mesh and the second polygonal mesh. 1. A method , comprising:receiving, by a processing device, a first polygonal mesh representing a human body part, wherein the first polygonal mesh is compliant with a target application topology;receiving a second polygonal mesh representing the human body part, wherein the second polygonal mesh is derived from a plurality of images of a person; andmodifying at least one of the first polygonal mesh or the second polygonal mesh to optimize a value of a metric reflecting a difference between the first polygonal mesh and the second polygonal mesh.2. The method of claim 1 , further comprising:receiving the plurality of images from a mobile computing device.3. The method of claim 1 , further comprising:receiving the plurality of images from a specialized computing device equipped with a still image camera and a light source, wherein a lens of the still image camera and the light source are equipped with cross-polarizing filters.4. The method of claim 1 , wherein the part of the human body comprises a head.5. The method of claim 1 , wherein the metric reflects at least one of: a difference between a first curvature of the first polygonal mesh and a second curvature of the second polygonal mesh claim 1 , a difference between a first ...

Подробнее
12-03-2015 дата публикации

INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD

Номер: US20150070385A1
Принадлежит:

A tomogram of an object is acquired. A place in a tomogram which corresponds to a portion spaced apart from a reference point in the object by a predetermined distance is specified. A composite image is generated by combining the tomogram with information indicating the specified place. The composite image is output. 1. An information processing apparatus comprising:a unit configured to acquire a tomogram of an object; anda generation unit configured to specify a place in the tomogram which corresponds to a portion spaced apart from a reference point in the object by a predetermined distance, generate a composite image by combining the tomogram with information indicating the specified place, and output the composite image.2. The apparatus according to claim 1 , wherein said generation unit obtains a circle by cutting a sphere centered on a position of the reference point and having the predetermined distance as a radius along an imaging slice of the object which corresponds to the tomogram claim 1 , and obtains an arc claim 1 , of the obtained circle claim 1 , which is included in the tomogram as the place.3. The apparatus according to claim 2 , wherein said generation unit generates a composite image by superimposing the arc on the tomogram and outputs the composite image.4. The apparatus according to claim 2 , wherein said generation unit generates a composite image by superimposing a character indicating the predetermined distance on the tomogram and outputs the composite image.5. The apparatus according to claim 1 , wherein the predetermined distance comprises a reference distance and a plurality of distances different from the reference distance claim 1 , andsaid generation unit specifies places in the tomogram which correspond to portions spaced apart from the reference point by the respective distances included in the predetermined distance, generates a composite image by combining the tomogram with pieces of information indicating the places specified in ...
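Geometrically, the marked place is the intersection of a sphere (centred on the reference point, radius equal to the predetermined distance) with the imaging slice plane: a circle of radius sqrt(R^2 - d^2) around the foot of the perpendicular, where d is the distance from the sphere centre to the plane. A short numpy sketch of that computation follows, assuming the slice is given as a point and a unit normal; the example numbers are illustrative.

```python
import numpy as np

def slice_circle(center, radius, plane_point, plane_normal):
    """Circle (centre, radius) where a sphere meets an imaging slice plane.

    Returns None if the sphere does not reach the plane.
    """
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    d = np.dot(np.asarray(center, float) - np.asarray(plane_point, float), n)
    if abs(d) > radius:
        return None                                    # slice is out of reach
    circle_center = np.asarray(center, float) - d * n  # foot of the perpendicular
    circle_radius = np.sqrt(radius ** 2 - d ** 2)
    return circle_center, circle_radius

# Reference point 12 mm above an axial slice, distance of interest 20 mm.
print(slice_circle((0.0, 0.0, 12.0), 20.0, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
```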

Подробнее
12-03-2015 дата публикации

Method And Device For Optically Determining A Position And/Or Orientation Of An Object In Space

Номер: US20150071491A1
Принадлежит:

The invention relates to a method for optically determining the position and/or orientation of an object in space on the basis of images from at least one camera (). At least one 2D image of the object is recorded; 2D contour information is extracted from the image. On the basis of the contour information, at least one area () of the object is determined. Furthermore, 3D information is obtained from the at least one 2D image or a further 2D image. A portion situated within the determined area () of the object is selected from this 3D information. The position and/or orientation of the object is then determined on the basis of the selected portion of the 3D information. The invention furthermore relates to a use of the method, a device for carrying out the method and a milking robot with such a device. 1. A method for optically determining the position and/or orientation of an object in space on the basis of at least one images from at least one camera , the method comprising the steps of:recording at least one 2D image of the object and extracting 2D contour information from the image;calculating a model contour line depending on the 2D contour information;determining, on the basis of the model contour line, at least one area of the object;generating 3D information from the at least one 2D imageselecting at least a portion of the 3D information that is situated within the determined area of the object anddetermining a position and/or orientation of the object on the basis of the selected portion of the 3D information, wherein a surface contour of the object is defined by a body of revolution of the model contour line, and wherein the position and/or the orientation of the body of revolution in space is varied such that the measured 3D information from the selected area is situated closely to a lateral surface of the body of revolution.2. The method of claim 1 , in which the 2D contour information is extracted from the at least one 2D image as a contour line.3. The ...

10-03-2016 publication date

APPARATUS AND METHOD FOR PARAMETERIZING A PLANT

Number: US20160071257A1
Assignee:

An apparatus for parameterizing a plant includes a recorder for recording a three-dimensional data set of the plant including not only volume elements of non-covered elements of the plant, but also volume elements of elements of the plants that are covered by other elements, and a parameterizer for parameterizing the three-dimensional data set for obtaining plant parameters. 1. An apparatus for parameterizing a plant , comprising:a recorder for recording a three-dimensional data set of the plant, which does not only comprise volume elements of non-covered elements of the plant, but also volume elements of elements of the plant that are covered by other elements;a parameterizer for parameterizing the three-dimensional data set for acquiring plant parameters,wherein the parameterizer is implemented to convert the three-dimensional data set into a point cloud, wherein the point cloud only comprises points on a surface of the plant or points of a volume structure of the plant,wherein the parameterizer is further implemented to segment the three-dimensional point cloud into single elements of the plant, wherein a single element is selected from the group consisting of a leaf, a stem, a branch, a trunk, a blossom, a fruit skeleton, and a leaf skeleton, andwherein the parameterizer is implemented to calculate, by using a single-element model, parameters for the single element by adapting the single-element model to the single element.2. The apparatus according to claim 1 , wherein the recorder is implemented to perform an X-ray computer tomography method or a magnetic resonance tomography method for acquiring the three-dimensional data set claim 1 , wherein a volume element of the three-dimensional data set comprises a three-dimensional coordinate and at least one intensity value.3. The apparatus according to claim 1 , wherein the parameterizer is implemented to represent the points of the point cloud only by a coordinate derived from the coordinate of a respective volume ...

10-03-2016 publication date

Real-Time Dynamic Three-Dimensional Adaptive Object Recognition and Model Reconstruction

Number: US20160071318A1
Inventors: Lee Ken, Yin Jun
Assignee:

Methods and systems are described for generating a three-dimensional (3D) model of an object represented in a scene. A computing device receives a plurality of images captured by a sensor, each image depicting a scene containing physical objects and at least one object moving and/or rotating. The computing device generates a scan of each image comprising a point cloud corresponding to the scene and objects. The computing device removes one or more flat surfaces from each point cloud and crops one or more outlier points from the point cloud after the flat surfaces are removed using a determined boundary of the object to generate a filtered point cloud of the object. The computing device generates an updated 3D model of the object based upon the filtered point cloud and an in-process 3D model, and updates the determined boundary of the object based upon the filtered point cloud. 1. A computerized method for generating a three-dimensional (3D) model of an object represented in a scene , the method comprising:receiving, by an image processing module executing on a processor of a computing device, a plurality of images captured by a sensor coupled to the computing device, each image depicting a scene containing one or more physical objects, wherein at least one of the objects moves and/or rotates between capture of different images;generating, by the image processing module, a scan of each image comprising a 3D point cloud corresponding to the scene and objects;removing, by the image processing module, one or more flat surfaces from each 3D point cloud and cropping one or more outlier points from the 3D point cloud after the flat surfaces are removed using a determined boundary of the object to generate a filtered 3D point cloud of the object;generating, by the image processing module, an updated 3D model of the object based upon the filtered 3D point cloud and an in-process 3D model; andupdating, by the image processing module, the determined boundary of the object ...
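
A minimal sketch of the flat-surface removal and boundary cropping described above, using a plain NumPy RANSAC plane fit. The thresholds, the axis-aligned boundary representation, and all names are assumptions for illustration, not the publication's implementation.

import numpy as np

def remove_dominant_plane(points, dist_thresh=0.01, iters=200, rng=np.random.default_rng(0)):
    # Fit the dominant plane (e.g., a table top) with RANSAC and drop its inliers.
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                               # degenerate sample, skip
        normal /= norm
        dist = np.abs((points - sample[0]) @ normal)
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers]                   # keep everything not on the plane

def crop_to_boundary(points, lower, upper):
    # lower/upper: current axis-aligned boundary of the object (3-vectors).
    keep = np.all((points >= lower) & (points <= upper), axis=1)
    return points[keep]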

09-03-2017 publication date

SYSTEM AND METHOD FOR PROVIDING USER INTERFACE TOOLS

Number: US20170068323A1
Assignee:

A system includes one or more hardware processors, a head mounted display configured to display a virtual environment to a user, an input device, and a virtual mini-board module. The mini-board module is configured to render the virtual environment for presentation to the user via the HMD, the virtual environment is rendered from a first perspective providing a field of view of the virtual environment to the user, provide a virtual mini-board to the user within the field of view, the virtual mini-board displaying a region of the virtual environment, detect an interaction event performed by the user on the virtual mini-board, identify the first object based on the interaction event performed on the virtual mini-board, and perform the interaction event on the first object within the virtual environment based on the interaction event performed on the virtual mini-board. 1. A system comprising:one or more hardware processors;a head mounted display (HMD) configured to display a virtual environment to a user wearing the HMD;an input device configured to allow the user to interact with virtual objects presented in the virtual environment; and rendering the virtual environment for presentation to the user via the HMD, the virtual environment being rendered from a first perspective that provides a field of view of the virtual environment to the user;', 'providing a virtual mini-board to the user within the field of view, the virtual mini-board representing a region of the virtual environment;', 'detecting an interaction event performed by the user on the virtual mini-board, the interaction event being performed by the user using the input device;', 'identifying a first object within the virtual environment based on the interaction event performed on the virtual mini-board; and', 'performing the interaction event on the first object within the virtual environment based on the interaction event performed on the virtual mini-board., 'a virtual mini-board module, executable by ...

09-03-2017 publication date

THREE-DIMENSIONAL ANNOTATIONS FOR STREET VIEW DATA

Number: US20170069121A1
Assignee:

The present invention relates to annotating images. In an embodiment, the present invention enables users to create annotations corresponding to three-dimensional objects while viewing two-dimensional images. In one embodiment, this is achieved by projecting a selecting object onto a three-dimensional model created from a plurality of two-dimensional images. The selecting object is input by a user while viewing a first image corresponding to a portion of the three-dimensional model. A location corresponding to the projection on the three-dimensional model is determined, and content entered by the user while viewing the first image is associated with the location. The content is stored together with the location information to form an annotation. The annotation can be retrieved and displayed together with other images corresponding to the location. 1. A method , comprising:receiving, by one or more computing devices, content to be associated with a selected portion of a first street-level, panoramic, photographic image;extending, by the one or more computing devices, at least one ray from a determined focal point of the first image through a point of the selected portion to project the selected portion onto a projected portion of a three-dimensional model, at least some of the three-dimensional model representing content of the first image;determining, by the one or more computing devices, a location in three-dimensional space of the projected portion in the three-dimensional model;associating, by the one or more computing devices, the content with the location of the projected portion;storing, by the one or more computing devices, the content with the location to form an annotation; andgenerating, by the one or more computing devices, the annotation for display in a second street-level, panoramic, photographic image at a position corresponding to the location.2. The method of claim 1 , further comprising generating claim 1 , by the one or more computing devices ...
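
The ray-projection step described above can be illustrated with a small sketch that casts a ray from the camera focal point through the selected 2D point and intersects it with the triangles of the three-dimensional model (Moller-Trumbore). Acceleration structures and the handling of panoramic projections are omitted, and all names are illustrative assumptions.

import numpy as np

def ray_triangle_intersection(origin, direction, v0, v1, v2, eps=1e-9):
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < eps:
        return None                              # ray parallel to the triangle
    inv = 1.0 / det
    t_vec = origin - v0
    u = (t_vec @ p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(t_vec, e1)
    v = (direction @ q) * inv
    if v < 0 or u + v > 1:
        return None
    t = (e2 @ q) * inv
    return origin + t * direction if t > eps else None

def annotation_location(focal_point, pixel_ray, triangles):
    # Return the nearest intersection of the selection ray with the model.
    hits = [ray_triangle_intersection(focal_point, pixel_ray, *tri) for tri in triangles]
    hits = [h for h in hits if h is not None]
    return min(hits, key=lambda h: np.linalg.norm(h - focal_point)) if hits else None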

19-03-2015 publication date

METHOD AND SYSTEM FOR DETERMINING A RELATION BETWEEN A FIRST SCENE AND A SECOND SCENE

Number: US20150078652A1
Assignee: SAAB AB

The present invention relates to a system () and method for determining a relation between a first scene and a second scene. The method comprises the steps of generating at least one sensor image of a first scene with at least one sensor; accessing information related to at least one second scene, said second scene encompassing said first scene, and matching the sensor image with the second scene to map the sensor image onto the second scene. The step of accessing information related to the at least one second scene comprises accessing a 3D map comprising geocoded 3D coordinate data. The mapping involves associating geocoding information to a plurality of positions in the sensor image based on the coordinate data of the second scene. 134-. (canceled)35400500. Method (; ) for determining a relation between a first scene and a second scene , said method comprising the steps of{'b': '401', 'generating () at least one sensor image of a first scene with at least one sensor;'}{'b': '402', 'accessing () information related to at least one second scene, said second scene encompassing said first scene; and'}{'b': '403', 'matching () the sensor image with the second scene to map the sensor image onto the second scene,'} [{'b': '402', 'the step of accessing () information related to the at least one second scene comprises accessing a 3D map comprising geocoded 3D coordinate data; and'}, 'the mapping involves associating geocoding information to a plurality of positions in the sensor image is based on the coordinate data of the second scene., 'wherein36. Method according to claim 35 , wherein the 3D map comprises a 3D model of the environment comprising 3D coordinate data given in a geo-referenced coordinate system.37. Method according to claim 36 , wherein the 3D model is textured.38. Method according to claim 37 , wherein at least some of the 3D coordinate data is associated to texture information.39406. Method according to claim 35 , further comprising a step of extracting ...

16-03-2017 publication date

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER-READABLE STORAGE MEDIUM

Number: US20170076638A1
Inventor: Takeuchi Yuichiro
Assignee: SONY CORPORATION

A method is provided for displaying physical objects. The method comprises capturing an input image of physical objects, and matching a three-dimensional model to the physical objects. The method further comprises producing a modified partial image by at least one of modifying a portion of the matched three-dimensional model, or modifying a partial image extracted from the input image using the matched three-dimensional model. The method also comprises displaying an output image including the modified partial image superimposed over the input image. 1. An image processing apparatus , comprising: acquire an input image of a real world;', 'acquire a measurement result associated with a position and a posture of a portable device;', 'acquire a three-dimensional model, wherein the three-dimensional model indicates a shape and a location of at least one object of interest of a plurality of objects within the real world by a plurality of feature points;', 'generate at least one three-dimensional virtual object as a counterpart the at least one object of interest of the plurality of objects within the real world, wherein a shape and a location of the at least one three-dimensional virtual object is based on the shape and the location of each object of interest in the three-dimensional model; and', 'cause a display to display the at least one three-dimensional virtual object onto the at least one object of interest so as to emphasize the at least one object of interest., 'circuitry configured to2. The image processing apparatus according to claim 1 ,wherein the circuitry is further configured to extract the plurality of feature points for each object of interest of the plurality of objects.3. The image processing apparatus according to claim 2 ,wherein the circuitry is further configured to match at least one feature point of the plurality of extracted feature points for each object with one or more of the shape and the location of the at least one generated three- ...

26-03-2015 publication date

METHODS AND SYSTEMS FOR EFFICIENTLY MONITORING PARKING OCCUPANCY

Number: US20150086071A1
Assignee: XEROX CORPORATION

A system and method for determining parking occupancy by constructing a parking area model based on a parking area, receiving image frames from at least one video camera, selecting at least one region of interest from the image frames, performing vehicle detection on the region(s) of interest, determining that there is a change in parking status for a parking space model associated with the region of interest, and updating parking status information for a parking space associated with the parking space model. 1. A system for determining parking occupancy , the system comprising:one or more video cameras;a processing system comprising one or more processors capable of receiving data from the one or more video cameras; and constructing a parking area model based on a parking area, wherein the parking area model comprises one or more parking space models, each associated with a parking space in the parking area;', 'receiving a set of image frames for the parking area from the one or more video cameras;', 'selecting a region of interest within an image frame from the set of image frames;', 'performing vehicle detection on the region of interest within the image frame;', 'determining that there is a change in parking status for a parking space model associated with the region of interest; and', 'updating parking status information for a parking space associated with the parking space model based on determining that there is a change in parking status., 'a memory system comprising one or more computer-readable media, wherein the one or more computer-readable media contain instructions that, when executed by the processing system, cause the processing system to perform operations comprising2. The system of claim 1 , wherein the one or more parking space models are three-dimensional volumetric models.3. The system of claim 2 , wherein constructing the three-dimensional volumetric models comprises:receiving a preliminary set of image frames for the parking area from the one ...
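
A compact sketch of the per-frame update loop described above, with a placeholder vehicle detector and a simple dictionary per parking space model; the data layout and names are assumptions for illustration only.

def update_parking_status(frames, parking_spaces, detect_vehicle):
    # frames: iterable of image arrays.
    # parking_spaces: {space_id: {"roi": (x, y, w, h), "occupied": bool}}.
    for index, frame in enumerate(frames):
        for space_id, space in parking_spaces.items():
            x, y, w, h = space["roi"]                           # region of interest for this space model
            occupied = detect_vehicle(frame[y:y + h, x:x + w])  # any vehicle detector can be plugged in
            if occupied != space["occupied"]:                   # change in parking status
                space["occupied"] = occupied
                space["last_change_frame"] = index
    return parking_spaces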

26-03-2015 publication date

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM

Number: US20150086121A1
Inventor: MORISHITA Yusuke
Assignee:

An information processing device and method can detect facial feature point positions with high accuracy. The information processing device includes: a facial image input unit receiving a facial image; a facial feature point reliability generation unit generating, based on a plurality of classifiers for computing the suitability of feature points of a face, a reliability image indicating the suitability for each feature point from the facial image; a facial feature point candidate position computation unit obtaining a candidate position of the feature point in the facial image based on the reliability image; and a facial shape model conformity computation unit determining a feature point that satisfies conditions based on a position corresponding to each of the feature points of one facial shape model selected from a plurality of statistically generated facial shape models and the candidate position of the feature point, and calculating a conformity to the facial shape model. 1. An information processing device comprising:input means for receiving a facial image;reliability calculation means for generating, based on a plurality of classifiers for computing the suitability of feature points of a face, a reliability image indicating the suitability for each feature point from the facial image;candidate position calculation means for obtaining a candidate position of the feature point in the facial image based on the reliability image; andconformity calculation means for determining a feature point that satisfies conditions based on a position corresponding to each of the feature points of one facial shape model selected from a plurality of statistically generated facial shape models and the candidate position of the feature point calculated by the candidate position calculation means, and for calculating a conformity to the facial shape model.2. The information processing device according to claim 1 , whereinposition information of a feature point of which the ...

24-03-2016 publication date

BUILD ORIENTATIONS FOR ADDITIVE MANUFACTURING

Number: US20160085882A1
Assignee:

Methods for product data management and corresponding systems and computer-readable mediums. A method includes receiving a solid model. The method includes analyzing the solid model to determine a suggested orientation that minimizes a build height or minimizes a support volume. The method includes displaying and saving the suggested orientation. 1. A method for product data management , the method performed by a data processing system and comprising:receiving a solid model;analyzing the solid model to determine a suggested orientation that minimizes a build height or minimizes a support volume; anddisplaying and saving the suggested orientation.2. The method of claim 1 , wherein the data processing system also applies the orientations during a manufacturing process.3. The method of claim 1 , wherein the data processing system also generates candidate orientations.4. The method of claim 3 , wherein the data processing system also computes a support volume for a support structure for each candidate orientation.5. The method of claim 4 , wherein the data processing system compares and sorts the support volumes for the candidate orientations and uses the candidate orientation with the least support volume as the suggested orientation that minimizes support volume.6. The method of claim 1 , wherein the displayed suggested orientation has a minimum support volume among a pool of candidate orientations.7. The method of claim 1 , wherein the data processing system computes a bounding box of the solid model and uses an axis of the bounding box with a least length as the suggested orientation that minimizes build height.8. A data processing system comprising:a processor; andan accessible memory, the data processing system particularly configured to receive a solid model;analyze the solid model to determine a suggested orientation that minimizes a build height or minimizes a support volume; anddisplay and save the suggested orientation.9. The data processing system of claim 8 ...
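
The candidate-orientation comparison can be sketched as follows: each orientation is scored by an approximate support volume accumulated under downward-facing triangles, and the orientation with the smallest score is kept. The scoring ignores overhang-angle thresholds and assumes consistent outward-facing triangle winding; it is an illustrative approximation, not the publication's algorithm.

import numpy as np

def support_volume(vertices, faces, rotation):
    v = vertices @ rotation.T
    v = v - np.array([0.0, 0.0, v[:, 2].min()])    # rest the part on the build plate (z = 0)
    volume = 0.0
    for f in faces:
        a, b, c = v[f]
        normal = np.cross(b - a, c - a)            # non-normalised face normal (outward winding assumed)
        area_xy = 0.5 * abs(normal[2])             # area projected onto the build plate
        if normal[2] < 0:                          # downward-facing triangle needs support
            volume += area_xy * (a[2] + b[2] + c[2]) / 3.0
    return volume

def best_orientation(vertices, faces, candidate_rotations):
    scores = [support_volume(vertices, faces, r) for r in candidate_rotations]
    best = int(np.argmin(scores))
    return candidate_rotations[best], scores[best]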

26-03-2015 publication date

SYSTEM AND METHOD FOR AUTOMATIC DETECTION AND REGISTRATION OF MEDICAL IMAGES

Number: US20150087965A1
Assignee:

A system and method for automatic registration of medical images includes accessing image data of a subject and plurality of elongated fiducial markers arranged in an asymmetrical orientation and analyzing the image data to detect the elongated fiducial markers by applying a line filter to treat the elongated fiducial markers as lines within the image data. The system and method also includes matching the elongated fiducial markers within the image data to a model of the elongated fiducial markers, registering the image data with a coordinate system based on the matching, and generating a report indicating at least the registered image data. 1. A system comprising: access image data of a subject and plurality of elongated fiducial markers arranged in an asymmetrical orientation;', 'analyze the image data to detect the elongated fiducial markers by applying a line filter to treat the elongated fiducial markers as lines within the image data;', 'enhance a contrast of the elongated fiducial markers within the image data;', 'match the enhanced contrast of the elongated fiducial markers within the image data to a model of the elongated fiducial markers;', 'register the image data with a coordinate system based on the matching of the enhanced contrast of the elongated fiducial markers to the model of the elongated fiducial markers; and', 'generate a report indicating at least the registered image data., 'a computer system including a non-transitive, computer-readable storage medium having stored thereon a program that causes the computer system to2. The system of wherein the line filter includes a multi-scale line filter.3. The system of wherein the image data includes data from at least three asymmetrically oriented elongated fiducial markers.4. The system of wherein the image data includes three-dimensional (3D) image data.5. The system of wherein the computer system is further caused to iteratively determine a correspondence of the elongated fiducial markers to the ...

24-03-2016 publication date

FACE POSE RECTIFICATION METHOD AND APPARATUS

Number: US20160086017A1
Assignee:

A pose rectification method for rectifying a pose in data representing face images, comprising the steps of: 1. A pose rectification method for rectifying a pose in data representing face images , comprising the steps of:A-acquiring a least one test frame including 2D near infrared image data, 2D visible light image data, and a depth map;C-estimating the pose of a face in said test frame by aligning said depth map with a 3D model of a head of known orientation;D-mapping at least one of said 2D image on the depth map, so as to generate textured image data;E-projecting the textured image data in 2D so as to generate data representing a pose-rectified 2D projected image.2. The method of claim 1 , comprising a step of temporal and/or spatial smoothing of points in said depth map.3. The method of claim 1 , said step of estimating the pose including a step of performing a rough pose estimation claim 1 , for example based on random forest claim 1 , and a further step of determining a more precise estimation of the pose.4. The method of claim 1 , said step of aligning said depth map with a 3D model of a head of known orientation using an Iterative Closest Points (ICP) method.5. The method of claim 1 , further including a step of basic face detection before said estimation of the pose claim 1 , in order to eliminate at least some portions of said 2D near infrared image data claim 1 , and/or of said 2D visible light image data claim 1 , and/or of said depth map which do not belong to the face.6. The method of claim 1 , wherein said 3D model is user-independent.7. The method of claim 1 , wherein said 3D model is user-dependent.8. The method of claim 1 , wherein said 3D model is warped to adapt it to the user.9. The method of claim 1 , wherein said step of aligning said depth map with an existing 3D model of a head comprises warping said 3D model.10. The method of claim 1 , further comprising a step of correcting the illumination of portions of said 2D visible light image data ...
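
A minimal sketch of the alignment step (claim 4 mentions Iterative Closest Points): a rigid ICP loop that matches depth-map points to a reference head model with nearest neighbours and solves each step in closed form via the SVD. Convergence checks, outlier rejection, and warping of the model are omitted, and the names are illustrative assumptions.

import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    # source: (N, 3) points from the depth map; target: (M, 3) points of the 3D head model.
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    for _ in range(iterations):
        moved = source @ R.T + t
        _, idx = tree.query(moved)                     # closest model point for every depth point
        matched = target[idx]
        mu_s, mu_t = moved.mean(axis=0), matched.mean(axis=0)
        H = (moved - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R_step = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # Kabsch solution, reflection-safe
        t_step = mu_t - R_step @ mu_s
        R, t = R_step @ R, R_step @ t + t_step
    return R, t                                        # estimated head pose: rotation and translation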

24-03-2016 publication date

Pose tracker with multi threaded architecture

Number: US20160086025A1

Tracking pose of an articulated entity from image data is described, for example, to control a game system, natural user interface or for augmented reality. In various examples a plurality of threads execute on a parallel computing unit, each thread processing data from an individual frame of a plurality of frames of image data captured by an image capture device. In examples, each thread is computing an iterative optimization process whereby a pool of partially optimized candidate poses is being updated. In examples, one or more candidate poses from an individual thread are sent to one or more of the other threads and used to replace or add to candidate poses at the receiving thread(s).
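
A rough sketch of the multi-threaded idea, using Python's threading module purely for illustration: each thread iteratively refines a pool of candidate poses for its own frame and exchanges its best candidate with the other threads through a shared queue. The refine and score callables are placeholders (lower score assumed better), not part of the described system.

import queue
import threading

def track_frame(frame, pose_pool, shared, refine, score, rounds=10):
    for _ in range(rounds):
        pose_pool = [refine(p, frame) for p in pose_pool]        # one optimisation step per candidate
        try:
            pose_pool.append(shared.get_nowait())                # adopt a candidate from another thread
        except queue.Empty:
            pass
        pose_pool.sort(key=lambda p: score(p, frame))
        pose_pool = pose_pool[:8]                                # keep only the best candidates
        shared.put(pose_pool[0])                                 # publish the current best pose
    return pose_pool[0]

def track(frames, initial_pool, refine, score):
    # One worker thread per frame; fine for a handful of frames in a sketch.
    shared, results = queue.Queue(), {}
    def worker(i, frame):
        results[i] = track_frame(frame, list(initial_pool), shared, refine, score)
    threads = [threading.Thread(target=worker, args=(i, f)) for i, f in enumerate(frames)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return [results[i] for i in range(len(frames))]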

24-03-2016 publication date

LANDMARK BASED POSITIONING

Number: US20160086332A1
Inventors: Chao Hui, Chen Jiajian
Assignee:

Disclosed are devices, methods and storage media for use in determining position information for imaging devices or mobile devices. In some implementations, a landmark is identified in an image which is obtained from an imaging device which in turn is positioned at a location and in a pose. A virtual two-dimensional image that would be visible from the landmark is determined based, at least in part, on the pose. The location or position information is based, at least in part, on the virtual two-dimensional image. 1. A method for determining a location comprising:identifying a first landmark in a first image obtained from an imaging device positioned at a first location, wherein the imaging device is in a first pose during a time that the first image is obtained;determining, using a processor, a first virtual two-dimensional image of a first view from a first viewpoint at the first landmark based, at least in part, on the first pose;estimating the first location based, at least in part, on the first virtual two-dimensional image;identifying a second landmark in a second image obtained from the imaging device positioned at the first location, wherein the imaging device is in a second pose during a time that the second image is obtained, and wherein the second pose is different from the first pose; anddetermining a second virtual two-dimensional image of a second view from a second viewpoint at the second landmark based, at least in part, on the second pose,wherein the estimating of the first location includes estimating the first location based, at least in part, on a matching of portions of the first virtual two-dimensional image with portions of the second virtual two-dimensional image.2. The method of claim 1 , wherein the estimating of the first location is based claim 1 , at least in part claim 1 , on a selection of portions of the first virtual two-dimensional image.3. The method of claim 1 , wherein the determining of the first virtual two-dimensional image is ...

24-03-2016 publication date

METHOD OF PROVIDING CARTOGRAPHIC INFORMATION OF AN ELECTRICAL COMPONENT IN A POWER NETWORK

Number: US20160086339A1
Assignee: ABB TECHNOLOGY AG

A method is disclosed for providing cartographic information of an electrical component in a power network. The cartographic information may include geographic coordinates and a type of the electrical component. The method can include obtaining a visual representation of the electrical component, wherein the visual representation contains location information about where the visual representation was made; determining the geographic coordinates of the electrical component based on the location information of the visual representation; and identifying the type of the electrical component by matching the visual representation with the predefined models relating to available kind of electrical components. 1. A method for providing cartographic information of an electrical component in a power network , wherein the cartographic information includes geographic coordinates and a type of the electrical component , the method comprising:obtaining a visual representation of the electrical component, wherein the visual representation contains location information about where the visual representation was made;determining the geographic coordinates of the electrical component based on the location information of the visual representation; andidentifying the type of the electrical component by matching the visual representation with predefined models relating to available kinds of electrical components.2. The method according to claim 1 , comprising:providing a symbol indicative of the type of the electrical component into a map at a map location corresponding to the geographic coordinates of the electrical component.3. The method according to claim 1 , comprising:obtaining the visual representation of the electrical component by an extraction from available visual material relating to the electrical component.4. The method according to claim 1 , comprising:obtaining the visual representation by taking a picture or video with a camera.5. The method according to claim 1 , ...

24-03-2016 publication date

APPARATUSES, METHODS AND SYSTEMS FOR RECOVERING A 3-DIMENSIONAL SKELETAL MODEL OF THE HUMAN BODY

Number: US20160086350A1

The ARS offers tracking, estimation of position, orientation and full articulation of the human body from marker-less visual observations obtained by a camera, for example an RGBD camera. An ARS may provide hypotheses of the 3D configuration of body parts or the entire body from a single depth frame. The ARS may also propagates estimations of the 3D configuration of body parts and the body by mapping or comparing data from the previous frame and the current frame. The ARS may further compare the estimations and the hypotheses to provide a solution for the current frame. An ARS may select, merge, refine, and/or otherwise combine data from the estimations and the hypotheses to provide a final estimation corresponding to the 3D skeletal data and may apply the final estimation data to capture parameters associated with a moving or still body. 1. A processor-implemented method for markerless estimation of a 3D skeletal model of a human body , the method comprising:(a) receiving a current RGBD frame depicting at least a portion of a human body;(b) receiving an estimation of the position of the depicted at least one portion of the human body that was estimated based on a previous RGBD frame;(c) determining at least one hypothesis of a position of the depicted at least one portion of the human body from the current RGBD frame;(d) comparing the current RGBD frame to the estimation of the position of the depicted at least one portion of the human body that was estimated based on a previous RGBD frame; and(e) estimating a current position of the depicted at least one portion of the human body based on the at least one hypothesis from (c) and a result of the comparison in (d).2. The method of claim 1 , wherein:at least two hypotheses of the position of the depicted at least one portion of the human body are determined from the current RGBD frame at (c); andstep (e) includes determining whether to accept one of the at least two hypotheses, refine one of the at least two ...

23-03-2017 publication date

APPARATUS AND METHOD FOR ADJUSTING BRIGHTNESS OF IMAGE

Number: US20170084068A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

A method of adjusting a brightness of an image includes matching an object model to an object based on one or more feature points of the object extracted from an input image including the object; mapping a surface normal map in a two-dimensional (D) image form to the input image based on the matched object model; and generating shadow information for the input image based on the mapped surface normal map and a virtual light source. 1. A method of adjusting a brightness of an image , the method comprising:matching an object model to an object based on one or more feature points of the object extracted from an input image comprising the object;mapping a surface normal map to the input image based on the matched object model; andgenerating shadow information for the input image based on the mapped surface normal map and a virtual light source.2. The method of claim 1 , wherein the mapping of the surface normal map to the input image comprises generating the surface normal map in a two-dimensional (2D) image form by interpolating normal vectors at points at which the feature points of the matched object model are located.3. The method of claim 1 , wherein the mapping of the surface normal map to the input image comprises generating the surface normal map in a two-dimensional (2D) image form by transforming a surface normal model prestored in a database in association with the object model into the surface normal map.4. The method of claim 1 , wherein the matching of the object model to the object comprises determining a transformation function to transform coordinates of feature points of the object model to coordinates of the feature points of the object.5. The method of claim 4 , wherein the mapping of the surface normal map to the input image comprises generating a transformed normal map in a two-dimensional (2D) image form by transforming each coordinate of a surface normal model using the determined transformation function.6. The method of claim 1 , wherein the ...
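
A small sketch of how shadow information can be generated from a mapped surface normal map and a directional virtual light source using a Lambertian term; the ambient and strength parameters and all names are illustrative assumptions.

import numpy as np

def relight(image, normal_map, light_direction, ambient=0.3, strength=0.7):
    # image: (H, W) or (H, W, 3) float array in [0, 1].
    # normal_map: (H, W, 3) unit surface normals mapped onto the input image.
    light = np.asarray(light_direction, dtype=float)
    light /= np.linalg.norm(light)
    shade = np.clip(np.einsum('hwc,c->hw', normal_map, light), 0.0, 1.0)   # n . l, clamped
    weight = ambient + strength * shade                                    # per-pixel shadow information
    if image.ndim == 3:
        weight = weight[..., None]
    return np.clip(image * weight, 0.0, 1.0)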

31-03-2016 publication date

MULTI-SPECTRAL IMAGE LABELING WITH RADIOMETRIC ATTRIBUTE VECTORS OF IMAGE SPACE REPRESENTATION COMPONENTS

Number: US20160093056A1
Inventor: Ouzounis Georgios
Assignee:

Automatic characterization or categorization of portions of an input multispectral image based on a selected reference multispectral image. Sets (e.g., vectors) of radiometric descriptors of pixels of each component of a hierarchical representation of the input multispectral image can be collectively manipulated to obtain a set of radiometric descriptors for the component. Each component can be labeled as a (e.g., relatively) positive or negative instance of at least one reference multispectral image (e.g., mining materials, crops, etc.) through a comparison of the set of radiometric descriptors of the component and a set of radiometric descriptors for the reference multispectral image. Pixels may be labeled (e.g., via color, pattern, etc.) as positive or negative instances of the land use or type of the reference multispectral image in a resultant image based on components within which the pixels are found. 1. A method for use in classifying areas of interest in overhead imagery , comprising:organizing, using a processor, a plurality of pixels of at least one input multispectral image of a geographic area into a plurality of components of a hierarchical image representation;deriving, using the processor, at least one set of radiometric descriptors for each component of the plurality of components;obtaining at least one set of radiometric descriptors for a reference multispectral image, wherein pixels of the reference multispectral image identify at least one land use or land type;determining, using the processor, for each component of the hierarchical image representation structure, a similarity metric between the set of radiometric image descriptors for the component and the set of radiometric image descriptors of the reference multispectral image, wherein the determined similarity metrics indicate a degree to which the pixels of each component identify the at least one land use or land type of the reference multispectral image.2. The method of claim 1 , further ...
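
The per-component comparison can be sketched with a simple cosine similarity between a component's radiometric descriptor vector and the reference vector, followed by thresholding into positive and negative instances. The metric choice and threshold are assumptions for illustration; the publication leaves the similarity metric open.

import numpy as np

def cosine_similarity(a, b, eps=1e-12):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def label_components(component_descriptors, reference_descriptor, threshold=0.9):
    # component_descriptors: {component_id: radiometric descriptor vector}.
    # Returns component_id -> True (positive instance) / False (negative instance).
    return {cid: cosine_similarity(vec, reference_descriptor) >= threshold
            for cid, vec in component_descriptors.items()}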

30-03-2017 publication date

THREE-DIMENSIONAL SHAPING SYSTEM, AND INFORMATION PROCESSING DEVICE AND METHOD

Number: US20170092005A1
Inventor: HASEGAWA Yu
Assignee: FUJIFILM Corporation

A three-dimensional shaping system () includes a device () which acquires three-dimensional data, a device () which generates shaping target object data from the three-dimensional data, a device () which generates three-dimensional shaping data by adding, to the shaping target object data, attachment part data representing a three-dimensional shape of a marker attachment part () for attaching a marker () to a shaped object () shaped based on the shaping target object data, a device () which shapes and outputs the shaped object on the basis of the three-dimensional shaping data, an imaging device () which images the shaped object () in a state of the marker () being attached, and a device () which recognizes the marker from a captured image to calculate a camera parameter. 1. A three-dimensional shaping system comprising:a three-dimensional data acquiring device which acquires three-dimensional data representing a three-dimensional structure object;a shaping target object data generating device which generates shaping target object data representing a structure object as a shaping target from the three-dimensional data;a three-dimensional shaping data generating device which generates three- dimensional shaping data by adding, to the shaping target object data, attachment part data representing a three-dimensional shape of a marker attachment part for attaching a positioning marker to a shaped object shaped based on the shaping target object data;a three-dimensional shaping and outputting device which shapes and outputs the shaped object having the marker attachment part on the basis of the three-dimensional shaping data;an imaging device which images the shaped object in a state where the marker is attached to the marker attachment part of the shaped object; anda camera parameter calculating device which calculates a camera parameter including information representing relative positional relation between the imaging device and the shaped object by recognizing the ...

09-04-2015 publication date

INTEGRATED TRACKING WITH FIDUCIAL-BASED MODELING

Number: US20150098636A1
Assignee:

Disclosed are various embodiments for determining a pose of a mobile device by analyzing a digital image captured by at least one imaging device to identify a plurality of regions in a fiducial marker indicative of a pose of the mobile device. A fiducial marker may comprise a circle-of-dots pattern, the circle-of-dots pattern comprising an arrangement of dots of varied sizes. The pose of the mobile device may be used to generate a three-dimensional reconstruction of an item subject to a scan via the mobile device. 1. A system , comprising:a mobile computing device capable of data communication with at least one imaging device configured to conduct a scan of an object; and analyzes a digital image captured via the at least one imaging device, the digital image comprising pixel data corresponding to at least a portion of a fiducial marker to identify a plurality of regions in the fiducial marker;', 'converts the plurality of regions to an identifier indicative of a pose of the mobile computing device; and', 'approximates a pose of the mobile computing device in a three-dimensional space using at least the identifier indicative of the pose of the mobile computing device., 'a pose estimate application executable in the mobile computing device, the pose estimate application comprising logic that2. The system of claim 1 , wherein the pose estimate application further comprises logic that refines the pose of the mobile computing device by determining parameters of the mobile computing device using at least one camera model incorporating the digital image.3. The system of claim 2 , wherein the at least one camera model further comprises a lens distortion model accounting for distortion in the digital image produced by a lens of the imaging device.4. The system of claim 1 , wherein the fiducial marker further comprises a circle-of-dots pattern.5. The system of claim 4 , wherein the circle-of-dots pattern further comprises at least a first circle-of-dots pattern and a second ...

07-04-2016 publication date

IMAGING SURFACE MODELING FOR CAMERA MODELING AND VIRTUAL VIEW SYNTHESIS

Number: US20160098815A1
Assignee:

A method of displaying a captured image on a display device. A real image is captured by an image capture device. The image capture device uses a field-of-view lens that distorts the real image. A camera model is applied to the captured real image. The camera model maps objects in the captured real image to an image sensor plane of the image capture device to generate a virtual image. The image sensor plane is reconfigurable to virtually alter a shape of the image sensor plane to a non-planar surface. The virtual image formed on the non-planar image surface of the image sensor is projected to the display device. 1. A method of displaying a captured image on a display device comprising the steps of:capturing a real image by an image capture device, the image capture device using a field-of-view lens that distorts the real image;applying a camera model to the captured real image, the camera model mapping objects in the captured real image to an image sensor plane of the image capture device to generate a virtual image, the image sensor plane being reconfigurable to virtually alter a shape of the image sensor plane to a non-planar surface;projecting the virtual image formed on the non-planar image surface of the image sensor to the display device.2. The method of wherein applying a camera model to the captured real image includes applying the camera model without radial distortion correction.3. The method of wherein generating the virtual image comprises the steps of:providing a pre-calibrated real camera model by the processor, the real camera model representative of the vision-based imaging device capturing the scene;determining real incident ray angles of each pixel in the captured image based on the pre-calibrated real camera model;identifying an arbitrary shape of the non-planar imaging surfaceidentifying a pose of the virtual camera model;determining virtual incident ray angles of each pixel in the virtual image based on the virtual image model and the non-planar ...

06-04-2017 publication date

MEASURING DEVICE, MEASURING METHOD, AND PROGRAMS THEREFOR

Number: US20170097420A1
Assignee: TOPCON CORPORATION

A technique for efficiently performing operations for identifying a current position in a method of measuring electromagnetic waves is provided. A measuring device includes a measurement planned position data receiving unit a current position data receiving unit and a GUI controlling unit The measurement planned position data receiving unit receives data of measurement planned positions at each of which electromagnetic waves are measured. The current position data receiving unit receives data of a current position of an electromagnetic wave measuring device. The GUI controlling unit controls displaying of a relationship between the current position of the electromagnetic wave measuring device and the measurement planned position on a display based on data of the measurement planned positions and data of the current position. 1. A measuring device comprising:a controlling unit configured to control displaying of a relationship between a measurement planned position for an electromagnetic wave and a current position of an electromagnetic wave measuring device, on a display, based on first data relating to the measurement planned position for the electromagnetic wave and based on second data relating to the current position of the electromagnetic wave measuring device.2. The measuring device according to claim 1 , wherein the controlling unit controls displaying of a direction and a distance to the measurement planned position.3. The measuring device according to claim 1 , further comprising:a notification controlling unit configured to control displaying of a notice when a distance between the measurement planned position and the current position is not greater than a predetermined value.4. The measuring device according to claim 1 , further comprising:a point cloud position data obtaining unit configured to obtain point cloud position data that is measured by a position measuring device, the position measuring device being configured to measure a position of the ...

12-04-2018 publication date

Surface Based Hole Target for use with Systems and Methods for Determining a Position and a Vector of a Hole formed in a Workpiece

Number: US20180101160A1
Assignee:

In examples, systems for determining a position and a vector of a hole formed in a workpiece based on scanned data of the workpiece are described. The system includes a target for coupling to the hole formed in the workpiece, a scanner for projecting a light pattern onto the target and surrounding workpiece and for generating a plurality of data points representative of a surface area of a cylinder body of the target, and a processor for receiving the plurality of data points generated by the scanner and generating a three-dimensional (3D) model of at least a portion of the workpiece. The processor determines a position and a vector of the hole formed in the workpiece for the 3D model based on the plurality of data points representative of the surface area of the cylinder body. 1. A system for determining a position and a vector of a hole formed in a workpiece based on scanned data of the workpiece, the system comprising: a target for coupling to the hole formed in the workpiece, the target comprising a cylinder body extending from a shaft, wherein the shaft couples to the hole and the cylinder body extends from the hole such that a centerline of the hole is collinear with a longitudinal axis of the cylinder body; a scanner for projecting a light pattern onto the target and surrounding workpiece and for generating a plurality of data points representative of a surface area of the cylinder body; and a processor for receiving the plurality of data points representative of the surface area of the cylinder body generated by the scanner and generating a three-dimensional (3D) model of at least a portion of the workpiece, the processor determining a position of the hole and a vector of the hole formed in the workpiece for the 3D model based on the plurality of data points representative of the surface area of the cylinder body. 2. The system of claim 1, wherein the cylinder body includes a cavity. 3. The system of claim 1, wherein the cylinder body is a full cylinder body ...
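
A minimal sketch of how the hole vector could be recovered from the scanned cylinder-body points: if the cylinder body is longer than its diameter, the principal direction of the point cloud approximates its longitudinal axis, which by construction is collinear with the hole's centerline. The surface-point projection and all names are illustrative assumptions, not the publication's method.

import numpy as np

def hole_position_and_vector(cylinder_points, surface_point_on_workpiece):
    centroid = cylinder_points.mean(axis=0)
    centered = cylinder_points - centroid
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0] / np.linalg.norm(vt[0])           # principal direction ~ cylinder axis (elongated body assumed)
    # Slide the centroid along the axis towards the workpiece surface to approximate the hole centre.
    t = (surface_point_on_workpiece - centroid) @ axis
    hole_center = centroid + t * axis
    return hole_center, axis                       # position and vector of the hole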

27-04-2017 publication date

OPTIMIZED CAMERA POSE ESTIMATION SYSTEM

Number: US20170116735A1
Inventor: Aughey John H.
Assignee:

A camera pose estimation system is provided for estimating the position of a camera within an environment. The system may be configured to receive a 2D image captured by a camera within the environment, and interpret metadata of the 2D image to identify an estimated position of the camera. The 2D image may be registered within a 3D model of the environment, and more particularly, registered within the image plane of a synthetic camera within the model at the estimated position. A 3D point within the 3D model that has a corresponding 2D point on the 2D image may be identified. The synthetic camera and thereby the image plane and 2D image may be repositioned to a new position at which a projection line from the synthetic camera and through the corresponding 2D point intersects the corresponding 3D point, the new position being a refined position of the camera. 1. An apparatus for estimating a position of a camera within an environment , the apparatus comprising a processor and a memory storing executable instructions that , in response to execution by the processor , cause the apparatus to implement at least:an imaging engine configured to receive a two-dimensional (2D) image captured by a camera within an environment, the 2D image having corresponding metadata with structured information indicating an estimated position of the camera within the environment;a registration engine configured to interpret the metadata to identify the estimated position of the camera, and register the 2D image within a three-dimensional (3D) model of the environment based thereon, the 2D image being rendered in an image plane of a synthetic camera within the 3D model at the estimated position; andan estimation engine configured to identify a 2D point on the 2D image and a corresponding 3D point in the 3D model, and reposition the synthetic camera and thereby the image plane and 2D image to a new position of the synthetic camera at which a projection line from the synthetic camera and ...
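
The repositioning step lends itself to a very small sketch: keep the camera-to-point range, and move the synthetic camera so that the ray through the selected 2D point passes exactly through the corresponding 3D point. The parameterization and names are illustrative assumptions.

import numpy as np

def refine_camera_position(camera_position, camera_rotation, pixel_direction_cam, point_3d):
    # camera_rotation: 3x3 matrix (camera frame -> world frame).
    # pixel_direction_cam: ray through the selected 2D point, expressed in camera coordinates.
    ray_world = camera_rotation @ pixel_direction_cam
    ray_world = ray_world / np.linalg.norm(ray_world)
    rng = np.linalg.norm(point_3d - camera_position)   # keep the original camera-to-point range
    return point_3d - rng * ray_world                  # refined camera position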

27-04-2017 publication date

System and Method For Dynamic Device Tracking Using Medical Imaging Systems

Number: US20170116751A1
Assignee:

A system and method are provided for generating images that track a position and shape of a medical device within a subject. The method includes acquiring image data from a subject along at least two disparate view angles, each view angle including a deformable medical device arranged in the subject. The method also includes receiving images reconstructed from the image data and exploring a search space to compare the images with a dynamic three-dimensional (3D) model at least using a deformation parameter to determine a position and shape of the deformable medical device within the subject. The method further includes displaying an image of the subject and deformable medical device arranged within the subject based on the position and shape of the deformable medical device within the subject. 1. A method for generating images that track a position and shape of a deformable medical device within a subject , the method comprising:(i) receiving image data acquired from the subject along at least two disparate view angles, each view angle including a deformable medical device arranged in the subject;(ii) accessing a three-dimensional (3D) model including the deformable medical device that includes a deformation parameter for the deformable medical device;(iii) exploring a search space including the deformation parameter to match the image data with the 3D model within a predetermined tolerance to determine a position and shape of the deformable medical device; and(iv) using the image data and the position and shape of the deformable medical device determined in (iii), displaying an image of the deformable medical device arranged within the subject.2. The method of wherein the search space further includes at least two of a position claim 1 , pitch claim 1 , yaw claim 1 , roll claim 1 , proximal diameter claim 1 , and distal diameter of the device.3. The method of wherein (iii) includes limiting the search space based on a priori knowledge of the deformable medical ...

07-05-2015 publication date

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM

Number: US20150125034A1
Inventor: Tateno Keisuke
Assignee:

To calculate the position and orientation of a target object with high accuracy, an information processing apparatus converts an image feature on a two-dimensional image into a corresponding position in a three-dimensional space, acquires a first registration error between the converted image feature and a geometric feature of a model, acquires a second registration error between a distance point and the geometric feature of the model, and then derives the position and orientation of the target object based on the acquired first registration error and the acquired second registration error. 1. An information processing apparatus comprising:a storage unit configured to store a model representing a shape of a target object;an approximate position and orientation acquisition unit configured to acquire an approximate position and orientation of the target object;an acquisition unit configured to acquire a two-dimensional image of the target object and information about a three-dimensional point group on a surface of the target object;a detection unit configured to detect an image feature from the acquired two-dimensional image;an association unit configured to associate, based on the approximate position and orientation, the detected image feature with a geometric feature included in the model and to associate a distance point of the three-dimensional point group with the geometric feature included in the model;a first registration error acquisition unit configured to convert the image feature on the two-dimensional image into a three-dimensional space and acquire a first registration error between the converted image feature and the geometric feature of the model;a second registration error acquisition unit configured to acquire a second registration error between the associated distance point and the geometric feature of the model; anda position and orientation derivation unit configured to derive a position and orientation of the target object based on the acquired ...

07-05-2015 publication date

Image processing apparatus, image processing method, and storage medium for position and orientation measurement of a measurement target object

Number: US20150125035A1
Assignee: Canon Inc

To perform robust position and orientation measurement even in a situation where noise exists, an image including a target object is obtained, and an approximate position and orientation of the target object included in the obtained image are obtained. Information related to a shadow region of the target object in the obtained image is estimated, the approximate position and orientation are corrected on the basis of the estimated information related to the shadow region, and a position and orientation of the target object in the image are derived on the basis of the corrected approximate position and orientation and held model information.

05-05-2016 publication date

Three Dimensional Recognition from Unscripted Sources Technology (TRUST)

Number: US20160125609A1
Assignee:

The invention is a device and method for recognizing individuals of interest by analyzing images taken under real world lighting conditions with imperfect viewing. Recognition attributes are identified by running a plurality of processing algorithms on the image data which a) extract indices of recognition that are markers relating to specific individuals, b) create morphable, three dimensional computer graphics models of candidate individuals based on the indices of recognition, c) apply the viewing conditions from the real world data imagery to the three dimensional models, and d) declare recognition based on a high degree of correlation between the morphed model and the raw data image within a catalog of the indices of recognition of individuals of interest. The invention further encompasses the instantiation of the processing on very high throughput processing elements that may include FPGAs or GPUs. 1. An image processing appliance comprising a family of image processing functions instantiated on high throughput processor hardware that accomplishes recognition of individuals of interest from analysis of images taken under real world lighting conditions, wherein the analysis functions accomplish a) extraction of indices of recognition from real world imagery sets containing the images of the individuals of interest, b) the construction of three dimensional morphable models of the individuals of interest based on data sets containing images of the individuals of interest, c) extraction of lighting conditions from the real world image data set containing images of the individual to be recognized, d) imposition of the extracted lighting conditions upon the three dimensional images of candidate individuals, and e) declaration of individual identity based on a high degree of correlation between the real world data set and the simulated data set extracted from the morphed three dimensional models with the lighting conditions of the real image rendered onto the three ...

04-05-2017 publication date

LOCATING A FEATURE FOR ROBOTIC GUIDANCE

Number: US20170124714A1
Assignee:

Aspects herein use a feature detection system to visually identify a feature on a component. The feature detection system includes at least two cameras that capture images of the feature from different angles or perspectives. From these images, the system generates a 3D point cloud of the components in the images. Instead of projecting the boundaries of features onto the point cloud directly, the aspects herein identify predefined geometric shapes in the 3D point cloud. The system then projects pixel locations of the feature's boundaries onto the identified geometric shapes in the point cloud. Doing so yields the 3D coordinates of the feature which then can be used by a robot to perform a manufacturing process. 1. A feature detection system comprising:a first electromagnetic sensor;a second electromagnetic sensor arranged in a fixed spatial relationship with the first electromagnetic sensor;a processor; generate a 3D point cloud based on a data captured by at least one of first and second electromagnetic sensors;', 'identify at least one predefined geometric shape in the 3D point cloud;', 'identify boundary pixels based on a data captured by at least one of the first and second electromagnetic sensors, wherein the boundary pixels correspond to an edge of a feature; and', 'project locations of the boundary pixels onto the identified geometric shape to identify 3D locations of the feature corresponding to the boundary pixels., 'a memory storing detection logic, which when executed by the processor is operable to2. The feature detection system of claim 1 , wherein identifying the predefined geometric shape in the 3D point cloud comprises:identifying multiple instances of the predefined geometric shape in the 3D point cloud;matching the instances of the predefined geometric shape to model shapes in a design model of components within at least one viewing region of the first and second electromagnetic sensors; andselecting the identified geometric shape by identifying an ...
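
When the identified geometric shape is a plane, the projection of boundary pixels can be sketched as a pinhole back-projection followed by a ray-plane intersection; the intrinsics matrix, the plane parameterization, and the names are assumptions for illustration.

import numpy as np

def boundary_pixels_to_3d(pixels, K, plane_normal, plane_d):
    # pixels: (N, 2) pixel coordinates of the feature boundary; K: 3x3 camera intrinsics.
    # Plane in the camera frame: points x with plane_normal . x = plane_d (rays assumed not parallel to it).
    K_inv = np.linalg.inv(K)
    ones = np.ones((len(pixels), 1))
    rays = np.hstack([pixels, ones]) @ K_inv.T         # viewing ray per boundary pixel
    t = plane_d / (rays @ plane_normal)                # ray parameter at the plane
    return rays * t[:, None]                           # 3D locations of the feature boundary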

More details
12-05-2016 publication date

Real-time functional-mri connectivity analysis

Number: US20160133017A1
Author: Jingyun Chen
Assignee: International Business Machines Corp

A method and associated systems for real-time subject-driven functional connectivity analysis. One or more processors receive an fMRI time series of sequentially recorded, masked, parcellated images that each represent the state of a subject's brain at the image's recording time as voxels partitioned into a constant set of three-dimensional regions of interest. The processors derive an average intensity of each region's voxels in each image and organize these intensity values into a set of time courses, where each time course contains a chronologically ordered list of average intensity values of one region. The processors then identify time-based correlations between average intensities of each pair of regions and represent these correlations in a graphical format. As each subsequent fMRI image of the same subject's brain arrives, the processors repeat this process to update the time courses, correlations, and graphical representation in real time or near-real time.
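
The per-image update described above amounts to appending one mean intensity per region and recomputing the pairwise correlations. A minimal sketch assuming each incoming volume arrives as a dict mapping region id to its voxel values; the names and the streaming convention are illustrative, not the patented pipeline.

    import numpy as np

    time_courses = {}  # region id -> chronologically ordered list of mean intensities

    def update_time_courses(volume):
        """Append the average voxel intensity of each region of interest for the newest image."""
        for region_id, voxels in volume.items():
            time_courses.setdefault(region_id, []).append(float(np.mean(voxels)))

    def connectivity_matrix():
        """Pairwise Pearson correlation between region time courses (regions x regions)."""
        regions = sorted(time_courses)
        data = np.array([time_courses[r] for r in regions])
        return regions, np.corrcoef(data)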

More details
21-05-2015 publication date

METHOD AND SYSTEM OF IDENTIFYING NON-DISTINCTIVE IMAGES/OBJECTS IN A DIGITAL VIDEO AND TRACKING SUCH IMAGES/OBJECTS USING TEMPORAL AND SPATIAL QUEUES

Number: US20150139488A1
Assignee:

A method and system to identify and locate images/objects which may be characterized as non-distinctive or “feature-less” and which would be difficult to locate by conventional means comprises a plurality of steps including identifying first and second frame markers, increasing granularity between the frame markers, identifying at least one dominant object between the frame markers, normalizing its shape and identifying its edges, dissecting the dominant object into at least two equally sized sections; identifying shape and characteristics of at least one section (the analyzed section) of the dominant object thereby creating section data; applying geometric modeling such that section data from the analyzed section is used to determine overall shape, facets and configuration of the dominant object, thereby forming a geometric model; comparing geometric model to a known reference data-base of objects like the non-distinctive object (the reference object); and assessing the probability that the geometric model so formed represents the desired non-distinctive object. 1. A method of identifying the position of a desired non-distinctive object in a digital video which comprises:a) analyzing the video, by intermittent frames, to identify a physical scene or content change and wherein such change occurs between a first frame marker and a second frame marker, said frame markers not necessarily being directly sequential;b) identifying an item within one frame marker or between the first frame marker and the second frame marker which exhibits identifiable stability as a framed item;c) increasing granularity of analysis of the frame content between the first frame and the second frame, comprising the framed item, by removing one or both of i) extraneous sections of the frame content; and ii) frame perimeters, thereby forming a targeted frame content;d) identifying one or more dominant features within the targeted frame content using surface analysis, thereby identifying one or ...

More details
21-05-2015 publication date

CAMARA TRACKING APPARATUS AND METHOD USING RECONSTRUCTION SEGMENTS AND VOLUMETRIC SURFACE

Number: US20150139532A1
Assignee:

Provided are an apparatus and method for tracking a camera that reconstructs a real environment in three dimensions by using reconstruction segments and a volumetric surface. The camera tracking apparatus using reconstruction segments and a volumetric surface includes a reconstruction segment division unit configured to divide three-dimensional space reconstruction segments extracted from an image acquired by a camera, a transformation matrix generation unit configured to generate a transformation matrix for at least one reconstruction segment among the reconstruction segments obtained by the reconstruction segment division unit, and a reconstruction segment connection unit configured to rotate or move the at least one reconstruction segment according to the transformation matrix generated by the reconstruction segment division unit and connect the rotated and moved reconstruction segment with another reconstruction segment. 1. A camera tracking apparatus using reconstruction segments and a volumetric surface , the camera tracking apparatus comprising:a reconstruction segment division unit configured to extract three-dimensional space reconstruction segments from an image acquired by a camera and divide the reconstruction segments;a transformation matrix generation unit configured to generate a transformation matrix for a first reconstruction segment having a distortion among the reconstruction segments obtained by the reconstruction segment division unit; anda reconstruction segment connection unit configured to rotate or move the first reconstruction segment having a distortion according to the transformation matrix generated by the transformation matrix generation unit, and connect the rotated or moved first reconstruction segment with a second reconstruction segment having no distortion.2. The camera tracking apparatus of claim 1 , wherein the reconstruction segment division unit divides the reconstruction segments at a timing in which a movement factor of the ...
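
Connecting a rotated or moved segment with another one is a rigid-body transform of its points. A minimal sketch assuming the transformation matrix is a 4x4 homogeneous matrix and each reconstruction segment is an (N, 3) point array; the function names are illustrative.

    import numpy as np

    def transform_segment(segment_points, T):
        """Rotate/translate an (N, 3) reconstruction segment by a 4x4 homogeneous transform T."""
        homogeneous = np.hstack([segment_points, np.ones((len(segment_points), 1))])
        return (homogeneous @ T.T)[:, :3]

    def connect_segments(distorted_segment, reference_segment, T):
        """Align the distorted segment with the reference segment and merge the point sets."""
        return np.vstack([reference_segment, transform_segment(distorted_segment, T)])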

More details
19-05-2016 publication date

Image analyzing device, image analyzing method, and computer program product

Number: US20160140758A1
Assignee: Toshiba Corp

According to an embodiment, an image analyzing device includes a first acquirer, a constructor, a first calculator, a second calculator, and a third calculator. The first acquirer is configured to acquire image information on a joint of a subject and bones connected to the joint. The constructor is configured to construct a three-dimensional shape of the bones and the joint, and relation characteristics between a load and deformation in the bones and the joint from the image information. The first calculator is configured to calculate a positional relation between the bones connected to the joint. The second calculator is configured to calculate the acting force of a muscle acting on the bones connected to the joint based on the positional relation. The third calculator is configured to calculate first stress acting on the joint based on the three-dimensional shape, the relation characteristics, and the acting force.

More details
07-08-2014 publication date

POSITION AND ORIENTATION MEASURING APPARATUS, INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD

Number: US20140219502A1
Assignee: CANON KABUSHIKI KAISHA

There is provided a position and orientation measurement apparatus, information processing apparatus, and an information processing method, capable of performing robust measurement of a position and orientation. In order to achieve the apparatuses and method, at least one coarse position and orientation of a target object is acquired from an image including the target object, at least one candidate position and orientation is newly generated as an initial value used for deriving a position and orientation of the target object based on the acquired coarse position and orientation, and the position and orientation of the target object in the image is derived by using model information of the target object and by performing, at least once, fitting processing that fits the candidate position and orientation generated as the initial value to the target object in the image. 1. A position and orientation measurement apparatus comprising: an acquisition unit configured to acquire, from an image including a target object, at least one coarse position and orientation of the target object; a generation unit configured to generate, based on the acquired coarse position and orientation, at least one candidate position and orientation which is different from the coarse position and orientation as an initial position and orientation used for deriving a position and orientation of the target object; and a deriving unit configured to derive the position and orientation of the target object in the image by associating model information of the target object with the target object in the image based on the candidate position and orientation generated as the initial position and orientation. 2. The position and orientation measurement apparatus according to claim 1, wherein the acquisition unit acquires the at least one coarse position and orientation of the target object in the image by performing pattern matching with respect to the image. 3. The position and orientation measurement apparatus ...
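
Generating candidate initial poses around the coarse pose and keeping the one that fits best can be sketched as below; the perturbation magnitudes and the fitting_error callback (which would project the model and measure its residual against the image) are illustrative stand-ins, not values from the patent.

    import numpy as np

    def generate_candidates(coarse_pose, n=20, trans_sigma=0.01, rot_sigma=0.05):
        """coarse_pose = [x, y, z, roll, pitch, yaw]; return it plus n perturbed candidates."""
        rng = np.random.default_rng(0)
        noise = rng.normal(0.0, [trans_sigma] * 3 + [rot_sigma] * 3, size=(n, 6))
        return np.vstack([coarse_pose, coarse_pose + noise])

    def best_pose(candidates, fitting_error):
        """Pick the candidate whose projected model best matches the target object in the image."""
        errors = [fitting_error(pose) for pose in candidates]
        return candidates[int(np.argmin(errors))]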

More details
18-05-2017 publication date

Method for Identifying a Target Object in a Video File

Number: US20170140541A1
Author: Lu Yi-Chih
Assignee:

A method for identifying a target object in a video file is implemented using an identification system. In the method, the identification system is programmed to: obtain a video file and an image; obtain a target object in the image; construct an image model based on a feature of the target object; extract key frames from the video file sequentially; perform a comparing procedure for each key frame to determine whether the key frame includes a similar object corresponding to the image model; and for each key frame, extract from the key frame, when the determination is affirmative, a part of the key frame that contains the similar object to obtain a target image, presence of the target image indicating that the target object is identified in the video file. 1. A method for identifying a target object in a video, the method being implemented using an identification system and comprising the steps of: a) obtaining a video file and an image; b) performing edge detection on the image so as to obtain a target object; c) detecting at least one feature of the target object and constructing an image model based on the at least one feature of the target object; d) extracting a plurality of key frames from the video file sequentially; e) performing a comparing procedure for each of the plurality of key frames to make a determination as to whether the key frame includes a similar object that corresponds to the image model; and f) for each of the plurality of key frames, extracting from the key frame, when a result of the determination made in step e) is affirmative, a part of the key frame that contains the similar object to obtain a target image, presence of the target image indicating that the target object is identified in the video file. 2. The method of claim 1, wherein step d) includes, for each of the plurality of key frames, storing a time instance of the key frame associated with the video file; and wherein step f) includes obtaining a time point of the target image within the ...
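
One common way to realize the "image model" and the key-frame comparison is local feature matching. A minimal OpenCV sketch in which ORB descriptors stand in for the patent's image model; the key-frame stride and match-count threshold are illustrative choices.

    import cv2

    def find_target_in_video(video_path, target_gray, stride=30, min_matches=25):
        """Return time stamps (seconds) of key frames containing an object similar to the target.

        target_gray: grayscale target image, e.g. cv2.imread(path, cv2.IMREAD_GRAYSCALE).
        """
        orb = cv2.ORB_create()
        _, target_desc = orb.detectAndCompute(target_gray, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        cap = cv2.VideoCapture(video_path)
        hits, index = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % stride == 0:  # treat every Nth frame as a key frame
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                _, frame_desc = orb.detectAndCompute(gray, None)
                if frame_desc is not None and len(matcher.match(target_desc, frame_desc)) >= min_matches:
                    hits.append(cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0)
            index += 1
        cap.release()
        return hits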

More details
04-06-2015 publication date

POINT-OF-GAZE DETECTION DEVICE, POINT-OF-GAZE DETECTING METHOD, PERSONAL PARAMETER CALCULATING DEVICE, PERSONAL PARAMETER CALCULATING METHOD, PROGRAM, AND COMPUTER-READABLE STORAGE MEDIUM

Number: US20150154758A1
Assignee: JAPAN SCIENCE AND TECHNOLOGY AGENCY

A point-of-gaze detection device according to the present invention detects a point-of-gaze of a subject toward a surrounding environment. The device includes: an eyeball image obtaining means configured to obtain an eyeball image of the subject; a reflection point estimating means configured to estimate a first reflection point, at which incoming light in an optical axis direction of an eyeball of the subject is reflected, from the eyeball image; a corrected reflection point calculating means configured to calculate a corrected reflection point as a corrected first reflection point by correcting the first reflection point on the basis of a personal parameter indicative of a difference between a gaze direction of the subject and the optical axis direction of the eyeball; and a point-of-gaze detecting means configured to detect the point-of-gaze on the basis of light at the corrected reflection point and light in the surrounding environment. 1. A point-of-gaze detection device to detect a point-of-gaze of a subject toward a surrounding environment , comprising:an eyeball image obtaining means configured to obtain an eyeball image of the subject;a reflection point estimating means configured to estimate a first reflection point, at which incoming light in an optical axis direction of an eyeball of the subject is reflected, from the eyeball image;a corrected reflection point calculating means configured to calculate a corrected reflection point as a corrected first reflection point by correcting the first reflection point on the basis of a personal parameter indicative of a difference between a gaze direction of the subject and the optical axis direction of the eyeball; anda point-of-gaze detecting means configured to detect the point-of-gaze on the basis of light at the corrected reflection point and light in the surrounding environment.2. The device of claim 1 , further comprising:a pose calculating means configured to calculate a pose of the eyeball from the eyeball ...

More details
04-06-2015 publication date

CREATING A MESH FROM A SPARSE STRUCTURE-FROM-MOTION POINT CLOUD USING CO-VISIBILITY OF POINTS

Number: US20150154795A1
Author: Ogale Abhijit
Assignee: GOOGLE INC.

A method for creating a three-dimensional mesh model of a structure includes accessing a set of three-dimensional points associated with a set of images of the structure. For each three-dimensional point in the set of three-dimensional points, the method determines a reference image, identifies a subset of images from the set of images of the structure taken within a distance from the reference image, determines a subset of three-dimensional points seen by the subset of images, filters the subset of three-dimensional points to retain only a set of co-visible points that lie in a visibility cone of the reference image, and selects a normal using the set of co-visible points. The three-dimensional mesh model of the structure is computed using the selected normal, and the model may be provided to a second computing device. 1. A method for creating a three-dimensional mesh model of a structure, comprising: accessing a set of three-dimensional points associated with a set of images of the structure; for each three-dimensional point in the set of three-dimensional points: determining, using a processor of a computing device, a reference image; identifying, using the processor, a subset of images from the set of images of the structure, the subset taken within a distance from the reference image; determining, using the processor, a subset of three-dimensional points seen by the subset of images; filtering, using the processor, the subset of three-dimensional points to retain only a set of co-visible points that lie in a visibility cone of the reference image; and selecting, using the processor, a normal using the set of co-visible points; computing, using the processor, the three-dimensional mesh model of the structure using the selected normals; and providing the three-dimensional mesh model to a second computing device. 2. The method of claim 1, further comprising: creating, using the processor, two possible solutions for a surface normal using the set of co- ...
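
Selecting a normal from the co-visible points and orienting it consistently with the reference image can be sketched with plain PCA; this is a generic stand-in for the normal-selection step described above, and the argument names are illustrative.

    import numpy as np

    def select_normal(co_visible_points, point, reference_camera_center):
        """Estimate the surface normal at `point` from its co-visible neighbours.

        The normal is flipped, if necessary, so that it faces the reference image's camera.
        """
        centered = co_visible_points - co_visible_points.mean(axis=0)
        _, _, vt = np.linalg.svd(centered)
        normal = vt[-1]  # direction of least variance across the co-visible points
        if np.dot(normal, reference_camera_center - point) < 0:
            normal = -normal
        return normal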

More details
02-06-2016 publication date

System and method for product identification

Number: US20160155011A1
Assignee: Xerox Corp

A system and method for object instance localization in an image are disclosed. In the method, keypoints are detected in a target image and candidate regions are detected by matching the detected keypoints to keypoints detected in a set of reference images. Similarity measures between global descriptors computed for the located candidate regions and global descriptors for the reference images are computed and labels are assigned to at least some of the candidate regions based on the computed similarity measures. Performing the region detection based on keypoint matching while performing the labeling based on global descriptors improves object instance detection.

More details
02-06-2016 publication date

METHODS OF AND APPARATUSES FOR MODELING STRUCTURES OF CORONARY ARTERIES FROM THREE-DIMENSIONAL (3D) COMPUTED TOMOGRAPHY ANGIOGRAPHY (CTA) IMAGES

Number: US20160155234A1
Assignee:

A method of modeling a structure of a coronary artery of a subject may include: forming a learning-based shape model of the structure of the artery, based on positions of landmarks acquired from three-dimensional images; receiving a target image; and/or modeling the artery structure included in the target image, using the model. An apparatus for modeling a structure of a coronary artery may include: a memory configured to store a learning-based shape model of the artery, the learning-based shape model being formed based on positions of a plurality of landmarks acquired from three-dimensional images, the plurality of the landmarks corresponding to the artery; a communication circuit configured to receive a target image; and/or a processing circuit configured to model the artery structure included in the target image, using the model. 1. A method of modeling a structure of a coronary artery of a subject , the method comprising:forming a learning-based shape model of the structure of the coronary artery, based on positions of a plurality of landmarks acquired from each of a plurality of three-dimensional (3D) images;receiving a target image; andmodeling the structure of the coronary artery included in the target image, using the learning-based shape model.2. The method of claim 1 , wherein the modeling of the structure comprises:acquiring positions of points representing the coronary artery from the target image, based on the learning-based shape model;acquiring a centerline of the coronary artery from the target image based on the positions of the points; andmodeling the structure of the coronary artery, using the positions of the points and the centerline.3. The method of claim 2 , wherein the acquiring of the positions of the points comprises:setting initial positions of the points, based on a mean shape of the learning-based shape model;changing the initial positions based on energy difference between the mean shape and a shape formed by the points; andacquiring ...

More details
02-06-2016 publication date

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM

Number: US20160155235A1
Assignee:

There is provided with an image processing apparatus. A captured image of a target object that is captured by an image capturing apparatus is obtained. Information that indicates a deterioration degree of the captured image is obtained for a position in the captured image. A feature of the target object is extracted from the captured image based on the deterioration degree. The feature of the target object and a feature of the three-dimensional model observed when the three-dimensional model is arranged in accordance with a predetermined position and orientation are associated. A position and orientation of the target object with respect to the image capturing apparatus are derived by correcting the predetermined position and orientation based on a result of association. 1. An image processing apparatus comprising:an image obtaining unit configured to obtain a captured image of a target object that is captured by an image capturing apparatus;a deterioration degree obtaining unit configured to obtain information that indicates a deterioration degree of the captured image, for a position in the captured image;an extraction unit configured to extract a feature of the target object from the captured image based on the deterioration degree;a model holding unit configured to hold a three-dimensional model of the target object;an associating unit configured to associate the feature of the target object and a feature of the three-dimensional model observed when the three-dimensional model is arranged in accordance with a predetermined position and orientation; anda deriving unit configured to derive a position and orientation of the target object with respect to the image capturing apparatus by correcting the predetermined position and orientation based on a result of association.2. The image processing apparatus according to claim 1 , wherein the deterioration degree obtaining unit is further configured to obtain the deterioration degree from a deterioration degree holding ...

More details
21-08-2014 publication date

OBJECT DETECTING METHOD AND OBJECT DETECTING DEVICE

Number: US20140233807A1
Assignee:

In an object detecting method according to an embodiment, external reference points are set in external space of a model of an object and an internal reference point is set in internal space of the model. A table is stored in which feature quantities on a local surface of the model are associated with positions of the external reference points and the internal reference point. The feature quantity on the local surface of the model is calculated, and the position of the reference point whose feature quantity is identical to the calculated feature quantity is acquired from the table and is converted into a position in a real space. When the converted position is outside the object, the position is excluded from information for estimation and the position and the attitude of the object are estimated. 1. An object detecting method comprising:setting a plurality of external reference points used as information for estimating a position and an attitude of an object in external space of a model of the object, and setting an internal reference point used as information for determining whether the information for estimation is valid in internal space of the model;storing a table in which feature quantities on a local surface including a pair of a starting point and an endpoint that are sequentially selected from a point group located on a surface of the model are associated with a set of positions of the external reference points and the internal reference point with respect to the starting point;sequentially selecting a pair of a starting point and an endpoint from a sample point group located on a surface of the object existing in real space, and calculating feature quantities of the object on a local surface including the pair of the starting point and the endpoint; andacquiring, from the table, the set of positions associated with feature quantities matching the feature quantities of the object, transforming this set into a set of positions in the real space, and when ...
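
The table described above maps a feature quantity of a local surface (a starting point/end point pair) to the stored reference-point positions. A minimal sketch using a generic point-pair feature (distance plus normal angles) as the feature quantity; the quantization step and helper names are illustrative assumptions, not the patent's definitions.

    import numpy as np

    def pair_feature(p1, n1, p2, n2, step=0.05):
        """Quantized feature for a starting point/end point pair (positions p*, unit normals n*)."""
        d = p2 - p1
        dist = np.linalg.norm(d)
        u = d / dist
        f = np.array([dist,
                      np.arccos(np.clip(np.dot(n1, u), -1.0, 1.0)),
                      np.arccos(np.clip(np.dot(n2, u), -1.0, 1.0)),
                      np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0))])
        return tuple(np.round(f / step).astype(int))

    model_table = {}  # feature key -> list of (external reference points, internal reference point)

    def add_model_pair(p1, n1, p2, n2, external_refs, internal_ref):
        """Register the model's reference-point set for one local-surface feature."""
        model_table.setdefault(pair_feature(p1, n1, p2, n2), []).append((external_refs, internal_ref))

    def lookup_scene_pair(p1, n1, p2, n2):
        """Return stored reference-point sets whose feature matches a scene pair, before the
        real-space transform and the inside/outside validity check described above."""
        return model_table.get(pair_feature(p1, n1, p2, n2), [])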

More details
28-08-2014 publication date

Using a combination of 2d and 3d image data to determine hand features information

Number: US20140241570A1
Assignee: Kaiser Foundation Hospitals Corp

A method of determining hand features information using both two dimensional (2D) image data and three dimensional (3D) image data is described. In one implementation, a method includes: receiving a 2D image frame; receiving 3D image data corresponding to the 2D image frame; using the 3D image data corresponding to the 2D image frame, transforming the 2D image frame; and using the 3D image data corresponding to the 2D image frame, scaling the 2D image frame, where the transforming and scaling results in a normalized 2D image frame, where the normalized 2D image frame is a scaled and transformed version of the 2D image frame, and where the scaling and transforming is performed using a computer.
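
The scaling half of the depth-driven normalization can be sketched as resizing the 2D frame by the ratio of the measured hand depth to a canonical depth, so the hand always appears at roughly the same size; the reference depth is an illustrative value and the rotation/transform step is omitted here.

    import cv2

    def scale_frame_by_depth(frame, hand_depth_mm, reference_depth_mm=500.0):
        """Scale a 2D frame so the hand appears as it would at the canonical reference depth."""
        scale = hand_depth_mm / reference_depth_mm
        h, w = frame.shape[:2]
        return cv2.resize(frame, (max(1, int(w * scale)), max(1, int(h * scale))),
                          interpolation=cv2.INTER_LINEAR)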

More details
07-06-2018 publication date

METHODS AND SYSTEMS FOR AUTOMATIC OBJECT DETECTION FROM AERIAL IMAGERY

Number: US20180157911A1
Assignee: GEOSAT Aerospace & Technology

Methods and systems for detecting objects from aerial imagery are disclosed. According to certain embodiments, the method may include obtaining a Digital Surface Model (DSM) image of an area. The method may also include obtaining a DSM image of one or more target objects. The method may further include detecting the target object in the area based on the DSM images of the area and the one or more target objects. The method may further include recognizing the detected target objects by artificial intelligence. The method may further include acquiring the positions of the recognized target objects. The method may further include calculating the number of the recognized target objects. 1. A non-transitory computer-readable medium storing instructions which, when executed, cause one or more processors to perform operations for detecting objects from aerial imagery, the operations comprising: obtaining a Digital Surface Model (DSM) image of an area; obtaining a DSM image of one or more target objects; and detecting the target object in the area based on the DSM images of the area and the one or more target objects. 2. The non-transitory computer-readable medium of claim 1, wherein the DSM image of the one or more target objects is obtained based on the shape or the contrast of the one or more target objects, or based on a combination of the shape and the contrast thereof. 3. The non-transitory computer-readable medium of claim 1, wherein the operations further comprise: reducing the resolutions of the DSM images of the area and the one or more target objects before detecting the target object. 4. The non-transitory computer-readable medium of claim 1, wherein obtaining the DSM image of the area further includes: identifying one or more target subareas on the DSM image of the area; and enhancing the contrast of the one or more target subareas; and detecting the target object based on the enhanced DSM image of the area and the DSM image of the one or more target objects. ...
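
Detecting a target object in the area DSM from a target-object DSM can be sketched as normalized template matching over the height raster; the detection threshold and the assumption that both rasters share the same ground resolution are illustrative, not taken from the patent.

    import cv2
    import numpy as np

    def detect_in_dsm(area_dsm, target_dsm, threshold=0.8):
        """Return (row, col) positions in the area DSM where the target DSM correlates strongly."""
        result = cv2.matchTemplate(area_dsm.astype(np.float32),
                                   target_dsm.astype(np.float32),
                                   cv2.TM_CCOEFF_NORMED)
        rows, cols = np.where(result >= threshold)
        return list(zip(rows.tolist(), cols.tolist()))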

More details
08-06-2017 publication date

Method and apparatus for gesture recognition

Number: US20170161903A1
Author: Cevat Yerli
Assignee: Calay Venture SA RL

A computer-implemented method and an apparatus for improving gesture recognition are described. The method comprises providing a reference model defined by a joint structure, receiving at least one image of a user, and mapping the reference model to the at least one image of the user, thereby connecting the user to the reference model for recognition of a set of gestures predefined for the reference model, when the gestures are performed by the user.

More details
18-06-2015 publication date

TOOL LOCALIZATION SYSTEM WITH IMAGE ENHANCEMENT AND METHOD OF OPERATION THEREOF

Number: US20150170381A1
Assignee:

A tool localization system and method of operation thereof including: a camera for obtaining an image frame; and a processing unit connected to the camera, the processing unit including: a classification module for detecting a surgical tool in the image frame, a motion vector module, coupled to the classification module, for modeling motion of the surgical tool based on the image frame and at least one prior image frame, a mask generation module, coupled to the motion vector module, for generating a tool mask, based on the surgical tool detected and the motion of the surgical tool, for covering the surgical tool in the image frame, and an exposure module, coupled to the mask generation module, for processing the image frame without the areas covered by the tool mask for display on a display interface. 1. A method of operation of a tool localization system comprising:obtaining an image frame with a camera;detecting a surgical tool in the image frame;modeling motion of the surgical tool based on the image frame and at least one prior image frame;generating a tool mask, based on the surgical tool detected and the motion of the surgical tool, for covering the surgical tool in the image frame; andprocessing the image frame without the areas covered by the tool mask for display on a display interface.2. The method as claimed in wherein obtaining the image frame with the camera includes obtaining the image frame with the camera and a light source.3. The method as claimed in wherein detecting the surgical tool includes:segmenting the image frame;detecting boundaries in the image frame;generating a potential tool outline; andcorrelating a tool shape template with the potential tool outline.4. The method as claimed in further comprising providing a processing unit connected to the camera.5. The method as claimed in wherein processing the image frame includes processing the image frame for exposure measure.6. A method of operation of a tool localization system comprising: ...
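
Processing the frame "without the areas covered by the tool mask" can be sketched as excluding masked pixels from the exposure measure; the mean-intensity metric below is an illustrative stand-in for the exposure computation.

    import numpy as np

    def exposure_without_tool(frame_gray, tool_mask):
        """Mean intensity of the frame with tool-mask pixels (mask != 0) excluded."""
        visible = frame_gray[tool_mask == 0]
        return float(visible.mean()) if visible.size else 0.0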

More details
23-06-2016 publication date

AUTOMATIC DETECTION OF FACE AND THEREBY LOCALIZE THE EYE REGION FOR IRIS RECOGNITION

Number: US20160180147A1
Assignee:

An apparatus for automatic detection of the face in a given image and localization of the eye region which is a target for recognizing iris is provided. The apparatus includes an image capturing unit collecting an image of a user, and a control unit extracting a characteristic vector from the image of the user and fitting the extracted vector into a Pseudo 2D Hidden Markov Model (HMM); an operating method thereof for detecting a face and facial features of the user is also provided. 1. An apparatus for recognizing iris of an eye, the apparatus comprising: an image capturing unit collecting an image of a user; and a control unit extracting a characteristic vector from the image of the user, fitting an extracted vector into a Pseudo 2D Hidden Markov Model (HMM) to train a face model and detect a face and facial features of a user. 2. The apparatus according to claim 1, wherein the control unit accepts a set of different instances of face samples and extracts observation vectors within a rectangular 4×4 window using eight coefficients over the lowest frequencies in the 2D DCT domain. 3. The apparatus according to claim 1, wherein the control unit divides the face from top to bottom vertically into five regions as forehead, eye, nose, mouth and chin, assigning each region to a HMM super state. 4. The apparatus according to claim 3, wherein the control unit further divides each super state into sub states to preserve horizontal structural information of the face. 5. The apparatus according to claim 4, wherein the control unit determines the HMM to depict a face as a two-dimensional observation sequence such that the face is comprised of a global (vertical) model represented by 5 states—the forehead, the eyes, the nose, the mouth and the chin; and local (horizontal) sub-models, one for each facial feature, with three states for the forehead and the chin, and six states for the eyes, nose and mouth. 6. The apparatus ...
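
The observation vectors of claim 2 (eight low-frequency 2D DCT coefficients taken from a 4×4 window) can be extracted as below; the sliding-window stride and the zig-zag ordering helper are illustrative assumptions.

    import cv2
    import numpy as np

    # first eight 4x4 DCT coefficients in zig-zag (low-frequency-first) order
    ZIGZAG_8 = [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2), (0, 3), (1, 2)]

    def observation_vectors(gray_face, stride=2):
        """Yield an 8-coefficient DCT feature vector for each 4x4 window of a grayscale face image."""
        h, w = gray_face.shape
        for y in range(0, h - 3, stride):
            for x in range(0, w - 3, stride):
                block = cv2.dct(gray_face[y:y + 4, x:x + 4].astype(np.float32))
                yield np.array([block[r, c] for r, c in ZIGZAG_8])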

More details
23-06-2016 publication date

LANDMARK BASED POSITIONING

Number: US20160180538A1
Author: Chao Hui, Chen Jiajian
Assignee:

Disclosed are devices, methods and storage media for use in determining position information for imaging devices or mobile devices. In some implementations, a landmark is identified in an image which is obtained from an imaging device which in turn is positioned at a location and in a pose. A virtual two-dimensional image that would be visible from the landmark is determined based, at least in part, on the pose. The location or position information is based, at least in part, on the virtual two-dimensional image. 1. A method for position estimation comprising: identifying a landmark in an image obtained from an imaging device positioned at a location, wherein the imaging device is in a pose during a time that the image is obtained; determining, using a processor, a virtual two-dimensional image of a view from a viewpoint at the landmark based, at least in part, on the pose; and estimating the location based, at least in part, on the virtual two-dimensional image. 2. The method of claim 1, wherein the estimating of the location is based, at least in part, on a selection of portions of the virtual two-dimensional image. 3. The method of claim 1, wherein the determining of the virtual two-dimensional image is based, at least in part, on a three-dimensional representation of a region having a plurality of graphic primitives, wherein at least one of the plurality of graphic primitives comprises an attribute. 4. The method of claim 1, wherein the determining of the virtual two-dimensional image is based, at least in part, on a three-dimensional representation of a region having a plurality of graphic primitives, wherein at least one of the plurality of graphic primitives comprises an attribute, wherein at least a portion of the three-dimensional representation of the region is color coded, and wherein the attribute is based, at least in part, on a value corresponding to the color coding. 5. The method of ...

More details
22-06-2017 publication date

Virtual Sensor Data Generation For Wheel Stop Detection

Number: US20170177954A1
Assignee: FORD GLOBAL TECHNOLOGIES, LLC

The disclosure relates to methods, systems, and apparatuses for virtual sensor data generation and more particularly relates to generation of virtual sensor data for training and testing models or algorithms to detect objects or obstacles, such as wheel stops or parking barriers. A method for generating virtual sensor data includes simulating a three-dimensional (3D) environment comprising one or more objects. The method includes generating virtual sensor data for a plurality of positions of one or more sensors within the 3D environment. The method includes determining virtual ground truth corresponding to each of the plurality of positions, wherein the ground truth includes information about at least one object within the virtual sensor data. The method also includes storing and associating the virtual sensor data and the virtual ground truth. 1. A method comprising:simulating, using one or more processors, a three-dimensional (3D) environment comprising one or more parking barriers;generating, using one or more processors, virtual sensor data for a plurality of positions of one or more sensors within the 3D environment;determining, using one or more processors, virtual ground truth corresponding to each of the plurality of positions, wherein the ground truth comprises a height of the at least one of the parking barriers; andstoring and associating the virtual sensor data and the virtual ground truth using one or more processors.2. The method of claim 1 , further comprising providing one or more of the virtual sensor data and the virtual ground truth for training or testing of a machine learning algorithm or model.3. The method of claim 2 , wherein the machine learning model or algorithm comprises a neural network.4. The method of claim 2 , wherein training the machine learning algorithm or model comprises providing at least a portion of the virtual sensor data and corresponding virtual ground truth to train the machine learning algorithm or model to determine one ...

More details
30-06-2016 publication date

ROBOT IDENTIFICATION SYSTEM

Number: US20160184998A1
Author: AMAGATA Yasuhiro
Assignee:

A robot identification system includes a robot having a rotatable arm, an imaging unit imaging the robot, an angle detector detecting a rotation angle of the arm, a model generator producing robot models representing the forms of the robot on the basis of the rotation angle detected by the angle detector, and an image identification unit that compares an image captured by the imaging unit with the robot models generated by the model generator to identify a robot image in the image. 1. A robot identification system comprising:a robot having a rotatable arm;an imaging unit imaging the robot;an angle detector detecting a rotation angle of the arm;a model generator that generates a robot model representing the form of the robot on the basis of the rotation angle detected by the angle detector; andan image identification unit comparing an image captured by the imaging unit with the robot model generated by the model generator and thereby identifying a robot image in the image.2. The robot identification system according to claim 1 , whereinthe model generator generates a plurality of robot models representing the forms of the robot viewed from a plurality of locations on the basis of the rotation angle detected by the angle detector; andthe image identification unit compares the image captured by the imaging unit with the plurality of robot models generated by the model generator, to identify the robot image in the image.3. The robot identification system according to claim 1 , whereinthe robot includes a first robot having a rotatable arm and a second robot having a rotatable arm;the angle detector includes a first angle detector detecting a rotation angle of the arm of the first robot, and a second angle detector detecting a rotation angle of the arm of the second robot;the model generator generates a first robot model representing the form of the first robot on the basis of the rotation angle of the arm of the first robot detected by the first angle detector, and a ...

More details