Total found: 2458. Displayed: 100.

21-03-2013 publication date

IMAGE DIAGNOSIS SUPPORT APPARATUS, METHOD AND PROGRAM

Number: US20130070984A1
Assignee: FUJIFILM Corporation

At least one specified position and, if necessary, a cutting surface are specified in a three-dimensional medical image. Plural anatomical structures present within a predetermined range from the specified position are extracted, as structures to be separated, by referring to a structure information storage unit that stores plural anatomical structures and a separation condition storage unit that stores a separation condition for each anatomical structure of a subject to determine, based on the specified position, a boundary surface and, if necessary, a cutting surface for separately displaying the plural anatomical structures. The boundary surface corresponding to the structures to be separated and the specified position and, if necessary, the cutting surface are set based on the separation condition. A three-dimensional medical image in which the structures to be separated are separated by the boundary surface and, if necessary, by the cutting surface is generated, and displayed.

1.-12. (canceled)

13. An image diagnosis support apparatus comprising: a three-dimensional medical image data storage unit that stores three-dimensional medical image data of a subject; a display unit that displays a three-dimensional medical image based on the stored three-dimensional medical image data; a structure information storage unit that stores a plurality of anatomical structures included in the three-dimensional medical image data; a specifying unit that specifies, in the three-dimensional medical image displayed by the display unit, at least one specified position and, if necessary, a cutting surface for separating the plurality of anatomical structures included in the three-dimensional medical image; a separation condition storage unit that stores a separation condition for each anatomical structure of the subject to determine, based on the specified position specified by the specifying unit, a boundary surface and, if necessary, a cutting surface for separately displaying the ...

28-03-2013 publication date

MEASUREMENT APPARATUS AND CONTROL METHOD

Number: US20130077854A1
Assignee: CANON KABUSHIKI KAISHA

A measurement apparatus which measures the relative position and orientation of an image-capturing apparatus capturing images of one or more measurement objects with respect to the measurement object, acquires a captured image using the image-capturing apparatus. The respective geometric features present in a 3D model of the measurement object are projected onto the captured image based on the position and orientation of the image-capturing apparatus, thereby obtaining projection geometric features. Projection geometric features are selected from the resultant projection geometric features based on distances between the projection geometric features in the captured image. The relative position and orientation of the image-capturing apparatus with respect to the measurement object is then calculated using the selected projection geometric features and image geometric features corresponding thereto detected in the captured image. 1an image acquiring unit configured to acquire a captured image from the image-capturing apparatus;a projection unit configured to project geometric features of a 3D model of the measurement object onto the captured image based on a position and orientation of the image-capturing apparatus to obtain projection geometric features;a selecting unit configured to select projection geometric features to be used in calculation of the position or orientation from the projection geometric features obtained by the projection unit based on distances with respect to the projection geometric features in the captured image; anda calculating unit configured to calculate the relative position or orientation of the image-capturing apparatus with respect to the measurement object using the projection geometric features selected by the selecting unit and image geometric features corresponding to the selected projection geometric features detected in the captured image.. A measurement apparatus for measuring relative position or orientation of an image- ...
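The claims above describe two concrete steps: projecting the 3D model's geometric features into the image under a camera pose, and selecting among the projections based on their mutual distances in the image. A minimal Python/NumPy sketch of those two steps follows; the pinhole model, the greedy spacing rule, and every name and constant are assumptions for illustration, not Canon's actual method.

```python
import numpy as np

def project_points(pts_3d, R, t, K):
    """Project Nx3 model points into the image with pose (R, t) and intrinsics K."""
    cam = pts_3d @ R.T + t          # model -> camera coordinates
    uv = cam @ K.T                  # pinhole projection
    return uv[:, :2] / uv[:, 2:3]   # perspective divide

def select_spread_features(uv, min_pixel_dist=10.0):
    """Greedily keep projected features at least min_pixel_dist apart, mirroring
    selection 'based on distances between the projection geometric features'."""
    kept = []
    for i, p in enumerate(uv):
        if all(np.linalg.norm(p - uv[j]) >= min_pixel_dist for j in kept):
            kept.append(i)
    return kept

# Example: a toy unit cube observed from 5 units away (all values invented)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
model = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
uv = project_points(model, R, t, K)
print(select_spread_features(uv, min_pixel_dist=50.0))
```

Note how cube corners that project almost on top of one another (front and back corners on the optical axis) are thinned out, which is the point of distance-based selection.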

18-04-2013 publication date

SHAPE MEASURING DEVICE, SHAPE MEASURING METHOD, AND METHOD FOR MANUFACTURING GLASS PLATE

Number: US20130094714A1
Author: OHTO Kimiaki
Assignee: Asahi Glass Company, Limited

The present invention provides a technology capable of measuring three-dimensional shapes by applying a stereo method even in the case that an object has a specular surface. A shape measuring apparatus is equipped with a pattern position specification section (before-movement pattern position specification section, after-movement pattern position specification section), an image capturing position calculation section (before-movement image capturing position calculation section, after-movement image capturing calculation section), a pixel area specification section (second pixel area specification section), an inclination angle calculation section (before-movement inclination angle calculation section, after-movement inclination angle calculation section), a height-direction coordinate determination section and an output section.

1. A shape measuring apparatus comprising: a calculation section configured to: calculate an inclination angle at one position on a specular surface of an object to be measured at a time when a height direction coordinate of the object to be measured is assumed as one height direction coordinate, based on a captured image which is obtained by capturing an image of the specular surface of the object to be measured so that a shape thereof is measured and in which a reflected image of a pattern disposed at a periphery of the object to be measured is captured; and calculate an inclination angle at the same position when the height direction coordinate is assumed as the same height direction coordinate based on another captured image obtained similarly after the object to be measured is moved by a predetermined amount; and a determination section configured to: compare both the inclination angles at the position before and after the object to be measured is moved by the predetermined amount; and determine the height direction coordinate at the time of coincidence as the height direction coordinate at the position of the object to be measured.

2. A ...

02-05-2013 publication date

STEREOSCOPIC MEASUREMENT SYSTEM AND METHOD

Number: US20130108150A1

A stereoscopic measurement system captures stereo images and determines measurement information for user-designated points within stereo images. The system comprises an image capture device for capturing stereo images of an object. A processing system communicates with the capture device to receive the stereo images, displays them, and allows a user to select one or more points within a stereo image. The processing system processes the designated points within the stereo images to determine measurement information for the designated points.

1. A system for obtaining measurements of an object, the system comprising at least one processor, wherein the processor is configured to: store a plurality of stereo images each comprising first and second images of the object; combine at least two stereo images into a composite stereo image, wherein the composite stereo image comprises a composite first image and a composite second image, the composite first image comprises a composite of the first images of each of the at least two stereo images, and the composite second image comprises a composite of the second images of each of the at least two stereo images; designate composite points in the first and second images of each of the at least two stereo images; designate a first measurement point and a second measurement point in the composite first image; designate the first measurement point and a second measurement point in the composite second image; define a first stereo point that corresponds to the first measurement point designated in the composite first and second images and define a second stereo point that corresponds to the second measurement point designated in the composite first and second images; and calculate the distance between the first stereo point and the second stereo point.

2. The system of claim 1, wherein the processor is further configured to: ...
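For a rectified stereo pair, defining a stereo point from its designations in the two images and measuring the distance between two such points reduces to standard triangulation. A hedged sketch, assuming a rectified pair with known focal length and baseline; all pixel coordinates and camera values below are invented:

```python
import numpy as np

def stereo_point(xl, yl, xr, f, baseline, cx, cy):
    """Triangulate one 3D point from a designated pixel in the left image (xl, yl)
    and its horizontal match in the right image (xr) of a rectified stereo pair."""
    disparity = xl - xr
    Z = f * baseline / disparity      # depth from disparity
    X = (xl - cx) * Z / f
    Y = (yl - cy) * Z / f
    return np.array([X, Y, Z])

# Two user-designated measurement points (made-up coordinates)
p1 = stereo_point(400, 260, 352, f=800.0, baseline=0.12, cx=320, cy=240)
p2 = stereo_point(180, 300, 148, f=800.0, baseline=0.12, cx=320, cy=240)
print("distance between stereo points: %.3f m" % np.linalg.norm(p1 - p2))
```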

02-05-2013 publication date

RECOVERING 3D STRUCTURE USING BLUR AND PARALLAX

Number: US20130108151A1

A system and method for generating a focused image of an object is provided. The method comprises obtaining a plurality of images of an object, estimating an initial depth profile of the object, estimating a parallax parameter and a blur parameter for each pixel in the plurality of images, and generating a focused image and a corrected depth profile of the object using a posterior energy function. The posterior energy function is based on the estimated parallax parameter and the blur parameter of each pixel in the plurality of images.

1.-20. (canceled)

21. A method for generating a focused image of an object from a first image of the object and a second image of the object, the method comprising: estimating a parallax parameter based on motion of at least one pixel in the first image relative to the second image; estimating a blur parameter based on a change in defocus of the at least one pixel from the first image to the second image; determining a posterior energy function based on the parallax parameter and the blur parameter; and generating at least one of a focused image of the object and a depth profile of the object using the posterior energy function.

22. The method of claim 21, wherein a first portion of the object appears in focus in the first image and a second portion of the object appears in focus in the second image.

23. The method of claim 21, wherein determining the posterior energy function comprises: generating a blur map based on the blur parameter.

24. The method of claim 21, further comprising: obtaining at least one of the first image and the second image from a memory.

25. The method of claim 21, further comprising: obtaining the first image with an imaging system; causing a change in distance between the object and a focal plane of the imaging system; and obtaining the second image with the imaging system.

26. The method of claim 25, wherein causing the change in distance between the object and the imaging system comprises: moving at least one of the ...

23-05-2013 publication date

OBJECT DETECTION DEVICE, OBJECT DETECTION METHOD, AND PROGRAM

Number: US20130129148A1
Assignee: Panasonic Corporation

An object detection device that can accurately identify an object candidate in captured stereo images as an object or a road surface. The object detection device has a disparity map generator that generates a disparity map based on the stereo images; a road surface estimator that estimates a road surface based on the disparity map; an object candidate location extractor that extracts an object candidate region above the road surface, based on the disparity map and the road surface; an object identifying region extractor that extracts an object identifying region including a region around the object candidate region; a geometric feature extractor that extracts a geometric feature of the object candidate based on the object identifying region; and an object identifying unit that identifies whether the object candidate is an object or a road surface based on the geometric feature.

1.-15. (canceled)

16. An object detecting device that detects an object on a road surface included in stereo images, comprising: a disparity map generator that generates a disparity map on the basis of the stereo images; a road surface estimator that estimates a road surface region on the basis of the disparity map; an object candidate location extractor that extracts pieces of disparity data above the road surface region from the disparity map, as an object candidate region where an object candidate is present; an object identifying region extractor that extracts an object identifying region from the disparity map, the object identifying region including the object candidate region and having a width larger than the width of the object candidate region by a predetermined scale factor; a geometric feature extractor that extracts a geometric feature in the object identifying region; and an object identifying unit that identifies whether the object candidate is an object or a road surface on the basis of the geometric feature.

17. The object detecting device according to claim 16 ...
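A common way to realize the road-surface estimator and candidate extractor described above is a v-disparity style fit: take the dominant disparity per image row as the road, fit a line through it, and flag pixels whose disparity rises clearly above that line. The sketch below follows that recipe; it is one plausible reading, not necessarily Panasonic's estimator, and the function names and thresholds are invented.

```python
import numpy as np

def road_profile_from_vdisparity(disp, d_max=64):
    """For each image row take the modal disparity as the road disparity, then
    fit a line d(v) = a*v + b through those modes (a crude v-disparity fit)."""
    rows = []
    for v in range(disp.shape[0]):
        hist, edges = np.histogram(disp[v], bins=d_max, range=(1, d_max))
        i = hist.argmax()
        rows.append(0.5 * (edges[i] + edges[i + 1]))
    v_idx = np.arange(disp.shape[0])
    a, b = np.polyfit(v_idx, np.array(rows), 1)
    return a, b

def object_candidate_mask(disp, a, b, margin=2.0):
    """Pixels whose disparity clearly exceeds the road disparity for their row
    lie above the road surface and become object-candidate locations."""
    v_idx = np.arange(disp.shape[0])[:, None]
    road = a * v_idx + b
    return disp > road + margin
```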

30-05-2013 publication date

System and method for 3D imaging using structured light illumination

Number: US20130136318A1

A biometrics system captures and processes a handprint image using a structured light illumination to create a 2D representation equivalent of a rolled inked handprint. A processing unit calculates 3D coordinates of the hand from the plurality of images and maps the 3D coordinates to a 2D flat surface to create a 2D representation equivalent of a rolled inked handprint. 1. A method for a processing device to determine a two dimensional (2D) handprint image from a three dimensional (3D) data of the handprint , comprising:processing one or more handprint images captured with a structured light illumination technique to determine 3D coordinates in the one or more handprint images;extracting by the processing device handprint surface information using the 3D coordinates, wherein the handprint surface information includes ridge height information;generating a 2D handprint image from a set of the 3D coordinates;mapping by the processing device the handprint surface information onto the 2D handprint image; andtranslating the ridge height information to a grey-scale image index.2. The method of claim 1 , wherein extracting by the processing device handprint surface information using the 3D coordinates of the handprint comprises:generating a smooth handprint surface that approximates a shape of the handprint;determining surface normal vectors at a plurality of points in the smooth handprint surface; andcomparing the surface normal vectors at the plurality of points in the smooth handprint surface with the 3D coordinates in the one or more handprint images to determine the handprint surface information at the plurality of points.3. The method of claim 2 , further comprising:calculating a magnitude of a difference vector between the 3D coordinates of one of the plurality of points in the one or more handprint images and a corresponding point in the smooth handprint surface to determine the ridge height information.4. The method of claim 1 , wherein generating the 2D handprint ...
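The claims spell out a pipeline that can be paraphrased in a few lines: smooth the measured surface to get the overall hand shape, take the signed difference as ridge height, and map that height linearly onto a grey-scale index. A sketch under those assumptions (the smoothing sigma and clipping range are invented, not from the patent):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ridge_height(z_surface):
    """Ridge height as the difference between the measured 3D surface and a
    smoothed approximation of the overall hand shape."""
    z = np.asarray(z_surface, float)
    smooth = gaussian_filter(z, sigma=8.0)   # smooth handprint surface
    return z - smooth                        # positive = ridge, negative = valley

def to_grayscale(height, clip=0.05):
    """Map signed ridge height (e.g. millimetres) linearly onto 0..255."""
    h = np.clip(height, -clip, clip)
    return np.uint8(np.round((h + clip) / (2 * clip) * 255))
```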

06-06-2013 publication date

TARGET LOCATING METHOD AND A TARGET LOCATING SYSTEM

Number: US20130141540A1
Assignee: SAAB AB

A target locating method and a target locating system. Images of a target area are recorded utilizing recording devices carried by a vehicle. The recorded images of the target area are matched with a corresponding three dimensional area of a three dimensional map including transferring a target indicator from the recorded images of the target area to the three dimensional map of the corresponding target area. The coordinates of the target indicator position are read in the three dimensional map. The read coordinates of the target indicator position are made available for position requiring equipment. 1. A target locating method , the method comprising:recording images of a target area utilizing recording devices carried by a vehicle,making a three dimensional image of the target area available from a stored three dimensional map covering the target area and surroundings of the target area,matching the recorded images of the target area with a corresponding three dimensional area of the three dimensional map comprising transferring a target indicator from the recorded images of the target area to the three dimensional map of the corresponding target area,reading the coordinates of the target indicator position in the three dimensional map, andmaking the read coordinates of the target indicator position available for position requiring equipment.2. The method according to claim 1 , wherein the making claim 1 , matching claim 1 , reading and making are carried out in a ground based system.3. The method according to claim 2 , wherein the vehicle carrying recording devices is separated from the ground based system.4. The method according to claim 1 , wherein the read coordinates are made available for position requiring equipment operating as a target combating equipment.5. The method according to claim 1 , wherein the recording devices are carried by an unmanned aerial vehicle.6. The method according to claim 1 , wherein the recording of images of a target area ...

06-06-2013 publication date

MONOCULAR 3D POSE ESTIMATION AND TRACKING BY DETECTION

Number: US20130142390A1

Methods and apparatus are described for monocular 3D human pose estimation and tracking, which are able to recover poses of people in realistic street conditions captured using a monocular, potentially moving camera. Embodiments of the present invention provide a three-stage process involving estimating a 3D pose of each of the multiple objects using an output of 2D tracking-by-detection and 2D viewpoint estimation. The present invention provides a sound Bayesian formulation to address the above problems. The present invention can provide articulated 3D tracking in realistic street conditions.

1.-15. (canceled)

17. The image processor of claim 16, further comprising one or more part based detectors for detecting parts of the multiple objects for supply to the 2D pose detector.

18. The image processor of claim 17, wherein the one or more part based detectors make use of a pictorial structure model of the object and/or wherein the one or more part based detectors are viewpoint specific detectors.

19. The image processor of claim 17, further comprising an SVM detector, the output of the one or more part based detectors being fed to the SVM detector, or further comprising a classifier, the output of the one or more part based detectors being fed to the classifier.

20. The image processor of claim 16, the 2D tracking and viewpoint estimation computation part comprising a tracklet extractor.

21. The image processor of claim 20, further comprising a viewpoint estimator for estimating a sequence of viewpoints of each tracklet obtained from the tracklet extractor.

23. The method of claim 22, wherein estimating the 2D pose comprises detecting parts of each of the multiple objects in the image.

24. The method of claim 22, wherein detecting parts of the multiple objects makes use of a pictorial structure model of each of the multiple objects and/or wherein detecting parts of the multiple objects is viewpoint specific.

25. The method of ...

20-06-2013 publication date

IMAGE INFORMATION PROCESSING APPARATUS AND METHOD FOR CONTROLLING THE SAME

Number: US20130156268A1
Author: Sonoda Tetsuri
Assignee: CANON KABUSHIKI KAISHA

An image information processing apparatus performs three-dimensional measurement of an object using a captured image obtained by projecting onto the object a projection pattern containing a two-dimensional symbol sequence that is obtained by assigning a predetermined symbol to each code in a projection code string in which a plurality of types of codes are arranged two-dimensionally and capturing an image of the object. The apparatus obtains an imaging pattern by extracting a symbol sequence from the captured image, and converts symbol dots in the imaging pattern into corresponding codes, thereby obtaining an imaging code string. The apparatus obtains a predetermined number of codes according to one sampling feature selected from a plurality of types of sampling features, generates an information code string by arranging the obtained codes, and determining the correspondence between the information code string and a part of the projection code string, thereby performing three-dimensional measurement. 1. An image information processing apparatus that performs three-dimensional measurement of an object using a captured image obtained by projecting onto the object a projection pattern obtained by assigning , to each code in a projection code string in which a plurality of types of codes are two-dimensionally arranged , a symbol that differs from one type of code to another , the apparatus comprising:obtaining means for obtaining an imaging pattern by extracting symbols from the captured image;converting means for converting each symbol in the imaging pattern obtained by the obtaining means into a corresponding code to obtain an imaging code string;generating means for generating an information code string by obtaining a predetermined number of codes from the imaging code string according to a sampling feature in which sampling positions of the predetermined number of codes and a sequence of the codes are defined and arranging the obtained codes; andmeasuring means for ...
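The correspondence step can be pictured in one dimension: decode the observed symbols into codes, sample an information code string at defined positions, and slide it along the known projection code string until it matches. A toy sketch of that search (the codes and sampling positions are invented, and the real pattern is two-dimensional, so this only shows the structure of the matching):

```python
import numpy as np

def find_correspondence(imaging_codes, projection_codes, positions):
    """Sample an information code string from the imaging code string at the
    given sampling positions, then locate it in the projection code string."""
    info = [imaging_codes[p] for p in positions]
    n = len(info)
    for offset in range(len(projection_codes) - n + 1):
        if list(projection_codes[offset:offset + n]) == info:
            return offset          # position within the projected pattern
    return None                    # no correspondence found

projection = [0, 1, 0, 2, 1, 2, 0, 0, 1, 2, 2, 0, 1]   # known projected codes
observed = projection[4:11]                             # camera sees a window of it
print(find_correspondence(observed, projection, positions=[0, 1, 2, 3]))  # -> 4
```

Recovering the offset into the projection code string is what turns a decoded camera pixel into a projector column, from which depth follows by triangulation.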

11-07-2013 publication date

SYSTEM AND METHOD FOR IDENTIFYING AN APERTURE IN A REPRESENTATION OF AN OBJECT

Number: US20130177234A1

An iterative process for determining an aperture in a representation of an object is disclosed. The object is received and a bounding box corresponding thereto is determined. The bounding box includes a plurality of initial voxels and the object is embedded therein. An intersecting set of initial voxels is determined, as well as an internal set and an external set of initial voxels. The resolution of the voxels is iteratively decreased until the ratio of internal voxels to external voxels exceeds a predetermined threshold. The voxels corresponding to the final iteration are the final voxels. An internal set of final voxels is determined. A union set of initial voxels is determined indicating an intersection between the external set of initial voxels and the internal set of final voxels. From the union set of initial voxels and the external set of initial voxels, a location of an aperture is determined. 1. A method for identifying an aperture in a three-dimensional (3D) representation of an object , the method comprising:receiving a plurality of two-dimensional (2D) triangles representing the object;determining a 3D bounding box having dimensions sufficient to encapsulate the object, the 3D bounding box including a plurality of initial voxels for the 3D bounding box, wherein initial voxels of the plurality each have equal initial dimensions;determining an initial intersecting set of initial voxels from the plurality of initial voxels, wherein each initial voxel of the initial intersecting set of initial voxels intersects with at least one of the plurality of 2D triangles;determining an initial external set of initial voxels from the plurality of initial voxels, the initial external set of initial voxels being exclusive from the initial intersecting set of initial voxels and not wholly encapsulated by voxels from the initial intersecting set of initial voxels;determining a plurality of final voxels corresponding to the bounding box, wherein each of the plurality of ...

11-07-2013 publication date

METHOD AND APPARATUS FOR PROCESSING DEPTH IMAGE

Number: US20130177236A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

An apparatus and method for processing a depth image. A depth image may be generated with reduced noise and motion blur, using depth images generated during different integration times that are generated based on the noise and motion blur of the depth image. 1. A method of processing a depth image , the method comprising:determining at least one spatio-temporal neighboring pixel of a pixel of an input depth image;calculating a weight value of the at least one spatio-temporal neighboring pixel; andgenerating an output depth image by updating a depth value of the pixel of the input depth image based on a depth value and the calculated weight value of the at least one spatio-temporal neighboring pixel.2. The method of claim 1 , wherein the input depth image corresponds to an intermediate input depth image claim 1 , andthe determining comprises:identifying a first correspondence between a previous input depth image and a next input depth image;detecting a motion vector based on the identified first correspondence;estimating a second correspondence between the intermediate input depth image and one of the previous and next input depth images based on the detected motion vector; anddetermining at least one spatio-temporal neighboring pixel of a pixel of the intermediate input depth image based on the estimated second correspondence,wherein the previous input depth image corresponds to an input depth image preceding the intermediate input depth image in time, and the next input depth image corresponds to an input depth image following the intermediate input depth image in time.3. The method of claim 2 , wherein an integration time of the previous input depth image and an integration time of the next input depth image are shorter than an integration time of the intermediate input depth image.4. The method of claim 2 , wherein the identifying comprises:calculating an optical flow between a previous infrared (IR) intensity image and a next IR intensity image; andidentifying ...
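The core update in claim 1 — re-estimating a pixel's depth as a weighted combination of its spatio-temporal neighbours — can be sketched directly. Here the weight is a Gaussian on the depth difference, which is an assumption; the patent leaves the exact weighting open:

```python
import numpy as np

def update_depth(frames, t, y, x, radius=1, sigma_d=0.05):
    """Update depth at (t, y, x) as a weighted mean over a spatio-temporal
    neighbourhood (previous, current, next frame); weights fall off with
    depth difference so moving edges are not smeared."""
    d0 = frames[t, y, x]
    num = den = 0.0
    for dt in (-1, 0, 1):
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                tt, yy, xx = t + dt, y + dy, x + dx
                if 0 <= tt < frames.shape[0] and 0 <= yy < frames.shape[1] \
                        and 0 <= xx < frames.shape[2]:
                    d = frames[tt, yy, xx]
                    w = np.exp(-((d - d0) ** 2) / (2 * sigma_d ** 2))
                    num += w * d
                    den += w
    return num / den
```

In the patent the temporal neighbours are found along an estimated motion vector rather than at the same pixel location; the fixed (t-1, t, t+1) window above is the simplification.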

01-08-2013 publication date

IMAGE ENCODING DEVICE, IMAGE ENCODING METHOD, IMAGE DECODING DEVICE, IMAGE DECODING METHOD, AND COMPUTER PROGRAM PRODUCT

Number: US20130195350A1
Assignee: KABUSHIKI KAISHA TOSHIBA

According to an embodiment, an image encoding device includes an image generating unit, a first filtering unit, a prediction image generating unit, and an encoding unit. The image generating unit is configured to generate a first parallax image corresponding to a first viewpoint of an image to be encoded, with the use of at least one of depth information and parallax information of a second parallax image corresponding to a second viewpoint being different than the first viewpoint. The first filtering unit is configured to perform filtering on the first parallax image based on first filter information. The prediction image generating unit is configured to generate a prediction image with a reference image, the reference image being the first parallax image on which the filtering has been performed. The encoding unit is configured to generate encoded data from the image and the prediction image.

1. An image encoding device comprising: an image generating unit configured to generate a first parallax image corresponding to a first viewpoint of an image to be encoded, with the use of at least one of depth information and parallax information of a second parallax image corresponding to a second viewpoint being different than the first viewpoint; a first filtering unit configured to perform filtering on the first parallax image based on first filter information; a prediction image generating unit configured to generate a prediction image with a reference image, the reference image being the first parallax image on which the filtering has been performed; and an encoding unit configured to generate encoded data from the image and the prediction image.

2. The device according to claim 1, wherein the encoding unit further encodes the first filter information and appends the encoded first filter information to the encoded data.

3. The device according to claim 2, further comprising a second filtering unit configured to perform filtering on the second ...

08-08-2013 publication date

AUTOMATED VASCULAR REGION SEPARATION IN MEDICAL IMAGING

Number: US20130202170A1

A system and/or method automatically identifies one or more vascular regions in a medical image or set of medical images. For example, the system/method may automatically identify vascular structures as belonging to the left carotid, right carotid, and/or basilar vascular regions in the head. The system/method takes as input the medical image(s) and automatically identifies one or more vascular regions. The system/method may also automatically generate MIP renderings of the identified region or regions. 1. A method comprising:determining a probability for each voxel in a patient-specific image data set that the voxel belongs to one or more vascular regions of interest;segmenting patient-specific vasculature in the patient-specific image to generate a set of nodes and edges representative of the patient-specific vasculature;classifying each node and edge based on one or more statistics associated with each node and edge; anddetermining to which of the one or more vascular regions of interest each voxel in the patient-specific image data set belongs based on the probability and the classifications.2. The method of wherein determining to which of the one or more vascular regions of interest each voxel belongs comprises determining whether each voxel in the patient-specific image data set belongs to a left carotid vascular region claim 1 , a right carotid vascular region or a basilar vascular region.3. The method of further comprising automatically generating a MIP rendering of the vascular region of interest based on the association.4. The method of further comprising displaying the MIP rendering on a user interface.5. The method of wherein determining a probability for each voxel in a patient-specific image data set that the voxel belongs to one or more vascular regions of interest comprises assigning each voxel in the patient-specific image data set a patient-specific probability based on a corresponding voxel in each of one or more probabilistic atlases claim 1 , ...

08-08-2013 publication date

System and Method for Manipulating Data Having Spatial Co-ordinates

Number: US20130202197A1

Systems and methods are provided for extracting various features from data having spatial coordinates. The systems and methods may identify and extract data points from a point cloud, where the data points are considered to be part of the ground surface, a building, or a wire (e.g. power lines). Systems and methods are also provided for enhancing a point cloud using external data (e.g. images and other point clouds), and for tracking a moving object by comparing images with a point cloud. An objects database is also provided which can be used to scale point clouds to be of similar size. The objects database can also be used to search for certain objects in a point cloud, as well as recognize unidentified objects in a point cloud. 1. A method for a computing device to enhance a set of data points with three-dimensional spatial coordinates using an image captured by a camera device , the method comprising:the computing device obtaining the image, the image comprising pixels, each of the pixels associated with a data value;the computing device generating mapping information for associating one or more data points and one or more corresponding pixels; andthe computing device modifying the set of data points using the mapping information and the data values of the one or more corresponding pixels.2. The method of claim 1 , wherein generating mapping information comprises:obtaining one or more interior orientation parameters of the camera device;obtaining one or more exterior orientation parameters of the camera device; andprojecting a line of sight from the one or more data points onto the one or more corresponding pixels using at least one of the one or more interior orientation parameters and the one or more exterior orientation parameters.3. The method of claim 1 , wherein modifying the set of data points using the mapping information comprises associating one or more data points with the data value of the corresponding pixel.4. The method of claim 1 , wherein ...
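Claim 2's mapping step is classical photogrammetry: push each 3D point through the camera's exterior orientation (R, t) and interior orientation (K), and attach the value of the pixel it lands on. A self-contained sketch with invented names; occlusion handling (the line-of-sight test in the claim) is omitted:

```python
import numpy as np

def colorize_points(points, image, K, R, t):
    """Attach image values to 3D points: transform each point into the camera
    frame (exterior orientation R, t), project with intrinsics K (interior
    orientation), and sample the pixel it lands on."""
    cam = points @ R.T + t                       # exterior orientation
    in_front = cam[:, 2] > 0
    uv = np.zeros((len(points), 2), dtype=int)
    proj = cam[in_front] @ K.T                   # interior orientation
    uv[in_front] = np.round(proj[:, :2] / proj[:, 2:3]).astype(int)
    h, w = image.shape[:2]
    valid = in_front & (uv[:, 0] >= 0) & (uv[:, 0] < w) \
            & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = np.zeros((len(points),) + image.shape[2:], dtype=image.dtype)
    colors[valid] = image[uv[valid, 1], uv[valid, 0]]
    return colors, valid
```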

05-09-2013 publication date

Method for Reconstruction of Multi-Parameter Images of Oscillatory Processes in Mechanical Systems

Number: US20130230233A1

A method for investing vibration processes in elastic mechanical systems. The technical result of the proposed invention is the creation of a spectral set of multidimensional images, mapping time-related three-dimensional vector parameters of metrological, and/or design-analytical, and/or design vibration parameters of mechanical systems. Reconstructed images with various dimensionality that are integrated in various combinations depending on the target function can be used as a homeostatic portrait or a cybernetic image of vibration processes in mechanical systems for objective evaluation of current operating conditions in real time. The invention can be widely used for improving the effectiveness of monitoring and investigating vibration processes in mechanical systems (objects) in the fields of mechanical engineering, construction, acoustics, etc. 1. A method for reconstructing multi-parameter images of vibration processes in mechanical systems comprising the steps of measuring vibration parameters of mechanical systems in a specified frequency range using a 3D transducer of mechanical vibrations, and determining the deformation vector of an element of an investigated object at the point of transducer installation, wherein the method further comprises the steps of creating a 3D model of the investigated object, mapping the vibration processes' spectrum of measured and physically related design vector and scalar parameters on the model, forming contour characteristics of reconstructed parameters, approximating three dimensions of continuously measured parameters at specified discrete points of elements of the investigated object, creating graphic cuts and sections of the contour characteristics of the reconstructed parameters, and graphically extracting local zones and fronts of diagnostic parameters based on specified criteria which makes it possible to create multi-parameter cybernetic images of vibration processes of a mechanical system's investigated objects. ...

12-09-2013 publication date

LEARNING-BASED ESTIMATION OF HAND AND FINGER POSE

Number: US20130236089A1
Assignee: PRIMESENSE LTD.

A method for processing data includes receiving a depth map of a scene containing a human hand, the depth map consisting of a matrix of pixels having respective pixel depth values. The method continues by extracting from the depth map respective descriptors based on the depth values in a plurality of patches distributed in respective positions over the human hand, and matching the extracted descriptors to previously-stored descriptors in a database. A pose of the human hand is estimated based on stored information associated with the matched descriptors. 1. A method for processing data , comprising:receiving a depth map of a scene containing a human hand, the depth map comprising a matrix of pixels having respective pixel depth values;extracting from the depth map respective descriptors based on the depth values in a plurality of patches distributed in respective positions over the human hand;matching the extracted descriptors to previously-stored descriptors in a database; andestimating a pose of the human hand based on stored information associated with the matched descriptors.2. The method according to claim 1 , wherein estimating the pose comprises applying kinematics based on anatomical constraints of the hand in processing the descriptors.3. The method according to claim 1 , and comprising receiving a color or grayscale image of the human hand claim 1 , wherein extracting the descriptors comprises incorporating information from the color or grayscale image in the descriptors together with the depth values.4. The method according to claim 1 , wherein estimating the pose comprises detecting that a part of the hand is occluded in the depth map claim 1 , and excluding the occluded part from estimation of the pose.5. The method according to claim 4 , wherein estimating the pose comprises choosing a most anatomically probable hand configuration in response to detecting that the part of the hand is occluded.6. The method according to claim 1 , wherein estimating the ...
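A minimal version of the extract-match-estimate loop: compute a centre-relative depth descriptor per patch, find its nearest neighbour in a descriptor database, and combine the stored poses of the matches. The grid descriptor and the plain averaging are simplifications for illustration, not PrimeSense's implementation:

```python
import numpy as np

def patch_descriptor(depth, cy, cx, size=16, bins=4):
    """Descriptor for one patch: mean depth relative to the patch centre,
    pooled over a bins x bins grid (patch assumed fully inside the image)."""
    half = size // 2
    patch = depth[cy - half:cy + half, cx - half:cx + half] - depth[cy, cx]
    cells = patch.reshape(bins, size // bins, bins, size // bins)
    return cells.mean(axis=(1, 3)).ravel()

def estimate_pose(descriptors, db_descriptors, db_poses):
    """Match each extracted descriptor to its nearest stored descriptor and
    average the associated stored poses as a crude pose estimate."""
    votes = []
    for d in descriptors:
        i = np.argmin(np.linalg.norm(db_descriptors - d, axis=1))
        votes.append(db_poses[i])
    return np.mean(votes, axis=0)
```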

26-09-2013 publication date

Applying Perceptually Correct 3D Film Noise

Number: US20130251241A1

Perceptually correct noises simulating a variety of noise patterns or textures may be applied to stereo image pairs each of which comprises a left eye (LE) image and a right eye (RE) image that represent a 3D image. LE and RE images may or may not be noise removed. Depth information of pixels in the LE and RE images may be computed from, or received with, the LE and RE images. Desired noise patterns are modulated onto the 3D image or scene so that the desired noise patterns are perceived to be part of 3D objects or image details, taking into account where the 3D objects or image details are on a z-axis perpendicular to an image rendering screen on which the LE and RE images are rendered. 1. A method comprising:accessing a left eye (LE) image and a right eye (RE) image that represent a 3D image;filtering the LE image and the RE image to reduce undesirable noise;determining, based on depth information relating to the LE and RE image, one or more first depths of a first 3D image feature in the 3D image; andapplying, after the filtering, one or more noise patterns to the first 3D image feature in the 3D image based on the one or more first depths of the first 3D image feature in the 3D image.2. The method of claim 1 , further comprising:determining, based on the depth information relating to the LE and RE image, one or more second depths of a second 3D image feature in the 3D image; andapplying the one or more noise patterns to the second 3D image feature in the 3D image based on the one or more second depths of the second 3D image feature in the 3D image;wherein the one or more noise patterns are applied to the first 3D image feature with one or more first spatial frequency components that are different from one or more second spatial frequency components with which the one or more noise patterns are applied to the second 3D image feature.3. The method of claim 2 , wherein the first 3D image feature is closer to a viewer of the 3D image than the second 3D image feature ...
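One way to make grain "stick" to 3D content, in the spirit of the abstract, is to add the same noise field to both eyes but shift it horizontally by each pixel's disparity, so the noise is perceived at the depth of the feature it sits on. A rough sketch under that assumption (grayscale images, invented amplitude; frequency shaping per depth, which the claims also cover, is omitted):

```python
import numpy as np

def apply_3d_noise(le, re, disparity, amplitude=4.0, seed=0):
    """Add one noise pattern to the LE image, and the disparity-shifted copy of
    the same pattern to the RE image, so the grain fuses at scene depth."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, 1.0, le.shape)
    le_out = le + amplitude * noise
    cols = np.arange(le.shape[1])
    re_out = re.astype(float).copy()
    for y in range(le.shape[0]):
        # RE pixel x corresponds to LE pixel x + d, so sample the noise there
        src = np.clip(cols + disparity[y].astype(int), 0, le.shape[1] - 1)
        re_out[y] += amplitude * noise[y, src]
    return le_out, re_out
```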

10-10-2013 publication date

THREE-DIMENSIONAL IMAGE PROCESSING APPARATUS AND THREE-DIMENSIONAL IMAGE PROCESSING METHOD

Number: US20130266213A1

A three-dimensional image processing apparatus includes an obtainer that obtains three-dimensional image information including information of a first image and a second image, a shade information obtainer that obtains shade information from the information of the first image and/or the second image, and a disparity adjuster that adjusts a disparity of a subject contained in the first and the second images based on the shade information. 1. A three-dimensional image processing apparatus comprising:an obtainer that obtains three-dimensional image information including information of a first image and a second image;a shade information obtainer that obtains shade information from the information of the first image and/or the second image; anda disparity adjuster that adjusts a disparity of a subject contained in the first and the second images based on the shade information.2. The three-dimensional image processing apparatus according to claim 1 , whereinthe shade information obtainer comprises a low-frequency eliminator that eliminates predetermined low-frequency components from the information of the first image and/or the second image, andthe shade information obtainer obtains the shade information based on the information of the image of which predetermined low-frequency components are eliminated.3. The three-dimensional image processing apparatus according to claim 2 , whereinthe shade information obtainer further comprises a large amplitude eliminator that further eliminates components exceeding predetermined amplitude from the information of the image of which predetermined low-frequency components are eliminated, andthe shade information obtainer obtains the shade information based on the information of the image of which components exceeding the predetermined amplitude are eliminated.4. The three-dimensional image processing apparatus according to claim 3 , whereinthe shade information obtainer further comprises a high-frequency eliminator that further ...
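Claims 2 and 3 chain two eliminators: remove low-frequency components (which a subtracted heavy blur accomplishes) and then remove components exceeding a predetermined amplitude. A direct sketch of that chain, with invented sigma and amplitude threshold:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shade_information(image, sigma=25.0, max_amp=0.1):
    """Obtain shade information per the claimed chain: eliminate low-frequency
    components, then eliminate components exceeding a given amplitude."""
    img = image.astype(float)
    low = gaussian_filter(img, sigma=sigma)
    high = img - low                              # low-frequency eliminator
    limit = max_amp * img.max()
    return np.where(np.abs(high) <= limit, high, 0.0)  # amplitude eliminator
```

What survives — small-amplitude, mid/high-frequency variation — is a plausible stand-in for shading, which the disparity adjuster can then use.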

17-10-2013 publication date

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM

Number: US20130272578A1
Author: Watanabe Daisuke
Assignee: CANON KABUSHIKI KAISHA

An information processing apparatus includes a first setting unit setting a relative position-posture relationship between a 3D-shaped model of an object and a viewpoint from which the model is observed as a base position-posture, a detector detecting geometric features of the model observed from the viewpoint in the base position-posture as base geometric features, a second setting unit setting a relative position-posture relationship between the model and a viewpoint as a reference position-posture, a retrieval unit retrieving reference geometric features corresponding to the base geometric features of the model observed from the viewpoint in the reference position-posture, a first calculation unit calculating similarity degrees between the base geometric features and the reference geometric features, and a second calculation unit calculating evaluation values of correspondences between the base geometric features and the reference geometric features in accordance with the similarity degrees. 1. An information processing apparatus comprising:a first setting unit configured to set a relative position-and-posture relationship between a three-dimensional-shaped model of an object and a viewpoint from which the three-dimensional-shaped model is observed as a base position-posture;a detector configured to detect geometric features of the three-dimensional-shaped model observed from the viewpoint in the base position-posture as base geometric features;a second setting unit configured to set a relative position-and-posture relationship between the three-dimensional-shaped model of the object and a viewpoint from which the three-dimensional-shaped model is observed as a reference position-posture which is different from the base position-posture;a retrieval unit configured to retrieve reference geometric features corresponding to the base geometric features of the three-dimensional-shaped model observed from the viewpoint in the reference position-posture;a first ...

24-10-2013 publication date

Method for Enabling Authentication or Identification, and Related Verification System

Number: US20130279765A1
Assignee: MORPHO

The invention relates to a method for enabling the authentication or identification of a person () using a first electronic device () comprising an image-capturing unit and a data-transmission unit, the method including a step of registering said person in a verification system (). The registration step includes the steps of: capturing, using the image-capturing unit of said electronic device, a first image (h) of at least one object (O) of any kind that is secretly selected by the person; and transmitting said first image to the verification system by means of said data transmission device of said first electronic device. 1. A method enabling the authentication or identification of a person , using a first electronic device comprising an image-capturing unit and a data transmission unit , said method including a phase of registering said person with a verification system , said registration phase comprising the following steps:capturing, using an image-capturing unit of said first electronic device, a first image of at least one object of any kind that is secretly selected by the person;transmitting said first image to the verification system by means of the data transmission unit of said first electronic device.2. The method according to claim 1 , wherein the object chosen by the person is an object that the person always has on him or within reach.3. The method according to claim 1 , wherein the first image is sent to the verification system in a secure manner.4. The method according to claim 1 , wherein at least a first personal data item for said person is sent to the verification system as a supplement to the first image.5. The method according to claim 1 , further including an authentication or identification phase relative to said person claim 1 , using a second electronic device claim 1 , the authentication or identification phase comprising the following steps carried out in the verification system:receiving from the second electronic device a second image ...

31-10-2013 publication date

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND COMPUTER PROGRAM PRODUCT

Number: US20130287292A1
Assignee: KABUSHIKI KAISHA TOSHIBA

According to an embodiment, an image processing apparatus includes an obtaining unit and an image processing unit. The obtaining unit is configured to obtain depth information for each position in an image. The image processing unit is configured to switch between a first sharpening process and a second sharpening process in accordance with whether the image contains a predetermined area. The first sharpening process performs non-uniform sharpening on the image on the basis of the depth information; and the second sharpening process performs uniform sharpening on the image. 1. An image processing apparatus comprising:an obtaining unit configured to obtain depth information for each position in an image; andan image processing unit configured to switch between a first sharpening process and a second sharpening process in accordance with whether the image contains a predetermined area, the first sharpening process performing non-uniform sharpening on the image on the basis of the depth information, the second sharpening process performing uniform sharpening on the image.2. The apparatus according to claim 1 , further comprising a sky determining unit configured to determine whether the image contains a sky claim 1 , whereinthe image processing unit performs the first sharpening process on the image when the sky determining unit determines that the image contains the sky.3. The apparatus according to claim 2 , further comprising a boundary determining unit configured to determine whether the image contains a boundary line between an area whose depth continuously changes and an area whose depth is constant claim 2 , whereinthe image processing unit performs the first sharpening process on the image when the sky determining unit determines that the image contains the sky and the boundary determining unit determines that the image contains the boundary line.4. The apparatus according to claim 2 , whereinthe sky determining unit includes a reliability calculator configured ...
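The switch the abstract describes — depth-weighted sharpening when the predetermined area (e.g. sky) is present, uniform sharpening otherwise — can be sketched with unsharp masking; the fall-off with depth, the detection predicate, and all constants are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp(image, amount):
    """Unsharp mask; amount may be a scalar or a per-pixel array."""
    img = image.astype(float)
    blur = gaussian_filter(img, sigma=2.0)
    return img + amount * (img - blur)

def sharpen(image, depth, contains_sky):
    """First process: strength falls off with depth (non-uniform sharpening).
    Second process: one strength everywhere (uniform sharpening)."""
    if contains_sky:
        amount = 1.5 * (1.0 - depth / depth.max())   # nearer -> sharper
        return unsharp(image, amount)
    return unsharp(image, 1.0)
```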

14-11-2013 publication date

APPARATUS AND METHOD FOR PROCESSING 3D INFORMATION

Number: US20130301907A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

An apparatus and method for processing three-dimensional (3D) information is described. The 3D information processing apparatus may measure first depth information of an object using a sensor apparatus such as a depth camera, may estimate a foreground depth of the object, a background depth of a background, and a degree of transparency of the object, may estimate second depth information of the object based on the estimated foreground depth, background depth, and degree of transparency, and may determine the foreground depth, the background depth, and the degree of transparency through comparison between the measured first depth information and the estimated second depth information. 1. An apparatus for processing three-dimensional (3D) information , the apparatus comprising:a processor to control one or more processor-executable units;a measuring unit to measure first depth information of an object;an estimating unit to estimate second depth information of the object;a comparing unit to compare the measured first depth information and the estimated second depth information; anda determining unit to determine third depth information of the object based on the comparison result.2. The apparatus of claim 1 , wherein the estimating unit estimates a foreground depth of the object claim 1 , a background depth of a background claim 1 , and a degree of transparency of the object claim 1 , and estimates the second depth information through predetermined modeling based on the estimated foreground depth claim 1 , the background depth claim 1 , and the degree of transparency.3. The apparatus of claim 2 , wherein the estimating unit estimates the second depth information based on foreground depth information calculated using a first reflected signal that is reflected from the object at the estimated foreground depth claim 2 , background depth information calculated using a second reflected signal that passes through the object and is reflected from the background at the ...
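A toy rendering of the estimate-compare-determine loop: model the sensed depth of a semi-transparent object as a blend of a foreground and a background reflection weighted by transparency, and search for the triple that best reproduces the measurement. The linear blend is a deliberate oversimplification of the reflected-signal model in the claims, and with it several triples can explain one measurement, so treat this purely as structure:

```python
import numpy as np

def modeled_depth(fg, bg, alpha):
    """Toy mixing model: a semi-transparent surface returns a blend of the
    foreground and background reflections (real sensor mixing is richer)."""
    return alpha * fg + (1.0 - alpha) * bg

def fit_transparency(measured, fg_candidates, bg_candidates, alphas):
    """Grid-search the (foreground depth, background depth, transparency)
    triple whose modeled depth best matches the measured depth."""
    best, best_err = None, np.inf
    for fg in fg_candidates:
        for bg in bg_candidates:
            for a in alphas:
                err = abs(modeled_depth(fg, bg, a) - measured)
                if err < best_err:
                    best, best_err = (fg, bg, a), err
    return best
```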

28-11-2013 publication date

Image processing device and image processing method

Number: US20130315473A1
Author: Yoshitomo Takahashi
Assignee: Sony Corp

The present technique relates to an image processing device and an image processing method that enable generation of high-quality color images and depth images of the viewpoints other than the reference point on the receiving end even if the precision of the reference-point depth image is low when the occlusion regions of color images and depth images of the viewpoints other than the reference point are transmitted. A warping unit performs a foreground-prioritized warping operation toward the left viewpoint on the reference-point depth image. Using the reference-point depth image of the left viewpoint obtained as a result of the warping operation, an occlusion determining unit detects a left-viewpoint occlusion region that appears when a viewpoint is converted from the reference point to the left viewpoint. The present technique can be applied to 3D image processing devices, for example.
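The warping unit's foreground-prioritized warp and the occlusion determination can be sketched as a forward warp with a nearest-wins rule: every target pixel that no source pixel reaches is the left-viewpoint occlusion region. The sketch assumes strictly positive depths and a toy inverse-depth disparity model; names are invented:

```python
import numpy as np

def warp_depth_to_left(depth, baseline_px):
    """Forward-warp a reference-view depth image to the left viewpoint with
    foreground priority: nearer pixels win collisions. Pixels never written
    form the left-viewpoint occlusion region."""
    h, w = depth.shape
    warped = np.zeros_like(depth)
    written = np.zeros((h, w), dtype=bool)
    disparity = (baseline_px / depth).astype(int)   # toy disparity; depth > 0
    for y in range(h):
        for x in range(w):
            xl = x + disparity[y, x]
            if 0 <= xl < w and (not written[y, xl] or depth[y, x] < warped[y, xl]):
                warped[y, xl] = depth[y, x]         # foreground-prioritized
                written[y, xl] = True
    occlusion = ~written
    return warped, occlusion
```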

28-11-2013 publication date

METHOD FOR GENERATING, TRANSMITTING AND RECEIVING STEREOSCOPIC IMAGES, AND RELATED DEVICES

Number: US20130315474A1

A method for generating a composite image of a stereoscopic video stream includes a pair of a right image and a left image of a scene, the right image and the left image being such that, when viewed by a spectator's right eye and left eye, respectively, they cause the spectator to perceive the scene as being three-dimensional, the method includes the steps of: generating a composite image including all the pixels of the pair of right and left images, defining a grid of macroblocks of the composite image, each macroblock of the grid including a plurality of adjacent pixels, decomposing one image of the pair of right and left images into a plurality of component regions including a plurality of contiguous pixels, processing the component regions in a manner such as to generate corresponding derived regions, the derived regions including at least all the pixels of a corresponding component region and being such that they can be decomposed into an integer number of macroblocks, arranging the non-decomposed image of the pair and the plurality of derived regions in the composite image in a manner such that all the edges of the non-decomposed image and of the derived regions coincide with edges of macroblocks of the grid. 1. A method for generating a composite image of a stereoscopic video stream comprising a pair of a right image (R) and a left image (L) of a scene , said right image (R) and left image (L) being such that , when viewed by a spectator's right eye and left eye , respectively , they cause the spectator to perceive the scene as being three-dimensional , said method comprising the steps of:generating a composite image (C) comprising all the pixels of the pair of right (R) and left (L) images,defining a grid of macroblocks of the composite image (C), each macroblock of said grid comprising a plurality of adjacent pixels,decomposing one image of said pair of right image and left image into a plurality of component regions (Ri) comprising a plurality of ...

05-12-2013 publication date

Image processing apparatus and method for three-dimensional (3D) image

Number: US20130322738A1
Assignee: SAMSUNG ELECTRONICS CO LTD

An image processing apparatus and method for a three-dimensional (3D) image is provided. The image processing apparatus may include a parameter setting unit to set a first parameter related to a color image, and a parameter determining unit to determine an optimal second parameter related to a depth image, using the first parameter.

02-01-2014 publication date

METHOD AND SYSTEM FOR ENSURING STEREO ALIGNMENT DURING PIPELINE PROCESSING

Number: US20140003706A1

Apparatus and methods are provided to implement a technique for adjusting images, such as for addressing lens distortions. In one implementation, a computer system uses un-warping to address differences between two camera lenses. After unwarping two stereo images, the images are re-warped but re-warped using a common set of parameters. 1. A method for adjusting images , comprising:a. recording a first image with a first camera having a first lens distortion;b. recording a second image with a second camera having a second lens distortion;c. unwarping the first image with a first lens distortion transformation;d. unwarping the second image with a second lens distortion transformation;e. rewarping the first image using a common lens distortion transformation; andf. rewarping the second image using the common lens distortion transformation.2. The method of claim 1 , wherein the common lens distortion transformation is an inverse transformation of the first lens distortion transformation or the second lens distortion transformation.3. The method of claim 1 , wherein the common lens distortion transformation is a linear combination of an inverse of the first lens distortion transformation and an inverse of the second lens distortion transformation.4. The method of claim 3 , wherein the common lens distortion transformation is an inverse of an average of the first lens distortion transformation and the second lens distortion transformation.5. The method of claim 1 , further comprising changing the first or second images claim 1 , or both claim 1 , before warping the first image.6. The method of claim 5 , wherein the changing includes a step of match-moving.7. The method of claim 5 , wherein the changing includes a step of plate preparation.8. The method of claim 5 , wherein the changing includes a step of animating.9. The method of claim 5 , wherein the changing includes a step of compositing.10. The method of claim 5 , wherein the changing includes a step of rendering.11. ...
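The unwarp/rewarp pipeline from the claims, sketched with a one-coefficient radial model standing in for the lens distortion transformations (the real transformations are the patent's; averaging the coefficients below is just one choice the claims allow, alongside inverses and other linear combinations):

```python
import numpy as np

def radial_distort(xy, k):
    """Apply a one-coefficient radial distortion to normalized coordinates."""
    r2 = np.sum(xy ** 2, axis=-1, keepdims=True)
    return xy * (1.0 + k * r2)

def radial_undistort(xy, k, iters=5):
    """Invert the same model by fixed-point iteration."""
    und = xy.copy()
    for _ in range(iters):
        r2 = np.sum(und ** 2, axis=-1, keepdims=True)
        und = xy / (1.0 + k * r2)
    return und

def rewarp_pair(left_xy, right_xy, k_left, k_right):
    """Un-warp each eye with its own coefficient, then re-warp both with one
    common coefficient so the pair shares a single distortion."""
    k_common = 0.5 * (k_left + k_right)
    left = radial_distort(radial_undistort(left_xy, k_left), k_common)
    right = radial_distort(radial_undistort(right_xy, k_right), k_common)
    return left, right
```

The point of the common re-warp is that whatever distortion remains is identical in both eyes, so it no longer introduces vertical or differential disparity between them.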

16-01-2014 publication date

Method and apparatus for estimating image motion using disparity information of a multi-view image

Number: US20140015936A1
Assignee: SAMSUNG ELECTRONICS CO LTD

A method and apparatus for processing a multi-view image is provided. The method includes: extracting disparity information between an image of a first point of view and an image of a second point of view; and estimating a motion between two sequential images of the first point of view or the second point of view using the extracted disparity information. The apparatus may include a processor which is configured to extract disparity information between an image of a first point of view and an image of a second point of view in the multi-view image and is further configured to estimate a motion using the extracted disparity information between two sequential images of the first point of view or the second point of view.

Publication date: 06-02-2014

LEARNING-BASED POSE ESTIMATION FROM DEPTH MAPS

Number: US20140037191A1
Author: Litvak Shai
Assignee: PRIMESENSE LTD.

A method for processing data includes receiving a depth map of a scene containing a humanoid form. Respective descriptors are extracted from the depth map based on the depth values in a plurality of patches distributed in respective positions over the humanoid form. The extracted descriptors are matched to previously-stored descriptors in a database. A pose of the humanoid form is estimated based on stored information associated with the matched descriptors. 1. A method for processing data, comprising: receiving a depth map of a scene containing a humanoid form, the depth map comprising a matrix of pixels having respective pixel depth values; extracting from the depth map respective descriptors based on the depth values in a plurality of patches distributed in respective positions over the humanoid form; matching the extracted descriptors to previously-stored descriptors in a database; and estimating a pose of the humanoid form based on stored information associated with the matched descriptors. 2. The method according to claim 1, wherein extracting the respective descriptors comprises dividing each patch into an array of spatial bins, and computing a vector of descriptor values corresponding to the pixel depth values in each of the spatial bins. 3. The method according to claim 2, wherein each patch has a center point, and wherein the spatial bins that are adjacent to the center point have smaller respective areas than the spatial bins at a periphery of the patch. 4. The method according to claim 2, wherein each patch has a center point, and wherein the spatial bins are arranged radially around the center point. 5. The method according to claim 2, wherein the descriptor values are indicative of a distribution of at least one type of depth feature in each bin, selected from the group of depth features consisting of depth edges and depth ridges. 6. The method according to claim 1, wherein matching the extracted descriptors comprises ...
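A rough Python sketch of a radial-bin patch descriptor in the spirit of claims 2-4 (with equal radial spacing, the bins near the center naturally cover smaller areas than the peripheral ones); the bin counts, patch radius, and mean-depth descriptor values are illustrative choices, not the patent's trained features.

import numpy as np

def patch_descriptor(depth, center, radius=24, n_rings=4, n_sectors=8):
    """Descriptor for one patch: mean depth in ring/sector spatial bins
    arranged radially around the patch center point."""
    cy, cx = center
    ys, xs = np.indices(depth.shape)
    r = np.hypot(ys - cy, xs - cx)
    theta = np.arctan2(ys - cy, xs - cx) % (2 * np.pi)
    desc = []
    for ring in range(n_rings):
        r0, r1 = radius * ring / n_rings, radius * (ring + 1) / n_rings
        for sec in range(n_sectors):
            t0, t1 = 2 * np.pi * sec / n_sectors, 2 * np.pi * (sec + 1) / n_sectors
            m = (r >= r0) & (r < r1) & (theta >= t0) & (theta < t1) & (depth > 0)
            desc.append(depth[m].mean() if m.any() else 0.0)
    return np.array(desc)

def match(desc, database):
    """Nearest previously-stored descriptor by Euclidean distance; each row of
    `database` would carry associated pose information in a real system."""
    return int(np.argmin(np.linalg.norm(database - desc, axis=1)))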

Publication date: 13-02-2014

Creating and Viewing Three Dimensional Virtual Slides

Number: US20140044346A1
Assignee: Leica Biosystems Imaging, Inc.

Systems and methods for creating and viewing three dimensional digital slides are provided. One or more microscope slides are positioned in an image acquisition device that scans the specimens on the slides and makes two dimensional images at a medium or high resolution. These two dimensional digital slide images are provided to an image viewing workstation where they are viewed by an operator who pans and zooms the two dimensional image and selects an area of interest for scanning at multiple depth levels (Z-planes). The image acquisition device receives a set of parameters for the multiple depth level scan, including a location and a depth. The image acquisition device then scans the specimen at the location in a series of Z-plane images, where each Z-plane image corresponds to a depth level portion of the specimen within the depth parameter. 1. A method for providing digital images, the method comprising using one or more hardware processors to: provide a base digital image; receive a selection of a first area of interest in the base digital image; acquire a first Z-stack of the first area of interest, wherein the first Z-stack comprises a first plurality of Z-planes, and wherein each of the first plurality of Z-planes comprises a digital image of the first area of interest at a different focus depth; and provide the first area of interest at one or more focus depths using the first Z-stack. 2. The method of claim 1, wherein providing the first area of interest at one or more focus depths comprises providing at least one of the first plurality of Z-planes. 3. The method of claim 1, wherein providing the first area of interest at one or more focus depths comprises interpolating the first area of interest at one or more of the one or more focus depths based on one or more of the first plurality of Z-planes. 4. The method of claim 1, further comprising receiving one or more parameters, and wherein acquiring the first Z-stack of the first area of interest ...
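The interpolation of claim 3 can be approximated by blending the two acquired Z-planes that bracket the requested focus depth; a minimal sketch, assuming the Z-planes are same-sized float images with known, sorted focus depths.

import numpy as np

def image_at_depth(z_planes, depths, z):
    """Linearly blend the two Z-planes bracketing the requested focus depth.
    z_planes: list of float32 images; depths: their focus depths (ascending)."""
    depths = np.asarray(depths, dtype=np.float32)
    if z <= depths[0]:
        return z_planes[0]
    if z >= depths[-1]:
        return z_planes[-1]
    i = int(np.searchsorted(depths, z)) - 1
    w = (z - depths[i]) / (depths[i + 1] - depths[i])
    return (1.0 - w) * z_planes[i] + w * z_planes[i + 1]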

Publication date: 27-02-2014

DEVICE AND METHOD FOR DETECTING A THREE-DIMENSIONAL OBJECT USING A PLURALITY OF CAMERAS

Number: US20140055573A1
Assignee: ETU SYSTEM, LTD.

The present invention relates to a device and method for detecting a three-dimensional object using a plurality of cameras that are capable of simply detecting a three-dimensional object. The device comprises: a planarization unit for planarizing, through homography conversion, each input image obtained by the plurality of cameras; a comparison-area selecting unit for selecting each area to be compared after adjusting the offset of a camera in order to overlay a plurality of images which have been planarized by said planarization unit; a comparison-processing unit for determining whether or not corresponding pixels are identical in the comparison area selected by said comparison-area selecting unit, and generating a single image based on the results of the determination; and an object-detecting unit for detecting a three-dimensional object disposed on the ground by analyzing the form of the single image generated by said comparison-processing unit. 1. A device for detecting a three-dimensional (3D) object using multiple cameras, comprising: a planarization unit for individually planarizing input images acquired by multiple cameras via homography transformation; a comparison region selection unit for calibrating offset of the cameras so that multiple images planarized by the planarization unit are superimposed on each other, and individually selecting regions to be compared; a comparison processing unit for determining whether corresponding pixels in the comparison regions selected by the comparison region selection unit are identical to each other, and generating a single image based on results of the determination; and an object detection unit for analyzing a shape of the single image generated by the comparison processing unit and detecting a 3D object located on a ground. 2. The device of claim 1, wherein the comparison processing unit subtracts pieces of data of the corresponding pixels from each other, determines that two pixels are different from each ...
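A compact OpenCV sketch of the planarize-and-compare idea: both views are warped onto the ground plane with their homographies, and pixels that disagree mark objects with height above the ground. The homography files, output size, and threshold below are assumptions for the example.

import cv2
import numpy as np

# Homographies mapping each camera image onto the common ground plane would
# come from calibration; here they are assumed to exist as saved 3x3 matrices.
img1 = cv2.imread("cam1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("cam2.png", cv2.IMREAD_GRAYSCALE)
H1 = np.load("H1.npy")
H2 = np.load("H2.npy")

# Planarize: warp both views onto the ground plane.
size = (800, 800)
flat1 = cv2.warpPerspective(img1, H1, size)
flat2 = cv2.warpPerspective(img2, H2, size)

# Compare corresponding pixels: ground-plane pixels agree between the views,
# while anything with height above the ground projects differently in each.
diff = cv2.absdiff(flat1, flat2)
_, obstacle_mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)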

Publication date: 27-02-2014

FACE LOCATION DETECTION

Number: US20140056510A1
Assignee: KONINKLIJKE PHILIPS N.V.

The location of a face is detected from data about a scene. A 3D surface model is obtained from measurements of the scene. A 2D angle data image is generated from the 3D surface model. The angle data image is generated for a virtual lighting direction, the image representing angles between ray directions from a virtual light source direction and normals to the 3D surface. A 2D face location algorithm is applied to each of the respective 2D images. In an embodiment, respective 2D angle data images for a plurality of virtual lighting directions are generated and face locations detected from the respective 2D images are fused. 1. An image processing method wherein a location of a face is detected, the method comprising: obtaining a 3D surface model from measurements of a scene; generating (24) a 2D image of angle data from the 3D surface model, the 2D image representing angle data, the angle data for each respective image point in the 2D image being selected dependent on an angle between an incidence direction derived from a virtual lighting direction and a normal to the 3D surface at a point on the 3D surface that is in view in the 2D image at the image point; applying (25) a 2D face location algorithm to the 2D image. 2. A method according to claim 1, comprising: generating (24) a plurality of respective 2D images from the 3D surface model, each representing the angle data for a respective virtual lighting direction; applying (25) the 2D face location algorithm to each of the respective 2D images; combining (27) face locations detected from the respective 2D images. 3. A method according to claim 2, comprising generating a plurality of said respective 2D images for a same viewing direction for said respective virtual lighting directions. 4. A method according to claim 3, comprising generating respective pluralities of said respective 2D images, each plurality from a different viewing ...
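A hedged sketch of generating an angle-data image from a depth map (standing in for the 3D surface model) and running an off-the-shelf 2D face detector on it; the Sobel-based normals, the lighting directions, and the Haar cascade are illustrative substitutes for the patent's components, not its actual pipeline.

import cv2
import numpy as np

def angle_image(depth, light_dir):
    """2D image encoding the angle between surface normals (estimated from a
    depth map) and a virtual lighting direction, i.e. synthetic shading."""
    dzdx = cv2.Sobel(depth, cv2.CV_32F, 1, 0, ksize=5)
    dzdy = cv2.Sobel(depth, cv2.CV_32F, 0, 1, ksize=5)
    n = np.dstack([-dzdx, -dzdy, np.ones_like(depth, dtype=np.float32)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    l = np.asarray(light_dir, dtype=np.float32)
    l /= np.linalg.norm(l)
    cos_a = np.clip(n @ l, 0.0, 1.0)          # Lambertian-style angle data
    return (cos_a * 255).astype(np.uint8)

depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32)
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# One synthetic 2D image per virtual lighting direction; detections are fused
# here by simply pooling them, where the patent combines the face locations.
detections = []
for light in ([0, 0, 1], [0.5, 0, 1], [-0.5, 0, 1]):
    img = angle_image(depth, light)
    detections.extend(face_cascade.detectMultiScale(img, 1.1, 4))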

Publication date: 06-03-2014

Method for objectively evaluating quality of stereo image

Number: US20140064604A1
Assignee: Ningbo University

A method for objectively evaluating quality of a stereo image is provided. The method obtains a cyclopean image of a stereo image formed in the human visual system by simulating the process by which the human visual system deals with the stereo image. The cyclopean image includes three areas: an occlusion area, a binocular fusion area and a binocular suppression area. Representing characteristics of the image according to the singular values of the image has a strong stability. According to the characteristics of different areas of the human visual system while dealing with the cyclopean image, the distortion degree of the cyclopean image corresponding to the testing stereo image is represented by the singular value distance between the cyclopean images respectively corresponding to the testing stereo image and the reference stereo image, in such a manner that an overall visual quality of the testing stereo image is finally evaluated. 2. The method for objectively evaluating quality of a stereo image, as recited in claim 1, wherein the specific process of the step ② comprises: ②-1 denoting a pixel having a position coordinate of $(x,y)$ in the left-view image $I^{l}$ of the reference stereo image as $p^{l}_{x,y}$, denoting a pixel having a position coordinate of $(s,t)$ in the right-view image $I^{r}$ of the reference stereo image as $p^{r}_{s,t}$, denoting a pixel having a position coordinate of $(x,y)$ in the left-view image $\hat{I}^{l}$ of the testing stereo image as $\hat{p}^{l}_{x,y}$, and denoting a pixel having a position coordinate of $(s,t)$ in the right-view image $\hat{I}^{r}$ of the testing stereo image as $\hat{p}^{r}_{s,t}$, wherein $1 \le x \le W$, $1 \le y \le H$, $1 \le s \le W$ and $1 \le t \le H$; ②-2 processing stereo matching on the reference stereo image, so as to obtain a horizontal disparity and a vertical disparity of each pixel of the reference stereo image, wherein a specific process thereof comprises: ...
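The singular value distance can be illustrated with a block-wise comparison of the two cyclopean images; the 8x8 block size and the plain Euclidean distance over singular-value vectors are assumptions for this sketch, not the paper's exact pooling.

import numpy as np

def singular_value_distance(ref, test, block=8):
    """Quality score: mean distance between the singular-value vectors of
    co-located blocks of the reference and testing cyclopean images."""
    h, w = ref.shape
    dists = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            s_ref = np.linalg.svd(ref[y:y+block, x:x+block], compute_uv=False)
            s_test = np.linalg.svd(test[y:y+block, x:x+block], compute_uv=False)
            dists.append(np.linalg.norm(s_ref - s_test))
    # Larger distance means the testing image is more distorted.
    return float(np.mean(dists))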

Publication date: 06-03-2014

METHOD OF TRANSFORMING STEREOSCOPIC IMAGE AND RECORDING MEDIUM STORING THE SAME

Number: US20140064608A1
Author: GIL Jong In, KIM Man Bae

Disclosed is a method of transforming a stereoscopic image, including: extracting a depth map from a left-eye image and a right-eye image of the stereoscopic image as the left-eye image and the right-eye image are input; obtaining transformation information from the depth map; and transforming red, green, and blue (RGB) values of the stereoscopic image based on the transformation information. It is possible to provide a stereoscopic image having an improved three-dimensional effect, compared to an existing stereoscopic image. 1. A method of transforming a stereoscopic image, comprising: extracting a depth map from a left-eye image and a right-eye image of the stereoscopic image as the left-eye image and the right-eye image are input; obtaining transformation information from the depth map; and transforming red, green, and blue (RGB) values of the stereoscopic image based on the transformation information. 2. The method according to claim 1, wherein the extracting the depth map includes extracting the depth map from the left-eye image and the right-eye image using a stereo matching scheme. 3. The method according to claim 2, wherein the extracting the depth map from the left-eye image and the right-eye image using the stereo matching scheme comprises: searching edges in the left-eye image and the right-eye image to obtain a matching point of each edge; obtaining an edge disparity from the matching point; obtaining a saliency map from RGB images of the left-eye image and the right-eye image; dividing the left-eye image and the right-eye image into predetermined regions using the saliency map; obtaining a disparity of the divided region using the edge disparity; and correcting the disparity of the divided region. 4. The method according to claim 1, wherein the transformation information is obtained from a high frequency component of the depth map. 6. The method according to claim 1, wherein the RGB values of the stereoscopic image are transformed using a contrast transformation ...
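One plausible reading of claims 4 and 6, sketched in Python: take the high-frequency component of the depth map with a Laplacian and use it as a local contrast gain on the RGB values. The gain formula and file names are assumptions for illustration, not the method's actual transformation.

import cv2
import numpy as np

left = cv2.imread("left.png").astype(np.float32)
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Transformation information from the high-frequency component of the depth map.
high_freq = cv2.Laplacian(depth, cv2.CV_32F, ksize=3)
gain = 1.0 + 0.5 * np.abs(high_freq) / (np.abs(high_freq).max() + 1e-6)

# Contrast transformation of the RGB values, stronger where depth detail is fine.
mean = left.mean(axis=(0, 1), keepdims=True)
transformed = np.clip((left - mean) * gain[..., None] + mean, 0, 255).astype(np.uint8)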

Publication date: 02-01-2020

SYSTEM AND METHOD FOR COOKING ROBOT

Number: US20200001463A1
Author: KIM Jungsik, Kim MinJung
Assignee: LG ELECTRONICS INC.

A cooking robot system and a control method thereof are provided. The cooking robot system includes: a robot configured to acquire the image of the object through a sensing unit and generate image data to transmit the image data to a server, or receive a motion of a user with respect to the object from an input unit upon a request of the server and generate demonstration data to transmit the demonstration data to the server, and implement a motion for the object based on motion data corresponding to the image data or the demonstration data; and a server configured to detect the motion data for the object and control the robot by searching for a motion corresponding to the image data via a web server to generate the motion data or by generating the motion data corresponding to the demonstration data. 1. A cooking robot system, which is server-based and recognizes an image of an object to implement a motion, the cooking robot system comprising: a robot configured to acquire the image of the object through a sensing unit and generate image data to transmit the image data to a server, or receive a motion of a user with respect to the object from an input unit upon a request of the server and generate demonstration data to transmit the demonstration data to the server, and implement a motion for the object based on motion data corresponding to the image data or the demonstration data; and a server configured to detect the motion data for the object and control the robot by searching for a motion corresponding to the image data via a web server to generate the motion data or by generating the motion data corresponding to the demonstration data. 2. The cooking robot system according to claim 1, wherein the cooking robot system interworks with an artificial intelligence server and is implemented based on an artificial intelligence to generate the motion data by automatically recognizing the image of the object. 3. The cooking robot system according to claim 1, wherein ...

Publication date: 06-01-2022

THREE DIMENSIONAL IMAGING WITH INTENSITY INFORMATION

Number: US20220004739A1
Assignee:

A method for operating a time-of-flight sensor system includes: by an array of pixels of a time-of-flight sensor of the time-of-flight sensor system, generating signal data representative of reflected light from an environment; generating an intensity representation of an object in the environment based on the signal data representative of the reflected light from the environment; determining that the intensity representation indicates that an object in the environment includes a target object; and, responsive to the determining, generating a three-dimensional representation of the environment based on the data representative of the reflected light. 1. A method for operating a time-of-flight sensor system, the method comprising: by an array of pixels of a time-of-flight sensor of the time-of-flight sensor system, generating signal data representative of reflected light from an environment; generating an intensity representation of an object in the environment based on the signal data representative of the reflected light from the environment; determining that the intensity representation indicates that an object in the environment comprises a target object; and responsive to the determining, generating a three-dimensional representation of the environment based on the data representative of the reflected light. 2. The method according to claim 1, wherein generating the three-dimensional representation of the environment comprises demodulating the data representative of the reflected light. 3. The method according to claim 1, wherein the generation of the intensity representation of the environment uses less power than the generation of the three-dimensional representation of the environment. 4. The method according to claim 1, comprising, by processing circuitry of the time-of-flight sensor, determining that an object is present in the environment. 5. The method according to claim 4, comprising generating the intensity representation responsive to ...

Publication date: 03-01-2019

SHOPPING FACILITY ASSISTANCE SYSTEMS, DEVICES AND METHODS TO ADDRESS GROUND AND WEATHER CONDITIONS

Number: US20190002256A1
Assignee:

Some embodiments provide methods, systems and apparatus to enhance safety. In some embodiments, a system comprises a central computer system comprising: a transceiver; a control circuit; and a memory coupled to the control circuit and storing computer instructions that when executed by the control circuit cause the control circuit to perform the steps of: communicate positioning routing instructions to the plurality of motorized transport units directing the motorized transport units to one or more external areas of a shopping facility that are exposed to weather conditions; and communicate separate area routing instructions to each of the motorized transport units that when implemented cause the motorized transport units to cooperatively and in concert travel in accordance with the area routing instructions over at least predefined portions of one or more external areas to cause ground treatment systems to address ground level conditions. 1. A system providing enhanced safety, comprising: a central computer system that is separate and distinct from a plurality of self-propelled motorized transport units, wherein each of the plurality of motorized transport units is configured to temporarily and interchangeably engage and disengage with any one of a plurality of different detachable ground treatment systems each configured to perform a different ground treatment to address a different external ground level condition, and wherein the central computer system comprises: a transceiver configured to communicate with the motorized transport units located at a shopping facility; a control circuit coupled with the transceiver; and a memory coupled to the control circuit and storing computer instructions that when executed by the control circuit cause the control circuit to perform the steps of: communicate engagement instructions to each of the plurality of motorized transport units to temporarily couple with at least one of the plurality of different detachable ...

Publication date: 06-01-2022

LIGHT-EMITTING DEVICE, OPTICAL DEVICE, AND INFORMATION PROCESSING DEVICE

Number: US20220006268A1
Assignee: FUJIFILM Business Innovation Corp.

A light-emitting device includes: a first light-emitting element array that includes plural first light-emitting elements arranged at a first interval; a second light-emitting element array that includes plural second light-emitting elements arranged at a second interval wider than the first interval, the second light-emitting element array being configured to output a light output larger than a light output of the first light-emitting element array, and being configured to be driven independently from the first light-emitting element array; and a light diffusion member provided on an emission path of the second light-emitting element array. 1. A light-emitting device comprising: a first light-emitting element array that includes a plurality of first light-emitting elements arranged at a first interval; a second light-emitting element array that includes a plurality of second light-emitting elements arranged at a second interval wider than the first interval, the second light-emitting element array being configured to output a light output larger than a light output of the first light-emitting element array, and being configured to be driven independently from the first light-emitting element array; and a light diffusion member provided on an emission path of the second light-emitting element array. 2. The light-emitting device according to claim 1, wherein a spread angle of light emitted from the first light-emitting elements is smaller than a spread angle of light emitted from the second light-emitting elements toward the light diffusion member. 3. The light-emitting device according to claim 1, wherein the first light-emitting elements include a laser element that emits single mode light. 4. The light-emitting device according to claim 2, wherein the first light-emitting elements include a laser element that emits single mode light. 5. The light-emitting device according to claim 3, wherein the first light-emitting elements include a vertical cavity surface emitting laser ...

Publication date: 01-01-2015

3D OBJECT SHAPE AND POSE ESTIMATION AND TRACKING METHOD AND APPARATUS

Number: US20150003669A1
Assignee:

A method and apparatus for estimating and tracking the shape and pose of a 3D object are disclosed. A plurality of 3D object models of related objects varying in size and shape are obtained, aligned and scaled, and voxelized to create a 2D height map of the 3D models to train a principal component analysis model. At least one sensor mounted on a host vehicle obtains a 3D object image. Using the trained principal component analysis model, the processor executes program instructions to estimate the shape and pose of the detected 3D object until the shape and pose of the detected 3D object matches one principal component analysis model. The output of the shape and pose of the detected 3D object is used in one vehicle control function. 1. A method for estimating the shape and pose of a 3D object comprising: detecting a 3D object external to a host using at least one image sensor; using a processor, estimating at least one of the shape and pose of the detected 3D object relative to the host; and providing an output of the estimated 3D object shape and pose. 2. The method of claim 1, further comprising: obtaining a plurality of 3D object models, where the models are related to a type of object, but differ in shape and size; using a processor, aligning and scaling the 3D object models; voxelizing the aligned and scaled 3D object models; creating a 2D height map of the voxelized 3D object models; and training a principal component analysis model for each of the unique shapes of the plurality of 3D object models. 3. The method of further comprising: storing the principal component analysis model for 3D object models in a memory coupled to the processor. 4. The method of further comprising: for each successive image of the detected 3D object, iterating the estimation of the shape and pose of the detected 3D object until the model of the 3D object matches the shape and pose of the detected 3D object. 5. The method of wherein the 3D object is a vehicle and the host is a vehicle. 6. The method of ...
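A minimal numpy sketch of the training side, assuming the aligned, scaled, voxelized models have already been reduced to same-sized 2D height maps; the component count is an illustrative choice.

import numpy as np

def train_pca(height_maps, n_components=8):
    """PCA over flattened 2D height maps of the voxelized 3D object models."""
    X = np.stack([h.ravel() for h in height_maps]).astype(np.float64)
    mean = X.mean(axis=0)
    # Right singular vectors of the centered data are the principal components.
    _, s, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def shape_coefficients(height_map, mean, components):
    """Project an observed height map into the learned shape space; these
    coefficients parameterize the shape estimate that is iterated on."""
    return components @ (height_map.ravel() - mean)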

Publication date: 01-01-2015

Probability mapping for visualisation and analysis of biomedical images

Number: US20150003706A1

The invention provides a method of image transformation of a biomedical image, said method comprising: for each voxel of said biomedical image, calculating a transform value indicative of the likelihood of that voxel representing a first tissue type (such as scar tissue); wherein calculating said transform value includes: applying at least one feature function to calculate at least one feature value from the original image voxel value for said voxel and/or from the original image voxel values of one or more voxels proximate to said voxel, said feature function or functions being capable of discriminating between said first tissue and a second tissue type (e.g. healthy tissue); and deriving said transform value from said one or more feature values. The method allows more detail to be extracted from images about the locations of scar and healthy tissue and thus facilitates diagnosis. The transformed image can also be segmented to identify particular areas of interest such as the border zone between scar and healthy tissue where the two tissue types are interwoven. The segmentation technique better identifies clinically relevant information within the image. This information can then be processed to provide an indicator of clinical significance.
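A toy per-voxel transform in the described style: feature functions computed from the voxel and its neighborhood, combined into a likelihood-like value per voxel. The two features, the weights, and the logistic link are assumptions for illustration, not the trained discriminators of the method.

import numpy as np
from scipy.ndimage import uniform_filter

def scar_probability_map(volume, w=(0.04, 1.5), b=-3.0):
    """Transform each voxel value into a value indicative of the likelihood of
    scar tissue, using the voxel and its local 3x3x3 neighborhood."""
    f1 = volume.astype(np.float32)                 # raw intensity feature
    f2 = np.abs(f1 - uniform_filter(f1, size=3))   # local-deviation feature
    z = w[0] * f1 + w[1] * f2 + b
    return 1.0 / (1.0 + np.exp(-z))                # transform value in (0, 1)

# The resulting map can then be thresholded or segmented, e.g. to delineate
# the border zone where intermediate probabilities cluster.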

Publication date: 13-01-2022

ESTIMATING A CHARACTERISTIC OF A FOOD ITEM

Number: US20220007689A1
Author: Baldwin Douglas
Assignee:

A computerised device, cooking appliance, cooking system, method and at least one computer-readable medium are disclosed for estimating one or more characteristics of a food item. In one aspect, the cooking system includes: a computing device configured to: receive image data indicative of indicia on or about a food item; process the image data to determine a location or distortion of the indicia in three-dimensional (3D) space; and determine the one or more characteristics of the food item based at least in part on the location or distortion of the indicia; and a cooking appliance, including: one or more cooking components; and at least one processor configured to: receive the cooking program from the computing device; and control the one or more cooking components according to the cooking program. 1. A computing device for estimating one or more characteristics of a food item, comprising: at least one processor; and at least one non-transitory processor-readable medium storing processor-executable instructions that, when executed by the at least one processor, cause the at least one processor to: receive image data indicative of indicia on or about a food item; process the image data to determine a location or distortion of the indicia in three-dimensional (3D) space; and determine the one or more characteristics of the food item based at least in part on the location or distortion of the indicia. 2. The computing device of claim 1, wherein the at least one processor is configured to process the image data using a computer-vision algorithm. 3. The computing device of claim 1 or claim 2, wherein the one or more characteristics of the food item include at least one of a weight, a thickness, a volume, a shape, and a surface heat transfer coefficient of the food item. 4. The computing device of any one of claims 1 to 3, wherein the at least one processor is configured to receive one or more user input characteristics of the food item, wherein the at least ...

Publication date: 02-01-2020

REFRIGERATOR AND METHOD FOR OPERATING THE REFRIGERATOR

Number: US20200003482A1
Assignee: LG ELECTRONICS INC.

A refrigerator for controlling operations by executing artificial intelligence (AI) algorithms and/or machine learning algorithms in a 5G environment connected for the Internet of Things, and a method of operating the refrigerator, are disclosed. A method for operating a refrigerator is provided, including: performing, by a controller, a first recognition of a first food being placed into the refrigerator; performing, by the controller, a second recognition of a second food being taken out from the refrigerator; and displaying, on a display of the refrigerator, storage status information on remaining food in the refrigerator based on a result of the first recognition and a result of the second recognition. 1. A method for operating a refrigerator, the method comprising: performing, by a controller, a first recognition of a first food being placed into the refrigerator; performing, by the controller, a second recognition of a second food being taken out from the refrigerator; and displaying, on a display of the refrigerator, storage status information on remaining food in the refrigerator based on a result of the first recognition and a result of the second recognition. 2. The method of claim 1, wherein the performing of the first recognition comprises: sensing opening of a door of the refrigerator and activating an image recognition function and a voice recognition function; acquiring a photographed image of the first food; estimating a type of the first food represented by the photographed image by using a first deep neural network which has been trained in advance to determine food types through images; outputting a spoken utterance inducement signal when estimation of the type of the first food fails; receiving a first spoken utterance signal relating to the type of the first food in response to the spoken utterance inducement signal; recognizing the type of the first food represented by the first spoken utterance signal by using a second deep neural network which has been ...

Publication date: 07-01-2021

Method for adjusting length of Golay sequence for object recognition and electronic device therefor

Number: US20210003690A1
Author: Chiho Kim, Junsu CHOI
Assignee: SAMSUNG ELECTRONICS CO LTD

A method for adjusting the length of a Golay sequence for object recognition, and an electronic device therefor, are provided. The method for operating the electronic device includes estimating a predicted distance to an external object, determining, based on the estimated predicted distance, the length of a Golay sequence included in a signal for recognizing the external object, and transmitting at least one signal including a Golay sequence having the determined length. When a wireless communication device included in the electronic device is used to perform a radar function, the length of the Golay sequence is adjusted to the length the application actually requires for object recognition, so that recognition efficiency and data communication efficiency can both be kept near their optimum.
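For context, complementary Golay pairs of length 2^n are commonly built by a standard recursion, and the sum of their autocorrelations has no sidelobes, which is what makes sequence length a clean knob for radar integration gain. A short sketch follows; the distance-to-length rule is a made-up illustration, not the patented mapping.

import numpy as np

def golay_pair(n):
    """Complementary Golay pair of length 2**n via the standard recursion."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def length_for_distance(distance_m):
    """Illustrative rule only: farther objects get longer sequences (more
    integration gain), nearer objects get shorter, faster ones."""
    n = int(np.clip(np.ceil(np.log2(max(distance_m, 1.0))) + 4, 5, 10))
    return 2 ** n

a, b = golay_pair(int(np.log2(length_for_distance(12.0))))
# Sidelobe-free property: autocorr(a) + autocorr(b) = 2N at lag 0, else 0.
r = np.correlate(a, a, "full") + np.correlate(b, b, "full")
assert abs(r[len(a) - 1] - 2 * len(a)) < 1e-9
assert np.allclose(np.delete(r, len(a) - 1), 0.0)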

Publication date: 05-01-2017

IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD

Number: US20170004385A1
Author: AOBA Masato
Assignee:

When generating a training image of an object to be used to generate a dictionary to be referred to in image recognition processing of detecting the object from an input image, model information of the object to be detected is set, and a luminance image of the object and a range image are input. The luminance distribution of the surface of the object is estimated based on the luminance image and the range image, and the training image of the object is generated based on the model information and the luminance distribution. 1.-19. (canceled) 20. An image processing apparatus for generating a training image for an object to be used to generate a dictionary to be referred to in image recognition processing of detecting the object from an input image, comprising: a first obtaining unit configured to obtain a luminance image of a plurality of objects and a range image of the object; a determination unit configured to determine, based on the luminance image and the range image, a luminance value for a training image to be generated on the basis of model information of the object; a generation unit configured to generate a training image of the object based on the determined luminance value and the model information, wherein the first obtaining unit, the determination unit, and the generation unit are implemented by using at least one processor. 21. The apparatus according to claim 20, further comprising an estimation unit configured to estimate a relation between a luminance value and a direction of a surface of the object based on the luminance image and the range image, wherein the determination unit determines the luminance value based on the estimated relation. 22. The apparatus according to claim 20, wherein the model information is computer-aided design (CAD) data. 23. The apparatus according to claim 22, wherein the training image is a computer graphics image. 24. The apparatus according to claim 20, wherein the generation unit generates the training ...

Publication date: 13-01-2022

VEHICLE AND METHOD OF MANAGING CLEANLINESS OF INTERIOR OF THE SAME

Number: US20220011242A1
Assignee:

A method of managing cleanliness of an interior of a vehicle includes: detecting indoor contamination using a contamination detector including at least a camera; receiving information on the next user including information on at least one user scheduled to ride in the vehicle; determining at least one harmful substance based on the result of detection of the indoor contamination and the information on the next user; and transmitting information on the determined at least one harmful substance to the external entity. 1. A method of managing cleanliness of an interior of a vehicle, the method comprising: detecting indoor contamination using a contamination detector comprising at least a camera; receiving information on a next user comprising information on at least one user scheduled to ride in the vehicle; determining at least one harmful substance based on a result of detection of the indoor contamination and the information on the next user; and transmitting information on the at least one harmful substance to an external entity. 2. The method according to claim 1, wherein the determining at least one harmful substance comprises comparing at least one substance included in the result of detection of the indoor contamination with a predetermined harmful substance list. 3. The method according to claim 1, wherein the information on the next user comprises information used to determine a specific harmful substance for each of the at least one user. 4. The method according to claim 3, wherein the information used to determine the specific harmful substance comprises at least one of age, occupation, gender, allergy-causing substances, or distasteful substances. 5. The method according to claim 3, wherein the determining at least one harmful substance comprises: determining a candidate harmful substance based on the information used to determine the specific harmful substance; and comparing the candidate harmful substance with at least one ...

Publication date: 07-01-2021

METHOD AND APPARATUS FOR 3D OBJECT BOUNDING FOR 2D IMAGE DATA

Number: US20210004566A1
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC

Methods and apparatus are provided for 3D object bounding for 2D image data for use in an assisted driving equipped vehicle. In various embodiments, an apparatus includes a camera operative to capture a two dimensional image of a field of view, a lidar operative to generate a point cloud of the field of view, a processor operative to generate a three dimensional representation of the field of view in response to the point cloud, to detect an object within the three dimensional representation, to generate a three dimensional bounding box in response to the object, and to project the three dimensional bounding box onto the two dimensional image to generate a labeled two dimensional image, and a vehicle controller to control a vehicle in response to the labeled two dimensional image. 1. An apparatus comprising: a camera operative to capture a two dimensional image of a field of view; a lidar operative to generate a point cloud of the field of view; a processor operative to generate a three dimensional representation of the field of view in response to the point cloud, to detect an object within the three dimensional representation, to generate a three dimensional bounding box in response to the object, to project the three dimensional bounding box onto the two dimensional image to generate a labeled two dimensional image; and a vehicle controller operative to control a vehicle in response to the labeled two dimensional image. 2. The apparatus of claim 1, wherein the three dimensional representation of the field of view is a voxelized representation of a three dimensional volume. 3. The apparatus of claim 1, wherein the three dimensional bounding box is representative of a centroid, length, width and height of the object. 4. The apparatus of claim 1, wherein the processor is further operative to align the image and the point cloud in response to an edge detection. 5. The apparatus of claim 1, wherein the processor is further operative to calibrate and co-register a point in the point cloud and a ...
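A numpy sketch of projecting a 3D bounding box (centroid, dimensions, yaw) through assumed camera intrinsics into an axis-aligned 2D label; the camera frame convention (z forward, yaw about the vertical axis) and all values are illustrative.

import numpy as np

def project_box(center, size, yaw, K):
    """Project a 3D bounding box given by centroid, (length, width, height)
    and yaw into a 2D box via intrinsics K (camera frame, z forward)."""
    l, w, h = size
    # The 8 corners of the box around its centroid.
    corners = np.array([[sx * l / 2, sy * h / 2, sz * w / 2]
                        for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])  # yaw rotation
    pts = corners @ R.T + np.asarray(center)
    uv = (K @ pts.T).T
    uv = uv[:, :2] / uv[:, 2:3]            # perspective divide
    x0, y0 = uv.min(axis=0)
    x1, y1 = uv.max(axis=0)
    return x0, y0, x1, y1                  # 2D label for the image

K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])  # assumed intrinsics
print(project_box(center=[2.0, 0.5, 20.0], size=[4.5, 1.8, 1.5], yaw=0.3, K=K))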

Publication date: 13-01-2022

FISH BIOMASS, SHAPE, AND SIZE DETERMINATION

Number: US20220012479A1
Assignee:

Methods, systems, and apparatuses, including computer programs encoded on a computer-readable storage medium, for estimating the shape, size, and mass of fish are described. A pair of stereo cameras may be utilized to obtain right and left images of fish in a defined area. The right and left images may be processed, enhanced, and combined. Object detection may be used to detect and track a fish in images. A pose estimator may be used to determine key points and features of the detected fish. Based on the key points, a three-dimensional (3-D) model of the fish is generated that provides an estimate of the size and shape of the fish. A regression model or neural network model can be applied to the 3-D model to determine a likely weight of the fish. 1. (canceled) 2. A computer-implemented method comprising: determining, by a camera system, that an image that was generated by the camera system is of a fish that is at least partially occluded; determining, by the camera system, a respective position of each of one or more key points of the fish that are visible in the image; determining, by the camera system, a respective position of each of one or more key points of the fish that are occluded within the image based at least on the determined respective position of each of the one or more key points of the fish that are visible in the image; generating, by the camera system, a weight estimate of the fish that is at least partially occluded based on the determined respective position of each of the one or more key points of the fish that are occluded within the image; and providing, by the camera system, the weight estimate of the fish for output. 3. The method of claim 2, wherein the camera system is an underwater stereo camera system. 4. The method of claim 2, wherein the one or more key points of the fish that are occluded within the image include an eye, a nostril, or an operculum. 5. The method of claim 2, wherein the respective position of each of the ...

Publication date: 07-01-2021

ELECTRONIC APPARATUS AND CONTROL METHOD THEREOF

Number: US20210004567A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

An electronic apparatus includes a processor configured to identify a first distance based on locations of first pixels that received reflective light, identify a second distance based on locations of second pixels that received the reflective light, calculate a difference between the first and second distances, and, based on the distance acquired by the calculation and a moving distance of the electronic apparatus identified through a second sensor, identify whether the reflective light is reflective light reflected by an object or reflective light that was reflected on the object and then reflected again by another object. 1. An electronic apparatus comprising: a light source configured to radiate light; a first sensor configured to receive reflective light based on the light radiated from the light source, the reflective light comprising first reflective light and second reflective light; a second sensor configured to detect a moving distance of the electronic apparatus; and a processor configured to: based on the first reflective light corresponding to the light radiated by the light source being received at first pixels among a plurality of pixels included in the first sensor, identify a first distance based on locations of the first pixels in the first sensor that received the first reflective light; based on the second reflective light corresponding to the light radiated by the light source being received at second pixels among the plurality of pixels included in the first sensor, identify a second distance based on locations of the second pixels in the first sensor that received the second reflective light; obtain a difference in distance between the first distance and the second distance; and based on the difference in distance and the moving distance of the electronic apparatus detected by the second sensor, identify whether the reflective light is light reflected by an object or light reflected on the object and subsequently reflected by another surface. ...
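A toy consistency test in the spirit of this idea: as the apparatus moves a known distance, a direct reflection's measured distance should change by roughly that amount, while a re-reflected (multipath) return drifts inconsistently. The straight-toward-target geometry and the tolerance are simplifying assumptions for illustration.

import numpy as np

def is_multipath(d_first, d_second, moved, tol=0.05):
    """Flag a return as multipath when the change between two measured
    distances disagrees with how far the apparatus actually moved.
    Assumes motion straight toward the target (a simplification)."""
    expected_change = moved            # a direct bounce shortens by `moved`
    observed_change = d_first - d_second
    return abs(observed_change - expected_change) > tol * max(d_first, 1e-6)

print(is_multipath(d_first=2.00, d_second=1.80, moved=0.20))  # False: direct
print(is_multipath(d_first=2.00, d_second=1.95, moved=0.20))  # True: re-reflected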

Publication date: 13-01-2022

SYSTEM, METHOD AND COMPUTER PROGRAM FOR GUIDED IMAGE CAPTURING OF A MEAL

Number: US20220012515A1
Assignee: Sony Group Corporation

A system including circuitry configured to: process multispectral image data of a meal to obtain information on the contents of the meal; and generate, based on the obtained information, a query with guidance to change image capture settings. 1. A system comprising: circuitry configured to: process image data of a meal to obtain information on the contents of the meal; generate, based on the obtained information, a query with guidance to change image capture settings; and guide a user to pick up at least a part of the meal. 2. The system of claim 1, wherein the circuitry is configured to generate the query with guidance according to insufficient information on the contents of the meal. 3. The system of claim 1, wherein the circuitry is configured to guide the user to change an attitude of a camera to point to other ingredients of the meal. 4. The system of claim 1, wherein the circuitry is configured to guide the user to cut the meal into parts and show a surface profile of the meal towards a camera. 5. The system of claim 1, wherein the circuitry is configured to guide the user to move a camera and to see a particular object in the meal close up. 6. The system of claim 1, wherein the circuitry is configured to generate a recipe of the meal based on the obtained information on the contents of the meal. 7. The system of claim 6, wherein the recipe includes ingredients information, nutrition information, and/or allergen information. 8. The system of claim 6, wherein the circuitry is configured to change the recipe generation process based on feedback. 9. The system of claim 1, further comprising a sensor arrangement configured to collect the image data of a meal. 10. The system of claim 9, wherein the sensor arrangement is configured to provide depth information. 11. The system of claim 9, wherein the sensor arrangement is configured to provide mass spectrography information. 12. The system of claim 9, wherein the sensor arrangement is configured to provide visible ...

Publication date: 13-01-2022

SYSTEM AND METHOD FOR DETECTING AND TRACKING AN OBJECT

Number: US20220012517A1

A method includes receiving a first image that is captured at a first time. The method also includes detecting a location of a first object in the first image. The method also includes determining a region of interest based at least partially upon the location of the first object in the first image. The method also includes receiving a second image that is captured at a second time. The method also includes identifying the region of interest in the second image. The method also includes detecting a location of a second object in a portion of the second image that is outside of the region of interest. 1. A method, comprising: identifying a first image that is captured at a first time; detecting a location of a first object in the first image; determining a region of interest based at least partially upon the location of the first object in the first image; identifying a second image that is captured at a second time; identifying the region of interest in the second image; and detecting a location of a second object in a portion of the second image that is outside of the region of interest. 2. The method of claim 1, wherein the first image is captured by a camera on an aircraft. 3. The method of claim 1, wherein the first object is an aircraft in flight. 4. The method of claim 1, wherein the first object is represented as at least a pixel in the first image. 5. The method of claim 1, wherein the region of interest comprises a plurality of pixels in the first image, and wherein the first object is located within the plurality of pixels. 6. The method of claim 1, wherein each pixel in the region of interest has a probability that is greater than a predetermined threshold that the first object will be located therein at the second time, wherein each pixel that is outside of the region of interest has a probability that is less than the predetermined threshold that the first object will be located therein at the second time, and wherein the second ...
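A minimal sketch of detecting outside the region of interest: build the ROI around the first detection, suppress it in the second image, and run the detector on the remainder. The circular ROI, its radius, and the toy brightest-pixel detector are assumptions for the example.

import numpy as np

def roi_from_detection(xy, shape, radius=40):
    """Region of interest: pixels near the first object's detected location,
    where it is likely to be found again at the second time."""
    ys, xs = np.indices(shape)
    return (ys - xy[0]) ** 2 + (xs - xy[1]) ** 2 <= radius ** 2

def detect_outside_roi(image2, roi, detector):
    """Run the detector only on the portion of the second image outside the
    region of interest, e.g. to find a second object."""
    masked = image2.copy()
    masked[roi] = 0                     # suppress the tracked region
    return detector(masked)

# Usage with a trivial 'brightest pixel' detector standing in for a real one:
img2 = np.random.rand(480, 640)
roi = roi_from_detection((240, 320), img2.shape)
loc = detect_outside_roi(img2, roi, lambda im: np.unravel_index(im.argmax(), im.shape))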

Publication date: 07-01-2021

PEOPLE COUNTING AND TRACKING SYSTEMS AND METHODS

Number: US20210004606A1
Assignee:

Various techniques are provided for counting and/or tracking objects within a field of view of an imaging system, while excluding certain objects from the results. A monitoring system may count or track people identified in captured images while utilizing an employee identification system including a wireless signal receiver to identify and remove the employees from the result. The system includes algorithms for separating employee counts from customer counts, thereby offering enhanced tracking analytics. 1. A system, comprising: a first image sensor operable to capture a stream of images of a first field of view; a wireless signal sensor operable to detect wireless signals emitted from at least one wireless device within an area comprising at least the first field of view; and a processing system operable to: process the stream of images and detect a plurality of objects within the first field of view; generate a plurality of object tracks, each object track representative of a movement of a detected object within the first field of view; predict, for at least one of the plurality of object tracks, a wireless signal characteristic of a predicted wireless device following the at least one of the object tracks; process the detected wireless signals, including tracking the wireless signal characteristic; and match one of the plurality of object tracks with a detected wireless device based on a fit between the predicted wireless signal characteristic and the tracked wireless signal characteristic. 2. The system of claim 1, further comprising a second image sensor operable to capture images of a second field of view, wherein the processing system is further operable to process the captured images to form a 3D image. 3. The system of claim 2, wherein the processing system is further operative to determine a physical location of the detected object based on an object location in the captured images. 4. The system of claim 1, wherein the wireless signal sensor is a ...

Publication date: 07-01-2021

Surround View System Having an Adapted Projection Surface

Number: US20210004614A1
Assignee:

The invention relates to a surround view system (1) for a vehicle (2). The surround view system (1) comprises a detection unit (20) and an evaluation unit (10). The detection unit (20) is designed to detect data relating to the surroundings. The evaluation unit (10) is designed to identify an object (3) in the detected data relating to the surroundings and to determine the 3D shape of this object. The evaluation unit (10) is additionally designed to add the determined 3D shape to a projection surface (15) of the surround view system (1) for the detected data relating to the surroundings such that an adapted projection surface (16) results. The evaluation unit (10) is designed to project the data relating to the surroundings onto the adapted projection surface (16). 1. A surround view system (1) for a vehicle (2), comprising: a detection unit (20); and an evaluation unit (10), wherein the detection unit (20) is designed to detect data relating to the surroundings, wherein the evaluation unit (10) is designed to identify an object (3) in the detected data relating to the surroundings and to determine the 3D shape of this object, wherein the evaluation unit (10) is additionally designed to add the determined 3D shape to a projection surface (15) of the surround view system (1) for the detected data relating to the surroundings such that an adapted projection surface (16) results, and wherein the evaluation unit (10) is designed to project the data relating to the surroundings onto the adapted projection surface (16). 2. The surround view system (1) according to claim 1, wherein the detection unit (20) is a camera. 3. The surround view system (1) according to claim 1, wherein the 3D shape of the identified object (3) is predefined and corresponds to the object (3) identified by the evaluation unit (10). 4. The surround view system (1) according to claim 1, wherein the ...

Publication date: 04-01-2018

Sparse simultaneous localization and matching with unified tracking

Number: US20180005015A1
Author: Craig Cambias, Xin Hou
Assignee: VanGogh Imaging Inc

Described herein are methods and systems for tracking a pose of one or more objects represented in a scene. A sensor captures a plurality of scans of objects in a scene, each scan comprising a color and depth frame. A computing device receives a first one of the scans, determines two-dimensional feature points of the objects using the color and depth frame, and retrieves a key frame from a database that stores key frames of the objects in the scene, each key frame comprising map points. The computing device matches the 2D feature points with the map points, and generates a current pose of the objects in the color and depth frame using the matched 2D feature points. The computing device inserts the color and depth frame into the database as a new key frame, and tracks the pose of the objects in the scene across the scans.
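The pose computation described here, matching 2D feature points against stored 3D map points and solving for the camera pose, is the classic PnP problem; below is a hedged OpenCV sketch with placeholder correspondences and assumed intrinsics, not the actual pipeline of this patent.

import cv2
import numpy as np

# Matched correspondences: 3D map points from the key-frame database and
# their matching 2D feature points in the current color frame (placeholders).
map_points_3d = np.random.rand(50, 3).astype(np.float32) + [0.0, 0.0, 2.0]
feature_points_2d = np.random.rand(50, 2).astype(np.float32) * [640, 480]
K = np.array([[525.0, 0, 320], [0, 525.0, 240], [0, 0, 1]])  # assumed intrinsics

# Robust pose from 2D-3D matches; None means no lens distortion modeled.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    map_points_3d, feature_points_2d, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)          # current pose of the tracked scene
    print("rotation:\n", R, "\ntranslation:", tvec.ravel())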

Publication date: 04-01-2018

VIDEO TO DATA

Number: US20180005037A1

A method and system can generate video content from a video. The method and system can include a coordinator, an image detector, and an object recognizer. The coordinator can be communicatively coupled to a splitter and/or to a plurality of demultiplexer nodes. The splitter can be configured to segment the video. The demultiplexer nodes can be configured to extract audio files from the video and/or to extract still frame images from the video. The image detector can be configured to detect images of objects in the still frame images. The object recognizer can be configured to compare an image of an object to a fractal. The recognizer can be further configured to update the fractal with the image. The coordinator can be configured to embed metadata about the object into the video. 1. A system for generating data from a video , comprising:a coordinator communicatively coupled to a splitter and to a plurality of demultiplexer nodes, wherein the splitter is configured to segment the video, wherein the demultiplexer nodes are configured to extract audio files from the video and to extract still frame images from the video;an image detector configured to detect images of objects in the still frame images;an object recognizer configured to compare an image of an object to a fractal, wherein the recognizer is further configured to update the fractal with the image; andwherein the coordinator is configured to embed metadata about the object into the video.2. The system of claim 1 , wherein the metadata comprises a timestamp and a coordinate location of the object in one or more of the still frame images.3. The system of claim 1 , wherein the coordinator is configured to create additional demultiplexer processing capacity.4. The system of claim 3 , wherein the coordinator is configured to create additional demultiplexer nodes when the demultiplexer nodes reach at least 80% of processing capacity.5. The system of claim 1 , wherein the demultiplexer nodes generate a confidence ...

Publication date: 02-01-2020

RESHAPE-ABLE OLED DEVICE FOR POSITIONING PAYMENT INSTRUMENT

Number: US20200005000A1
Author: Cardinal Donald J.
Assignee:

Aspects of the disclosure relate to organic light emitting diode (OLED) devices reshape-able to position an article in a predetermined space. The article may be a card. When reshaped, the OLED device may form the predetermined space. The OLED device may include at least one verification sensor positioned relative to the predetermined space. The OLED device may use one or more verification sensors to detect the article that is positioned in the predetermined space. The OLED device may use one or more verification sensors to collect information associated with the article that is in the predetermined space. 2. The OLED device of claim 1, defining a front surface and a back surface, and further comprising a first verification sensor embedded in a section of the front surface, and a second verification sensor embedded in a section of the back surface, and wherein: the OLED device is configured to be rolled such that the section of the front surface overlaps and faces the section of the back surface, the interstitial space between the facing sections forming the predetermined space; the first verification sensor detects one side of the payment instrument in the predetermined space; and the second verification sensor detects another side of the payment instrument in the predetermined space. 3. The OLED device of claim 1, wherein at least one verification sensor includes an OLED as a light sensor, said OLED as a light sensor including at least one OLED from the array of OLEDs, wherein the at least one OLED is toggled between a display mode and a sensing mode. 4. The OLED device of claim 1, wherein the screen includes a flexible material for configuring the screen to be reshape-able. 5. The OLED device of claim 1, further comprising a linear indicator formed thereon, the linear indicator for indicating the location of an axis along which to reshape the screen, and the predetermined space is defined relative to the ...

Publication date: 02-01-2020

SYSTEM AND METHOD ASSOCIATED WITH PROGRESSIVE SPATIAL ANALYSIS OF PRODIGIOUS 3D DATA INCLUDING COMPLEX STRUCTURES

Number: US20200005015A1
Assignee:

A system associated with progressive spatial analysis of prodigious 3D data including complex structures is disclosed. The system receives minimum boundary information related to a first data object and a second data object, which are proximate neighbors. The system determines whether boundary data associated with a first data object is within an area delineated by minimum boundary information of first data objects. A first geometric structure associated with the first data object is generated based on respective decompressed data. A structural skeleton is determined using the first geometric structure to identify respective skeleton vertices. A geometric representation is generated based on the skeleton vertices associated with the first geometric structure. The system determines whether boundary data associated with the second data object is within the area delineated by the minimum boundary information of the first data object. A centroid point of the second data object that intersects the geometric representation associated with the first object is identified. A location of the centroid point of the second data object with respect to the first data object is determined in order to identify a minimum distance between the first data object and the second data object. 1. A system associated with progressive spatial analysis of prodigious 3D data including complex structures, the system comprising: receiving minimum boundary information related to a first data object; receiving minimum boundary information related to a second data object, the first data object and the second data object being proximate neighbors; determining whether boundary data associated with the first data object is within an area delineated by minimum boundary information of first data objects; generating a first geometric structure associated with the first data object based on respective decompressed data associated with the first data object; determining a structural skeleton ...

02-01-2020 publication date

System and Method for Object Matching Using 3D Imaging

Number: US20200005020A1
Assignee:

The system and method utilize three-dimensional (3D) scanning technology to create a database of profiles for good, product, object, or part information by producing object representations that permit rapid, highly-accurate object identification, matching, and obtaining information about the object, which is not afforded by traditional two-dimensional (2D) camera imaging. The profiles can be compared to a profile of an unknown object to identify, match, or obtain information about the unknown object, and the profiles can be filtered to identify or match the profiles of known objects to identify and/or gather information about an unknown object.

1. A system for use in identifying a particular object or information about the object, by using 3D image data, the system comprising: a computerized recognition system comprising a computer, a non-contact 3D imaging device, wherein the non-contact 3D imaging device comprises at least one of a structured light scanner, time-of-flight 3D scanner, depth camera, a stereo camera, or a 2D camera combined with the non-contact 3D imaging device and produces 3D point clouds, depth map, or geometric 3D surfaces; means for the non-contact 3D imaging device to create at least one 3D model scan of the particular object, and a means for aggregating data collected from the 3D model scan, the data comprising geometric descriptors and parameters, of the particular object to create a profile of the particular object in a database, wherein the profile comprises a model of the particular object, associated identifying information, geometric descriptors, and parameters for a 3D model of the particular object; means for the non-contact 3D imaging device to communicate with the computer and the database; and means for the computer to search the profiles of the 3D imaging scans of models of the particular objects, the computer being programmed to compare the profiles to a 3D imaging scan and associated 3D imaging scan, along with geometric descriptors ...

02-01-2020 publication date

METHOD, DEVICE AND SYSTEM FOR PROCESSING IMAGE TAGGING INFORMATION

Number: US20200005073A1
Author: KE Haifan
Assignee:

The present disclosure provides a method, a device and a system for processing image tagging information. The method includes the following. A captured image and a capturing position are acquired from a terminal. Object recognition is applied to the captured image to determine a first object presented by the captured image. A first target image presenting the first object is searched for among one or more historical images associated with the capturing position. When the first target image is found, it is determined that the first object is duplicated with tagging information of an electronic map. When the first target image is not found, a corresponding position of the first object on the electronic map is tagged.

1. A method for processing image tagging information, comprising: acquiring a captured image and a capturing position from a terminal; applying object recognition to the captured image, to determine a first object presented by the captured image; searching for a first target image presenting the first object among one or more historical images associated with the capturing position; determining that the first object is duplicated with tagging information on an electronic map in response to the first target image being found; and tagging a corresponding position of the first object on the electronic map according to the capturing position in response to the first target image not being found.

2. The method of claim 1, wherein, after the first target image is found, the method further comprises: extracting a regional image presenting the first object from the captured image; and when there is a difference between the regional image presenting the first object and the first target image, replacing the first target image with the regional image presenting the first object.

3. The method of claim 1, wherein, after the first target image is not found, the method further comprises: ...
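
The duplicate-check flow in this entry (recognize the object, look for it among historical images near the capturing position, tag the map only on a miss) is easy to sketch. The snippet below is hypothetical throughout: recognize() is a stub for the detector, and the position bucketing is just one simple way to define "associated with the capturing position".

```python
# Hypothetical sketch of the tag-or-skip flow described above.
from collections import defaultdict

history = defaultdict(list)   # position cell -> [(object_label, image_id)]
map_tags = {}                 # (object_label, cell) -> tagged position

def cell(lat, lon, step=0.001):
    # Bucket capturing positions so nearby captures share one history cell.
    return (round(lat / step), round(lon / step))

def recognize(image):
    # Stand-in for the object-recognition step (e.g. a detector network).
    return image["label"]

def process_capture(image, lat, lon):
    label = recognize(image)
    key = cell(lat, lon)
    if any(obj == label for obj, _ in history[key]):
        return f"'{label}' already tagged near {key}; skipping"
    history[key].append((label, image["id"]))
    map_tags[(label, key)] = (lat, lon)   # tag the electronic map
    return f"tagged '{label}' at {(lat, lon)}"

print(process_capture({"id": 1, "label": "pharmacy"}, 40.0001, 116.0002))
print(process_capture({"id": 2, "label": "pharmacy"}, 40.0001, 116.0002))
```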

07-01-2021 publication date

METHOD AND DEVICE FOR THREE-DIMENSIONAL RECONSTRUCTION

Number: US20210004942A1
Author: Zhang Long, Zhou Wei, Zhou Wen
Assignee: ArcSoft Corporation Limited

The disclosure provides a method and device for three-dimensional reconstruction, applied to the field of image processing. The method includes: obtaining a first depth map, which is photographed by a first photographic device, and obtaining a second depth map, which is photographed by a second photographic device; merging the first depth map with a first three-dimensional model according to a position of the first photographic device to obtain a second three-dimensional model; and merging the second depth map with the second three-dimensional model according to a position of the second photographic device to obtain a third three-dimensional model.

1. A method for measurement, comprising: obtaining a three-dimensional model of a measured object; fitting a pre-stored measured three-dimensional model to the three-dimensional model of the measured object; and measuring the three-dimensional model of the measured object according to the pre-stored measured three-dimensional model and the fitting.

2. The method according to claim 1, wherein: the pre-stored measured three-dimensional model comprises feature measurement markers; and measuring the three-dimensional model of the measured object according to the pre-stored measured three-dimensional model and the fitting process comprises measuring the three-dimensional model of the measured object according to the feature measurement markers and the fitting.

3. The method according to claim 2, wherein: the measured object is a human body; the feature measurement markers are marking points of the pre-stored measured three-dimensional model; one or more feature measurement markers are located on a body circumference of the pre-stored measured three-dimensional model; and calculating fitted heights of the one or more feature measurement markers after the fitting according to heights of the feature measurement markers on the pre-stored measured three-dimensional model and the fitting; obtaining an envelope curve located on the ...
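
To make the two-device merging step concrete, here is a deliberately simplified Python sketch: each depth map is back-projected through its camera pose and appended to a shared model, standing in for the first/second/third three-dimensional models of the abstract. Real pipelines typically fuse into a TSDF volume rather than concatenating points; the intrinsics and poses below are assumed values.

```python
# Simplified sequential fusion sketch; raw point merge instead of TSDF.
import numpy as np

K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])          # assumed shared intrinsics

def backproject(depth, K, pose):
    # Lift every valid depth pixel to a 3D point in the world frame.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    ok = z > 0
    pix = np.stack([u.ravel()[ok] * z[ok], v.ravel()[ok] * z[ok], z[ok]])
    cam = np.linalg.inv(K) @ pix                 # camera-frame points
    world = pose[:3, :3] @ cam + pose[:3, 3:4]   # apply device position
    return world.T

pose1 = np.eye(4)                                  # first photographic device
pose2 = np.eye(4); pose2[:3, 3] = [0.5, 0.0, 0.0]  # second device, offset

model = np.empty((0, 3))                           # "first" model (empty)
for depth, pose in [(np.full((480, 640), 2.0), pose1),
                    (np.full((480, 640), 2.5), pose2)]:
    model = np.vstack([model, backproject(depth, K, pose)])  # 2nd, 3rd model
print(model.shape)
```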

03-01-2019 publication date

METHOD AND SYSTEM FOR GENERATING A CONTEXTUAL AUDIO RELATED TO AN IMAGE

Number: US20190005128A1
Assignee:

Disclosed subject matter relates to digital media including a method and system for generating a contextual audio related to an image. An audio generating system may determine scene-theme and viewer theme of scene in the image. Further, audio files matching scene-objects and the contextual data may be retrieved in real-time and relevant audio files from audio files may be identified based on relationship between scene-theme, scene-objects, viewer theme, contextual data and metadata of audio files. A contribution weightage may be assigned to the relevant and substitute audio files based on contextual data and may be correlated based on contribution weightage, thereby generating the contextual audio related to the image. The present disclosure provides a feature wherein the contextual audio generated for an image may provide a holistic audio effect in accordance with context of the image, thus recreating the audio that might have been present when the image was captured. 1. A method of generating a contextual audio related to an image , the method comprising:determining, by an audio generating system, a scene-theme of a scene in the image by analyzing the image received from an image repository, wherein the scene-theme is determined based on key image features in the image and one or more scene-objects, corresponding to the key image features, in the image;determining, by the audio generating system, a viewer theme of the image based on contextual data associated with the image;retrieving, by the audio generating system, one or more audio files matching the one or more scene-objects and the contextual data in real-time by performing a real-time search using textual descriptions of the one or more scene-objects and the contextual data;identifying, by the audio generating system, one or more relevant audio files from the one or more audio files based on relationship between the scene-theme, the one or more scene-objects, the viewer theme, the contextual data and ...

02-01-2020 publication date

Computer aided rebar measurement and inspection system

Number: US20200005447A1
Assignee: Obayashi Corp, SRI International Inc

Embodiments of the present invention generally relate to computer aided rebar measurement and inspection systems. In some embodiments, the system may include a data acquisition system configured to obtain fine-level rebar measurements, images or videos of rebar structures, a 3D point cloud model generation system configured to generate a 3D point cloud model representation of the rebar structure from information acquired by the data acquisition system, a rebar detection system configured to detect rebar within the 3D point cloud model generated or the rebar images or videos of the rebar structures, a rebar measurement system to measure features of the rebar and rebar structures detected by the rebar detection system, and a discrepancy detection system configured to compare the measured features of the rebar structures detected by the rebar detection system with a 3D Building Information Model (BIM) of the rebar structures, and determine any discrepancies between them.

02-01-2020 publication date

THREE-DIMENSIONAL BOUNDING BOX FROM TWO-DIMENSIONAL IMAGE AND POINT CLOUD DATA

Number: US20200005485A1
Assignee:

A three-dimensional bounding box is determined from a two-dimensional image and a point cloud. A feature vector associated with the image and a feature vector associated with the point cloud may be passed through a neural network to determine parameters of the three-dimensional bounding box. Feature vectors associated with each of the points in the point cloud may also be determined and considered to produce estimates of the three-dimensional bounding box on a per-point basis.

1. (canceled)

2. A computer-implemented method comprising: receiving sensor data comprising a plurality of measurements of an environment; inputting at least a portion of the sensor data into a machine learned model; determining, as a first feature vector and based at least in part on a first portion of the machine learned model, a first set of values associated with a measurement of the plurality of measurements; determining, as a second feature vector and based at least in part on a second portion of the machine learned model, a second set of values associated with the plurality of measurements; combining, as a combined feature vector, the first feature vector and the second feature vector; inputting the combined feature vector into a third portion of the machine learned model; and receiving, from the third portion of the machine learned model, information associated with an object represented in the sensor data.

3. The computer-implemented method of claim 2, further comprising: receiving, from an image sensor, image data of the environment; determining a portion of the image data associated with the object; determining, based at least in part on the portion of the image data associated with the object, a subset of the sensor data associated with the portion of the image data; inputting the portion of the image data into a fourth portion of the machine learned model; receiving, from the fourth portion of the machine learned model, an appearance feature vector; and inputting the appearance feature vector ...
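
The per-point estimation idea reads naturally as feature concatenation followed by small regression heads. The toy sketch below only shows the data flow: random weights stand in for the trained "portions of the machine learned model", and the 7-value box parameterization and confidence head are assumptions.

```python
# Toy data-flow sketch with random, untrained weights.
import numpy as np

rng = np.random.default_rng(0)
n_pts = 128
points = rng.normal(size=(n_pts, 3))                       # lidar-like points

point_feats = np.tanh(points @ rng.normal(size=(3, 32)))   # per-point (n, 32)
global_feat = point_feats.max(axis=0)                      # cloud-level (32,)
image_feat = rng.normal(size=(64,))                        # from a 2D crop

fused = np.hstack([point_feats,
                   np.tile(global_feat, (n_pts, 1)),
                   np.tile(image_feat, (n_pts, 1))])       # (n, 128)

W1 = rng.normal(size=(128, 64))
W_box = rng.normal(size=(64, 7))      # (cx, cy, cz, w, l, h, yaw) per point
W_conf = rng.normal(size=(64, 1))
h = np.tanh(fused @ W1)
boxes = h @ W_box                                # per-point box estimates
conf = 1.0 / (1.0 + np.exp(-(h @ W_conf)))       # per-point confidence
best = boxes[int(np.argmax(conf))]
print("per-point estimates:", boxes.shape, "selected box:", best.round(2))
```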

03-01-2019 publication date

Hierarchical Data Organization for Dense Optical Flow Processing in a Computer Vision System

Number: US20190005335A1
Assignee:

A computer vision system is provided that includes an image generation device configured to capture consecutive two dimensional (2D) images of a scene, a first memory configured to store the consecutive 2D images, a second memory configured to store a growing window of consecutive rows of a reference image and a growing window of consecutive rows of a current image, wherein the reference image and the current image are a pair of consecutive 2D images stored in the first memory, a third memory configured to store a sliding window of pixels fetched from the growing window of the reference image, wherein the pixels in the sliding window are stored in tiles, and a dense optical flow engine (DOFE) configured to determine a dense optical flow map for the pair of consecutive 2D images, wherein the DOFE uses the sliding window as a search window for pixel correspondence searches.

1. A computer vision system comprising: an image generation device configured to capture consecutive two dimensional (2D) images of a scene; a first memory configured to store the consecutive 2D images; a second memory configured to store a growing window of consecutive rows of a reference image fetched from the first memory and a growing window of consecutive rows of a current image fetched from the first memory, wherein the reference image and the current image are a pair of consecutive 2D images; a third memory configured to store a sliding window of pixels fetched from the growing window of consecutive rows of the reference image, wherein the pixels in the sliding window are stored in tiles; and a dense optical flow engine (DOFE) configured to determine a dense optical flow map for the pair of consecutive 2D images, wherein the DOFE uses the sliding window as a search window for pixel correspondence searches.

2. The computer vision system of claim 1, wherein each tile is a 4×4 block of pixels.

3. The computer vision system of claim 1, wherein an entire tile can be accessed in a single cycle.

4. The ...
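
The tiled sliding-window storage is the part of this entry that benefits most from a concrete example. In the sketch below, a 4×4 tile (the size named in claim 2) is stored contiguously so that "one tile, one access" becomes a single array lookup; the array shapes are otherwise assumptions.

```python
# Sketch of a 4x4 tiled pixel layout.
import numpy as np

TILE = 4

def to_tiles(img):
    h, w = img.shape
    assert h % TILE == 0 and w % TILE == 0
    # (tile rows, tile cols, TILE, TILE): each tile is one contiguous block.
    return img.reshape(h // TILE, TILE, w // TILE, TILE).swapaxes(1, 2).copy()

def fetch_tile(tiles, y, x):
    # All 16 pixels of the tile containing (y, x), as a single access.
    return tiles[y // TILE, x // TILE]

img = np.arange(64 * 64, dtype=np.uint16).reshape(64, 64)
tiles = to_tiles(img)
print(fetch_tile(tiles, y=10, x=21))   # the 4x4 block covering pixel (10, 21)
```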

02-01-2020 publication date

Systems and Methods for Authenticating a User According to a Hand of the User Moving in a Three-Dimensional (3D) Space

Number: US20200005530A1
Author: Holz David
Assignee: Ultrahaptics IP Two Limited

Methods and systems for capturing motion and/or determining the shapes and positions of one or more objects in 3D space utilize cross-sections thereof. In various embodiments, images of the cross-sections are captured using a camera based on edge points thereof.

1. A system for authenticating a user according to a hand of the user moving in a three-dimensional (3D) space, the system comprising: one or more processors coupled to a memory storing instructions that, when executed by the one or more processors, implement actions including: analyzing a sequence of images including the hand of the user moving in the 3D space, as captured by a camera from a particular vantage point, to (i) computationally determine a shape of the hand of the user according to one or more mathematically represented 3D surfaces of the hand and (ii) computationally determine a jitter pattern of the hand; and in response to a received authentication determination obtained by performing a comparison of the shape of the hand and the jitter pattern of the hand to a database of hand shapes and jitter patterns, authenticating the user and granting access to the user when the authentication determination indicates that the user is authorized and denying access to the user when the authentication determination indicates that the user is not authorized.

2. The system of claim 1, further including: at least one source that casts an output onto a portion of the hand of the user.

3. The system of claim 1, further including transmitting, to at least one further process, a signal that includes at least one selected from (i) trajectory information determined from a reconstructed position of a portion of the hand of the user that the at least one further process interprets, and (ii) gesture information interpreted from trajectory information for the portion of the hand of the user.

4. The system of claim 1, further comprising a time-of-flight camera, and wherein a plurality of ...

02-01-2020 publication date

Computer Vision Systems and Methods for Modeling Three-Dimensional Structures Using Two-Dimensional Segments Detected in Digital Aerial Images

Number: US20200005536A1
Assignee: Geomni, Inc.

A system for modeling a three-dimensional structure utilizing two-dimensional segments comprising a memory and a processor in communication with the memory. The processor extracts a plurality of two-dimensional segments corresponding to the three-dimensional structure from a plurality of images indicative of different views of the three-dimensional structure. The processor determines a plurality of three-dimensional candidate segments based on the extracted plurality of two-dimensional segments and adds the plurality of three-dimensional candidate segments to a three-dimensional segment cloud. The processor transforms the three-dimensional segment cloud into a wireframe indicative of the three-dimensional structure by performing a wireframe extraction process on the three-dimensional segment cloud.

1. A system for modeling a three-dimensional structure utilizing two-dimensional segments comprising: a memory; and a processor in communication with the memory, the processor: extracting a plurality of two-dimensional segments corresponding to the three-dimensional structure from a plurality of images indicative of different views of the three-dimensional structure; determining a plurality of three-dimensional candidate segments based on the extracted plurality of two-dimensional segments; adding the plurality of three-dimensional candidate segments to a three-dimensional segment cloud; and transforming the three-dimensional segment cloud into a wireframe indicative of the three-dimensional structure by performing a wireframe extraction process on the three-dimensional segment cloud.

2. The system of claim 1, wherein the processor: captures the plurality of images from different camera viewpoints; determines a projection plane, camera parameter sets, and image parameters associated with each image of the plurality of images; and identifies, based on the projection plane, the camera parameter sets, and the image parameters, two-dimensional segments sets in each ...

20-01-2022 publication date

TOOL-PICKUP SYSTEM, METHOD, COMPUTER PROGRAM AND NON-VOLATILE DATA CARRIER

Number: US20220015326A1
Author: Eriksson Andreas
Assignee:

Tools in an automatic milking arrangement are picked up by using a robotic arm. The robotic arm moves a camera to an origin location from which the camera registers three-dimensional image data of at least one tool. The three-dimensional image data is processed using an image-based object identification algorithm to identify objects in the form of the tools and hoses. In response to identifying at least one tool, a respective tool position is determined for each identified tool based on the origin location and the three-dimensional image data. Then, a grip device is exclusively controlled to the one or more of the respective tool positions to perform a pick-up operation. Thus, futile attempts to pick up non-existing or blocked tools can be avoided.

1. A tool-pickup system for an automatic milking arrangement, the tool-pickup system comprising: a robotic arm provided with a grip device configured to pick up tools, and a camera configured to register three-dimensional image data; and a control unit operatively connected to the robotic arm, the control unit configured to: control the robotic arm to move the camera to an origin location from which at least one tool of the tools is expected to be visible within a view field of the camera; obtain three-dimensional image data registered by the camera at the origin location; process the three-dimensional image data using an image-based object identification algorithm to identify objects in a form of the tools and/or hoses; and, in response to identifying at least one of the tools, i) determine a respective ...

05-01-2017 publication date

WIDE ANGLE IMAGING SYSTEM FOR PROVIDING AN IMAGE OF THE SURROUNDINGS OF A VEHICLE, IN PARTICULAR A MOTOR VEHICLE

Number: US20170006276A1
Assignee:

An imaging system includes a digital camera having a sensor (such as a charge coupled device), a first lens directing a first image onto a first region of the sensor, a second lens directing a second image onto a second region of the sensor, and a third lens directing a third image onto a third region of the sensor. A display screen displays to a driver of the vehicle the first image, and a processing unit performs stereoscopic image analysis on data originating from the second and third regions. A fourth lens may be used to direct a fourth image onto a fourth region of the sensor, and the processing unit performs calculations on data from the fourth region for the detection of movement of the vehicle.

1. An imaging system comprising: an electro-optical sensor; a first lens directing an image onto the sensor; a display screen displaying the image; second and third inter-axially spaced lenses directing respective second and third images onto the sensor on opposite sides of the image, the second and third lenses producing purposeful width distortion of the respective images adapted for stereoscopic analysis; and a processing unit performing stereoscopic analysis on the second and third images.

2. The imaging system of claim 1, further comprising a fourth lens inter-axially spaced from the first, the second, and the third lenses to direct a fourth image onto a fourth region of the sensor located adjacent a third edge of the sensor, the fourth image exhibiting purposeful height distortion.

3. The imaging system of claim 2, wherein the processing unit performs calculations on data from the fourth region to detect lateral movement of a host vehicle.

4. The imaging system of claim 3, wherein the processing unit detects lateral movement of the vehicle by analyzing lane markings appearing in the fourth image.

5. The imaging system of claim 1, wherein at least one of the lenses comprises at least two mirrors optically aligned with one another.

6. The ...

04-01-2018 publication date

TECHNOLOGIES FOR AUTOMATED PROJECTOR PLACEMENT FOR PROJECTED COMPUTING INTERACTIONS

Number: US20180007341A1
Author: Okuley James M.
Assignee:

Technologies for automated optimal projector placement include a computing device having a depth camera and a projector. The computing device scans an environment of a user of the computing device with the depth camera to generate an environment map and determines a projection surface for a projected computing interaction based on the environment map and a usability factor. The usability factor may include application requirements, ergonomic factors such as viewing angle or reach distance, surface visibility features, or other factors. The computing device determines a target location for the projector based on the projection surface and presents the target location to the user. The target location may be determined to avoid obstructions or based on a projected image feature size or quality of the projected computing interaction. The computing device may project an indication of the target location at the target location. Other embodiments are described and claimed.

1. A computing device for projector positioning, the computing device comprising: a depth camera; a projector; an environment module to scan an environment of a user of the computing device with the depth camera to generate an environment map; an optimization module to (i) determine a projection surface for a projected computing interaction based on the environment map and a usability factor of the projected computing interaction, and (ii) determine a target location for the projector based on the projection surface and the environment map; and an output module to present the target location to the user of the computing device.

2. The computing device of claim 1, wherein the depth camera comprises a time-of-flight depth camera.

3. The computing device of claim 1, wherein to scan the environment comprises to identify the user of the computing device, environmental objects, and environmental surfaces in the environment of the user.

4. The computing device of claim 1, wherein the usability ...

20-01-2022 publication date

Gesture Recognition Using Multiple Antenna

Number: US20220019291A1
Assignee: Google LLC

Various embodiments wirelessly detect micro gestures using multiple antenna of a gesture sensor device. At times, the gesture sensor device transmits multiple outgoing radio frequency (RF) signals, each outgoing RF signal transmitted via a respective antenna of the gesture sensor device. The outgoing RF signals are configured to help capture information that can be used to identify micro-gestures performed by a hand. The gesture sensor device captures incoming RF signals generated by the outgoing RF signals reflecting off of the hand, and then analyzes the incoming RF signals to identify the micro-gesture.

08-01-2015 publication date

THREE-DIMENSIONAL OBJECT DETECTION DEVICE

Number: US20150009292A1
Assignee: NISSAN MOTOR CO., LTD

A three-dimensional object detection device includes an image capturing unit, an image conversion unit, a three-dimensional object detection unit and a light source detection unit. The image conversion unit converts a viewpoint of the images obtained by the image capturing unit to create bird's-eye view images. The three-dimensional object detection unit detects a presence of a three-dimensional object within the adjacent lane. The three-dimensional object detection unit determines the presence of the three-dimensional object within the adjacent lane when the difference waveform information is at a threshold value or higher. The three-dimensional object detection unit sets the threshold value lower so that the three-dimensional object is more readily detected in a rearward area than in a forward area with respect to a line connecting the light source and the image capturing unit.

1. A three-dimensional object detection device comprising: an image capturing unit arranged to capture images of a predetermined area relative to an adjacent lane rearward of a host vehicle equipped with the three-dimensional object detection device; an image conversion unit programmed to convert a viewpoint of the images obtained by the image capturing unit to create bird's-eye view images; a three-dimensional object detection unit programmed to detect a presence of a three-dimensional object within the adjacent lane, in which the bird's-eye view images obtained at different times by the image conversion unit are aligned, difference waveform information is generated by counting and creating a frequency distribution of a number of pixels that indicate a difference having a predetermined first threshold value or higher in a difference image of the aligned bird's-eye view images, and the presence of the three-dimensional object within the adjacent lane is detected upon determining the difference waveform information is at a predetermined second threshold value or higher; and a light source ...
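
The difference-waveform test is compact enough to sketch end to end: two aligned bird's-eye frames are differenced, binarized with the first threshold, and the per-column pixel counts form the waveform compared against the second threshold. Lowering the threshold "rearward" of the detected light source is reduced here to a column index, an assumption made purely for brevity.

```python
# Condensed difference-waveform sketch with invented frame data.
import numpy as np

def difference_waveform(bev_prev, bev_now, t1=25):
    diff = np.abs(bev_now.astype(int) - bev_prev.astype(int))
    return (diff >= t1).sum(axis=0)          # pixel counts per column

def detect(waveform, t2=30, light_col=None, t2_rear=15):
    thr = np.full(waveform.shape, t2)
    if light_col is not None:
        thr[light_col:] = t2_rear            # more sensitive behind the light
    return bool(np.any(waveform >= thr))

rng = np.random.default_rng(1)
prev = rng.integers(0, 255, (200, 120), dtype=np.uint8)
now = prev.copy()
now[:, 80:100] = 255                         # injected "adjacent vehicle"
print(detect(difference_waveform(prev, now), light_col=70))
```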

03-01-2019 publication date

MAPPING AND TRACKING SYSTEM WITH FEATURES IN THREE-DIMENSIONAL SPACE

Number: US20190007673A1
Author: Karvounis John George
Assignee:

LK-SURF, Robust Kalman Filter, HAR-SLAM, and Landmark Promotion SLAM methods are disclosed. LK-SURF is an image processing technique that combines Lucas-Kanade feature tracking with Speeded-Up Robust Features to perform spatial and temporal tracking using stereo images to produce 3D features that can be tracked and identified. The Robust Kalman Filter is an extension of the Kalman Filter algorithm that improves the ability to remove erroneous observations using Principal Component Analysis and the X84 outlier rejection rule. Hierarchical Active Ripple SLAM is a new SLAM architecture that breaks the traditional state space of SLAM into a chain of smaller state spaces, allowing multiple tracked objects, multiple sensors, and multiple updates to occur in linear time with linear storage with respect to the number of tracked objects, landmarks, and estimated object locations. In Landmark Promotion SLAM, only reliable mapped landmarks are promoted through various layers of SLAM to generate larger maps.

1. A system for creating a map of a high-dimensional area and locating the system on the map, the system comprising: a processor; and a memory communicatively coupled to the processor when the system is operational, the memory bearing instructions to: create a state associated with stereo images of the area to be mapped; process a first stereo image to identify a visual feature within the area, wherein the first stereo image is associated with a first pose of the system; process a second stereo image to determine whether the visual feature is absent from the area in the second stereo image, wherein the second stereo image is associated with a second pose of the system; remove, based on the determination, the first pose and the visual feature from the state and send the first pose and the visual feature to a mapping module configured to maintain a structure that correlates poses and visual features; and create a map of the area and at least one location of the system on the map.
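
Of the methods named here, the X84 rejection rule is the most self-contained, so it makes a good illustration of how the Robust Kalman Filter can discard erroneous observations before an update. The rule itself (median plus k median-absolute-deviations, with k = 5.2 roughly matching 3.5 sigma for Gaussian noise) is standard; its exact placement inside this patent's filter is not shown here.

```python
# X84 outlier rejection over a batch of innovation residuals.
import numpy as np

def x84_inliers(residuals, k=5.2):
    r = np.asarray(residuals, dtype=float)
    med = np.median(r)
    mad = np.median(np.abs(r - med))       # median absolute deviation
    if mad == 0.0:
        return np.ones_like(r, dtype=bool)
    return np.abs(r - med) <= k * mad

res = [0.1, -0.2, 0.05, 0.15, 9.0, -0.1]   # 9.0 is an erroneous observation
print(x84_inliers(res))                    # [ True  True  True  True False  True]
```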

02-01-2020 publication date

METHODS FOR AUTOMATIC REGISTRATION OF 3D IMAGE DATA

Number: US20200007842A1
Assignee:

A method for automatic registration of 3D image data, captured by a 3D image capture system having an RGB camera and a depth camera, includes capturing 2D image data with the RGB camera at a first pose; capturing depth data with the depth camera at the first pose; performing an initial registration of the RGB camera to the depth camera; capturing 2D image data with the RGB camera at a second pose; capturing depth data at the second pose; and calculating an updated registration of the RGB camera to the depth camera.

1. A method comprising: capturing first 2D image data with a red-green-blue (RGB) camera of a 3D image capture system at a first time; capturing first depth data with the depth camera of the 3D image capture system at the first time; determining a first pose associated with the 3D image capture system at the first time; performing registration of the RGB camera to the depth camera based at least in part on the first pose; capturing second 2D image data with the RGB camera at a second time; capturing second depth data at the second time; determining a second pose associated with the 3D image capture system at the second time; and correcting the registration of the RGB camera to the depth camera based at least in part on the second pose.

2. The method of claim 1, wherein correcting the registration comprises: identifying a set of points common to the first depth data and the second depth data; calculating a color/texture error function for the set of points; updating the registration to reduce the color/texture error function; and wherein identifying the set of points common to the first depth data and the second depth data comprises identifying the set of points using an iterative closest point algorithm.

3. The method of claim 1, further comprising: identifying a set of points common to the first depth data and the second depth data; calculating a color/texture error function for the set of points; updating the registration to reduce the color/texture error function; and tracking ...
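
Claim 2's color/texture error over a common point set can be sketched with a toy refinement loop: project the shared points into both RGB frames through a candidate depth-to-RGB correction, measure the color mismatch, and keep the candidate that minimizes it. The 1-D grid search and nearest-pixel sampling below are simplifications; a real system would optimize the full 6-DoF extrinsic.

```python
# Toy color-error registration refinement (assumed intrinsics and data).
import numpy as np

K = np.array([[500.0, 0.0, 64.0], [0.0, 500.0, 64.0], [0.0, 0.0, 1.0]])

def project(pts, t):
    p = (pts + t) @ K.T
    return p[:, :2] / p[:, 2:3]

def color_error(img_a, img_b, pts, t):
    ua = np.clip(project(pts, np.zeros(3)).astype(int), 0, 127)
    ub = np.clip(project(pts, t).astype(int), 0, 127)
    ca = img_a[ua[:, 1], ua[:, 0]].astype(float)
    cb = img_b[ub[:, 1], ub[:, 0]].astype(float)
    return float(np.mean((ca - cb) ** 2))

rng = np.random.default_rng(2)
img_a = rng.integers(0, 255, (128, 128)).astype(np.uint8)
img_b = np.roll(img_a, 2, axis=1)            # second view, shifted 2 px
pts = np.column_stack([rng.uniform(-0.9, 0.9, 300),
                       rng.uniform(-0.9, 0.9, 300),
                       rng.uniform(8.0, 12.0, 300)])  # ICP-style common points

# Stand-in for a gradient step: scan translation corrections along x.
cands = [np.array([dx, 0.0, 0.0]) for dx in np.linspace(-0.2, 0.2, 21)]
best = min(cands, key=lambda t: color_error(img_a, img_b, pts, t))
print("chosen correction:", best)
```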

12-01-2017 publication date

ATTITUDE ESTIMATION METHOD AND SYSTEM FOR ON-ORBIT THREE-DIMENSIONAL SPACE OBJECT UNDER MODEL RESTRAINT

Number: US20170008650A1
Assignee:

An attitude estimation method for an on-orbit three-dimensional space object comprises an offline feature library construction step and an online attitude estimation step. The offline feature library construction step comprises: according to a space object three-dimensional model, acquiring multi-viewpoint characteristic views of the object, and extracting geometrical features therefrom to form a geometrical feature library, where the geometrical features comprise an object main body height-width ratio, an object longitudinal symmetry, an object horizontal symmetry, and an object main-axis inclination angle. The online attitude estimation step comprises: preprocessing an on-orbit object image to be tested and extracting features, and matching the extracted features in the geometrical feature library, where an object attitude characterized by a characteristic view corresponding to a matching result is an attitude estimation result. A dimension scale and position relationship between various components of an object are accurately acquired in a three-dimensional modeling stage, thereby ensuring subsequent relatively high matching precision. An attitude estimation system for an on-orbit three-dimensional space object is also provided.

1. An attitude estimation method for an on-orbit three-dimensional space object, comprising an offline feature library construction step and an online attitude estimation step, wherein the offline feature library construction step specifically comprises:

(A1) acquiring, according to a space object three-dimensional model, multi-viewpoint characteristic views of the object for characterizing various attitudes of the space object; and

(A2) extracting geometrical features from each space object multi-viewpoint characteristic view to form a geometrical feature library, wherein the geometrical features comprise an object main body height-width ratio T_{i,1}, an object longitudinal symmetry T_{i,2}, an object horizontal symmetry T_{i,3}, and an object main-axis inclination angle T_{i,4} ...

08-01-2015 publication date

Method for Determining Object Poses Using Weighted Features

Number: US20150010202A1
Assignee:

A method for determining a pose of an object in a scene by determining a set of scene features from data acquired of the scene and matching the scene features to model features to generate weighted candidate poses when the scene feature matches one of the model features, wherein the weight of the candidate pose is proportional to the model weight. Then, the pose of the object is determined from the candidate poses based on the weights.

1. A method for determining a pose of an object in a scene, comprising the steps of: determining, from a model of the object, model features and a weight associated with each model feature; determining, from scene data acquired of the scene, scene features; matching the scene features to the model features to obtain matching scene and matching model features; generating candidate poses from the matching scene and the matching model features, wherein a weight of each candidate pose is proportional to the weight associated with the matching model feature; and determining the pose of the object from the candidate poses based on the weights.

2. The method of claim 1, wherein the model features and the weights are learned using training data by maximizing a difference between a number of votes received by a true pose and a number of votes received by an incorrect pose.

3. The method of claim 1, wherein a descriptor is determined for each feature and the matching uses a distance function of two descriptors.

4. The method of claim 1, wherein the features are oriented point pair features.

5. The method of claim 1, wherein the pose is determined by clustering the candidate poses.

6. The method of claim 5, wherein the clustering merges two candidate poses by taking a weighted sum of the candidate poses according to the weights associated with the candidate poses.

8. The method of claim 1, wherein the scene data are a 3D point cloud.

9. The method of claim 1, wherein the model features are stored in a hash table, the scene features are ...
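
Weight-proportional voting is simple to demonstrate: each scene/model match casts a vote carrying the learned model-feature weight, votes are clustered, and the winning cluster is merged by a weighted average in the spirit of claim 6. A 1-D rotation "pose" and a fixed 5-degree binning keep the sketch short; both are assumptions, not the patent's parameterization.

```python
# Weighted pose voting and weighted-sum merging (1-D toy pose).
import numpy as np
from collections import defaultdict

matches = [  # (candidate pose in degrees, matched model-feature weight)
    (30.2, 0.9), (29.8, 0.8), (30.5, 0.7), (120.0, 0.2), (29.9, 0.6),
]

bins = defaultdict(list)
for pose, w in matches:
    bins[round(pose / 5.0)].append((pose, w))   # crude 5-degree clustering

def merged(cluster):
    poses, ws = zip(*cluster)
    # Weighted sum of member poses, plus the cluster's total vote weight.
    return np.average(poses, weights=ws), sum(ws)

pose, score = max((merged(c) for c in bins.values()), key=lambda m: m[1])
print(f"estimated pose ~ {pose:.2f} deg (total vote weight {score:.2f})")
```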

08-01-2015 publication date

Image matching method and stereo matching system

Number: US20150010230A1
Assignee: NOVATEK MICROELECTRONICS CORP.

An image matching method is utilized for performing a stereo matching from a first image block to a second image block in a stereo matching system. The image matching method includes performing a matching computation from the first image block to the second image block according to a first matching algorithm to generate a first matching result; performing the matching computation between the first image block and the second image block according to a second matching algorithm to generate a second matching result and a third matching result; obtaining a matching error and a matching similarity of the first image block according to the second matching result and the third matching result; and determining a stereo matching result of the first image block according to the matching error and the matching similarity.

1. An image matching method, for performing a stereo matching from a first image block to a second image block in a stereo matching system, the image matching method comprising: performing a matching computation from the first image block to the second image block according to a first matching algorithm to generate a first matching result; performing the matching computation from the first image block to the second image block according to a second matching algorithm to generate a second matching result; performing the matching computation from the second image block to the first image block according to the second matching algorithm to generate a third matching result; obtaining a matching error of the first image block corresponding to the second image block according to the second matching result and the third matching result, and obtaining a matching similarity of the first image block matched to the second image block according to the second matching result; and determining a stereo matching result of the first image block as the first matching result or the second matching result according to the matching error and the matching similarity.

2. The image ...
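
The two-directional computation in this entry corresponds to the classic left-right consistency test, sketched below with naive SAD block matching: the matching error is the disagreement between the first-to-second and second-to-first disparities, and the best SAD score can serve as the similarity. Window and search sizes are arbitrary demo values, not the patent's.

```python
# Left-right consistency sketch with brute-force SAD block matching.
import numpy as np

def disparity(src, dst, max_d=16, win=3):
    h, w = src.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(win, h - win):
        for x in range(win + max_d, w - win):
            patch = src[y-win:y+win+1, x-win:x+win+1].astype(int)
            sads = [np.abs(patch - dst[y-win:y+win+1,
                                       x-d-win:x-d+win+1].astype(int)).sum()
                    for d in range(max_d)]
            disp[y, x] = int(np.argmin(sads))
    return disp

rng = np.random.default_rng(3)
right = rng.integers(0, 255, (32, 64)).astype(np.uint8)
left = np.roll(right, 4, axis=1)             # true disparity = 4

d_lr = disparity(left, right)                             # first -> second
d_rl = disparity(right[:, ::-1], left[:, ::-1])[:, ::-1]  # second -> first
y, x = 16, 40
err = abs(d_lr[y, x] - d_rl[y, x - d_lr[y, x]])  # left-right matching error
print("disparity:", d_lr[y, x], "consistency error:", err)
```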

27-01-2022 publication date

DRIVABLE SURFACE IDENTIFICATION TECHNIQUES

Number: US20220024485A1
Assignee: SafeAI, Inc.

The present disclosure relates generally to identification of drivable surfaces in connection with autonomously performing various tasks at industrial work sites and, more particularly, to techniques for distinguishing drivable surfaces from non-drivable surfaces based on sensor data. A framework for the identification of drivable surfaces is provided for an autonomous machine, enabling it to autonomously detect the presence of a drivable surface and to estimate, based on sensor data, attributes of the drivable surface such as road condition, road curvature, degree of inclination or declination, and the like. In certain embodiments, at least one camera image is processed to extract a set of features from which surfaces and objects in a physical environment are identified, and to generate additional images for further processing. The additional images are combined with a 3D representation, derived from LIDAR or radar data, to generate an output representation indicating a drivable surface.

1. A method comprising: receiving, by a controller system of an autonomous vehicle, sensor data from a plurality of sensors, the sensor data comprising at least one camera image of a physical environment and a first three-dimensional (3D) representation of the physical environment; extracting, by the controller system, a set of features from the at least one camera image, the extracting comprising inputting the at least one camera image to a neural network trained to infer values of the set of features from image data; estimating, by the controller system and using the values of the set of features, depths of different locations in the physical environment; generating, by the controller system, a depth image based on the estimated depths; identifying, by the controller system and using the values of the set of features, boundaries of surfaces in the physical environment; generating, by the controller system, a segmented image, the segmented image being divided into different regions, ...

14-01-2021 publication date

RECHARGING APPARATUS AND METHOD

Number: US20210009391A1
Assignee:

Methods and apparatuses are provided for use in monitoring power levels at a shopping facility, comprising: a central control system separate and distinct from a plurality of self-propelled motorized transport units, wherein the central control system comprises: a transceiver configured to wirelessly receive communications from the plurality of motorized transport units; a control circuit coupled with the transceiver; and a memory coupled to the control circuit and storing computer instructions that cause the control circuit to: identify available stored power levels at each of the plurality of motorized transport units; identify an available recharge station, of a plurality of recharge stations distributed throughout the shopping facility, at least relative to a location of the first motorized transport unit intended to be subjected to recharging; and wirelessly communicate one or more instructions to cause the first motorized transport unit to cooperate with an available recharge station.

1. A system that monitors motorized vehicles operating at a shopping facility, comprising: a transport unit central control system separate and distinct from a plurality of motorized transport units at a shopping facility, wherein each of the plurality of motorized transport units is self-propelled and wherein the transport unit central control system comprises: a transceiver configured to wirelessly receive communications from the plurality of motorized transport units located at the shopping facility; a control circuit coupled with the transceiver; and a memory coupled to the control circuit and storing computer instructions that cause the control circuit to: identify that a first motorized transport unit, of the plurality of motorized transport units, is unable to effectively move itself; determine, based on the determination that the first motorized transport unit is unable to effectively move itself, a location of the first motorized transport unit within the shopping facility; identify a first recharge station, of a plurality of recharge stations at the shopping facility; and ...

27-01-2022 publication date

Automated training data collection for object detection

Number: US20220027599A1
Assignee: International Business Machines Corp

A method, system, and computer program product for automated collection of training data and training object detection models is provided. The method generates a set of reference images for a first set of products. Based on the set of reference images, the method identifies a subset of products within an image stream. Based on the subset of products, a second set of products is determined within the image stream. The method identifies a set of product gaps based on the subset of products and the second set of products. The method generates a product detection model based on the set of reference images, the subset of products, the second set of products, and the product gaps.

27-01-2022 publication date

ELECTRONIC APPARATUS FOR OBJECT RECOGNITION AND CONTROL METHOD THEREOF

Number: US20220027600A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

An electronic apparatus is disclosed. The electronic apparatus includes a sensor, a camera, a memory, a first processor, and a second processor. The memory stores a plurality of artificial intelligence models trained to identify objects and stores information on a map. The first processor provides, to the second processor, area information on an area in which the electronic apparatus is determined, based on sensing data obtained from the sensor, to be located, from among a plurality of areas included in the map. The second processor loads at least one artificial intelligence model of the plurality of artificial intelligence models to the volatile memory based on the area information and identifies an object by inputting the image obtained through the camera to the loaded artificial intelligence model.

1. An electronic apparatus, comprising: a camera; a storage for storing a plurality of artificial intelligence models trained to identify objects and for storing information on a map; and at least one processor configured to: identify an artificial intelligence model, from among the plurality of artificial intelligence models stored in the storage, based on information on an area in which the electronic apparatus is located from among a plurality of areas in the map, and input an image obtained through the camera to the identified artificial intelligence model to identify an object.

2. The electronic apparatus of claim 1, wherein: each of the plurality of artificial intelligence models comprises a first layer and a second layer trained to identify an object based on characteristic information extracted from the first layer; the first layer is a common layer in the plurality of artificial intelligence models; and the processor is further configured to identify the object using the first layer and the second layer of the artificial intelligence model corresponding to the area.

3. The electronic apparatus of claim 2, wherein: the plurality of artificial intelligence models comprises a first ...

27-01-2022 publication date

POINT CLOUD DATA PROCESSING APPARATUS, POINT CLOUD DATA PROCESSING METHOD, AND PROGRAM

Number: US20220027654A1
Author: IWAMI Kazuchika
Assignee: FUJIFILM Corporation

A point cloud data processing apparatus includes: an image data acquisition unit that acquires image data of an object; a point cloud data acquisition unit that acquires point cloud data; a recognition unit that recognizes the object on the basis of the image data, and acquires a region of the object and attribute information for identifying the object; and an attribute assigning unit that selects, from the point cloud data, point cloud data that belongs to the region of the object, and assigns the identified attribute information to the selected point cloud data.

1. A point cloud data processing apparatus comprising: a processor that is configured to: acquire image data of an object; acquire point cloud data having a corresponding positional relationship with the image data and representing pieces of three-dimensional information of a large number of points on an outer surface of the object; recognize the object on the basis of the image data, and acquire a region of the object and attribute information for identifying the object; and select, from the point cloud data, point cloud data that belongs to the region of the object, and assign the acquired attribute information to the selected point cloud data.

2. The point cloud data processing apparatus according to claim 1, wherein the processor is configured to: acquire, in a case where the image data includes pieces of image data of a plurality of objects, a region of each object among the objects and attribute information for identifying the object on a per object basis; and select, from the point cloud data, point cloud data that belongs to the region of each object among the objects on a per object basis, and assign the attribute information of the object to the point cloud data selected on a per object basis.

3. The point cloud data processing apparatus according to claim 1, wherein the processor is configured to: recognize, in a case where the image data includes image data of the object having a plurality of partial ...
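
The attribute-assignment step pairs naturally with a small projection sketch: every point whose projection lands inside a recognized object region inherits that region's attribute. The pinhole model, the box-shaped region, and the "pier" label below are all assumptions for the demo; the patent only requires a corresponding positional relationship between the image and the point cloud.

```python
# Sketch: label point-cloud points via a recognized image region.
import numpy as np

K = np.array([[400.0, 0.0, 160.0], [0.0, 400.0, 120.0], [0.0, 0.0, 1.0]])

def project(points):
    p = points @ K.T
    return p[:, :2] / p[:, 2:3]

regions = [{"label": "pier", "box": (100, 60, 220, 200)}]  # x0, y0, x1, y1

rng = np.random.default_rng(4)
cloud = np.column_stack([rng.uniform(-2, 2, 1000),
                         rng.uniform(-2, 2, 1000),
                         rng.uniform(4, 8, 1000)])
attrs = np.array(["unknown"] * len(cloud), dtype=object)

uv = project(cloud)
for r in regions:
    x0, y0, x1, y1 = r["box"]
    inside = ((uv[:, 0] >= x0) & (uv[:, 0] < x1) &
              (uv[:, 1] >= y0) & (uv[:, 1] < y1))
    attrs[inside] = r["label"]        # assign the recognized attribute

print(int((attrs == "pier").sum()), "points labelled 'pier'")
```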

27-01-2022 publication date

FEATURE QUANTITY CALCULATING METHOD, FEATURE QUANTITY CALCULATING PROGRAM, FEATURE QUANTITY CALCULATING DEVICE, SCREENING METHOD, SCREENING PROGRAM, AND COMPOUND CREATING METHOD

Number: US20220028499A1
Assignee: FUJIFILM Corporation

An object of the present invention is to provide a method, a program, and a device which enable calculation of a feature quantity accurately indicating chemical properties of a target structure. Further, another object of the present invention is to provide a method and a program which enable efficient screening of a pharmaceutical candidate compound using a feature quantity. Further, still another object of the present invention is to provide a method which enables efficient creation of a three-dimensional structure of a pharmaceutical candidate compound using a feature quantity. In a case where target structures have a similarity in the degree of accumulation of probes, this indicates that the target structures have similar chemical properties. That is, target structures having similar feature quantities calculated according to the first aspect exhibit similar chemical properties. Therefore, according to the first aspect, the feature quantity accurately showing the chemical properties of a target structure can be calculated.

1. A feature quantity calculating method comprising: a target structure designating step of designating a target structure formed of a plurality of unit structures having chemical properties; and a feature quantity calculating step of calculating a feature quantity obtained by quantifying, in a three-dimensional space, a degree of accumulation of one or more kinds of probes in a periphery of a three-dimensional structure of the target structure and calculating the feature quantity from the target structure using a generator formed through machine learning, wherein the probe is a structure in which a plurality of points having a real electric charge and generating a van der Waals force are disposed to be separated from each other.

2. The feature quantity calculating method according to claim 1, wherein a compound is designated as the target structure in the target structure designating step, and a first feature quantity which is a feature quantity ...

12-01-2017 publication date

AUGMENTED REALITY BASED COMPONENT REPLACEMENT AND MAINTENANCE

Number: US20170011254A1
Assignee:

Augmented reality (AR) based component replacement and maintenance may include receiving a first wireless signal from a pair of AR glasses worn by a user. An image of a component viewed by the user may be analyzed and compared to a plurality of images of components stored in a database that includes information associated with the plurality of images of the components. Based on a match of the image of the component viewed by the user to one of the plurality of images of the components stored in the database, the component viewed by the user may be identified. An inventory of the identified component may be analyzed to determine whether a supplier includes the identified component in stock, and in response to a determination that the supplier includes the identified component in stock, an estimated time of delivery of the identified component to the user may be determined.

1. An augmented reality (AR) based component replacement and maintenance system comprising: a component identifier, executed by at least one hardware processor, to receive a first wireless signal from a pair of AR glasses worn by a user, wherein the AR glasses include a display viewable by the user and a camera to image a component viewed by the user, wherein the component identifier is to: analyze the image of the component viewed by the user; compare the image of the component viewed by the user to a plurality of images of components stored in a database, wherein the database includes information associated with the plurality of images of the components; and, based on a match of the image of the component viewed by the user to one of the plurality of images of the components stored in the database, identify the component viewed by the user to determine a component detail; and ... whether a supplier includes the identified component in stock, and, in response to a determination that the supplier includes the identified component in stock, an estimated time of delivery of the identified ...

14-01-2016 publication date

3D GESTURE RECOGNITION

Number: US20160011668A1
Assignee: MICROSOFT CORPORATION

The description relates to 3D gesture recognition. One example gesture recognition system can include a gesture detection assembly. The gesture detection assembly can include a sensor cell array and a controller that can send signals at different frequencies to individual sensor cells of the sensor cell array. The example gesture recognition system can also include a gesture recognition component that can determine parameters of an object proximate the sensor cell array from responses of the individual sensor cells to the signals at the different frequencies, and can identify a gesture performed by the object using the parameters.

1. A system, comprising: a gesture detection assembly including a sensor cell array, and a controller configured to send signals at different frequencies to individual sensor cells of the sensor cell array; and a gesture recognition component configured to determine parameters of an object proximate the sensor cell array from responses of the individual sensor cells to the signals at the different frequencies, and identify a gesture performed by the object using the parameters.

2. The system of claim 1, wherein the individual sensor cells are near-field proximity sensors.

3. The system of claim 1, wherein the object is a human body part and the parameters include measurements of a position and a distance of the human body part relative to the sensor cell array over a duration of time.

4. The system of claim 1, further comprising a switching network for sending the signals to the individual sensor cells.

5. The system of claim 4, wherein the switching network comprises switches for directing the signals from a single source to multiple individual sensors of the sensor cell array.

6. The system of claim 5, wherein the switching network is a multilayer switching network.

7. The system of claim 4, wherein the controller is further configured to multiplex the sensor cell array using the switching network.

8. The system of claim 7, ...

12-01-2017 publication date

Systems and methods for a dual modality sensor system

Number: US20170011521A1
Assignee: ELWHA LLC

The present disclosure provides systems and methods for using two imaging modalities for imaging an object at two different resolutions. For example, the system may utilize a first modality (e.g., ultrasound or electromagnetic radiation) to generate image data at a first resolution. The system may then utilize the other modality to generate image data of portions of interest at a second resolution that is higher than the first resolution. In another embodiment, one imaging modality may be used to resolve an ambiguity, such as ghost images, in image data generated using another imaging modality.

14-01-2016 publication date

IMAGE RECOGNIZING APPARATUS, IMAGE RECOGNIZING METHOD, AND STORAGE MEDIUM

Number: US20160012277A1
Assignee:

The accuracy of estimating the category of an object in an image and its region is improved. The present invention detects whether each of plural types of objects is included in an object image, forms a plurality of local regions in a region including the object detected, and calculates a feature quantity of the plurality of local regions formed. Furthermore, the present invention selects a discriminant criterion adapted to the type of the object detected, from a plurality of discriminant criteria for discriminating the plural types of objects, and determines, based on the discriminant criterion selected and the feature quantity calculated, a region of the object detected from the plurality of local regions.

1. An image recognizing apparatus comprising: a detecting unit configured to detect whether each of plural types of objects is included in an object image; a forming unit configured to form a plurality of local regions in a region including the object detected; a first calculating unit configured to calculate a feature quantity of the plurality of local regions formed; and a determining unit configured to select a discriminant criterion adapted to the type of the object detected, from a plurality of discriminant criteria for discriminating the plural types of objects, and to determine, according to the discriminant criterion selected, and based on the feature quantity calculated by the first calculating unit, a region of the object detected from the plurality of local regions.

2. The image recognizing apparatus according to claim 1, further comprising: a second forming unit configured to form a plurality of local regions in whole of the object image; and a second calculating unit configured to calculate a feature quantity of the plurality of local regions formed by the second forming unit in whole of the object image, wherein the determining unit (1) selects a discriminant criterion for discriminating whole of the object image from the plurality of ...

Publication date: 14-01-2016

SYSTEMS, METHODS, AND DEVICES FOR IMAGE MATCHING AND OBJECT RECOGNITION IN IMAGES USING MINIMAL FEATURE POINTS

Number: US20160012304A1

An image matching technique locates feature points in a template image, such as a logo, and then does the same in a test image. Feature points of a template image are determined under various transformations and used to determine a set of composite feature points for each template image. The composite feature points are used to determine whether the template image is present in a test image. A covering set for a template image is used to optimize processing of test images.

1. A computer-implemented method, implemented by hardware in combination with software, the method comprising, for each particular template image of a plurality of template images: (A) determining a first set of feature points associated with said particular template image; (B) determining a second set of feature points associated with said particular template image, said second set of feature points (i) being a subset of said first set of feature points, and (ii) comprising fewer feature points than said first set of feature points; (C) associating said first set of feature points and said second set of feature points with said particular template image.
2. The method of claim 1, wherein said second set of feature points associated with said particular template image comprises a cover set of feature points associated with said particular template image.
3. The method of claim 2, wherein said cover set of feature points comprises a substantially minimal cover set of feature points associated with said particular template image.
4. The method of claim 2, wherein the cover set comprises a subset of said feature points associated with said particular template image wherein one of the feature points in the cover set is associated with substantially every true positive match of feature points associated with said particular template image with feature points associated with a test image.
5. The method of claim 1, further comprising: (D) storing said first set of feature points and said second set of ...
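The abstract does not spell out how the covering set is chosen; a classic greedy set-cover heuristic is one plausible way to get a near-minimal cover. In the sketch below, each feature point "covers" the true-positive matches it participates in, and points are picked greedily until every match is covered. All data is illustrative.

```python
def greedy_cover(matches_by_point: dict[int, set[int]]) -> list[int]:
    """Greedy set cover: repeatedly pick the feature point that covers the
    most still-uncovered matches, until every match is covered."""
    uncovered = set().union(*matches_by_point.values())
    cover = []
    while uncovered:
        best = max(matches_by_point, key=lambda p: len(matches_by_point[p] & uncovered))
        gained = matches_by_point[best] & uncovered
        if not gained:  # remaining matches cannot be covered by any point
            break
        cover.append(best)
        uncovered -= gained
    return cover

# Feature point id -> ids of true-positive matches it participates in.
matches = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {6}}
print(greedy_cover(matches))  # [0, 2]: two points cover all six matches
```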

Publication date: 14-01-2016

COMPOSITE IMAGE GENERATION TO REMOVE OBSCURING OBJECTS

Number: US20160012574A1
Authors: Fang Jun, LI DAQI

Technologies are generally described for methods and systems effective to generate a composite image. The methods may include receiving first image data that includes object data corresponding to an object, and receiving second image data that includes obscuring data. The obscuring data, if displayed on a display, may obscure at least a portion of the object. The methods may also include identifying a first region, which may include the object data, in the first image data, and identifying a second region, which may include the obscuring data, in the second image data. The methods may also include replacing at least part of the second region with at least part of the first region to generate composite image data that may include at least some of the object data, and displaying the composite image on a display.

1. A method to generate a composite image, the method comprising, by a first device: receiving, from a second device, first image data that includes object data, wherein the object data corresponds to an object; receiving second image data that includes obscuring data, wherein the obscuring data corresponds to at least a part of the second device, and the obscuring data, if displayed on a display, would obscure at least a portion of the object; identifying a first region in the first image data, wherein the first region includes the object data; identifying a second region in the second image data, wherein the second region includes the obscuring data; replacing at least part of the second region in the second image data with at least part of the first region, to generate the composite image data, where the composite image data includes at least some of the object data; and displaying the composite image on a display.
2. The method of claim 1, wherein the first device includes a vehicle and the display is inside the vehicle.
3. The method of claim 1, wherein the first device includes a first vehicle, and the second ...
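A minimal numpy sketch of the region-replacement step: the second image's obscured region is overwritten with the matching region from the first image. The region coordinates and images are synthetic; real use would additionally require registration between the two views.

```python
import numpy as np

def composite(first_img, second_img, first_region, second_region):
    """Replace the obscured region of the second image with the
    corresponding region of the first image (regions as y0, x0, y1, x1)."""
    out = second_img.copy()
    fy0, fx0, fy1, fx1 = first_region
    sy0, sx0, sy1, sx1 = second_region
    out[sy0:sy1, sx0:sx1] = first_img[fy0:fy1, fx0:fx1]
    return out

first = np.full((120, 160, 3), 200, dtype=np.uint8)   # view containing the object
second = np.zeros((120, 160, 3), dtype=np.uint8)      # view with obscuring data
result = composite(first, second, (40, 40, 80, 100), (40, 40, 80, 100))
print(result[60, 70], result[10, 10])  # [200 200 200] inside, [0 0 0] outside
```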

Publication date: 14-01-2016

FEATURE TRACKABILITY RANKING, SYSTEMS AND METHODS

Number: US20160012597A1
Author: Wnuk Kamil
Assignee: NANT HOLDINGS IP, LLC

Image feature trackability ranking systems and methods are disclosed. A method of establishing a trackability ranking order from tracked image features within a training video sequence, at a tracking analysis device, includes establishing a tracking region within the training video sequence using a feature detection algorithm. Trajectories of tracked image features within the tracking region are compiled using a feature tracking algorithm. Saliency metrics are assigned to each of the trajectories based on one or more feature property measurements within the tracking region, and a trackability ranking algorithm is determined as a function of the saliency metrics and a defined feature trajectory ranking associated with the training video sequence; the trackability ranking algorithm is usable for ranking, based on trackability, tracked image features within another video sequence.

1. A method of establishing a trackability ranking order from tracked image features within a training video sequence at a tracking analysis device, the method comprising: establishing a tracking region within the training video sequence using a feature detection algorithm; compiling trajectories of tracked image features within the tracking region using a feature tracking algorithm; assigning saliency metrics to each one of the trajectories of tracked image features based on one or more feature property measurements within the tracking region; and determining a trackability ranking algorithm that is a function of the saliency metrics and a defined feature trajectory ranking associated with the training video sequence, the trackability ranking algorithm being usable for ranking, based on trackability, tracked image features within another video sequence.
2. The method of claim 1, wherein establishing the tracking region includes locating the tracking region based on at least one image feature identified using the feature detection algorithm.
3. The method of ...
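The patent leaves the saliency metrics open; the sketch below assumes trajectory length and motion smoothness are two such metrics and combines them with fixed, illustrative weights (a learned ranking function would replace these) to order features by trackability.

```python
import numpy as np

def saliency(trajectory: np.ndarray) -> np.ndarray:
    """Two illustrative saliency metrics for one feature trajectory
    (an (n, 2) array of x, y positions): its length in frames and the
    inverse of its mean frame-to-frame acceleration (smoothness)."""
    steps = np.diff(trajectory, axis=0)
    accel = np.abs(np.diff(steps, axis=0)).mean() if len(steps) > 1 else 0.0
    return np.array([len(trajectory), 1.0 / (1.0 + accel)])

def rank_features(trajectories, weights=np.array([0.05, 1.0])):
    """Score every trajectory with a weighted sum of its saliency metrics
    and return feature indices, best-tracking first."""
    scores = [weights @ saliency(t) for t in trajectories]
    return sorted(range(len(trajectories)), key=lambda i: -scores[i])

smooth = np.column_stack([np.arange(30), np.arange(30)]).astype(float)
jittery = np.cumsum(np.random.default_rng(1).normal(size=(30, 2)), axis=0)
print(rank_features([jittery, smooth]))  # smooth trajectory ranks first: [1, 0]
```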

Publication date: 14-01-2016

IMAGE PROCESSING

Number: US20160012638A1
Author: SKROBANSKI George

Methods and apparatuses are described for generating three-dimensional representations of sets of objects automatically from point cloud data of those sets of objects. The methods and apparatuses partition the space occupied by the objects into a plurality of volumes. If points within a volume approximately coexist on a surface, such a volume is designated a surface volume. Surface volumes with approximately coplanar surfaces are combined to form larger surface volumes. If points within a volume approximately coexist on a line, such a volume is designated a line volume. Line volumes with approximately collinear lines are combined to form larger line volumes.

1. An apparatus for converting point cloud data of a set of one or more objects into three-dimensional representations of the set of objects, comprising: means for partitioning a space occupied by the set of objects into a plurality of volumes; means for determining whether the points in each volume approximately coexist on a surface, wherein such volumes are designated surface volumes and the surface on which points coexist is designated as the surface of the surface volume; means for combining a first plurality of neighbouring surface volumes which approximately coexist along similar surfaces into a surface volume set, wherein the similar surfaces are identified as a larger surface; and means for identifying a surface edge volume set of the surface volume set, the surface edge volume set comprising volumes in the surface volume set that neighbour empty volumes which are intersected by the larger surface.
2. The apparatus of claim 1, further comprising means for generating point cloud data of the set of objects.
3. The apparatus of claim 2, wherein the means for generating point cloud data is a light detecting and ranging (LIDAR) scanner.
4. The apparatus of claim 2, wherein the means for generating point cloud data is a scanner comprising: a camera configured to capture one or more images of a set of objects; and a ...
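A sketch of the surface-volume test, assuming a PCA-style planarity check: within each voxel, if the smallest eigenvalue of the point covariance is a tiny fraction of the total, the points approximately lie on a plane. The voxel size and `flatness` threshold are illustrative.

```python
import numpy as np

def is_surface_volume(points: np.ndarray, flatness: float = 0.01) -> bool:
    """Planarity test for one volume: fit a plane by PCA and accept if the
    out-of-plane variance is a small fraction of the total variance."""
    if len(points) < 3:
        return False
    centered = points - points.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centered.T))  # ascending order
    return eigvals[0] / eigvals.sum() < flatness

def partition(points: np.ndarray, voxel: float):
    """Bucket points into cubic volumes keyed by integer grid index."""
    volumes = {}
    for p in points:
        volumes.setdefault(tuple((p // voxel).astype(int)), []).append(p)
    return {k: np.array(v) for k, v in volumes.items()}

rng = np.random.default_rng(0)
# A nearly flat sheet of points: every occupied voxel should test as a surface.
flat = np.column_stack([rng.random((500, 2)) * 4, 0.002 * rng.random(500)])
for key, pts in sorted(partition(flat, voxel=1.0).items()):
    print(key, is_surface_volume(pts))
```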

Publication date: 10-01-2019

Object ingestion through canonical shapes, systems and methods

Number: US20190012557A1
Assignee: NANT HOLDINGS IP LLC

An object recognition ingestion system is presented. The object ingestion system captures image data of objects, possibly in an uncontrolled setting. The image data is analyzed to determine whether one or more a priori known canonical shape objects match the object represented in the image data. The canonical shape object also includes one or more reference PoVs indicating perspectives from which to analyze objects having the corresponding shape. An object ingestion engine combines the canonical shape object with the image data to create a model of the object. The engine generates a desirable set of model PoVs from the reference PoVs, and then generates recognition descriptors from each of the model PoVs. The descriptors, image data, model PoVs, or other contextually relevant information are combined into key frame bundles having sufficient information to allow other computing devices to recognize the object at a later time.

Publication date: 10-01-2019

Enhanced Contrast for Object Detection and Characterization By Optical Imaging Based on Differences Between Images

Number: US20190012564A1
Authors: HOLZ David S., YANG Hua
Assignee: Leap Motion, Inc.

Enhanced contrast between an object of interest and background surfaces visible in an image is provided using controlled lighting directed at the object. Exploiting the falloff of light intensity with distance, a light source (or multiple light sources), such as an infrared light source, can be positioned near one or more cameras to shine light onto the object while the camera(s) capture images. The captured images can be analyzed to distinguish object pixels from background pixels.

1. A method of capturing and analyzing an image, the method comprising utilizing an image analyzer coupled to at least one camera and at least one light source to: operate the at least one camera to capture a sequence of images including a first image captured at a time when the at least one light source is illuminating a field of view; identify pixels corresponding to an object of interest rather than to a background; based on the identified pixels, construct a 3D model of the object of interest, including a position and shape of the object of interest; and distinguish between (i) foreground image components corresponding to objects located within a proximal zone of the field of view, the proximal zone extending from the at least one camera and having a depth relative thereto of at least twice an expected maximum distance between the objects corresponding to the foreground image components and the at least one camera, and (ii) background image components corresponding to objects located within a distal zone of the field of view, the distal zone being located, relative to the at least one camera, beyond the proximal zone.
2. The method of claim 1, wherein the proximal zone has a depth of at least four times the expected maximum distance.
3. The method of claim 1, wherein the at least one light source is a diffuse emitter.
4. The method of claim 3, wherein the at least one light source is an infrared light-emitting diode and the at least one camera is an infrared-sensitive ...
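The title's "differences between images" suggests comparing a lit frame against an unlit one; the sketch below works under that assumption. Because intensity falls off with distance, nearby object pixels brighten far more than the distant background when the light turns on. All frame data is synthetic.

```python
import numpy as np

def object_mask(lit: np.ndarray, unlit: np.ndarray, thresh: float = 30.0):
    """Pixels that brighten strongly when the light source is on are
    treated as the nearby object; distant background barely changes."""
    diff = lit.astype(np.int16) - unlit.astype(np.int16)
    return diff > thresh

# Synthetic frames: background at ~20 counts, an object patch that jumps
# from 40 to 180 counts when the nearby light source turns on.
unlit = np.full((100, 100), 20, dtype=np.uint8)
lit = unlit.copy()
unlit[30:60, 30:60] = 40
lit[30:60, 30:60] = 180
lit[unlit == 20] = 28  # faint background response to the light

mask = object_mask(lit, unlit)
print(mask.sum(), "object pixels")  # 900 object pixels
```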

Publication date: 14-01-2021

OBJECT DETECTION IN POINT CLOUDS

Number: US20210012089A1

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing point cloud data representing a sensor measurement of a scene captured by one or more sensors to generate an object detection output that identifies locations of one or more objects in the scene. When deployed within an on-board system of a vehicle, the object detection output that is generated can be used to make autonomous driving decisions for the vehicle with enhanced accuracy.

1. A method comprising: obtaining point cloud data representing a sensor measurement of a scene captured by one or more sensors, the point cloud data comprising a plurality of three-dimensional points in the scene; determining, based on the three-dimensional points in the scene, a plurality of two-dimensional proposal locations; generating, for each two-dimensional proposal location, a feature representation from three-dimensional points in the point cloud data that are near the two-dimensional proposal location; and processing the feature representations of the two-dimensional proposal locations using an object detection neural network that is configured to generate an object detection output that identifies objects in the scene.
2. The method of claim 1, wherein each three-dimensional point has respective (x,y) coordinates, and wherein determining, based on the three-dimensional points in the scene, a plurality of two-dimensional proposal locations comprises: sampling a fixed number of two-dimensional proposal locations from among the (x,y) coordinates of the three-dimensional points in the scene.
3. The method of claim 2, wherein sampling the fixed number of two-dimensional proposal locations comprises: sampling the fixed number of two-dimensional proposal locations using farthest point sampling.
4. The method of claim 2, wherein sampling the fixed number of two-dimensional proposal locations comprises: sampling the fixed number of two-dimensional proposal ...
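Claim 3 names farthest point sampling; here is a compact numpy version over the points' (x, y) coordinates, which greedily adds whichever point is farthest from everything already selected. The point cloud is synthetic.

```python
import numpy as np

def farthest_point_sampling(xy: np.ndarray, k: int) -> np.ndarray:
    """Pick k proposal locations from (n, 2) coordinates: start from an
    arbitrary point, then repeatedly take the point whose distance to
    the current selection is largest."""
    selected = [0]
    dist = np.linalg.norm(xy - xy[0], axis=1)
    for _ in range(k - 1):
        nxt = int(dist.argmax())
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(xy - xy[nxt], axis=1))
    return xy[selected]

points = np.random.default_rng(0).random((1000, 3))  # x, y, z
proposals = farthest_point_sampling(points[:, :2], k=8)
print(proposals.round(2))
```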

Publication date: 14-01-2021

METHOD AND SYSTEM FOR 3D CORNEA POSITION ESTIMATION

Number: US20210012105A1
Assignee: Tobii AB

There is provided a method, system, and non-transitory computer-readable storage medium for performing three-dimensional (3D) position estimation for the cornea center of an eye of a user, using a remote eye tracking system, such that the position estimation remains reliable and robust even when the cornea center moves over time in relation to an imaging device associated with the eye tracking system. This is accomplished by generating, using, and optionally also updating, a cornea movement filter (CMF) in the cornea center position estimation.

1. A method for performing three-dimensional, 3D, position estimation for the cornea center of an eye of a user, using a remote eye tracking system, when the cornea center moves over time in relation to an imaging device associated with the eye tracking system, the method comprising: generating, using processing circuitry associated with the eye tracking system, a cornea movement filter, CMF, comprising an estimated initial 3D position and an estimated initial 3D velocity of the cornea center of the eye at a first time instance; predicting, using the processing circuitry: a first two-dimensional, 2D, glint position in an image captured at a second time instance by applying the cornea movement filter, CMF, wherein the predicted first glint position represents a position where a first glint is predicted to be generated by a first illuminator associated with the eye tracking system; and a second 2D glint position in an image captured at the second time instance by applying the cornea movement filter, CMF, wherein the predicted second glint position represents a position where a glint is predicted to be generated by a second illuminator associated with the eye tracking system; and identifying at least one first candidate glint in a first image captured by the imaging device at the second time instance, wherein the first image comprises at least part of the cornea of the eye and at least one glint generated by the first illuminator; ...
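The abstract does not commit to a filter type; a constant-velocity Kalman-style predict step is one natural reading of a "cornea movement filter" holding a 3D position and a 3D velocity. The sketch below covers only that predict step; the measurement update and the glint projection geometry are omitted.

```python
import numpy as np

class CorneaMovementFilter:
    """Toy constant-velocity filter over the cornea center: the state is
    (3D position, 3D velocity); predict() extrapolates it forward in time."""

    def __init__(self, position, velocity):
        self.x = np.concatenate([position, velocity]).astype(float)

    def predict(self, dt: float) -> np.ndarray:
        F = np.eye(6)
        F[:3, 3:] = dt * np.eye(3)  # position += velocity * dt
        self.x = F @ self.x
        return self.x[:3]           # predicted 3D cornea center

cmf = CorneaMovementFilter(position=[0.0, 0.0, 0.6], velocity=[0.01, 0.0, -0.002])
predicted = cmf.predict(dt=1 / 60)  # one camera frame later
print(predicted)                    # ~[0.000167, 0.0, 0.599967] metres
```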

Publication date: 14-01-2021

3D IMAGE SYNTHESIS SYSTEM AND METHODS

Number: US20210012162A1

Aspects of the technology described herein provide a system for improved synthesis of a target domain image from a source domain image. A generator that performs the synthesis is formed based on texture propagation from the first domain to the second domain by making use of a bidirectional generative adversarial network. A framework is provided for training that includes texture propagation with a shape prior constraint.

1-20. (canceled)
21. A non-transitory computer-readable storage device encoded with instructions that, when executed, cause one or more processors of a system to perform operations comprising: receiving a first source image in a source domain and a first target image in a target domain; training a 3D image synthesizing network with the first source image and the first target image, the training being based at least in part on generating geometric structure information of the first source image and the first target image, and providing the geometric structure information as two separate inputs to a dual-arranged synthesizer; and synthesizing a second target image from a second source image via the 3D image synthesizing network.
22. The non-transitory computer-readable storage device of claim 21, wherein the operations further comprise: reducing a bidirectional adversarial loss for the dual-arranged synthesizer having two dual-arranged generators and two corresponding discriminators, the bidirectional adversarial loss being configured to simultaneously reduce a first visual similarity between a synthesized target image and the first target image, and a second visual similarity between a synthesized source image and the first source image.
23. The non-transitory computer-readable storage device of claim 22, wherein the operations further comprise: reducing a combination of the bidirectional adversarial loss and a domain adapted loss for the 3D image synthesizing network, the domain adapted loss being configured to reduce domain discrepancy between ...
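Claim 22's bidirectional adversarial loss is not written out in the abstract; assuming the usual CycleGAN-style formulation with generators G: X to Y and F: Y to X and discriminators D_X, D_Y, one plausible reading is:

```latex
\mathcal{L}_{\mathrm{bi}}(G, F, D_X, D_Y)
  = \mathbb{E}_{y \sim Y}\big[\log D_Y(y)\big]
  + \mathbb{E}_{x \sim X}\big[\log\big(1 - D_Y(G(x))\big)\big]
  + \mathbb{E}_{x \sim X}\big[\log D_X(x)\big]
  + \mathbb{E}_{y \sim Y}\big[\log\big(1 - D_X(F(y))\big)\big]
```

Claim 23's combined objective would then be of the form \(\mathcal{L}_{\mathrm{bi}} + \lambda\,\mathcal{L}_{\mathrm{domain}}\), where the domain adapted term penalizes discrepancy between the source and target feature distributions; the weighting \(\lambda\) is an assumption, not from the source.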

Publication date: 14-01-2021

INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM

Number: US20210012300A1
Assignee: NEC Corporation

Provided is an information processing system including: a detection means for detecting, based on a shape of an object carried in by a customer, a carrying-in form of a product to be purchased by the customer; and a notification information generation means for generating notification information used for providing the customer with a notification in accordance with the carrying-in form.

1. An information processing system comprising: a detection unit that, based on a shape of an object carried in by a customer, detects a carrying-in form of a product to be purchased by the customer; and a notification information generation unit that generates notification information used for providing a notification in accordance with the carrying-in form to the customer.
2. The information processing system according to claim 1, wherein the notification is to urge the customer to move the product to a place in accordance with the carrying-in form.
3. The information processing system according to claim 1, wherein when the carrying-in form is a form of a cart loaded with the product, the notification information generation unit generates notification information used for urging the customer to move the cart to a reading region of an identification information acquisition apparatus used for product registration.
4. The information processing system according to claim 1, wherein when the carrying-in form is a form of a basket or a bag containing the product, the notification information generation unit generates notification information used for urging the customer to move the basket or the bag onto a reading stage having an identification information acquisition apparatus used for product registration.
5. The information processing system according to claim 1, wherein when the carrying-in form is a form of the product alone, the notification information generation unit generates notification information used for urging the customer to place the product on a ...
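Claims 3 through 5 reduce to a lookup from the detected carrying-in form to a message; a trivial sketch follows. The message wording is illustrative, and claim 5's exact target location is truncated in the source, so a generic placeholder is used.

```python
# Illustrative mapping from detected carrying-in form to a notification,
# following claims 3-5; the message texts are paraphrases, not the patent's.
NOTIFICATIONS = {
    "cart": "Please move your cart to the reading region for product registration.",
    "basket_or_bag": "Please place your basket or bag on the reading stage.",
    "product_alone": "Please place the product at the registration point.",
}

def generate_notification(carrying_in_form: str) -> str:
    """Notification information generation unit: pick the message that
    matches the detected carrying-in form."""
    return NOTIFICATIONS.get(carrying_in_form, "Please proceed to a staffed register.")

print(generate_notification("cart"))
```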

Publication date: 14-01-2021

METHOD FOR DETERMINING PROJECTING EDGES OF A TARGET ON AN IMAGE

Number: US20210012528A1

A method for locating a three-dimensional target with respect to a vehicle is disclosed, including capturing an image of the target and, from a three-dimensional mesh of the target and an estimation of the pose of the target, determining a set of projecting edges of the mesh of the target in that pose. The step of determining the projecting edges comprises positioning the mesh of the target according to the pose, projecting the positioned mesh in two dimensions, and scanning the projection of the mesh with a plurality of scanning rows. For each scanning row, a set of segments is defined, each segment corresponding to the intersection of a face of the mesh with the scanning row and being defined by its ends; the relative depths of the ends of the segments are then analyzed, the depth being the position along a third dimension orthogonal to the two dimensions of the projection, in order to select a set of segment end points corresponding to projecting edges of the mesh.

1. A method for locating a three-dimensional target with respect to a vehicle, implemented by a system comprising a sensor suitable for capturing images of the target, and a computer, and comprising: capturing an image of the target; and, from a three-dimensional mesh of the target, and from an estimation of the pose of the target, determining a set of projecting edges of the mesh of the target in said pose, wherein determining the projecting edges of the mesh of the target comprises: a) positioning the mesh of the target according to the pose; b) projecting in two dimensions the mesh so positioned; c) scanning the projection of the mesh with a plurality of scanning rows and, for each scanning row: defining a set of segments, each segment corresponding to the intersection of a face of the mesh with the scanning row and being defined by its ends; and analyzing the relative depths of the ends of the segments, the depth being the position along a third dimension orthogonal to the two ...
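A sketch of the scanning-row step under stated assumptions: triangles are already posed and projected, each vertex carries (x, y, depth), and a segment end point counts as "projecting" if no other segment spans its x at a strictly smaller depth. Degenerate touches (a vertex exactly on the row) are skipped in this sketch.

```python
import numpy as np

def row_segments(tris, y):
    """Intersect each projected triangle with the scan row at height y.
    tris: (n, 3, 3) array of (x, y, depth) vertices. Returns a list of
    ((x_left, d_left), (x_right, d_right)) segments."""
    segments = []
    for tri in tris:
        hits = []
        for a, b in ((0, 1), (1, 2), (2, 0)):
            ya, yb = tri[a, 1], tri[b, 1]
            if (ya - y) * (yb - y) < 0:          # edge crosses the row
                t = (y - ya) / (yb - ya)
                x = tri[a, 0] + t * (tri[b, 0] - tri[a, 0])
                d = tri[a, 2] + t * (tri[b, 2] - tri[a, 2])
                hits.append((x, d))
        if len(hits) == 2:
            hits.sort()                          # left end first
            segments.append(tuple(hits))
    return segments

def depth_at(seg, x):
    """Linearly interpolate a segment's depth at abscissa x."""
    (x0, d0), (x1, d1) = seg
    t = 0.0 if x1 == x0 else (x - x0) / (x1 - x0)
    return d0 + t * (d1 - d0)

def projecting_points(tris, y, eps=1e-6):
    """Keep segment ends not hidden behind a nearer segment: these are
    candidate projecting-edge points on this scan row."""
    segs = row_segments(tris, y)
    points = []
    for i, seg in enumerate(segs):
        for x, d in seg:
            occluded = any(
                j != i and s[0][0] < x < s[1][0] and depth_at(s, x) < d - eps
                for j, s in enumerate(segs)
            )
            if not occluded:
                points.append((x, d))
    return points

# Two overlapping triangles; the nearer one (smaller depth) hides the other.
tris = np.array([
    [[1, 0, 5], [3, 0, 5], [2, 4, 5]],    # far triangle (depth 5)
    [[0, -1, 2], [4, -1, 2], [2, 3, 2]],  # near triangle (depth 2) occludes it
], dtype=float)
print(projecting_points(tris, y=1.0))     # only the near triangle's ends survive
```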

Publication date: 09-01-2020

METHODS AND SYSTEMS FOR TRAINING AN OBJECT DETECTION ALGORITHM USING SYNTHETIC IMAGES

Number: US20200012846A1
Assignee: SEIKO EPSON CORPORATION

A non-transitory computer readable medium embodies instructions that cause one or more processors to perform a method. The method includes: (A) receiving a selection of a 3D model stored in one or more memories, the 3D model corresponding to an object, and (B) setting a camera parameter set for a camera for use in detecting a pose of the object in a real scene. The method also includes (C) receiving a selection of data representing a view range, (D) generating at least one 2D synthetic image based on the camera parameter set by rendering the 3D model in the view range, (E) generating training data using the at least one 2D synthetic image to train an object detection algorithm, and (F) storing the generated training data in one or more memories.

1. A non-transitory computer readable medium that embodies instructions that cause one or more processors to perform a method comprising: (A) receiving a selection of a 3D model stored in one or more memories, the 3D model corresponding to an object; (B) setting a camera parameter set for a camera for use in detecting a pose of the object in a real scene by: receiving information identifying an object detection device including the camera, and acquiring, based at least in part on the information identifying the object detection device, the camera parameter set for the object detection device from a plurality of camera parameter sets stored in one or more memories, wherein each camera parameter set of the plurality of camera parameter sets is associated in the one or more memories with at least one object detection device of a plurality of different object detection devices; and (C) generating at least one 2D synthetic image based at least on the camera parameter set by rendering the 3D model in a view range for generating training data.
2. The non-transitory computer readable medium according to claim 1, wherein the method further comprises: (D) generating training data using the at least one 2D synthetic image to train an object ...
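A sketch of the "view range" idea: sample azimuth and elevation poses on a sphere around the 3D model and build the corresponding look-at camera rotations. The rendering call itself is out of scope here, so the sketch stops at the pose matrices; all ranges and the sampling density are illustrative.

```python
import numpy as np

def look_at(eye, target=np.zeros(3), up=np.array([0.0, 0.0, 1.0])):
    """Build a world-to-camera rotation whose forward axis looks from eye
    at target (assumes eye is not directly above or below the target)."""
    fwd = target - eye
    fwd = fwd / np.linalg.norm(fwd)
    right = np.cross(fwd, up)
    right = right / np.linalg.norm(right)
    down = np.cross(fwd, right)
    return np.stack([right, down, fwd])

def view_range_poses(radius, az_range, el_range, steps=4):
    """Sample camera positions over the azimuth/elevation view range."""
    poses = []
    for az in np.linspace(*az_range, steps):
        for el in np.linspace(*el_range, steps):
            eye = radius * np.array([
                np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
            poses.append((look_at(eye), eye))
    return poses

poses = view_range_poses(radius=0.5,
                         az_range=(-np.pi / 4, np.pi / 4),
                         el_range=(np.pi / 8, np.pi / 3))
print(len(poses), "poses; first camera position:", poses[0][1].round(3))
```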

Publication date: 09-01-2020

OBJECT DETECTION APPARATUS, CONTROL METHOD IMPLEMENTED BY OBJECT DETECTION APPARATUS, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR STORING PROGRAM

Number: US20200012847A1
Assignee: FUJITSU LIMITED

An object detection apparatus includes: a camera configured to capture an image of an object; one or more sensor devices, each configured to detect an environmental change; and a processor. The processor is configured to (a) execute a determining process that includes, when any one of the sensor devices detects an environmental change, detecting a search starting point of the object based on at least one of a time corresponding to the detection and detection information from the sensor device, and (b) execute an entry registering process that includes registering an entry with reference information when the object is detected, the entry including at least one of the time and the detection information, and a direction in which the object is detected. The determining process is configured to determine the direction toward which the camera is to be turned based on the reference information.

1. An object detection apparatus comprising: a camera configured to capture an image of an object; one or more sensor devices, each of the one or more sensor devices being configured to detect an environmental change; and a processor configured to: execute a determining process that includes, in a case where any one of the one or more sensor devices detects an environmental change, detecting a search starting point of the object based on at least one of a time corresponding to the detection and detection information from the sensor device, the search starting point corresponding to a direction toward which the camera is turned; and execute an entry registering process that includes registering an entry with reference information when the object is detected, the entry including at least one of the time and the detection information and a direction in which the object is detected, wherein the determining process is configured to determine the direction toward which the camera is to be turned based on the reference information.
2. The object detection apparatus according ...
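A minimal sketch of the determining process, assuming the reference information is a table of past detections keyed by sensor and time of day; the sensor names, pan angles, and keying scheme are all illustrative, not from the source.

```python
# Illustrative reference information: past detections registered as
# (sensor id, hour of day) -> camera direction in which the object was found.
reference = {
    ("door_sensor", 8): 90.0,    # pan angle in degrees
    ("door_sensor", 18): 270.0,
    ("window_sensor", 8): 180.0,
}

def determine_direction(sensor_id: str, hour: int, default: float = 0.0) -> float:
    """Determining process: choose where to turn the camera based on which
    sensor fired and when, falling back to a default pan angle."""
    return reference.get((sensor_id, hour), default)

print(determine_direction("door_sensor", 8))   # 90.0
```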

Publication date: 03-02-2022

MACHINE LEARNING CONTROL OF OBJECT HANDOVERS

Number: US20220032454A1

A robotic control system directs a robot to take an object from a human grasp by obtaining an image of a human hand holding an object, estimating the pose of the human hand and the object, and determining a grasp pose for the robot that will not interfere with the human hand. In at least one example, a depth camera is used to obtain a point cloud of the human hand holding the object. The point cloud is provided to a deep network that is trained to generate a grasp pose for a robotic gripper that can take the object from the human's hand without pinching or touching the human's fingers.

1. A processor, comprising one or more computers comprising one or more processors to: obtain a point cloud that represents a hand holding an object; determine, from a first portion of the point cloud, a pose of the object; determine, from a second portion of the point cloud, a pose of the hand; generate a set of grasp poses that allow a robot to grasp the object; select, from the set of grasp poses, based at least in part on the pose of the hand, a target grasp pose that does not interfere with the hand; and cause the robot to perform the target grasp pose.
2. The processor of claim 1, wherein the pose of the hand identifies a plurality of segments and joint angles.
3. The processor of claim 1, wherein the one or more processors: obtain a three-dimensional image from a depth camera; and produce the point cloud from the three-dimensional image.
4. The processor of claim 1, wherein: the set of grasp poses are poses for a robotic gripper of the robot; and the robotic gripper has two opposed digits that perform the grasp.
5. The processor of claim 1, wherein the pose of the object includes three angles that indicate an orientation of the object and information that identifies a position of the object.
6. The processor of claim 1, wherein the robot takes the object from the hand.
7. A system, comprising: one or more processors coupled to computer-readable media; determine, from a three- ...
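A geometric sketch of the selection step. The patent uses a trained deep network; here a simple clearance test stands in: candidate grasp positions are kept only if they stay farther than a margin from every hand point. Poses are reduced to 3D positions for brevity, and all data is synthetic.

```python
import numpy as np

def select_grasp(candidates, hand_points, clearance=0.05):
    """Return the first candidate grasp position whose distance to every
    hand point exceeds the clearance margin (None if none qualifies)."""
    for grasp in candidates:
        dists = np.linalg.norm(hand_points - grasp, axis=1)
        if dists.min() > clearance:
            return grasp
    return None

rng = np.random.default_rng(0)
hand = rng.normal([0.0, 0.1, 0.4], 0.02, size=(200, 3))   # hand point cloud
grasps = np.array([[0.0, 0.1, 0.4],     # collides with the hand
                   [0.0, -0.1, 0.4]])   # approaches from the free side
print(select_grasp(grasps, hand))       # -> [ 0.  -0.1  0.4]
```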

Publication date: 09-01-2020

THREE-DIMENSIONAL BODY SCANNING AND APPAREL RECOMMENDATION

Number: US20200013110A1
Author: Andon Christopher

Devices, systems, and methods include a three-dimensional (3D) scanning element, an electronic data storage configured to store a database including fields for 3D scan data and demographic information, a processor, and a user interface. In an example, the processor obtains 3D scan data of a body part of a subject from the 3D scanning element, analyzes the 3D scan data for incomplete regions, generates a composite 3D image from 3D scan data in the database based on similarities of demographic information, and overlays composite 3D image regions corresponding to the incomplete regions on the 3D scan data.

1. (canceled)
2. A system, comprising: a three-dimensional (3D) scanning element; an electronic data storage configured to store a database including fields for 3D scan data and activity data; a processor, coupled to the 3D scanning element and the electronic data storage, configured to: determine changes between a first 3D scan data obtained at a first time and a second 3D scan data obtained at a second time; obtain activity information from the database; and determine a difference between the changes as determined and anticipated changes based on the activity information; and a user interface, coupled to the processor, configured to present information indicative of the difference between the changes and the anticipated changes.
3. The system of claim 2, wherein the information includes a recommendation to change at least one of an article of equipment or future user activities.
4. The system of claim 2, wherein the first and second 3D scan data relates to a body part of a user and wherein the changes reflect a change in a physiological property of the body part.
5. The system of claim 4, wherein the anticipated change is based on an article of equipment being utilized by the user in activities related to the activity information.
6. The system of claim 2, wherein the first and second 3D scan data relates to an article of equipment and wherein the changes reflect wear on the ...
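A sketch of the overlay step with height maps standing in for 3D scans: missing regions (NaNs) in the subject's scan are filled from a composite averaged over demographically similar scans. The arrays and the averaging rule are illustrative assumptions.

```python
import numpy as np

def fill_incomplete(scan: np.ndarray, similar_scans: np.ndarray) -> np.ndarray:
    """Overlay: average the similar scans into a composite, then copy the
    composite into the scan's incomplete (NaN) regions only."""
    composite = similar_scans.mean(axis=0)
    out = scan.copy()
    missing = np.isnan(out)
    out[missing] = composite[missing]
    return out

rng = np.random.default_rng(0)
scan = rng.random((8, 8))
scan[2:4, 5:8] = np.nan                      # occluded / incomplete region
similar = rng.random((5, 8, 8))              # scans with matching demographics
completed = fill_incomplete(scan, similar)
print(np.isnan(completed).any())             # False: the scan is now complete
```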
