Total found: 425. Displayed: 100.
06-01-2022 publication date

Apparatus and Method For Three-Dimensional Object Recognition

Number: US20220004740A1
Assignee:

The present application relates to a method for recognising at least one object in a three-dimensional scene, the method including, in an electronic processing device: determining a plurality of two-dimensional images of the scene, the images at least partially including the at least one object; determining a plurality of two-dimensional segmentations of the at least one object, the two-dimensional segmentations corresponding to the two-dimensional images; generating a three-dimensional representation of the scene using the images; generating a mapping indicative of a correspondence between the images and the representation; and using the mapping to map the plurality of segmentations to the three-dimensional representation, to thereby recognise the at least one object in the scene. 1. A method for recognising at least one object in a three-dimensional scene, the method including, in an electronic processing device: determining a plurality of two-dimensional images of the scene, the images at least partially including the at least one object; determining a plurality of two-dimensional segmentations of the at least one object, the two-dimensional segmentations corresponding to the two-dimensional images; generating a three-dimensional representation of the scene using the images; generating a mapping indicative of a correspondence between the images and the representation; and using the mapping to map the plurality of segmentations to the three-dimensional representation, to thereby recognise the at least one object in the scene. 2. A method according to claim 1, wherein the method includes, in an electronic processing device: determining a pose estimation for each of the two-dimensional images; generating a plurality of two-dimensional representations of the three-dimensional representation using the pose estimations, each two-dimensional representation corresponding to a respective two-dimensional image; and generating the mapping using the two-dimensional ...
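The mapping step is the crux of the abstract above. A toy sketch of the idea — letting per-view segmentation masks vote on each point of the 3-D representation — might look like the following; the orthographic `project()` and all data shapes are invented stand-ins, not the patent's actual image-to-model mapping.

```python
def project(point, view):
    """Toy orthographic projection: drop the axis named by `view` (0, 1 or 2)."""
    x, y, z = point
    return {0: (y, z), 1: (x, z), 2: (x, y)}[view]

def recognise(points_3d, masks, views):
    """Label a 3-D point as part of the object if a majority of its 2-D
    projections land inside the corresponding segmentation mask."""
    labels = {}
    for p in points_3d:
        votes = sum(project(p, v) in masks[i] for i, v in enumerate(views))
        labels[p] = votes * 2 > len(views)   # strict majority vote
    return labels
```

A real pipeline would use the estimated camera poses of claim 2 in place of the fixed axis-drop projection.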

07-01-2021 publication date

BENDING ESTIMATION DEVICE, BENDING ESTIMATION METHOD, AND PROGRAM

Number: US20210003390A1

Even when a missing portion occurs in a solid data set on a columnar structure, a deflection value and the accuracy of that value are correctly estimated according to the extent of the missing portion and the like. A measurement accuracy estimation unit () is included that: calculates a deflection of a columnar structure and an extent of a missing portion, from a solid data set on the columnar structure; calculates an accuracy assessment indicator for the deflection that is acquirable when a plurality of missing portion patterns occur on a virtual basis, based on a plurality of solid data sets in each of which the calculated extent of the missing portion is smaller than a preset threshold value, the accuracy assessment indicator being calculated for each missing portion pattern; and calculates an accuracy of the deflection calculated from the solid data set, based on the calculated accuracy assessment indicator for each missing portion pattern, and based on the calculated extent of the missing portion in the solid data set. 1. A deflection estimation device comprising: a processor; and a storage medium having computer program instructions stored therein that, when executed by the processor, perform to: calculate a deflection of a columnar structure and an extent of a missing portion, from a solid data set on the columnar structure; calculate an accuracy assessment indicator for the deflection that is acquirable when a plurality of missing portion patterns occur on a virtual basis, based on a plurality of the solid data sets in each of which the extent of the missing portion is smaller than a preset threshold value, the accuracy assessment indicator being calculated for each of the missing portion patterns; and calculate an accuracy of the deflection calculated from the solid data set, based on the accuracy assessment indicator for each missing portion pattern, and based on the extent of the missing portion in the solid data set. 2. The ...
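The accuracy-assessment idea can be illustrated with a deliberately simplified model: delete different "missing portion patterns" from a sample, recompute the deflection each time, and report the spread of the estimates as the indicator. The max-|z| deflection and the index-list pattern format are assumptions for illustration, not the patent's definitions.

```python
import statistics

def deflection(points):
    """Simplified deflection: largest |z| over (index, z) samples."""
    return max(abs(z) for _, z in points)

def accuracy_indicator(points, patterns):
    """patterns: lists of sample indices to delete, one list per virtual
    missing-portion pattern. Returns the spread of the re-estimates."""
    estimates = []
    for pattern in patterns:
        kept = [p for i, p in enumerate(points) if i not in set(pattern)]
        estimates.append(deflection(kept))
    return statistics.pstdev(estimates)
```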

13-01-2022 publication date

MULTI-USER INTELLIGENT ASSISTANCE

Number: US20220012470A1
Assignee: Microsoft Technology Licensing, LLC

An intelligent assistant records speech spoken by a first user and determines a self-selection score for the first user. The intelligent assistant sends the self-selection score to another intelligent assistant, and receives a remote-selection score for the first user from the other intelligent assistant. The intelligent assistant compares the self-selection score to the remote-selection score. If the self-selection score is greater than the remote-selection score, the intelligent assistant responds to the first user and blocks subsequent responses to all other users until a disengagement metric of the first user exceeds a blocking threshold. If the self-selection score is less than the remote-selection score, the intelligent assistant does not respond to the first user. 1. An intelligent assistant computer , comprising:a logic machine; anda storage machine holding instructions executable by the logic machine to:recognize another intelligent assistant computer;record speech spoken by a first user;determine a self-selection score for the first user based on the speech spoken by the first user;receive a remote-selection score for the first user from the other intelligent assistant computer;if the self-selection score is greater than the remote-selection score, respond to the first user, determine a disengagement metric of the first user based on recorded speech spoken by the first user, and block subsequent responses to all other users until the disengagement metric of the first user exceeds a blocking threshold;if the self-selection score is less than the remote-selection score, do not respond to the first user; andstop blocking subsequent responses to another user responsive to a new self-selection score for the first user being less than a new remote-selection score for the first user.2. 
The intelligent assistant computer of claim 1 , wherein the self-selection score is determined based further on a signal-to-noise ratio of recorded speech spoken by the first user. ...
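The arbitration rule described above is simple enough to sketch directly: two assistants exchange selection scores for the same speaker, the higher score answers, and the winner blocks responses to other users until the engaged user disengages. The score values and the 0.5 blocking threshold below are illustrative assumptions.

```python
class Assistant:
    def __init__(self, blocking_threshold=0.5):
        self.blocking_threshold = blocking_threshold
        self.engaged_user = None   # user whose turn blocks all others

    def arbitrate(self, user, self_score, remote_score):
        """Return True if this device should respond to `user`."""
        if self.engaged_user is not None and user != self.engaged_user:
            return False           # subsequent responses to others are blocked
        if self_score > remote_score:
            self.engaged_user = user
            return True
        return False               # the other device won the arbitration

    def update_disengagement(self, metric):
        """Unblock other users once the engaged user's metric is high enough."""
        if metric > self.blocking_threshold:
            self.engaged_user = None
```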

07-01-2021 publication date

UNIFIED SHAPE REPRESENTATION

Number: US20210004645A1
Assignee:

Techniques are described herein for generating and using a unified shape representation that encompasses features of different types of shape representations. In some embodiments, the unified shape representation is a unicode comprising a vector of embeddings and values for the embeddings. The embedding values are inferred, using a neural network that has been trained on different types of shape representations, based on a first representation of a three-dimensional (3D) shape. The first representation is received as input to the trained neural network and corresponds to a first type of shape representation. At least one embedding has a value dependent on a feature provided by a second type of shape representation and not provided by the first type of shape representation. The value of the at least one embedding is inferred based upon the first representation and in the absence of the second type of shape representation for the 3D shape. 1. A method comprising:receiving, by a computing system, a first representation of a three-dimensional (3D) shape, wherein the first representation corresponds to a first type of shape representation; andgenerating, by the computing system and using a neural network that has been trained on different types of shape representations, a unicode representation for the 3D shape, wherein the unicode representation comprises a vector of embeddings and values for the embeddings, the values being inferred by the neural network based on the first representation, wherein the vector includes at least one embedding whose value is dependent on a feature provided by a second type of shape representation and not provided by the first type of shape representation, and wherein the generating comprises inferring the value of the at least one embedding based upon the first representation and in the absence of the second type of shape representation for the 3D shape.2. 
The method of claim 1 , wherein the neural network has been trained on at least the ...

04-01-2018 publication date

Face model matrix training method and apparatus, and storage medium

Number: US20180005017A1
Assignee: Tencent Technology Shenzhen Co Ltd

Face model matrix training method, apparatus, and storage medium are provided. The method includes: obtaining a face image library, the face image library including k groups of face images, and each group of face images including at least one face image of at least one person, k>2, and k being an integer; separately parsing each group of the k groups of face images, and calculating a first matrix and a second matrix according to parsing results, the first matrix being an intra-group covariance matrix of facial features of each group of face images, and the second matrix being an inter-group covariance matrix of facial features of the k groups of face images; and training face model matrices according to the first matrix and the second matrix.
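The two matrices named in the abstract are standard statistics, so they can be sketched without any face-parsing machinery: the first matrix is the within-group covariance of each person's feature vectors, averaged over the k groups, and the second is the covariance of the group means. Feature vectors are plain lists here; the grouping into persons is assumed given.

```python
def mean(vectors):
    n, d = len(vectors), len(vectors[0])
    return [sum(v[i] for v in vectors) / n for i in range(d)]

def covariance(vectors, centre):
    """Average outer product of (v - centre), returned as a d x d list."""
    d, n = len(centre), len(vectors)
    cov = [[0.0] * d for _ in range(d)]
    for v in vectors:
        diff = [v[i] - centre[i] for i in range(d)]
        for i in range(d):
            for j in range(d):
                cov[i][j] += diff[i] * diff[j] / n
    return cov

def intra_and_inter(groups):
    """groups: k groups of facial-feature vectors (k >= 2). Returns the
    intra-group covariance averaged over groups (first matrix) and the
    covariance of the group means (second matrix)."""
    k = len(groups)
    group_means = [mean(g) for g in groups]
    d = len(group_means[0])
    intra = [[0.0] * d for _ in range(d)]
    for g, m in zip(groups, group_means):
        c = covariance(g, m)
        for i in range(d):
            for j in range(d):
                intra[i][j] += c[i][j] / k
    inter = covariance(group_means, mean(group_means))
    return intra, inter
```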

02-01-2020 publication date

SYSTEMS AND METHODS OF 3D SCENE SEGMENTATION AND MATCHING FOR ROBOTIC OPERATIONS

Number: US20200005072A1
Assignee:

A method and system, the method including receive image data representations of a set of images of a physical asset; receive a data model of at least one asset, the data model of each of the at least one assets including a semantic description of the respective modeled asset and at least one operation associated with the respective modeled asset; determine a match between the received image data and the data model of one of the at least one assets based on a correspondence therebetween; generate, for the data model determined to be a match with the received image data, an operation plan based on the at least one operation included in matched data model; execute, in response to the generation of the operation plan, the generated operation plan by the physical asset. 1. A system comprising:a memory storing executable program instructions therein; anda processor in communication with the memory, the processor operative to execute the program instructions to:receive image data representations of a set of images of a physical asset;receive a data model of at least one asset, the data model of each of the at least one assets including a semantic description of the respective modeled asset and at least one operation associated with the respective modeled asset;determine a match between the received image data and the data model of one of the at least one assets based on a correspondence therebetween;generate, for the data model determined to be a match with the received image data, an operation plan based on the at least one operation included in matched data model;execute, in response to the generation of the operation plan, the generated operation plan by the physical asset.2. 
The system of claim 1 , wherein the determining of the match between the received image data and the data model of one of the at least one assets based on a correspondence therebetween comprises:extracting features from the received image data; andcomparing the features extracted from the received ...
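The match-then-plan flow of claim 1 reduces to: extract features from the images, score each candidate data model by correspondence, and return the best model's operations as the plan. A real system would extract visual features; feature sets with a Jaccard overlap score stand in here, and the model contents are invented examples.

```python
def best_match(image_features, models):
    """models: {name: (feature_set, operations)}. Pick the model whose
    features overlap the image-derived features the most, and return it
    together with its operation plan."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    name = max(models, key=lambda m: jaccard(image_features, models[m][0]))
    return name, models[name][1]
```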

20-01-2022 publication date

THREE-DIMENSIONAL POSITION AND POSTURE RECOGNITION DEVICE AND METHOD

Number: US20220019762A1
Assignee:

A three-dimensional position and posture recognition device speeds estimation of a position posture and a gripping coordinate posture of a gripping target product. The device includes: a sensor unit configured to measure a distance between an image of an object and the object; and a processing unit configured to calculate an object type included in the image, read model data of each object from the external memory, and create structured model data having a resolution set for each object from the model data, generate measurement point cloud data of a plurality of resolutions from information on a distance between an image of the object and the object, perform a K neighborhood point search using the structured model data and the measurement point cloud data, and perform three-dimensional position recognition processing of the object by rotation and translation estimation regarding a point obtained from the K neighborhood point search. 1. A three-dimensional position recognition device , comprising:an external memory configured to store model data of each object;a sensor unit configured to measure a distance between an image of an object and the object; anda processing unit connected to the external memory and the sensor unit and configured tocalculate an object type included in the image based on information from the sensor unit,read model data of each object from the external memory according to the object type, and create structured model data having a resolution set for each object from the model data,generate measurement point cloud data of a plurality of resolutions from information on a distance between an image of the object and the object from the sensor unit,perform a K neighborhood point search using the structured model data and the measurement point cloud data of each resolution of the measurement point cloud data of the plurality of resolutions, andperform three-dimensional position recognition processing of the object by rotation and translation ...
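The K-neighbourhood point search at the centre of the claim can be shown in its simplest, brute-force form: for every measured point, find the k closest points of the model. The device's "structured model data" implies a tree or grid index for speed; the point sets and k value below are made up for illustration.

```python
import math

def k_nearest(query, points, k):
    """Return the k model points closest to `query` (brute force)."""
    return sorted(points, key=lambda p: math.dist(p, query))[:k]

def match(measurement_cloud, model_points, k=2):
    """For every measured point, find its k nearest model points, the
    input needed by the subsequent rotation/translation estimation."""
    return {q: k_nearest(q, model_points, k) for q in measurement_cloud}
```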

12-01-2017 publication date

Field-invariant quantitative magnetic-resonance signatures

Number: US20170011255A1
Assignee: Tesla Health Inc

A system that determines an invariant magnetic-resonance (MR) signature of a biological sample is disclosed. During operation, the system determines a magnetic-resonance (MR) model of voxels in a biological sample based on differences between MR signals associated with the voxels in multiple scans and simulated MR signals. The MR signals are measured or captured by an MR scanner in the system during multiple MR scans, and based on scanning instructions, and the simulated MR signals for the biological sample are generated using the MR model and the scanning instructions. Moreover, the system iteratively modifies the scanning instructions (including a magnetic-field strength and/or a pulse sequence) in the MR scans based on the differences until a convergence criterion is achieved. Then, the system stores, in memory, an identifier of the biological sample and a magnetic-field-strength-invariant MR signature of the biological sample that is associated with the MR model.

14-01-2016 publication date

Image analysis for making animal measurements including 3-d image analysis

Number: US20160012278A1
Author: Mark Dunn, Thomas Banhazi
Assignee: Plf Agritech Pty Ltd

A computer-implemented image analysis process including accessing image data and range data representing an image of an animal, measuring an object volume from the range data and estimating the animal's weight using the dimensions representing the animal's size. A database containing relative volume and weight information can then be used to accurately predict the animal's weight from calculating its volume.
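The two steps of the abstract — integrate range data into a volume, then look the volume up in a reference database — can be sketched as follows. The height-map volume integral and the linearly interpolated (volume, weight) table are simplifying assumptions; the actual regression the patent uses is not specified here.

```python
def object_volume(height_map, pixel_area=1.0):
    """Sum per-pixel heights times the pixel footprint to get a volume."""
    return sum(sum(row) for row in height_map) * pixel_area

def estimate_weight(volume, reference):
    """reference: sorted list of (volume, weight) pairs; linear
    interpolation between the two bracketing entries."""
    for (v0, w0), (v1, w1) in zip(reference, reference[1:]):
        if v0 <= volume <= v1:
            t = (volume - v0) / (v1 - v0)
            return w0 + t * (w1 - w0)
    raise ValueError("volume outside reference table")
```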

11-01-2018 publication date

AUTOMATIC ANALYSER

Number: US20180012375A1
Author: TACHIBANA Shinji
Assignee:

A two-dimensional code is attached to a location of a reagent storage unit which is visually recognizable from the outside, and a coordinate position of the two-dimensional code in a coordinate system of the two-dimensional code and coordinate information of an installation position of a reagent bottle are held. After that, an image of the two-dimensional code is captured by a portable terminal so that a coordinate system of an image capture unit of the portable terminal is converted into the coordinate system of the two-dimensional code using AR technology. The coordinate information of the installation position of the reagent bottle in the coordinate system of the two-dimensional code is regarded as positional coordinates in the captured image on the basis of the conversion, thereby ascertaining the position of the reagent bottle on the captured image and displaying the ascertained position on a display unit. 1. An automatic analyser that dispenses a sample and a reagent to each of a plurality of reaction vessels to react the sample and the reagent with each other and measures a liquid obtained by reacting the sample and the reagent with each other , the automatic analyser comprising:a reagent storage unit that holds a reagent bottle accommodating the reagent;a two-dimensional code which is attached to any position within the automatic analyser;an image capture unit that captures images of the reagent storage unit and the two-dimensional code;an information acquisition unit that acquires coordinate position information of the two-dimensional code within the automatic analyser, coordinate position information of the reagent bottle held within the reagent storage unit, and reagent information of the held reagent;an image processing unit that identifies the two-dimensional code captured by the image capture unit to specify coordinate position coordinate in the captured image of the two-dimensional code, converts a coordinate system of the captured image obtained by 
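The coordinate conversion at the heart of the analyser is a rigid transform: once the 2-D code is located in the captured image, a rotation plus translation converts bottle positions from the code's coordinate system into image pixels. In practice the AR layer estimates this transform from the code's appearance; here the angle, offset and scale are assumed known.

```python
import math

def code_to_image(point, angle, offset, scale=1.0):
    """Rigid transform from the 2-D code's frame into the image frame:
    rotate by `angle`, scale, then translate by `offset` (pixels)."""
    x, y = point
    c, s = math.cos(angle), math.sin(angle)
    return (scale * (c * x - s * y) + offset[0],
            scale * (s * x + c * y) + offset[1])
```

Applying this to each stored bottle coordinate yields the on-image position that the display unit highlights.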
...

14-01-2021 publication date

METHOD AND SOFTWARE SYSTEM FOR MODELING, TRACKING AND IDENTIFYING ANIMATE BEINGS AT REST AND IN MOTION AND COMPENSATING FOR SURFACE AND SUBDERMAL CHANGES

Number: US20210012513A1
Author: Park Junho
Assignee:

Methods and systems for creating 3D models of biological entities from different types of sensor data are provided. For instance, these methods can track an underlying network of nodes corresponding to blood vessel networks in 3 dimensions. Such methods adapt models to compensate for changes on the surface and in the structure that continuously occur in living entities, such as when blood flows, hands stretch, heads turn, and the like. These 3D models can then be used to perform functions such as motion tracking, biometric authentication, and visualizations in air (such as with Augmented and Virtual Reality) using 3D models as positional references. 1. A method for a system to perform 3D model creation and matching , the method comprising:building a model for an object, the model including structure information about the object;generating a first probability unit of the model, wherein the first probability unit includes a first probability distribution of a state of the model and a second probability distribution of a state of the object;comparing the first probability unit with a second probability unit generated through observed data of the object, via a matching path based on the structure information;generating a related probability distribution associated with the matching path; andpredicting the state of the object based on the related probability distribution.2. The method of claim 1 , wherein the state of the model includes position claim 1 , orientation claim 1 , 6 degrees of freedom (6DoF) claim 1 , velocity claim 1 , acceleration claim 1 , color claim 1 , and/or element types.3. The method of claim 1 , wherein an identified part in the model is associated with the first probability unit claim 1 , wherein the identified part corresponds to a part in the object with respect to the related probability distribution.4. 
The method of claim 1 , wherein the comparing is performed via a plurality of matching paths claim 1 , including the matching path based on the ...

09-01-2020 publication date

INTELLIGENT ASSISTANT

Number: US20200012906A1
Assignee: Microsoft Technology Licensing, LLC

Examples are disclosed herein that relate to entity tracking. One examples provides a computing device comprising a logic processor and a storage device holding instructions executable by the logic processor to receive image data of an environment including a person, process the image data using a face detection algorithm to produce a first face detection output at a first frequency, determine an identity of the person based on the first face detection output, and process the image data using another algorithm that uses less computational resources of the computing device than the face detection algorithm. The instructions are further executable to track the person within the environment based on the tracking output, and perform one or more of updating the other algorithm using a second face detection output, and updating the face detection algorithm using the tracking output. 1. A computing device , comprising:a logic processor; and receive image data of an environment including a person;', 'process the image data using a face detection algorithm to produce a first face detection output at a first frequency;', 'select at least one tracking algorithm that uses less computational resources of the computing device than the face detection algorithm, and produces a tracking output at a second frequency greater than the first frequency;', 'process the image data using the at least one tracking algorithm; and', 'track the person within the environment based on the tracking output produced by the at least one tracking algorithm., 'a storage device holding instructions executable by the logic processor to2. The computing device of claim 1 , wherein the instructions are executable to select the at least one tracking algorithm based on available computing resources of the computing device.3. The computing device of claim 1 , wherein the instructions are executable to select the at least one tracking algorithm based on a battery life condition associated with the computing ...
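The two-rate scheme described above — an expensive face detector at a low frequency, a cheap tracker at a higher one, with detections re-seeding the tracker — interleaves naturally over a frame stream. The detector and tracker below are trivial stand-ins, and the every-5th-frame cadence is an assumed parameter.

```python
def process_stream(frames, detect, track, detect_every=5):
    """Yield one position estimate per frame: a slow `detect` pass every
    `detect_every` frames, a cheap `track` pass on all other frames."""
    position = None
    for i, frame in enumerate(frames):
        if i % detect_every == 0:
            position = detect(frame)            # low-frequency, costly
        else:
            position = track(frame, position)   # high-frequency, cheap
        yield position
```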

09-01-2020 publication date

SPACE COORDINATE CONVERTING SERVER AND METHOD THEREOF

Number: US20200013187A1
Assignee:

A space coordinate converting server and method thereof are provided. The space coordinate converting server receives a field video recorded with a 3D object from an image capturing device, and generates a point cloud model accordingly. The space coordinate converting server determines key frames of the field video, and maps the point cloud model to key images of the key frames based on rotation and translation information of the image capturing device for generating a characterized 3D coordinate set. The space coordinate converting server determines 2D coordinates of the 3D object in key images, and selects 3D coordinates from the characterized 3D coordinate set according to the 2D coordinates. The space coordinate converting server determines a space coordinate converting relation according to marked points of the 3D object and the 3D coordinates. 1. A space coordinate converting method for a space coordinate converting server , comprising:receiving, by the space coordinate converting server, a field video from an image capturing device, wherein the field video is recorded with a 3D object, and the 3D object has a plurality of marked points;generating, by the space coordinate converting server, a point cloud model according to the field video, wherein the point cloud model comprises a plurality of points data;determining, by the space coordinate converting server, a plurality of key frames of the field video, wherein each of the plurality of key frames comprises a key image and a rotation and translation information of the image capturing device;mapping, by the space coordinate converting server, the plurality of points data of the point cloud model to the key image of each of the plurality of key frames based on the corresponding rotation and translation information of the image capturing device of each of the plurality of key frames for generating a 3D coordinate set corresponding to the key image of each of the plurality of key frames;determining, by the space 
...

14-01-2016 publication date

IMAGING APPARATUS AND CONTROL METHOD THEREOF

Number: US20160014344A1
Author: BYUN Mi Ae, YOO Jun Sang
Assignee:

Disclosed herein is an imaging apparatus including: an image producer configured to produce an image of an object; and an image information generator configured to identify the object, to receive geometry change information for the image of the object, and to generate extraction information corresponding to a geometry image of the object changed according to the geometry change information. Accordingly, it is possible to intuitively display a user's desired information. 1. An imaging apparatus comprising:an image producer configured to produce an image of an object; andan image information generator configured to identify the object, to receive geometry change information for the image of the object, and to generate extraction information corresponding to a geometry image of the object changed according to the geometry change information.2. The imaging apparatus according to claim 1 , wherein the image information generator comprises:a storage unit configured to store reference information corresponding to the geometry image of the object; andan extraction information calculator configured to extract the reference information corresponding to the geometry image of the object changed according to the geometry change information, from the storage unit, and to generate extraction information corresponding to the geometry image of the object based on the reference information.3. The imaging apparatus according to claim 2 , wherein the storage unit stores at least one information among measurement information claim 2 , landmark information claim 2 , and Doppler information of the object claim 2 , as the reference information corresponding to the geometry image of the object.4. The imaging apparatus according to claim 1 , wherein the image information generator comprises an object identifier configured to identify the object based on the image of the object and to generate object information claim 1 , andwherein the image information generator generates extraction ...

15-01-2015 publication date

TAGGING VIRTUALIZED CONTENT

Number: US20150016714A1
Author: Chui Clarence
Assignee:

Techniques for tagging virtualized content are disclosed. In some embodiments, a modeled three-dimensional scene of objects representing abstracted source content is generated and analyzed to determine a contextual characteristic of the scene that is based on a plurality of objects comprising the scene. The modeled scene is tagged with a tag specifying the determined contextual characteristic. 1. A system , comprising: generate a modeled three-dimensional scene of objects representing abstracted source content;', 'analyze the modeled three-dimensional scene to determine a contextual characteristic of the scene, wherein the contextual characteristic is based on a plurality of objects comprising the scene; and', 'tag the scene with a tag specifying the determined contextual characteristic; and, 'a processor configured toa memory coupled to the processor and configured to provide the processor with instructions.2. The system of claim 1 , wherein the source content comprises two-dimensional content.3. The system of claim 1 , wherein the modeled three-dimensional scene is specified by object definitions and relative positions and orientations of the objects in the scene.4. The system of claim 1 , wherein to analyze the modeled three-dimensional scene to determine a contextual characteristic of the scene comprises to compute a metric associated with the scene.5. The system of claim 1 , wherein to analyze the modeled three-dimensional scene to determine a contextual characteristic of the scene comprises to perform a statistical analysis of objects comprising the scene.6. The system of claim 1 , wherein the contextual characteristic is based on a number or a density of objects in the scene.7. The system of claim 1 , wherein the contextual characteristic is based on a type or a class of objects comprising the scene.8. The system of claim 1 , wherein the contextual characteristic is based on a spatial relationship of objects comprising the scene.9. The system of claim 1 , ...
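Claims 6 to 8 spell out what a "contextual characteristic" can be: a count or density of objects, a dominant type or class, or a spatial relationship. A minimal sketch of the first two over a modeled scene, with the 0.5 density threshold and the scene area as invented parameters:

```python
from collections import Counter

def tag_scene(objects, density_threshold=0.5, area=10.0):
    """objects: list of (type, position) for the modeled scene. Derive
    tags from aggregate statistics of the objects."""
    tags = []
    if len(objects) / area > density_threshold:
        tags.append("crowded")                       # density-based tag
    counts = Counter(t for t, _ in objects)
    dominant, _ = counts.most_common(1)[0]
    tags.append(f"dominant:{dominant}")              # class-based tag
    return tags
```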

03-02-2022 publication date

ROAD SURFACE DETECTION DEVICE AND ROAD SURFACE DETECTION PROGRAM

Number: US20220036097A1
Assignee: AISIN CORPORATION

A road surface detection device according to an embodiment includes an image acquisition unit that acquires captured image data output from a stereo camera that captures an imaging area including a road surface on which a vehicle travels, a three-dimensional model generation unit that generates a three-dimensional model of the imaging area including a surface shape of the road surface from a viewpoint of the stereo camera based on the captured image data, and a correction unit that estimates a plane from the three-dimensional model, and corrects the three-dimensional model so as to match an orientation of a normal vector of the plane and a height position of the plane with respect to the stereo camera with a correct value of an orientation of a normal vector of the road surface and a correct value of a height position of the road surface with respect to the stereo camera, respectively. 1. A road surface detection device comprising:an image acquisition unit that acquires captured image data output from a stereo camera that captures an imaging area including a road surface on which a vehicle travels;a three-dimensional model generation unit that generates a three-dimensional model of the imaging area including a surface shape of the road surface from a viewpoint of the stereo camera based on the captured image data; anda correction unit that estimates a plane from the three-dimensional model, and corrects the three-dimensional model so as to match an orientation of a normal vector of the plane and a height position of the plane with respect to the stereo camera with a correct value of an orientation of a normal vector of the road surface and a correct value of a height position of the road surface with respect to the stereo camera, respectively.2. 
The road surface detection device according to claim 1 , whereinthe correction unit matches the orientation of the normal vector of the plane with the correct value of the orientation of the normal vector of the road surface ...
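A reduced sketch of the correction unit's two ingredients: estimate the road plane from the 3-D model, then translate the model so the plane's height matches the calibrated height of the road below the stereo camera. An exact fit through three non-collinear points stands in for the real plane estimation, and the normal-orientation correction is omitted; all values are illustrative.

```python
def plane_from_points(p0, p1, p2):
    """Plane through three model points: normal n (cross product of two
    in-plane edges) and offset d, with n . x = d on the plane."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    d = sum(n[i] * p0[i] for i in range(3))
    return n, d

def shift_to_calibrated_height(model, estimated_height, calibrated_height):
    """Translate the whole model so the estimated plane height matches
    the known height of the road surface relative to the camera."""
    dz = calibrated_height - estimated_height
    return [(x, y, z + dz) for x, y, z in model]
```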

17-01-2019 publication date

SYSTEMS AND METHODS FOR IDENTIFYING REAL OBJECTS IN AN AREA OF INTEREST FOR USE IN IDENTIFYING VIRTUAL CONTENT A USER IS AUTHORIZED TO VIEW USING AN AUGMENTED REALITY DEVICE

Number: US20190019011A1
Assignee:

Identifying real objects in an area of interest for use in identifying virtual content a user is authorized to view using an augmented reality device. Particular methods and systems determine a set of real objects that are near a first position of a first augmented reality device, determine, from the set of real objects, a first subset of real objects that are associated with virtual content that the first augmented reality device is permitted to display, and for each real object in the first subset of real objects, transmit virtual content associated with that real object to the first augmented reality device. 1. A method for identifying real objects in an area of interest for use in identifying virtual content a user is authorized to view using an augmented reality device , the method comprising:determining a set of real objects that are near a first position of a first augmented reality device;determining, from the set of real objects, a first subset of real objects that are associated with virtual content that the first augmented reality device is permitted to display; andfor each real object in the first subset of real objects, transmitting virtual content associated with that real object to the first augmented reality device.2. The method of claim 1 , wherein each real object in the set of real objects is near the first position when the first augmented reality device receives one or more signals containing identifiers that identify each of the real objects in the set of real objects.3. The method of claim 1 , wherein each real object in the set of real objects is near the first position when that real object is within a predefined distance from the first position.4. The method of claim 1 , wherein each real object in the set of real objects is near the first position when that real object is within a first area that includes the first position.5. The method of claim 1 , wherein each real object in the set of real objects is near the first position when a ...
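The method's two filters — proximity to the device's position and permission to display — compose into a single pass over the known objects. The distance-radius nearness test is just one of the variants the claims allow (claims 2 to 5); the object and permission structures below are invented.

```python
import math

def visible_content(device_pos, objects, permissions, radius=5.0):
    """objects: {name: (position, virtual_content)}. Return the virtual
    content for objects that are both near the device and that the
    device is permitted to display."""
    out = {}
    for name, (pos, content) in objects.items():
        if math.dist(device_pos, pos) <= radius and name in permissions:
            out[name] = content
    return out
```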

16-01-2020 publication date

A NEURAL NETWORK AND METHOD OF USING A NEURAL NETWORK TO DETECT OBJECTS IN AN ENVIRONMENT

Number: US20200019794A1
Assignee:

A neural network comprising at least one layer containing a set of units having an input thereto and an output therefrom, the input being arranged to have data input thereto representing an n-dimensional grid comprising a plurality of cells; the set of units within the layer being arranged to output result data to a further layer; the set of units within the layer being arranged to perform a convolution operation on the input data; and wherein the convolution operation is implemented using a feature centric voting scheme applied to the non-zero cells in the input to the layer. 1. A method of detecting objects within a three-dimensional environment, the method comprising using a neural network to process data representing that three-dimensional environment and arranging the neural network to have at least one layer containing a set of units having an input thereto and an output therefrom; inputting data representing the environment as an n-dimensional grid comprising a plurality of cells; arranging the set of units within the layer to output result data to a further layer; arranging the set of units within the layer to perform a convolution operation on the input data; arranging the convolution operation such that it is implemented using a feature centric voting scheme applied only to the non-zero cells in the input to the layer; and wherein the output from the neural network provides a confidence score as to whether an object exists within the cells of the n-dimensional grid. 2. A method according to claim 1 in which input data is held in a format in which data representing empty space is not stored. 3. A method according to claim 1 in which a network is trained to recognise a single class of object. 4. A method according to claim 3 in which a plurality of networks are trained, each arranged to detect a class of object. 5. A method according to claim 1 in which data is input in parallel to the neural network. 6. A method according to claim 1 in which the network is arranged to maintain sparsity within intermediate ...
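The feature-centric voting of claim 1 can be sketched as follows: rather than evaluating the kernel at every grid cell, each non-zero cell scatters ("votes") its value, weighted by the kernel, into the output cells it influences, so cost tracks occupancy rather than grid size. This is an illustrative reconstruction for a 2D grid with an odd-sized kernel, not the patented implementation:

```python
def vote_conv2d(grid, kernel):
    """Sparse 'feature-centric voting' convolution (illustrative sketch).

    grid: 2D list of floats, mostly zeros. kernel: 2D list with odd
    dimensions. Each non-zero cell votes its value, weighted by the
    kernel, into the output cells it can influence; the result equals a
    dense zero-padded cross-correlation, but the work is proportional to
    the number of occupied cells rather than the grid size.
    """
    H, W = len(grid), len(grid[0])
    kh, kw = len(kernel), len(kernel[0])
    ph, pw = kh // 2, kw // 2
    out = [[0.0] * W for _ in range(H)]
    for r in range(H):
        for c in range(W):
            v = grid[r][c]
            if v == 0:
                continue                      # empty cells cast no votes
            for i in range(kh):
                for j in range(kw):
                    rr, cc = r + ph - i, c + pw - j  # cell receiving this vote
                    if 0 <= rr < H and 0 <= cc < W:
                        out[rr][cc] += v * kernel[i][j]
    return out
```

With a typical occupancy of a few percent, the inner loops run only for occupied cells, which is the point of the scheme.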

23-01-2020 publication date

PERSPECTIVE DISTORTION CHARACTERISTIC BASED FACIAL IMAGE AUTHENTICATION METHOD AND STORAGE AND PROCESSING DEVICE THEREOF

Number: US20200026941A1

A perspective distortion characteristic based facial image authentication method and storage and processing device thereof are proposed. The method includes: S1, recognizing key points and a contour in a 2D facial image; S2, acquiring key points in a corresponding 3D model; S3, calculating camera parameters based on a correspondence between the key points in the 2D image and the key points in the 3D model; S4, optimizing the camera parameters based on the contour in the 2D image; S5, sampling the key points in the two-dimensional facial image multiple times to obtain a camera intrinsic parameter estimation point cloud; and S6, calculating the inconsistency between the camera intrinsic parameter estimation point cloud and the camera nominal intrinsic parameters, and determining the authenticity of the facial image. The present disclosure can effectively authenticate the 2D image and has a relatively higher accuracy. 1. A perspective distortion characteristic based facial image authentication method comprising: step S1: recognizing key points and a contour in a two-dimensional facial image; step S2: acquiring key points in a three-dimensional facial model based on the three-dimensional facial model corresponding to the two-dimensional facial image; step S3: calculating camera parameters based on a correspondence between the key points in the two-dimensional facial image and the key points in the three-dimensional facial model; step S4: optimizing the camera parameters obtained in step S3 based on the contour in the two-dimensional facial image; step S5: randomly sampling the key points in the two-dimensional facial image, and repeating steps S3 and S4 until a preset loop condition is satisfied, and obtaining a camera intrinsic parameter estimation point cloud according to the camera parameters acquired in step S4 in each loop; and step S6: calculating an inconsistency between the camera intrinsic ...
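Steps S5-S6 reduce to a statistical test: build a point cloud of intrinsic-parameter estimates from repeated random subsets of key points, then measure how far the camera's nominal intrinsics sit from that cloud. A sketch that assumes the focal-length estimates are already computed (a plain z-score stands in for the patent's unspecified inconsistency measure):

```python
import math

def inconsistency(estimates, nominal):
    """Distance of the nominal intrinsic parameter from the estimation
    point cloud, in standard deviations (a simple z-score; the patent's
    exact measure may differ)."""
    n = len(estimates)
    mean = sum(estimates) / n
    var = sum((e - mean) ** 2 for e in estimates) / n
    return abs(nominal - mean) / math.sqrt(var) if var else float("inf")

def is_authentic(estimates, nominal, z_max=3.0):
    """A re-photographed (recaptured) face yields intrinsic estimates
    inconsistent with the camera's nominal value; flag it."""
    return inconsistency(estimates, nominal) <= z_max
```

The per-subset estimation itself (steps S3-S4, a PnP-style fit) is out of scope here; only the final consistency check is shown.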

28-01-2021 publication date

METHODS, SYSTEMS, ARTICLES OF MANUFACTURE AND APPARATUS TO GENERATE DIGITAL SCENES

Number: US20210027044A1
Assignee:

Methods, systems, articles of manufacture and apparatus to generate digital scenes are disclosed. An example apparatus to generate labelled models includes a map builder to generate a three-dimensional (3D) model of an input image, a grouping classifier to identify a first zone of the 3D model corresponding to a first type of grouping classification, a human model builder to generate a quantity of placeholder human models corresponding to the first zone, a coordinate engine to assign the quantity of placeholder human models to respective coordinate locations of the first zone, the respective coordinate locations assigned based on the first type of grouping classification, a model characteristics modifier to assign characteristics associated with an aspect type to respective ones of the quantity of placeholder human models, and an annotation manager to associate the assigned characteristics as label data for respective ones of the quantity of placeholder human models. 1. An apparatus to generate labelled models , the apparatus comprising:a map builder to generate a three-dimensional (3D) model of an input image;a grouping classifier to interpret a painted region of the 3D model based on a color of a first zone of the 3D model, the color to identify a first type of grouping classification;a human model builder to generate a quantity of placeholder human models corresponding to the first zone;a coordinate engine to assign the quantity of placeholder human models to respective coordinate locations of the first zone, the respective coordinate locations assigned based on the first type of grouping classification;a model characteristics modifier to assign characteristics associated with an aspect type to respective ones of the quantity of placeholder human models; andan annotation manager to associate the assigned characteristics as label data for respective ones of the quantity of placeholder human models.2. 
The apparatus as defined in claim 1 , wherein the map builder is ...

28-01-2021 publication date

Sorting pistons with flaws

Number: US20210027441A1
Assignee: Caterpillar Inc

A method and system for sorting pistons with flaws is disclosed. In an embodiment, a piston with flaws is three dimensionally scanned and compared to a reference image to detect the location and geometry of the flaws. The location and geometry of the flaws are recorded and used to generate a surface condition score. The pistons are sorted based on the surface condition score being higher or lower than a set threshold value.
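The described pipeline ends in a straightforward score-and-threshold sort. A sketch with an assumed flaw representation (area and depth per recorded flaw) and an invented scoring rule; the actual surface condition score is not specified in the abstract:

```python
def surface_condition_score(flaws):
    """Aggregate detected flaws (area in mm^2, depth in mm) into a single
    score; larger and deeper flaws contribute more. The weighting is an
    assumption for illustration."""
    return sum(f["area_mm2"] * (1.0 + f["depth_mm"]) for f in flaws)

def sort_pistons(pistons, threshold):
    """Split pistons into accept/reject bins by comparing each piston's
    surface condition score against a set threshold value."""
    accepted, rejected = [], []
    for p in pistons:
        score = surface_condition_score(p["flaws"])
        (rejected if score > threshold else accepted).append(p["id"])
    return accepted, rejected
```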

31-01-2019 publication date

USING PHOTOGRAMMETRY TO AID IDENTIFICATION AND ASSEMBLY OF PRODUCT PARTS

Number: US20190033072A1
Assignee:

A user may be aided in modifying a product that is an assemblage of parts. This aid may involve a processor obtaining images of a target part captured by the user on a mobile device camera. The processor may compare, based on the captured images and a plurality of images of identified parts, the target part to the identified parts. Based on the comparison, the processor may determine an identity of the target part. This aid may also involve a processor obtaining images of a first configuration of a partial assembly of the product captured by a mobile device camera. The processor may compare, based on the captured images, the first configuration to a correct configuration of the partial assembly. Based on the comparison, the processor may determine that the first configuration does not match the correct configuration and may notify the user accordingly. 1. A computer-implemented method for aiding a user in modifying a product , wherein the product is an assemblage of a plurality of parts , the method comprising:obtaining, by a processor, a plurality of images of a target part of the product captured by the user on a camera;generating, by the processor and based on the plurality of images of the target part, a three-dimensional model of the target part;comparing, by the processor, the target part to a plurality of identified parts by overlaying a three-dimensional model of each identified part onto the three-dimensional model of the target part; anddetermining, by the processor and based on the comparing, an identity of the target part.2. The method of claim 1 , further comprising:notifying the user of the identity of the target part.3. The method of claim 1 , further comprising:obtaining an identifier of the product, wherein the product identifier is selected by the user; andselecting, prior to the comparing and by the processor, the three-dimensional models of the identified parts from a database based on the obtained product identifier.4. 
The method of claim 3 , ...

05-02-2015 publication date

IMAGE PROCESSING METHOD AND SYSTEM

Number: US20150036918A1
Assignee: KABUSHIKI KAISHA TOSHIBA

A method of comparing two object poses, wherein each object pose is expressed in terms of position, orientation and scale with respect to a common coordinate system, the method comprising: 2. A method of analysing image data , said method comprising:analysing said image data to obtain a plurality of predictions of the pose of an object, said predictions comprising an indication of the predicted pose of the object, the predicted pose being expressed in terms of position, orientation and scale with respect to a common coordinate system,grouping predictions together by comparing the predicted poses by calculating a distance between the two object poses, the distance being calculated using a distance function,wherein the image data comprises data of at least one object and said prediction comprises an indication of said object and its pose,wherein said indication of said object is obtained by comparing at least a part of the data with data of objects in a database, andwherein grouping the object poses comprises using a kernel density estimation method which assumes that all poses are sampled from a distribution f(X), said kernel of said kernel density estimation method comprising the said distance function.3. A method according to claim 2 , wherein each object in said database comprises a plurality of features and comparing the image data with objects in the database comprises analysing said image data to look for matches with features of objects in said database.4. A method according to claim 2 , wherein the object is a camera used to capture the image data.5. A method according to claim 2 , wherein representative poses of the groups formed by said grouping are calculated by determining the local maxima of f(X).6. A method according to claim 5 , wherein the local maxima are derived using a technique determined from mean shift claim 5 , quick shift or Medoid shift.8. 
A method according to claim 7, wherein the weights w are derived from w = λ*K(d(Y, X)) ...
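The grouping step above plugs a pose distance d(Y, X) into a kernel density estimate f(X). A sketch with an assumed planar pose (x, y, angle, scale), a weighted-sum distance, and a Gaussian kernel; the patent's distance function over full 3D position, orientation and scale is more general:

```python
import math

def pose_distance(p, q, w_pos=1.0, w_rot=1.0, w_scale=1.0):
    """Distance between two poses (x, y, theta, scale) expressed in a
    common coordinate frame; the component weights are illustrative."""
    d_pos = math.hypot(p[0] - q[0], p[1] - q[1])
    d_rot = abs((p[2] - q[2] + math.pi) % (2 * math.pi) - math.pi)  # wrapped angle
    d_scale = abs(math.log(p[3] / q[3]))      # scale compared as a ratio
    return w_pos * d_pos + w_rot * d_rot + w_scale * d_scale

def kde(x, samples, bandwidth=1.0):
    """Kernel density estimate f(X) over pose samples, with the pose
    distance inside a Gaussian kernel, as in the claimed grouping step.
    Group representatives would then be local maxima of this density
    (e.g. via mean shift)."""
    return sum(math.exp(-(pose_distance(x, s) / bandwidth) ** 2 / 2)
               for s in samples) / len(samples)
```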

24-02-2022 publication date

3D SEARCH ENGINE

Number: US20220058230A1
Author: Hassan Marwan
Assignee:

The present invention discloses a 3D web search engine for crawling, indexing, and searching through 3D models located on the World Wide Web. The user provides a search query in the form of a 3D model to reach web pages that contain similar 3D models. The search query can be in the form of a 3D model, an image of a 3D model, or a drawing of a 3D model. The user has different options to filter the 3D models of the search results according to their needs or preference. 1. A search method comprising: crawling the World Wide Web to copy files representing 3D models; indexing the 3D models in a database by representing each of the 3D models with a plurality of cross-sections associated with the webpages of the 3D models; searching the database by providing one or more cross-sections of a search 3D model to check the one or more cross-sections against the plurality of cross-sections; and presenting the webpages associated with the plurality of cross-sections that match the one or more cross-sections. 2. The search method of claim 1, wherein the World Wide Web is a database available on a source other than the World Wide Web. 3. The search method of claim 1, wherein the 3D models are in the form of wireframe models, surface models, solid models or point cloud models. 4. The search method of claim 1, wherein the files of the 3D models have different file extensions. 5. The search method of claim 1, wherein the plurality of cross-sections and the one or more cross-sections are automatically generated by a software program. 6. The search method of claim 1, wherein the plurality of cross-sections and the one or more cross-sections are parallel to the xy, xz, and/or yz-planes. 7. The search method of claim 1, wherein the webpages are in the form of URLs or Web addresses that specify the location of the 3D models. 8. The search method of claim 1, wherein the one or more cross-sections is a drawing or freehand sketch representing the search 3D model. 9. The search method of claim 1, wherein the one or more cross- ...
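The claimed crawl-index-search loop can be sketched by treating each cross-section as a hashable descriptor that maps to the web pages whose models contain it. The descriptor form is our assumption; the patent leaves the cross-section encoding open:

```python
def build_index(models):
    """Index each cross-section descriptor to the web pages whose 3D
    models contain it. `models` maps page URL -> iterable of descriptors
    (one per extracted cross-section)."""
    index = {}
    for url, sections in models.items():
        for s in sections:
            index.setdefault(s, set()).add(url)
    return index

def search(index, query_sections, min_matches=1):
    """Return pages whose indexed cross-sections match at least
    `min_matches` of the query model's cross-sections."""
    hits = {}
    for s in query_sections:
        for url in index.get(s, ()):
            hits[url] = hits.get(url, 0) + 1
    return sorted(u for u, n in hits.items() if n >= min_matches)
```

Raising `min_matches` is one way to implement the filtering options mentioned in the abstract.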

07-02-2019 publication date

METHODS AND APPARATUS TO AVOID COLLISIONS IN SHARED PHYSICAL SPACES USING UNIVERSAL MAPPING OF VIRTUAL ENVIRONMENTS

Number: US20190043214A1
Author: Chilcote-Bacco Derek
Assignee:

Methods, apparatus, systems, and articles of manufacture are disclosed. An example system for avoiding collisions for a virtual environment in a shared physical space includes a first mobile device associated with a first user, the first mobile device generating a first virtual environment; a second mobile device, associated with a second user, the second mobile device generating a second virtual environment; and a server. The server includes an index map generator to generate a first index map and a second index map from the first virtual environment and the second virtual environment, respectively, a collision detector to determine a collision likelihood based on a comparison of the first index map and the second index map, and an object placer to, in response to the collision likelihood satisfying a threshold, modify at least one of the first virtual environment or the second virtual environment. 1. A system for avoiding collision for a virtual environment in a shared physical space, the system comprising: a first mobile device associated with a first user, the first mobile device generating a first virtual environment; a second mobile device, associated with a second user, the second mobile device generating a second virtual environment; and a server including: an index map generator to generate a first index map and a second index map from the first virtual environment and the second virtual environment, respectively; a collision detector to determine a collision likelihood based on a comparison of the first index map and the second index map; and an object placer to, in response to the collision likelihood satisfying a threshold, modify at least one of the first virtual environment or the second virtual environment. 2. The system of claim 1, wherein the server also includes a transceiver to communicate with the first and second mobile devices. 3. The system of claim 1, wherein the modifying at least one of the first virtual environment or the second virtual ...
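The collision check reduces to comparing the two index maps cell by cell and acting when the overlap passes a threshold. A sketch that assumes index maps are flat occupancy grids of the shared physical space; the likelihood formula and threshold semantics are ours:

```python
def collision_likelihood(map_a, map_b):
    """Fraction of occupied cells that both index maps claim, where each
    index map is a flat occupancy grid (1 = a user's virtual content
    draws them into that cell of the shared space)."""
    both = sum(1 for a, b in zip(map_a, map_b) if a and b)
    occupied = sum(1 for a, b in zip(map_a, map_b) if a or b)
    return both / occupied if occupied else 0.0

def place_objects(map_a, map_b, threshold=0.25):
    """If the likelihood satisfies the threshold, signal that at least
    one virtual environment must be modified (objects relocated)."""
    return "modify" if collision_likelihood(map_a, map_b) >= threshold else "ok"
```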

06-02-2020 publication date

NATURAL LANGUAGE INTERACTION FOR SMART ASSISTANT

Number: US20200042839A1
Assignee: Microsoft Technology Licensing, LLC

A method for natural language interaction includes recording speech provided by a human user. The recorded speech is translated into a machine-readable natural language input relating to an interaction topic. An interaction timer is maintained that tracks a length of time since a last machine-readable natural language input referring to the interaction topic was translated. Based on a current value of the interaction timer being greater than an interaction engagement threshold, a message relating to the interaction topic is delivered with a first natural language phrasing that includes an interaction topic reminder. Based on the current value of the interaction timer being less than the interaction engagement threshold, the message relating to the interaction topic is delivered with a second natural language phrasing that lacks the interaction topic reminder. 1. A method for natural language interaction , comprising:receiving sensor data via a network;translating the sensor data into a machine-readable natural language input relating to an interaction topic;maintaining an interaction timer tracking a length of time since a last machine-readable natural language input relating to the interaction topic;based on a current value of the interaction timer being greater than an interaction engagement threshold, outputting a message relating to the interaction topic with a first natural language phrasing that includes an interaction topic reminder; orbased on the current value of the interaction timer being less than the interaction engagement threshold, outputting the message relating to the interaction topic with a second natural language phrasing that lacks the interaction topic reminder.2. The method of claim 1 , where the first natural language phrasing includes more words than the second natural language phrasing.3. 
The method of claim 1 , where the interaction topic reminder includes a summary of a most recent interaction with a human user relating to the interaction ...
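The phrasing decision in claim 1 is a single comparison of an interaction timer against an engagement threshold. A sketch with invented message wording; the claim specifies only that one phrasing includes a topic reminder and the other lacks it:

```python
import time

class TopicTracker:
    """Tracks time since the last input on each topic and picks a
    phrasing: with a topic reminder after long silences, without one
    while the user is still engaged."""
    def __init__(self, engagement_threshold_s=30.0):
        self.threshold = engagement_threshold_s
        self.last_input = {}

    def note_input(self, topic, now=None):
        # Reset the interaction timer for this topic.
        self.last_input[topic] = time.monotonic() if now is None else now

    def phrase(self, topic, message, now=None):
        now = time.monotonic() if now is None else now
        elapsed = now - self.last_input.get(topic, float("-inf"))
        if elapsed > self.threshold:        # stale: prepend a topic reminder
            return f"About {topic}: {message}"
        return message                      # engaged: skip the reminder
```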

18-02-2016 publication date

FOOD PREPARATION

Number: US20160048720A1
Authors: Grundy Gilman, Palmer Paul
Assignee:

A container for providing an enclosure for a food item includes a plurality of grading marks and a docking station to dock an electronic device. Yet further, the system includes a processor configured to take one or more pictures of the food item using the electronic device, transmit the one or more pictures to a cloud, receive recommended recipes for the food item and display the recommended recipes. 1. A container for providing an enclosure for a food item comprising: the container having at least one opening for introducing a food item into the container, and a receiving surface for the food item, wherein the container comprises a plurality of grading marks for indicating the size of the food item, and a docking station spaced from the receiving surface and the grading marks, the docking station being arranged to dock an electronic device such that the device faces the interior of the container and the grading marks. 2. The container of claim 1, wherein the container is at least partially transparent such that the food item is visible to a user. 3. The container of claim 1, wherein the container is shaped as one of a cube, a cuboid, a cone and a sphere. 4. The container of claim 1, wherein the grading marks are one of an etch, a depression, a raised portion, a print, or any other visible structure. 5. The container of claim 1, wherein the container is hermetically sealable. 6. The container of claim 1, wherein the container is dishwasher-safe. 7. The container of claim 1, comprising a weighing scale to determine the weight of the food item. 8. The container of claim 1, comprising one or more sensors to perform chemical analysis of the food item. 9. The container of claim 1, the container being connectable to a communication network. 10. The container of claim 1, wherein the electronic device is one of a smart phone, a tablet, a camera, a 3D scanner and a bar code scanner. 11.
...

18-02-2016 publication date

IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD

Number: US20160049009A1
Author: HARA Nobuyuki
Assignee: FUJITSU LIMITED

An image processing device includes a memory; and a processor configured to execute a plurality of instructions stored in the memory, the instructions comprising: recognizing a target object recognized from a first image, which is a captured image, including the target object in a real world; controlling a second image, which is an augmented image, including information of the target object from the first image, and a third image which is an augmented image of the second image and to be formed so as to inscribe an outer edge surrounding the second image and covers a center of visual field of a user relative to the second image; and displaying, in a state where the user directly visually recognizes the target object in the real world, the second image and the third image such that the second image and the third image are caused to correspond to a position. 1. An image processing device comprising: a memory; and a processor configured to execute a plurality of instructions stored in the memory, the instructions comprising: recognizing a target object recognized from a first image, which is a captured image, including the target object in a real world; controlling a second image, which is an augmented image, including information of the target object from the first image, and a third image which is an augmented image of the second image and to be formed so as to inscribe an outer edge surrounding the second image and covers a center of visual field of a user relative to the second image; and displaying, in a state where the user directly visually recognizes the target object in the real world, the second image and the third image such that the second image and the third image are caused to correspond to a position of the target object in the real world. 2. The device according to claim 1, wherein the controlling controls an outer edge of the third image based on a focal distance of the user relative to the second image and an angle relative to a vertical line of a fovea of the user ...

07-02-2019 publication date

ELECTRONIC DEVICE HAVING A VISION SYSTEM ASSEMBLY HELD BY A SELF-ALIGNING BRACKET ASSEMBLY

Number: US20190045094A1
Assignee:

An electronic device that includes a vision system carried by a bracket assembly is disclosed. The vision system may include a first camera module that captures an image of an object, a light emitting element that emits light rays toward the object, and a second camera module that receives light rays reflected from the object. The light rays may include infrared light rays. The bracket assembly is designed not only carry the aforementioned modules, but to also maintain a predetermined and fixed separation between the modules. The bracket assembly may form a rigid, multi-piece bracket assembly to prevent bending, thereby maintaining the predetermined separation. The electronic device may include a transparent cover designed to couple with a housing. The transparent cover includes an alignment module designed to engage a module and provide a moving force that aligns the bracket assembly and the modules to a desired location in the housing. 1. An electronic device , comprising:an enclosure that defines an internal volume, the enclosure comprising sidewall components;a bracket assembly positioned in the internal volume, the bracket assembly lacking a direct attachment to the enclosure;a camera module carried by the bracket assembly;a transparent cover secured with the sidewall components and covering the bracket assembly, the transparent cover including a masking layer, the masking layer having a masking layer opening; andan alignment module secured to the transparent cover, wherein the alignment module aligns the camera module with the masking layer opening.2. The electronic device of claim 1 , wherein the camera module is a first camera module that is configured to capture an image of an object claim 1 , and wherein the electronic device further comprises:a processor;a light emitting module electrically coupled to the processor and carried by the bracket assembly, the light emitting module configured to emit light rays toward the object; anda second camera module ...

03-03-2022 publication date

3-d object detection and classification from imagery

Number: US20220067342A1
Assignee: Covar Applied Technologies Inc

A system and method for recognizing objects in an image is described. The system can receive an image from a sensor and detect one or more objects in the image. The system can further detect one or more components of each detected object. Subsequently, the system can create a segmentation map based on the components detected for each detected object and determine whether the segmentation map matches a plurality of 3-D models (or projections thereof). Additionally, the system can display a notification through a user interface indicating whether the segmentation map matches at least one of the plurality of 3-D models.

25-02-2021 publication date

MOTION TRACKING WITH MULTIPLE 3D CAMERAS

Number: US20210056297A1
Assignee:

A system comprising at least two three-dimensional (3D) cameras that are each configured to produce a digital image with a depth value for each pixel of the digital image; and a processor configured to: perform inter-camera calibration by: (i) estimating a pose of a subject, based, at least in part, on a skeleton representation of the subject captured by each of said at least two 3D cameras, wherein said skeleton representation identifies a plurality of skeletal joints of said subject, and (ii) enhancing the estimated pose based, at least in part, on a 3D point cloud of a scene containing the subject, as captured by each of said at least two 3D cameras, and perform data merging of digital images captured by said at least two 3D cameras, wherein the data merging is per each of said identifications. 1. A system comprising: at least two three-dimensional (3D) cameras that are each configured to produce a digital image with a depth value for each pixel of the digital image; at least one hardware processor; and a non-transitory computer-readable storage medium having stored thereon program instructions, the program instructions executable by the at least one hardware processor to: perform inter-camera calibration by: (i) estimating a pose of a subject, based, at least in part, on a skeleton representation of the subject captured by each of said at least two 3D cameras, wherein said skeleton representation identifies a plurality of skeletal joints of said subject, and wherein each of said identifications has a confidence score, and (ii) enhancing the estimated pose based, at least in part, on a 3D point cloud of a scene containing the subject, as captured by each of said at least two 3D cameras, wherein said skeleton representation identifies a plurality of skeletal joints of said subject, and perform data merging of digital images captured by said at least two 3D cameras, wherein the data merging is per each of said identifications. 2. (canceled) 3. (canceled) 4.
The ...

22-02-2018 publication date

SYSTEM AND METHOD FOR 3D LOCAL SURFACE MATCHING

Number: US20180053040A1
Author: AL-OSAIMI Faisal R.
Assignee: UMM AL-QURA UNIVERSITY

A system and associated methodology for three-dimensional (3D) local surface matching that extracts a treble of 3D profiles from data corresponding to a 3D local surface wherein the treble of the 3D profiles includes a central profile and two adjacent profiles, calculates a scalar sequence pair based on the treble of the 3D profiles, calculates adjustable integral kernels based on the scalar sequence pair, and provides the adjustable integral kernels to pattern recognition applications. 1. A method for three-dimensional (3D) local surface matching , the method comprising:extracting a treble of 3D profiles from data corresponding to a 3D local surface, wherein the treble of the 3D profiles includes a central profile and two adjacent profiles;calculating a scalar sequence pair based on the treble of the 3D profiles;calculating, using processing circuitry, adjustable integral kernels based on the scalar sequence pair; andproviding the adjustable integral kernels to pattern recognition applications.2. The method of claim 1 , wherein the extracting of the central profile comprises:intersecting a first sphere with fitted bicubic functions to the 3D local surface; andsampling the central profile at an equally spaced distance using a second sphere with a smaller radius than the first sphere, wherein the center of the second sphere is located on the central profile.3. The method of claim 2 , wherein the fitted bicubic functions are determined by fitting bicubic functions to range pixel patches having a predetermined size.4. The method of claim 2 , wherein the radius of the second sphere is determined iteratively by computing a geodesic distance of the central profile and estimating a radius that gives a predetermined number of samples.5. The method of claim 4 , wherein the geodesic distance is a function of at least an Euclidean distance between samples at each iteration.6. The method of claim 1 , wherein the extracting of the two adjacent profiles comprises:determining ...

13-02-2020 publication date

Information processing apparatus and target object recognition method

Number: US20200050833A1
Assignee: Sony Interactive Entertainment Inc

A captured image acquisition section 50 acquires, from an imaging apparatus 12, data of a polarized image obtained by capturing a target object and stores the data into an image data storage section 52. A region extraction section 60 of a target object recognition section 54 extracts a region in which a figure of the target object is included in the polarized image. A normal line distribution acquisition section 62 acquires a distribution of normal line vectors on a target object surface in regard to the extracted region. A model adjustment section 66 adjusts a three-dimensional model of the target object stored in a model data storage section 64 in a virtual three-dimensional space such that the three-dimensional model conforms to the distribution of the normal line vectors acquired from the polarized image to specify a state of the target object.

13-02-2020 publication date

Gemological object recognition

Number: US20200050834A1
Author: Kari Niskanen
Assignee: Engemma Oy

Disclosed is a system, method, and devices as system elements to recognize an object by an object recognizing system including an imaging device and a moving assembly to move the imaging device around the object, to form a certified visual model of the object to be recognized. Especially the disclosure relates to gemstone imaging by an imaging method including photographing a target, in an illumination, by a camera, to obtain at least one image of the targeted object to be recognized.

13-02-2020 publication date

Multidimensional Analysis of Gait in Rodent

Number: US20200050840A1
Author: Neckel Nathan
Assignee:

Embodiments of the present systems and methods may provide techniques for analyzing rodent gait that address the confound of interdependency of gait variables to provide more accurate and reproducible results. In embodiments, multidimensional analysis of gait in animals, such as rodents, may be performed. For example, in an embodiment, a computer-implemented method of animal gait analysis may comprise capturing data relating to steps taken by a plurality of animal test subjects, performing a multidimensional analysis of the captured data to generate data describing a gait of the animal test subjects, and outputting data characterizing the gait of the animal test subjects. 1. A computer-implemented method of animal gait analysis, the method comprising: capturing data relating to steps taken by a plurality of animal test subjects; performing a multidimensional analysis of the captured data to generate data describing a gait of the animal test subjects; and outputting data characterizing the gait of the animal test subjects. 2. The method of claim 1, wherein the data is captured using an animal gait capture device. 3. The method of claim 2, wherein the captured data is in a world coordinate frame and the multidimensional analysis comprises: identifying initial contact, mid-stance, and toe-off data for each animal test subject in the captured data; translating and rotating the identified data from the world coordinate frame to a coordinate frame of each animal test subject; isolating the steps of each animal test subject from the translated and rotated identified data and translating a time of each step to make a time of each initial contact time zero to form a dataset; and plotting the dataset to form a representation of the animal gait analysis. 4.
The method of claim 3 , wherein the multidimensional analysis further comprises:determining an error of the dataset; andcomparing datasets for a plurality of groups, each group comprising a trial of a plurality of animal test subjects ...
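The translate-and-rotate and time-zeroing steps recited in claim 3 can be sketched as follows. This is a minimal sketch only: it assumes 2D world coordinates and a known subject heading, and the function names and sample data are hypothetical, not from the patent.

```python
import numpy as np

def to_subject_frame(points, origin, heading):
    """Translate world-frame XY points to the subject's origin and rotate
    so that the subject's heading lies along the +x axis."""
    c, s = np.cos(-heading), np.sin(-heading)
    rot = np.array([[c, -s], [s, c]])
    return (points - origin) @ rot.T

def zero_step_times(times):
    """Shift a step's timestamps so that initial contact occurs at t = 0."""
    times = np.asarray(times, dtype=float)
    return times - times[0]

# Subject walking with a 90-degree heading: forward motion along world +y
# maps to +x in the subject's own coordinate frame.
world_steps = np.array([[1.0, 2.0], [1.0, 2.5], [1.0, 3.0]])
local = to_subject_frame(world_steps, origin=world_steps[0], heading=np.pi / 2)
times = zero_step_times([10.2, 10.5, 10.9])
```

With all steps expressed in a common subject frame and time-zeroed, datasets from many subjects can be overlaid and compared directly.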

13-02-2020 publication date

2D/3D REGISTRATION FOR ABDOMINAL AORTIC ANEURYSM INTERVENTION

Number: US20200051258A1
Assignee: Siemens Corporation

A method for performing 2D/3D registration includes acquiring a 3D image. A pre-contrast 2D image is acquired. A sequence of post-contrast 2D images is acquired. A 2D image is acquired from a second view. The first view pre-contrast 2D image is subtracted from each of the first view post-contrast 2D images to produce a set of subtraction images. An MO image is generated from the subtraction images. A 2D/3D registration result is generated by optimizing a measure of similarity between a first synthetic 2D image and the MO image and a measure of similarity between a second synthetic image and the intra-operative 2D image from the second view by iteratively adjusting an approximation of the pose of the patient in the synthetic images and iterating the synthetic images using the adjusted approximation of the pose. 1. A method for performing 2D/3D registration , comprising:acquiring a pre-operative 3D image of a patient;acquiring an intra-operative pre-contrast 2D image of the patient from a first view;administering a radiocontrast agent to the patient;acquiring a sequence of intra-operative post-contrast 2D images of the patient from the first view;acquiring an intra-operative 2D image of the patient from a second view that is acquired at a different angle with respect to the patient than the first view;subtracting the first view pre-contrast 2D image from each of the first view post-contrast 2D images to produce a set of first view subtraction images;generating a maximum opacity (MO) image from the set of first view subtraction images;generating a first synthetic 2D view from the pre-operative 3D image that approximates the first view based on an initial approximation of an intra-operative pose of the patient;generating a second synthetic 2D view from the pre-operative 3D image that approximates the second view based on the initial approximation of the intra-operative pose of the patient; andgenerating a 2D/3D registration result by optimizing a measure of similarity 
...
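The subtraction-image and maximum-opacity (MO) steps can be sketched with plain array operations. A minimal sketch only: a real DSA pipeline works on log-scaled intensities with motion correction, and the sign convention depends on whether contrast darkens or brightens the detector image.

```python
import numpy as np

def maximum_opacity(pre, posts):
    """Subtract the pre-contrast frame from each post-contrast frame,
    then keep the per-pixel maximum over the whole sequence."""
    subs = np.stack([p.astype(float) - pre.astype(float) for p in posts])
    return subs.max(axis=0)

# Toy 2x2 frames: contrast appears in different pixels over time.
pre = np.zeros((2, 2))
posts = [np.array([[1.0, 0.0], [0.0, 0.0]]),
         np.array([[0.0, 2.0], [0.0, 0.0]])]
mo = maximum_opacity(pre, posts)  # per-pixel max of the subtraction images
```

The MO image condenses the whole contrast sequence into one frame, which is then the 2D target for the iterative 2D/3D similarity optimization.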

21-02-2019 publication date

DYNAMIC CONTENT GENERATION FOR AUGMENTED REALITY ASSISTED TECHNOLOGY SUPPORT

Number: US20190056779A1
Assignee:

Embodiments of the present invention provide methods for generating an augmented reality experience based on Knowledge Media. One method can include receiving one or more Knowledge Media, transforming the one or more Knowledge Media into consumable steps, extracting hardware information from the one or more Knowledge Media, generating a three-dimensional point cloud model of the hardware based on the one or more Knowledge Media, and outputting an augmented reality experience based on an annotated three-dimensional point cloud model. 1. A method for generating an augmented reality experience based on Knowledge Media , the method comprising:receiving, by one or more processors, one or more Knowledge Media;transforming, by the one or more processors, the one or more Knowledge Media into consumable steps;extracting, by the one or more processors, hardware information from the one or more Knowledge Media;generating, by the one or more processors, a three-dimensional point cloud model of the hardware based on the one or more Knowledge Media; andoutputting, by the one or more processors, an augmented reality experience based on an annotated three-dimensional point cloud model.2. The method of claim 1 , wherein the method further comprises:displaying, by the one or more processors, the augmented reality experience.3. The method of claim 2 , wherein the displaying further comprises:associating, by the one or more processors, the hardware with pixels in a video, wherein the association enables a user to interact with the associated hardware through augmented reality, wherein the interaction is based on the consumable steps.4. The method of claim 1 , wherein extracting hardware information further comprises:retrieving, by the one or more processors, hardware information from at least one of previously transformed consumable steps, or Knowledge Media from shared storage.5. 
The method of claim 1 , wherein outputting an augmented reality experience further comprises:annotating, by the one ...

01-03-2018 publication date

METHODS, APPARATUS, COMPUTER PROGRAMS, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUMS FOR PROCESSING DATA FROM A SENSOR

Number: US20180060646A1
Assignee: Rolls-Royce plc

A method of processing data from a sensor, the method comprising: receiving first data; identifying an object by comparing the received first data with stored second data for a plurality of objects; determining a first processing strategy for one or more portions of the object in third data using the identification of the object, the third data being received from a sensor; and processing the determined one or more portions of the object in the third data using the determined first processing strategy. 1. A method of processing data from a sensor , the method comprising:receiving first data;identifying an object by comparing the received first data with stored second data for a plurality of objects;determining a first processing strategy for one or more portions of the object in third data using the identification of the object, the third data being received from a sensor; andprocessing the determined one or more portions of the object in the third data using the determined first processing strategy.2. A method as claimed in claim 1 , further comprising controlling the sensor to scan the object to generate the first data.3. A method as claimed in claim 1 , wherein the third data is the same data as the first data.4. A method as claimed in claim 1 , further comprising controlling a sensor to scan the object to generate the third data claim 1 , the third data being different data to the first data.5. A method as claimed in claim 1 , wherein the first processing strategy includes processing the determined one or more portions of the object in the third data to generate fourth data.6. A method as claimed in claim 5 , wherein the fourth data is a subset of the third data and only includes data for the one or more portions of the object.7. A method as claimed in claim 5 , wherein the fourth data is in a different format to the third data.8. A method as claimed in claim 5 , wherein the third data is point cloud data and the fourth data is geometry model mesh data.9. A ...
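One way to read the claim is as a dispatch table: identify the object by nearest match against stored signatures (the "second data"), then apply the processing strategy registered for that object to the sensor's third data. A sketch under assumptions: the object names, signature vectors, and down-sampling strategy below are all hypothetical.

```python
import numpy as np

# Hypothetical stored second data: one signature vector per known object.
STORED = {"turbine_blade": np.array([1.0, 0.0]),
          "casing": np.array([0.0, 1.0])}

# Hypothetical first processing strategies keyed by identified object.
STRATEGIES = {"turbine_blade": lambda pts: pts[::10],  # down-sample dense scan
              "casing": lambda pts: pts}               # keep full resolution

def process(first_data, third_data):
    """Identify the object from first_data, then process third_data
    with the strategy determined for that object."""
    obj = min(STORED, key=lambda k: np.linalg.norm(first_data - STORED[k]))
    return STRATEGIES[obj](third_data)

cloud = np.arange(100.0)                 # stand-in for third (sensor) data
result = process(np.array([0.9, 0.1]), cloud)
```

The point of the per-object strategy is that expensive processing (e.g. meshing) can be limited to the portions of the scan that matter for the identified part.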

04-03-2021 publication date

SCANNING ENVIRONMENTS AND TRACKING UNMANNED AERIAL VEHICLES

Number: US20210064024A1
Assignee:

Systems and methods for scanning environments and tracking unmanned aerial vehicles within the scanned environments are disclosed. A method in accordance with a particular embodiment includes using a rangefinder off-board an unmanned air vehicle (UAV) to identify points in a region. The method can further include forming a computer-based map of the region with the points and using the rangefinder and a camera to locate the UAV as it moves in the region. The location of the UAV can be compared with locations on the computer-based map and, based upon the comparison, the method can include transmitting guidance information to the UAV. In a further particular embodiment, two-dimensional imaging data is used in addition to the rangefinder data to provide color information to points in the region. 1.-22. (canceled) 23. A non-transitory computer-readable storage medium storing instructions that , if executed by a computing system having a memory and a processor , cause the computing system to perform a method , the method comprising:providing information characterizing a visual representation of a three-dimensional model of the environment, the environment including a plurality of objects;receiving input related to a proposed flight path for an unmanned aerial vehicle through the environment, wherein the input includes one or more regions of interest;receiving a predicted trajectory of the unmanned aerial vehicle through the environment, with the predicted trajectory calculated based at least in part on the received input, one or more characteristics for pointing a sensor of the unmanned aerial vehicle toward the one or more regions of interest, and one or more flight characteristics of the unmanned aerial vehicle;determining whether the predicted trajectory of the unmanned aerial vehicle through the environment is free of collisions;if the predicted trajectory is determined to be not free of collisions, revising the predicted trajectory to avoid a collision; and providing
...
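The collision-freedom check in the claim can be sketched against a voxelized occupancy map built from the scanned environment. This is a sketch under assumptions: the map format (a set of occupied voxel indices) and the waypoint sampling of the trajectory are hypothetical.

```python
import numpy as np

def is_collision_free(trajectory, occupied_voxels, voxel_size=1.0):
    """Check each waypoint of a predicted trajectory against the set of
    occupied voxels from the scanned-environment map."""
    for p in np.asarray(trajectory, dtype=float):
        voxel = tuple(np.floor(p / voxel_size).astype(int))
        if voxel in occupied_voxels:
            return False
    return True

occupied = {(2, 0, 0)}                                  # one obstacle voxel
path = [(0.5, 0.1, 0.1), (1.5, 0.1, 0.1), (2.5, 0.1, 0.1)]
free = is_collision_free(path, occupied)                # third waypoint collides
```

If the check fails, a revision step would perturb or re-plan the offending segment and re-run the same test.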

17-03-2022 publication date

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, INSPECTION APPARATUS, AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM

Number: US20220084188A1
Author: Onishi Hiroyuki
Assignee:

A first acquisition unit acquires three-dimensional model information related to a three-dimensional model of an inspection object and inspection region information related to an inspection region in the three-dimensional model. A second acquisition unit acquires position attitude information regarding a position and an attitude of an imaging unit and the inspection object in an inspection apparatus. A designation unit creates region designation information for designating an inspection image region corresponding to the inspection region for a captured image that can be acquired by imaging of the inspection object by the imaging unit based on the three-dimensional model information, the inspection region information, and the position attitude information. 1. An image processing apparatus comprising:a first acquisition unit configured to acquire three-dimensional model information related to a three-dimensional model of an inspection object and inspection region information related to an inspection region in the three-dimensional model;a second acquisition unit configured to acquire position attitude information regarding a position and an attitude of an imaging unit and said inspection object in an inspection apparatus; anda designation unit configured to create region designation information for designating an inspection image region corresponding to said inspection region for a captured image that can be acquired by imaging of said inspection object by said imaging unit, based on said three-dimensional model information, said inspection region information, and said position attitude information.2. The image processing apparatus according to claim 1 , wherein said first acquisition unit is configured to acquire said inspection region information by dividing a surface of said three-dimensional model into a plurality of regions based on information related to orientations of a plurality of planes constituting said three-dimensional model.3. The image processing ...

28-02-2019 publication date

SEPARATION OF OBJECTS IN IMAGES FROM THREE-DIMENSIONAL CAMERAS

Number: US20190065823A1
Assignee:

Methods, systems, and programs are presented for simultaneous recognition of objects within a detection space utilizing three-dimensional (3D) cameras configured for capturing 3D images of the detection space. One system includes the 3D cameras, calibrated based on a pattern in a surface of the detection space, a memory, and a processor. The processor combines data of the 3D images to obtain pixel data and removes, from the pixel data, background pixels of the detection space to obtain object pixel data associated with objects in the detection space. Further, the processor creates a geometric model of the object pixel data, the geometric model including surface information of the objects in the detection space, generates one or more cuts in the geometric model to separate objects and obtain respective object geometric models, and performs object recognition to identify each object in the detection space based on the respective object geometric models. 1. A method comprising:calibrating, by one or more processors, a plurality of three-dimensional (3D) cameras based on a pattern in a surface;capturing, by the plurality of 3D cameras, 3D images for recognizing objects when present in an object-detection space;combining data, by the one or more processors, of the captured 3D images to obtain pixel data of the object-detection space;removing, by the one or more processors, from the pixel data, background pixels of a background in the object-detection space to obtain object pixel data associated with the objects in the object-detection space;creating, by the one or more processors, a geometric model of the object pixel data, the geometric model including surface information of the objects in the object-detection space;generating, by the one or more processors, one or more cuts in the geometric model to separate objects and obtain respective object geometric models; andperforming, by the one or more processors, object recognition to identify each object in the object- ...

28-02-2019 publication date

SPATIAL DATA ANALYSIS

Number: US20190065824A1
Assignee: Fugro N.V.

The spatial data analysis system for processing spatial data comprises a statistical analysis module and a convolutional neural network. The statistical analysis module calculates a discrete two-dimensional spatial distribution (V(k,l)) of at least one statistical measure derived from said spatial data. The spatial distribution defines a statistical measure value of one or more statistical measures for respective raster elements (R(k,l)) in a two-dimensional raster for the data elements derived from the spatial data in the spatial window associated with the raster element. The convolutional neural network is configured to provide object information of objects based on the statistical data. 1. A spatial data analysis system for analysis of spatial data comprising a set of spatial data points each being characterized at least by their coordinates in a three-dimensional coordinate system , the system comprising a convolutional neural network , to receive input data and being configured to provide object information about objects identified in the spatial data by the spatial data analysis system , characterized in that the spatial data analysis system further comprises a statistical analysis module having an input to receive data elements having a data element position with coordinates in a two-dimensional coordinate system and a data element value for said data element position derived from the coordinates of respective spatial data points , and having a computation facility to calculate a discrete spatial distribution of at least one statistical measure , said spatial distribution defining a statistical measure value of said at least one statistical measure for respective raster elements in a two-dimensional raster , each raster element being associated with a respective spatial window comprising a respective subset of said data elements , said statistical analysis module calculating the statistical measure value for a raster element from the respective ...
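The discrete spatial distribution V(k,l) can be sketched as a per-raster-cell statistic of the data element values, here taken to be the z coordinate of each point. The raster shape, extent, and choice of mean as the statistical measure are assumptions for illustration.

```python
import numpy as np

def raster_statistic(points, shape, extent, stat=np.mean):
    """For each raster element R(k,l), compute a statistical measure of
    the z values of the points falling in that cell."""
    xmin, xmax, ymin, ymax = extent
    rows, cols = shape
    xi = np.clip(((points[:, 0] - xmin) / (xmax - xmin) * cols).astype(int),
                 0, cols - 1)
    yi = np.clip(((points[:, 1] - ymin) / (ymax - ymin) * rows).astype(int),
                 0, rows - 1)
    grid = np.full(shape, np.nan)
    for k in range(rows):
        for l in range(cols):
            z = points[(yi == k) & (xi == l), 2]
            if z.size:
                grid[k, l] = stat(z)
    return grid

pts = np.array([[0.1, 0.1, 1.0], [0.2, 0.2, 3.0], [1.5, 0.5, 5.0]])
v = raster_statistic(pts, shape=(1, 2), extent=(0.0, 2.0, 0.0, 1.0))
```

A raster like this is a fixed-size 2D array regardless of the point density, which is what makes it a suitable input for a convolutional neural network.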

10-03-2016 publication date

Real-Time Dynamic Three-Dimensional Adaptive Object Recognition and Model Reconstruction

Number: US20160071318A1
Author: Lee Ken, Yin Jun
Assignee:

Methods and systems are described for generating a three-dimensional (3D) model of an object represented in a scene. A computing device receives a plurality of images captured by a sensor, each image depicting a scene containing physical objects and at least one object moving and/or rotating. The computing device generates a scan of each image comprising a point cloud corresponding to the scene and objects. The computing device removes one or more flat surfaces from each point cloud and crops one or more outlier points from the point cloud after the flat surfaces are removed using a determined boundary of the object to generate a filtered point cloud of the object. The computing device generates an updated 3D model of the object based upon the filtered point cloud and an in-process 3D model, and updates the determined boundary of the object based upon the filtered point cloud. 1. A computerized method for generating a three-dimensional (3D) model of an object represented in a scene , the method comprising:receiving, by an image processing module executing on a processor of a computing device, a plurality of images captured by a sensor coupled to the computing device, each image depicting a scene containing one or more physical objects, wherein at least one of the objects moves and/or rotates between capture of different images;generating, by the image processing module, a scan of each image comprising a 3D point cloud corresponding to the scene and objects;removing, by the image processing module, one or more flat surfaces from each 3D point cloud and cropping one or more outlier points from the 3D point cloud after the flat surfaces are removed using a determined boundary of the object to generate a filtered 3D point cloud of the object;generating, by the image processing module, an updated 3D model of the object based upon the filtered 3D point cloud and an in-process 3D model; andupdating, by the image processing module, the determined boundary of the object ...
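The flat-surface-removal and outlier-cropping steps can be sketched as below. A minimal sketch under strong assumptions: the flat surface is taken to be horizontal and is located as the most populated height bin, whereas a production system would fit the plane properly (e.g. with RANSAC); all thresholds are hypothetical.

```python
import numpy as np

def remove_flat_surface(cloud, bin_width=0.01, tol=0.02):
    """Drop points near the dominant horizontal surface (e.g. a turntable),
    located as the most populated height bin of the cloud."""
    z = cloud[:, 2]
    edges = np.arange(z.min(), z.max() + 2 * bin_width, bin_width)
    hist, edges = np.histogram(z, bins=edges)
    plane_z = edges[np.argmax(hist)] + bin_width / 2
    return cloud[np.abs(z - plane_z) > tol]

def crop_outliers(cloud, bound_min, bound_max):
    """Keep only points inside the object's current axis-aligned boundary."""
    inside = np.all((cloud >= bound_min) & (cloud <= bound_max), axis=1)
    return cloud[inside]

# 50 table points at z = 0 plus two object points above the table.
table = np.column_stack([np.random.rand(50), np.random.rand(50), np.zeros(50)])
obj = np.array([[0.5, 0.5, 0.30], [0.5, 0.6, 0.35]])
kept = remove_flat_surface(np.vstack([table, obj]))
```

The filtered cloud feeds the in-process 3D model, and the updated model boundary in turn tightens the crop for the next frame.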

08-03-2018 publication date

APPARATUS AND METHOD FOR REJECTING ERRONEOUS OBJECTS OF EXTRUSION IN POINT CLOUD DATA

Number: US20180068169A1
Assignee:

A method of rejecting the presence of an object of extrusion within a point cloud. The method comprises receiving, through a data interface, data describing a set of measurements of observed portions of the one or more objects in the scene. The method further comprises receiving data describing an extruded object that is hypothesized to exist in the scene. The method further comprises finding a set of near measurement points comprising measurement points wherein each measurement point is within a predefined distance of the hypothesized extruded object. The method further comprises classifying points within the set of near measurement points associated with the hypothesized extruded object as on-surface or off-surface. The method further comprises rejecting the hypothesized extruded object whose off-surface measurement points exceed an allowable threshold. 1. A method of rejecting the presence of an object of extrusion within a point cloud , said method comprising the steps of:receiving, through a data interface, data describing a set of measurements of observed portions of the one or more objects in the scene;receiving data describing an extruded object that is hypothesized to exist in the scene;finding a set of near measurement points comprising measurement points wherein each measurement point is within a predefined distance of the hypothesized extruded object;classifying points within the set of near measurement points associated with the hypothesized extruded object as on-surface or off-surface; andrejecting a hypothesized extruded object whose ratio of off-surface measurement points to on-surface measurement points exceeds an allowable threshold.2. The method of claim 1 , where a feature extraction algorithm is used to produce the data describing the hypothesized extruded object.3. The method of claim 1 , where the data describing the hypothesized extruded object describes a cylinder.4. 
The method of claim 1 , where the allowable threshold is chosen based ...
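For the cylinder case of claim 3, the near-point classification and ratio test can be sketched as follows. The distance thresholds and the ratio limit are hypothetical choices, not values from the patent.

```python
import numpy as np

def reject_cylinder(points, axis_point, axis_dir, radius,
                    near_dist=0.10, surf_tol=0.01, max_ratio=0.5):
    """Classify near measurement points as on- or off-surface for a
    hypothesized cylinder; return True (reject) if the off/on ratio
    exceeds the allowable threshold."""
    axis_dir = axis_dir / np.linalg.norm(axis_dir)
    rel = points - axis_point
    # radial distance of every point from the cylinder axis
    radial = np.linalg.norm(rel - np.outer(rel @ axis_dir, axis_dir), axis=1)
    near = np.abs(radial - radius) < near_dist       # near-measurement set
    on = np.abs(radial[near] - radius) < surf_tol    # on- vs off-surface
    if on.sum() == 0:
        return True                                  # no supporting evidence
    return (~on).sum() / on.sum() > max_ratio

axis_point, axis_dir = np.zeros(3), np.array([0.0, 0.0, 1.0])
ring = np.array([[1, 0, 0], [0, 1, 0], [-1, 0, 0], [0, -1, 0]], dtype=float)
stray = np.array([[1.05, 0, 1], [0, 1.05, 2], [1.07, 0, 3]])
rejected = reject_cylinder(np.vstack([ring, stray]), axis_point, axis_dir, 1.0)
```

With three stray points against four on-surface points the off/on ratio is 0.75, so the hypothesized cylinder is rejected; with a single stray point it would survive.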

08-03-2018 publication date

METHOD FOR THE RECOGNITION OF RAISED CHARACTERS, CORRESPONDING COMPUTER PROGRAM AND DEVICE

Number: US20180068196A1
Assignee:

A method of character recognition is implemented by an electronic device to recognize, in an image representing an object comprising at least one raised character, called a basic image, at least one raised character of the basic image. The method includes: a phase of processing at least one image, the phase including at least one implementation of a Phong reflection module and delivering at least one identification image; and a phase of identifying characters as a function of the basic image and the at least one identification image. 3. The method according to claim 1 , wherein the raised characters are situated on a bank card in said basic image.4. The method according to claim 3 , wherein the method comprises a phase of preliminary processing of an image of the bank card captured by a camera claim 3 , delivering said basic image.5. The method according to claim 4 , wherein said phase of preliminary processing comprises:detecting edges of the bank card, delivering an intermediate image;selecting lines in the intermediate image;computing intersections of the lines, delivering four corners of the bank card; andconverting the intermediate image, so that the four corners coincide with four corners of a rectangle, delivering said basic image.7. The method according to claim 1 , wherein said phase of identifying said characters comprises:sub-dividing said normal map, delivering a list of sections of said normal map, each section corresponding to a character in said at least one zone;building a list of current vectors from said list of sections, each current vector representing a section;comparing said current vectors with a list of model vectors preliminarily built on the basis of normal mappings of the known characters, each model vector corresponding to a known character;determining a list of current characters corresponding to said list of current vectors.8. 
The method according to claim 7 , wherein said current vector and said model vectors have m×n dimensions claim ...

27-02-2020 publication date

NAVIGATING AMONG IMAGES OF AN OBJECT IN 3D SPACE

Number: US20200065558A1
Assignee:

A three-dimensional model of an object is employed to aid in navigation among a number of images of the object taken from various viewpoints. In general, an image of an object such as a digital photograph is displayed in a user interface or the like. When a user selects a point within the display that corresponds to a location on the surface of the object, another image may be identified that provides a better view of the object. In order to maintain user orientation to the subject matter while navigating to this destination viewpoint, the display may switch to a model view and a fly-over to the destination viewpoint may be animated using the model. When the destination viewpoint is reached, the display may return to an image view for further inspection, marking, or other manipulation by the user. 1. A method of navigating among a number of images taken of an object , the method comprising:displaying a first image of an object, the first image selected from a number of images taken of the object, the first image showing a surface of the object from a first viewpoint;receiving a selection of a location on the surface of the object;selecting a second image of the object from the number of images taken of the object, the second image selected to provide an improved view of the location on the surface of the object from a second viewpoint;rendering an animation of a spatial transition from the first viewpoint to the second viewpoint using a three-dimensional model of the object;displaying the animation; anddisplaying the second image upon reaching the second viewpoint in the animation.2. The method of wherein receiving the selection of the location includes receiving the selection from within a graphical user interface.3. The method of wherein receiving the selection includes at least one of a mouse input and a touch screen input.4. 
The method of wherein the three-dimensional model includes a texture map that is derived from at least one image of the number of images.5. ...
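Selecting the second image, the one with the "improved view" of the chosen surface location, can be sketched as picking the camera whose viewing direction is most anti-parallel to the surface normal at that point. A sketch only: a real selector would also weigh occlusion, field of view, and image resolution.

```python
import numpy as np

def best_view(point, normal, camera_positions):
    """Pick the image whose camera looks most directly at the selected
    surface point: maximize alignment between the viewing direction and
    the surface normal."""
    normal = normal / np.linalg.norm(normal)
    dirs = point - np.asarray(camera_positions, dtype=float)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return int(np.argmax(-dirs @ normal))   # index of the destination image

cams = [np.array([0.0, 0.0, 5.0]),   # directly above the selected point
        np.array([5.0, 0.0, 0.5])]   # grazing view
idx = best_view(np.zeros(3), np.array([0.0, 0.0, 1.0]), cams)
```

The chosen index determines the destination viewpoint for the animated model fly-over before the display switches back to the image view.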

27-02-2020 publication date

Method for determining pose and for identifying a three-dimensional view of a face

Number: US20200065564A1
Author: Sami Romdhani
Assignee: Idemia Identity and Security France SAS

The invention also relates to a method for checking the identity of an individual, using a frontalized shot of their face obtained via the method for face detection and determination of pose applied to a not necessarily frontalized shot.

27-02-2020 publication date

METHOD AND APPARATUS FOR TRAINING OBJECT DETECTION MODEL

Number: US20200066036A1
Author: CHOI Hee-min
Assignee: SAMSUNG ELECTRONICS CO., LTD.

An object detection training method and apparatus are provided. The object detection training apparatus determines a pose and a dimension of an object, and a bounding box at various viewpoints from an input image based on an object detection model, and trains the object detection model based on a loss. 1. An object detection training method comprising:estimating a pose and a dimension of an object based on a feature extracted from an input image, using an object detection model;calculating a three-dimensional (3D) bounding box from the pose and the dimension;determining a first output bounding box corresponding to the object by projecting the 3D bounding box to a first projection image;determining a second output bounding box corresponding to the object by projecting the 3D bounding box to a second projection image; andtraining the object detection model based on the pose, the dimension, the first output bounding box and the second output bounding box, the training comprising a fusion operation over the first output bounding box and the second output bounding box.2. The object detection training method of claim 1 , wherein the determining of the first output bounding box comprises determining a bird's eye view bounding box corresponding to the object by projecting the 3D bounding box to a bird's eye view projection image.3. The object detection training method of claim 1 , wherein the determining of the second output bounding box comprises determining a perspective bounding box corresponding to the object by projecting the 3D bounding box to a perspective projection image.4. The object detection training method of claim 1 , wherein the estimating of the pose and the dimension of the object comprises extracting features from i) a two-dimensional (2D) bounding box corresponding to the object detected from the input image and ii) a crop image corresponding to the 2D bounding box.5. The object detection training method of claim 1 , wherein the training of the object ...
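The projection of the estimated 3D bounding box to a bird's-eye-view image (claim 2) can be sketched as computing the box footprint from center, dimensions, and yaw. The parameterization and axis conventions are assumptions for illustration.

```python
import numpy as np

def bev_box(center, dims, yaw):
    """Project a 3D bounding box (length, width, height at `center`,
    heading `yaw`) to its four bird's-eye-view corners in the x-y plane."""
    l, w, _h = dims
    x = np.array([l, l, -l, -l]) / 2.0
    y = np.array([w, -w, -w, w]) / 2.0
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s], [s, c]])
    return (rot @ np.vstack([x, y])).T + np.asarray(center, dtype=float)[:2]

corners = bev_box(center=(0.0, 0.0, 0.0), dims=(2.0, 1.0, 1.5), yaw=0.0)
```

The perspective projection of claim 3 would instead push all eight 3D corners through the camera matrix; both projected boxes then enter the training loss.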

15-03-2018 publication date

IDENTIFYING A VEHICLE USING A MOBILE DEVICE

Number: US20180075287A1
Author: Elswick Richard L.
Assignee:

A communication system and a method of identifying a target vehicle using application software on a mobile device. The method includes the steps of receiving at the mobile device a wireless signal transmitted from a transmitter at the target vehicle, wherein the wireless signal is received using an imaging detector in the mobile device; identifying that the target vehicle is a source of the wireless signal based on characteristics of the wireless signal; and in response to the identification, displaying to a user on a display of the mobile device an indication of an identity of the target vehicle. 1. A method of identifying a target vehicle using application software on a mobile device , the method comprising the steps of:receiving at the mobile device a wireless signal transmitted from a transmitter at the target vehicle, wherein the wireless signal is received using an imaging detector in the mobile device;identifying that the target vehicle is a source of the wireless signal based on characteristics of the wireless signal; andin response to the identification, displaying to a user on a display of the mobile device an indication of an identity of the target vehicle.2. The method of claim 1 , wherein the wireless signal is comprised of light claim 1 , wherein the mobile device receives the wireless signal when the mobile device is within a line-of-sight (LOS) of the transmitter.3. The method of claim 1 , wherein the characteristics of the wireless signal include a predetermined sequence of light pulses.4. The method of claim 3 , wherein the predetermined sequence of light pulses are representative of a mobile device identifier provided by the mobile device to a remote server.5. The method of claim 1 , wherein the characteristics of the wireless signal comprise encoded claim 1 , infrared (IR) light pulses.6. 
The method of claim 1 , wherein the displaying step further comprises displaying video image data that includes an image of the target vehicle and a computer- ...

15-03-2018 publication date

Wayfinding and Obstacle Avoidance System

Number: US20180075302A1
Assignee:

A wayfinding and obstacle avoidance system for assisting the visually impaired with wayfinding and obstacle avoidance. The wayfinding and obstacle avoidance system generally includes a processor, a memory, which stores program instructions, and potentially an area map. The wayfinding device is generally configured to receive pose data and depth data from a provider in order to determine the position of one or more objects within the field of view of the wayfinding device. This determination is usually made using interpreters, which may include a memoryless interpreter, a persistent interpreter, and a system interpreter. Interpreters will generally provide this information to a user via one or more presenters. This feedback may include, but is not limited to, visual feedback, audio feedback, and haptic feedback. 1. A wayfinding and obstacle avoidance device , comprising:a means for obtaining depth data indicating the distance and direction of a plurality of points within a field of view of the device;a means for providing sensory feedback;a processor; anda memory comprising program instructions that when executed by the processor cause the device to:acquire a point cloud, wherein the point cloud comprises a plurality of points indicating the position of the plurality of points relative to a plane of reference;group pluralities of points in the point cloud that are in close proximity to each other;reject groups containing a number of points below a threshold;categorize any non-rejected groups as at least part of an object; andusing the means for providing sensory feedback, produce a sensory representation of the presence of at least one object within the field of view of the device.2. 
The wayfinding and obstacle avoidance device of claim 1 , wherein the means for obtaining depth data comprises:an infrared emitter; andan infrared sensor; and emit infrared light using the infrared emitter;detect reflections of the infrared light using the infrared ...
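The group-then-reject steps of claim 1 can be sketched as a simple single-linkage flood fill over the point cloud. The linking distance and minimum group size are hypothetical parameters.

```python
import numpy as np

def group_points(points, link_dist=0.2, min_size=3):
    """Group points that lie within link_dist of another group member,
    then reject groups with fewer than min_size points; the surviving
    groups are categorized as (at least part of) an object."""
    points = np.asarray(points, dtype=float)
    labels = -np.ones(len(points), dtype=int)
    group_id = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed], frontier = group_id, [seed]
        while frontier:                       # flood-fill the neighborhood
            i = frontier.pop()
            near = np.flatnonzero(
                (labels == -1) &
                (np.linalg.norm(points - points[i], axis=1) < link_dist))
            labels[near] = group_id
            frontier.extend(near.tolist())
        group_id += 1
    return [np.flatnonzero(labels == g) for g in range(group_id)
            if (labels == g).sum() >= min_size]

pts = [[0.0, 0.0], [0.1, 0.0], [0.05, 0.1], [5.0, 5.0]]
objects = group_points(pts)   # one group of three; the stray point is rejected
```

Rejecting tiny groups is what keeps sensor noise from being announced to the user as an obstacle.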

05-03-2020 publication date

Method and apparatus for subject identification

Number: US20200074153A1
Author: Allen Yang Yang
Assignee: Atheer Inc

Comprehensive 2D learning images are collected for learning subjects. Standardized 2D gallery images of many gallery subjects are collected, one per gallery subject. A 2D query image of a query subject is collected, of arbitrary viewing aspect, illumination, etc. 3D learning models, 3D gallery models, and a 3D query model are determined from the learning, gallery, and query images. A transform is determined for the selected learning model and each gallery model that yields or approximates the query image. The transform is at least partly 3D, such as 3D illumination transfer or 3D orientation alignment. The transform is applied to each gallery model so that the transformed gallery models more closely resemble the query model. 2D transformed gallery images are produced from the transformed gallery models, and are compared against the 2D query image to identify whether the query subject is also any of the gallery subjects.

18-03-2021 publication date

Method and System for Hand Pose Detection

Number: US20210081055A1
Assignee:

A method for hand pose identification in an automated system includes providing depth map data of a hand of a user to a first neural network trained to classify features corresponding to a joint angle of a wrist in the hand to generate a first plurality of activation features, and performing a first search in a predetermined plurality of activation features stored in a database in the memory to identify a first plurality of hand pose parameters for the wrist associated with predetermined activation features in the database that are nearest neighbors to the first plurality of activation features. The method further includes generating a hand pose model corresponding to the hand of the user based on the first plurality of hand pose parameters and performing an operation in the automated system in response to input from the user based on the hand pose model.

1. A system for computer human interaction comprising:
a depth camera configured to generate depth map data of a hand of a user;
an output device;
a memory storing at least a first neural network and a recommendation engine; and
a processor operatively connected to the depth camera, the output device, and the memory, the processor being configured to:
receive depth map data of a hand of a user from the depth camera;
generate, using the first neural network, a first plurality of activation features based at least in part on the depth map data;
perform a first search in a predetermined plurality of activation features stored in a database stored in the memory to identify a first plurality of hand pose parameters for the wrist using nearest neighbor identification;
generate a hand pose model corresponding to the hand of the user based on the first plurality of hand pose parameters; and
generate an output with the output device in response to input from the user based at least in part on the hand pose model.

2. The system of claim 1, wherein the processor is further configured to: identify a second ...
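The "first search ... using nearest neighbor identification" amounts to a nearest-neighbor lookup of stored activation features. A minimal sketch, assuming the database is a dense array of feature vectors with pose parameters stored row-aligned (the array layout and Euclidean metric are illustrative assumptions, not stated in the patent):

```python
import numpy as np

def nearest_pose_parameters(query_features, db_features, db_pose_params, k=1):
    """Return the pose-parameter rows whose stored activation features are
    nearest (Euclidean distance) to the query activation features."""
    d = np.linalg.norm(db_features - query_features, axis=1)
    idx = np.argsort(d)[:k]           # indices of the k nearest neighbors
    return db_pose_params[idx]
```

The returned parameters would then seed the hand pose model generated for the user.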

22-03-2018 publication date

INFORMATION PROCESSING APPARATUS, OBJECT RECOGNITION APPARATUS, METHOD OF CONTROLLING INFORMATION PROCESSING APPARATUS, AND STORAGE MEDIUM

Number: US20180082106A1
Assignee:

An information processing apparatus comprises an image generation unit configured to generate, based on a first image in which a transparent object having transparency is captured and a second image in which a target object is captured, a reproduced image in which the target object which is at least partially covered by the transparent object is reproduced; and a creation unit configured to create, based on the reproduced image, a model for recognizing the target object which is at least partially covered by the transparent object.

1. An information processing apparatus comprising:
an image generation unit configured to generate, based on a first image in which a transparent object having transparency is captured and a second image in which a target object is captured, a reproduced image in which the target object which is at least partially covered by the transparent object is reproduced; and
a creation unit configured to create, based on the reproduced image, a model for recognizing the target object which is at least partially covered by the transparent object.

2. The information processing apparatus according to claim 1, wherein the image generation unit generates the reproduced image by, for a pixel whose luminance value in the first image is greater than or equal to a threshold, setting the luminance value of that pixel as a corresponding luminance value in the reproduced image.

3. The information processing apparatus according to claim 1, further comprising a normal obtainment unit configured to obtain information of a normal direction of each pixel from a depth image in which a shape of the target object is obtained, wherein the image generation unit generates the reproduced image based on the first image, the second image, and the information of the normal direction.

4. The information processing apparatus according to claim 1, further comprising an image determination unit configured to determine a multiformity of the first image ...
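The compositing rule in claim 2 is a per-pixel luminance threshold: bright pixels of the transparent-object image (e.g. specular highlights) override the target-object image. A sketch of that rule, with array names and an 8-bit threshold chosen for illustration:

```python
import numpy as np

def reproduce_covered_image(transparent_img, target_img, threshold=128):
    """Compose a 'reproduced' image of a target object as seen through a
    transparent object: where the transparent-object image is at or above
    the luminance threshold, keep its value; elsewhere keep the target."""
    out = target_img.copy()
    mask = transparent_img >= threshold
    out[mask] = transparent_img[mask]
    return out
```

Such reproduced images would then serve as training data for the recognition model created by the creation unit.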

22-03-2018 publication date

SYSTEMS, DEVICES, AND METHODS FOR THREE-DIMENSIONAL ANALYSIS OF EYEBAGS

Number: US20180082108A1
Assignee:

In some embodiments of the present disclosure, a system for processing three-dimensional face scan data is provided. A three-dimensional scanner produces an image of a face including an area of interest that includes an eyebag area. A profile of the eyebag area is determined by the system. In some embodiments, the profile is determined based on a vertical slice at the center of the eyebag area. Profiles for multiple sets of scan data may be compared to determine quantitative differences between eyebag profiles. These differences may be used for quantitatively comparing the effects of products applied to the eyebag area between scans. These differences may also be used for predictively generating three-dimensional models to illustrate predicted effects of the use of a product on a face.

1. A computer-implemented method of processing three-dimensional face scan data, the method comprising:
receiving, by a facial analysis device, first face scan data representing a three-dimensional scan of a face;
determining, by the facial analysis device, a first model of an eyebag area of the first face scan data;
determining, by the facial analysis device, a first score based on the first model; and
storing, by the facial analysis device, the first score in a scan data store.

2. The method of claim 1, further comprising:
comparing, by the facial analysis device, the first model to at least one stored model of previous face scan data representing a previous three-dimensional scan of the face to determine differences between the models; and
storing, by the facial analysis device, the determined differences in the scan data store.

3. The method of claim 2, further comprising:
receiving, by the facial analysis device, new face scan data representing a three-dimensional scan of a new face;
determining, by the facial analysis device, a new model of an eyebag area of the new face scan data;
determining, by the facial analysis device, predicted face scan data using the new model and the determined ...
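The "vertical slice at the center of the eyebag area" can be sketched as extracting the center column of a depth map within the eyebag region and comparing slices between scans. The region encoding and mean-absolute-difference score below are illustrative assumptions; the patent does not specify how the profile difference is quantified:

```python
import numpy as np

def eyebag_profile(depth_map, region):
    """Vertical depth profile through the center column of an eyebag
    region, given as (top, bottom, left, right) row/column indices."""
    top, bottom, left, right = region
    center_col = (left + right) // 2
    return depth_map[top:bottom, center_col]

def profile_difference(profile_a, profile_b):
    """One possible quantitative difference between two eyebag profiles:
    mean absolute depth change along the slice."""
    return float(np.mean(np.abs(profile_a - profile_b)))
```

Differences computed this way could be compared across scans taken before and after a product is applied.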

25-03-2021 publication date

OBJECT LOCATION ANALYSIS

Number: US20210086358A1
Author: Northcutt Brandon
Assignee: Toyota Research Institute, Inc.

A method for controlling a robotic device based on observed object locations is presented. The method includes observing objects in an environment. The method also includes generating a probability distribution for locations of the observed objects. The method further includes controlling the robotic device to perform an action in the environment based on the generated probability distribution.

1. A method for controlling a robotic device based on observed object locations, comprising:
observing objects in an environment;
generating a probability distribution for locations of the observed objects; and
controlling the robotic device to perform an action in the environment based on the generated probability distribution.

2. The method of claim 1, further comprising observing the objects over a period of time.

3. The method of claim 2, further comprising estimating a continuous distribution using the observations of the objects over the period of time.

4. The method of claim 3, in which the probability distribution is based on the continuous distribution.

5. The method of claim 1, further comprising:
generating a cost map from the probability distribution;
overlaying the cost map on the environment; and
controlling the robotic device based on the cost map.

6. The method of claim 5, further comprising controlling the robot to:
avoid a first area in the environment with a first object probability that is greater than a first threshold; or
navigate to a second area in the environment with a second object probability that is greater than a second threshold.

7. The method of claim 1, further comprising controlling the robotic device to place an object in the environment based on the probability distribution.

8. An apparatus for controlling a robotic device based on observed object locations, the apparatus comprising:
a memory; and
a processor configured:
to observe objects in an environment;
to generate a probability distribution for locations of the observed objects; and
to ...
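The probability distribution and cost map of claims 1 and 5 can be sketched with a discrete occupancy grid: counts of observed object positions are normalized into a probability per cell, and likely cells receive a high traversal cost. Grid encoding, threshold, and cost values are illustrative assumptions:

```python
import numpy as np

def location_probability(observations, grid_shape):
    """Discrete probability distribution over grid cells, estimated from
    repeated (row, col) object observations."""
    counts = np.zeros(grid_shape)
    for r, c in observations:
        counts[r, c] += 1
    return counts / counts.sum()

def cost_map(prob, avoid_threshold=0.5, high_cost=100.0):
    """Cost map overlaying the environment: cells where an object is
    likely get a high traversal cost, others a nominal cost."""
    return np.where(prob > avoid_threshold, high_cost, 1.0)
```

A planner consuming this cost map would avoid high-probability cells (claim 6) or, inverted, navigate toward them.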

12-03-2020 publication date

METHODS AND APPARATUS FOR TESTING MULTIPLE FIELDS FOR MACHINE VISION

Number: US20200082230A1
Assignee: Cognex Corporation

The techniques described herein relate to methods, apparatus, and computer readable media configured to test a pose of a three-dimensional model. A three-dimensional model is stored, the three-dimensional model comprising a set of probes. Three-dimensional data of an object is received, the three-dimensional data comprising a set of data entries. The three-dimensional data is converted into a set of fields, comprising generating a first field comprising a first set of values, where each value of the first set of values is indicative of a first characteristic of an associated one or more data entries from the set of data entries, and generating a second field comprising a second set of values, where each second value of the second set of values is indicative of a second characteristic of an associated one or more data entries from the set of data entries, wherein the second characteristic is different than the first characteristic. A pose of the three-dimensional model is tested with the set of fields, comprising testing the set of probes against the set of fields, to determine a score for the pose.

1. A computerized method for testing a pose of a three-dimensional model, the method comprising:
storing a three-dimensional model, the three-dimensional model comprising a set of probes;
receiving three-dimensional data of an object, the three-dimensional data comprising a set of data entries;
converting the three-dimensional data into a set of fields, comprising:
generating a first field comprising a first set of values, where each value of the first set of values is indicative of a first characteristic of an associated one or more data entries from the set of data entries; and
generating a second field comprising a second set of values, where each second value of the second set of values is indicative of a second characteristic of an associated one or more data entries from the set of data entries, wherein the second characteristic is different than the first characteristic; and ...
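The probe-against-field scoring can be sketched in a heavily simplified form: each probe targets one field at a grid position, the candidate pose is reduced to an integer translation, and the score is the mean agreement between expected and observed field values. The probe tuple layout, translation-only pose, and scalar agreement measure are all illustrative assumptions, not the patent's representation:

```python
import numpy as np

def score_pose(probes, fields, pose):
    """Score a candidate pose by testing each probe against its field.
    A probe is (row, col, field_index, expected_value); `pose` is an
    integer (drow, dcol) translation applied to every probe."""
    drow, dcol = pose
    total = 0.0
    for row, col, f, expected in probes:
        r, c = row + drow, col + dcol
        field = fields[f]
        if 0 <= r < field.shape[0] and 0 <= c < field.shape[1]:
            observed = field[r, c]
            # agreement in [0, 1]: 1 for exact match, 0 for far-off values
            total += 1.0 - min(1.0, abs(observed - expected))
    return total / len(probes)
```

Scoring many candidate poses this way and keeping the best is the usual way such a per-pose score is consumed.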

25-03-2021 publication date

SYSTEMS AND METHODS FOR ADJUSTING STOCK EYEWEAR FRAMES USING A 3D SCAN OF FACIAL FEATURES

Number: US20210088811A1
Assignee:

Systems and methods are disclosed for generating a 3D computer model of an eyewear product, using a computer system, the method including obtaining an inventory comprising a plurality of product frames; scanning a user's anatomy; extracting measurements of the user's anatomy; obtaining a first model of a contour and/or surface of the user's anatomy, based on the extracted measurements of the user's anatomy; identifying, based on the contour and/or the surface of the user's anatomy, a first product frame among the plurality of product frames; determining adjustments to the first product frame based on the contour and/or the surface of the user's anatomy; and generating a second model rendering comprising the adjusted first product frame matching the contours and/or the surface of the user's anatomy.

1. A method of generating instructions for adjusting and previewing stock eyewear frames, the method comprising:
receiving 3D scans and/or 3D CAD files of a plurality of eyewear frames;
obtaining a 3D scan and/or images of an individual's face;
extracting face measurements of the individual's face from the 3D scan and/or images;
calculating fit parameters based on the extracted face measurements of the individual's face and 3D scans and/or 3D CAD files of the plurality of frames;
identifying a filtered subset of the plurality of frames that satisfy the calculated fit parameters based on aesthetic, fit, adjustability, and/or optical constraints;
selecting or receiving a selection of one of the filtered subset of frames that satisfy the calculated fit parameters;
adjusting a 3D frame model of the selected frames based on the individual's extracted face measurements, according to one or more aesthetic, fit, adjustability, and/or optical constraints;
solving for 3D position of wear lens measurements associated with the 3D frame model relative to the individual's extracted face measurements;
previewing the adjusted 3D frame model over images and/or a 3D scan of the individual's face based ...

30-03-2017 publication date

FACILITATING DYNAMIC MONITORING OF BODY DIMENSIONS OVER PERIODS OF TIME BASED ON THREE-DIMENSIONAL DEPTH AND DISPARITY

Number: US20170087415A1
Assignee: Intel Corporation

A mechanism is described for facilitating smart monitoring of body dimensions according to one embodiment. A method of embodiments, as described herein, includes receiving a first request to take a first picture of a user, wherein the first picture is taken at a first point in time using a depth-sensing camera; automatically computing first body dimensions relating to a body of the user based on at least one of a first image of the body and first depth information relating to one or more parts of the body, wherein the first image and the first depth information are obtained from the first picture; generate a first three-dimensional (3D) model of the body based on the first body dimensions; and communicating at least one of the first 3D model and the first body dimensions to a display device, wherein the display device to display at least one of the first 3D model and the first body dimensions. 1. An apparatus comprising:detection/reception logic to receive a first request to take a first picture of a user, wherein the first picture is taken at a first point in time using a depth-sensing camera;analysis/model computation logic to automatically compute first body dimensions relating to a body of the user based on at least one of a first image of the body and first depth information relating to one or more parts of the body, wherein the first image and the first depth information are obtained from the first picture, wherein the analysis/model computation logic is further to generate a first three-dimensional (3D) model of the body based on the first body dimensions; andcommunication/compatibility logic to communicate at least one of the first 3D model and the first body dimensions to a display device, wherein the display device to display at least one of the first 3D model and the first body dimensions.2. The apparatus of claim 1 , wherein the detection/reception logic is further to receive a second request to take second picture of the user claim 1 , wherein the ...

25-03-2021 publication date

APPARATUS, SYSTEMS and METHODS FOR PROVIDING THREE-DIMENSIONAL INSTRUCTION MANUALS IN A SIMPLIFIED MANNER

Number: US20210089597A1
Assignee:

Interactive, electronic guides for an object may include one or more 3D models and one or more associated tasks, such as how to assemble, operate, or repair an aspect of the object. A user electronic device may scan an encoded tag on the object, and transmit the scan data to an electronic guide distribution server. The server may receive an electronic guide generated by an electronic guide generator having a 3D model repository and a task repository, the guide associated with the encoded tag. Guide managers may add or modify 3D models and/or tasks to broaden the available guides, and tag producers may generate encoded tags using new and/or modified 3D models and tasks and apply tags to objects.

1. A system for disseminating electronic guides, the system comprising:
an electronic guide generator having (1) a 3D model repository with a plurality of 3D models, each 3D model associated with an encoded tag, and (2) a task repository having a plurality of tasks, each task associated with the encoded tag, the electronic guide generator configured to generate an electronic guide having at least one 3D model from the 3D model repository and at least one task from the task repository in response to receiving a scan data;
a plurality of user electronic devices, each user electronic device having (1) a scanner configured to scan an encoded tag on an object to generate the scan data including data associated with the encoded tag, (2) a scan data transmitter configured to transmit scan data, (3) an electronic guide receiver configured to receive an electronic guide, and (4) a display configured to display the received electronic guide;
an electronic guide distribution server in electronic communication with the electronic guide generator and the plurality of user electronic devices, and configured to (1) receive a scan data from a user electronic device, (2) transmit the scan data to the electronic guide generator, (3) receive a generated electronic guide from the electronic guide ...

25-03-2021 publication date

TEXTURED PRINTING

Number: US20210090283A1
Assignee: KYOCERA DOCUMENT SOLUTIONS, INC.

Methods relating generally to textured printing are disclosed. In a method, at least one object or object outline in an image is identified using an artificial intelligence engine. A sub image is generated for the at least one object or object outline. The image and the sub image are processed to convert into image information and associated position information for the sub image in relation to the image for textured printing. The image information and the position information are stored in a memory for the textured printing. 1. A method , comprising:identifying at least one object in an image using an artificial intelligence engine;generating a sub image for the at least one object;processing the image and the sub image to convert into image information and associated position information for the sub image in relation to the image for textured printing; andstoring in a memory the image information and the position information for the textured printing.2. The method according to claim 1 , further comprising printing with a printer:the image as background; andthe sub image over the background including the at least one object located with the position information to provide a distinctive raised texture of the sub image with respect to a remainder of the image.3. The method according to claim 2 , further comprising:recognizing the at least one object by the artificial intelligence engine; andselecting a texture for the textured printing of the sub image responsive to recognition of the at least one object.4. The method according to claim 2 , wherein:the printer is a multi-function printer; andthe image is obtained by scanning by the printer.5. The method according to claim 2 , wherein the printer is configured with the artificial intelligence engine.6. The method according to claim 2 , further comprising:communicating the image to a cloud-based backend application including the artificial intelligence engine; andsending the image information and the position ...

31-03-2016 publication date

SCHEMES FOR RETRIEVING AND ASSOCIATING CONTENT ITEMS WITH REAL-WORLD OBJECTS USING AUGMENTED REALITY AND OBJECT RECOGNITION

Number: US20160093106A1
Author: Black Glenn
Assignee:

A method includes identifying a real-world object in a scene viewed by a camera of a user device, matching the real-world object with a tagged object based at least in part on image recognition and a sharing setting of the tagged object, the tagged object having been tagged with a content item, providing a notification to a user of the user device that the content item is associated with the real-world object, receiving a request from the user for the content item, and providing the content item to the user. A computer readable storage medium stores one or more computer programs, and an apparatus includes a processor-based device.

1. A method comprising:
identifying a real-world object in a scene viewed by a camera of a user device;
matching the real-world object with a tagged object based at least in part on image recognition and a sharing setting of the tagged object, the tagged object having been tagged with a content item;
providing a notification to a user of the user device that the content item is associated with the real-world object;
receiving a request from the user for the content item; and
providing the content item to the user.

2. The method of claim 1, wherein the sharing setting comprises whether the tagged object will be matched only with an image of the tagged object or with an image of any object sharing one or more common attributes with the tagged object.

3. The method of claim 1, wherein the matching of the real-world object with the tagged object is further based on one or more of a location of the user device and a social network connection between the user and an author of the content item.

4. The method of claim 1, wherein the notification comprises one or more of a sound notification, a pop-up notification, a vibration, and an icon in a display of the scene viewed by the camera of the user device.

5. The method of claim 1, wherein the content item comprises one or more of a text comment, an image, ...

30-03-2017 publication date

METHODS AND SYSTEMS OF PERFORMING PERFORMANCE CAPTURE USING AN ANATOMICALLY-CONSTRAINED LOCAL MODEL

Number: US20170091529A1
Assignee:

Techniques and systems are described for generating an anatomically-constrained local model and for performing performance capture using the model. The local model includes a local shape subspace and an anatomical subspace. In one example, the local shape subspace constrains local deformation of various patches that represent the geometry of a subject's face. In the same example, the anatomical subspace includes an anatomical bone structure, and can be used to constrain movement and deformation of the patches globally on the subject's face. The anatomically-constrained local face model and performance capture technique can be used to track three-dimensional faces or other parts of a subject from motion data in a high-quality manner. Local model parameters that best describe the observed motion of the subject's physical deformations (e.g., facial expressions) under the given constraints are estimated through optimization. The optimization can solve for rigid local patch motion, local patch deformation, and the rigid motion of the anatomical bones. The solution can be formulated as an energy minimization problem for each frame that is obtained for performance capture. 1. A computer-implemented method of performing facial performance tracking of a subject using an anatomically-constrained model of a face of the subject , the method comprising:obtaining the anatomically-constrained model, the anatomically-constrained model including a combination of a local shape subspace and an anatomical subspace, the local shape subspace including deformation shapes for each patch of a plurality of patches representing a geometry of the face, wherein a deformation shape of a patch defines a deformation of the patch for an observed facial expression, and wherein the anatomical subspace includes an anatomical bone structure constraining each of the plurality of patches;obtaining motion data of the face of the subject as the subject conducts a performance;determining, for each patch ...

19-03-2020 publication date

PRODUCT ONBOARDING MACHINE

Number: US20200089997A1
Author: Chaubard Francois
Assignee:

A method for generating training examples for a product recognition model is disclosed. The method includes capturing images of a product using an array of cameras. A product identifier for the product is associated with each of the images. A bounding box for the product is identified in each of the images. The bounding boxes are smoothed temporally. A segmentation mask for the product is identified in each bounding box. The segmentation masks are optimized to generate an optimized set of segmentation masks. A machine learning model is trained using the optimized set of segmentation masks to recognize an outline of the product. The machine learning model is run to generate a set of further-optimized segmentation masks. The bounding box and further-optimized segmentation masks from each image are stored in a master training set with its product identifier as a training example to be used to train a product recognition model. 1. A method comprising:obtaining image data of a product using a plurality of cameras, or depth cameras, the image data including a plurality of image frames and being captured while the product is moving in one or more dimensions;associating a product identifier for the product with each of the plurality of image frames;detecting a bounding box for the product in each of the plurality of image frames;using a bounding box smoothing algorithm to smooth the bounding box detections temporally;identifying a segmentation mask for the product in each bounding box;optimizing the segmentation masks for the product to generate an optimized set of segmentation masks for the product;training a machine learning model with the optimized set of segmentation masks;generating, using the machine learning model, a set of further-optimized segmentation masks for the product using the plurality of image frames as input, the set of further-optimized segmentation masks comprising a further-optimized segmentation mask for each bounding box and image frame of the ...
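The claim names "a bounding box smoothing algorithm to smooth the bounding box detections temporally" without specifying one; a centered moving average over neighboring frames is one common choice and is used here purely as an illustration:

```python
def smooth_boxes(boxes, window=3):
    """Temporally smooth per-frame bounding boxes (x1, y1, x2, y2) with a
    centered moving average over `window` frames (shrinks at the ends)."""
    half = window // 2
    smoothed = []
    for i in range(len(boxes)):
        lo, hi = max(0, i - half), min(len(boxes), i + half + 1)
        n = hi - lo
        smoothed.append(tuple(
            sum(b[k] for b in boxes[lo:hi]) / n for k in range(4)
        ))
    return smoothed
```

Smoothing suppresses per-frame detector jitter so the segmentation masks extracted inside each box stay consistent across the captured sequence.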

07-04-2016 publication date

REGISTRATION OF SAR IMAGES BY MUTUAL INFORMATION

Number: US20160098838A1

A method for registering an image using a similarity criterion based on mutual information. The image to be registered is compared with a plurality of reference representations of an object, each reference representation being of a plurality of homogeneous zones. The mutual information between the image to be registered and each reference representation is calculated on the set of homogeneous zones. The registration is given by the reference representation corresponding to the highest mutual information. The method can be advantageously applied to aircraft aided navigation by registering images obtained by a synthetic aperture radar.

1.-12. (canceled)

13. A method for registering an image of an object with respect to a plurality of reference representations of the object, each reference representation being of a set of homogeneous zones, each homogeneous zone having a homogeneous intensity level, the method comprising:
calculating, for each reference representation of the plurality, mutual information between the image and the reference representation, on the set of homogeneous zones of the reference representation;
comparing the mutual information thus calculated and selecting the reference representation of the plurality for which the mutual information is highest.

14. The method for registering an image according to claim 13, wherein, for each reference representation, the mutual information is determined by:
calculating entropy of the image on the set of homogeneous zones of the reference representation;
calculating, for each homogeneous zone, a weighting factor, associated with the zone, corresponding to the ratio between the area of the zone and the total area of the set of the homogeneous zones;
calculating, for each homogeneous zone, the entropy of the image on the zone;
calculating the difference between the entropy of the image on the set of the homogeneous zones and the weighted sum of the entropies of the image on the different homogeneous ...
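Claim 14 spells out the computation: MI = H(image over all zones) − Σ_z w_z · H(image over zone z), with w_z the zone's area fraction. A direct transcription, with zones encoded as boolean masks and a fixed-range histogram estimator for entropy (the bin count and intensity range are illustrative assumptions):

```python
import numpy as np

def entropy(values, bins=32):
    """Shannon entropy (bits) of intensities in [0, 1] via a histogram."""
    hist, _ = np.histogram(values, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def zone_mutual_information(image, zones, bins=32):
    """Mutual information between an image and a reference representation
    given as a list of boolean masks (one homogeneous zone each):
    MI = H(image over all zones) - sum_z w_z * H(image over zone z)."""
    all_mask = np.zeros(image.shape, dtype=bool)
    for z in zones:
        all_mask |= z
    total_area = all_mask.sum()
    h_total = entropy(image[all_mask], bins)
    h_cond = sum((z.sum() / total_area) * entropy(image[z], bins) for z in zones)
    return h_total - h_cond
```

Registration then keeps the reference representation whose zone layout yields the highest value: zones that cleanly separate the image's intensity levels maximize the difference.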

19-03-2020 publication date

Electronic device having a vision system assembly held by a self-aligning bracket assembly

Number: US20200092447A1
Assignee: Apple Inc

An electronic device that includes a vision system carried by a bracket assembly is disclosed. The vision system may include a first camera module that captures an image of an object, a light emitting element that emits light rays toward the object, and a second camera module that receives light rays reflected from the object. The light rays may include infrared light rays. The bracket assembly is designed not only carry the aforementioned modules, but to also maintain a predetermined and fixed separation between the modules. The bracket assembly may form a rigid, multi-piece bracket assembly to prevent bending, thereby maintaining the predetermined separation. The electronic device may include a transparent cover designed to couple with a housing. The transparent cover includes an alignment module designed to engage a module and provide a moving force that aligns the bracket assembly and the modules to a desired location in the housing.

28-03-2019 publication date

Apparatus and method for performing 3d estimation based on locally determined 3d information hypotheses

Number: US20190095694A1

An apparatus for performing 3D estimation on the basis of pictures of at least two different views includes a hypotheses provider, a similarity measure calculator, and a 3D information determiner. The hypotheses provider locally determines 3D information hypotheses for positions of a current picture of a first view on the basis of a pre-estimate which associates a 3D information estimate with each position of a picture of the first view. The similarity measure calculator calculates, for each position of the current picture of the first view, a similarity measure for each of the 3D information hypotheses of the respective position by measuring a similarity between a region of the current picture of the first view at the respective position and a corresponding region of a second view at a position displaced relative to the respective position by the respective 3D information hypothesis. The 3D information determiner selects, for each position of the first view, the 3D information hypothesis of highest similarity measure.

28-03-2019 publication date

DEVICE AND METHOD FOR DETECTING ABNORMAL SITUATION

Number: US20190095720A1
Assignee: S-1 CORPORATION

Provided is a detection method of a detection device. The detection device detects at least one object from an image by using depth information about the image obtained by a camera. The detection device identifies whether said at least one object is a person through three-dimensional (3D) head model matching, which matches a head candidate area of said at least one object with a 3D head model. The detection device calculates a feature for detection of a situation by using said at least one object when it is identified that said at least one object is the person. 1. A detection method of a detection device, the detection method comprising: detecting at least one object from an image by using depth information about the image obtained by a camera; identifying whether said at least one object is a person through three-dimensional (3D) head model matching, which matches a head candidate area of said at least one object with a 3D head model; and calculating a feature for detection of a situation by using said at least one object when it is identified that said at least one object is the person. 2. The detection method of claim 1, wherein the identifying includes designating a circle area, which is based on at least one of a first pixel having a minimum depth value based on the camera among pixels of said at least one object and a second pixel having a maximum height value based on a floor area of the image, and has a predetermined radius, as a first head candidate area. 3. The detection method of claim 2, wherein: when a ratio of the number of pixels of the first head candidate area to the number of entire pixels of said at least one object is a first threshold value or less, calculating a remaining area except for the first head candidate area in an area of said at least one object; and extracting at least one of a third pixel having a minimum depth value based on the camera among pixels of the remaining area and a fourth pixel having a maximum height value based on ...
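The first head-candidate step of claim 2 and the ratio test of claim 3 can be sketched as follows: seed the candidate at the pixel nearest the camera, take a fixed-radius circle around it, and check the pixel-count ratio against a threshold. The function name, pixel encoding, radius, and threshold are all illustrative assumptions.

```python
import math

# Sketch of the head-candidate designation and ratio test.
# pixels: list of (x, y, depth) tuples for one detected object.

def head_candidate(pixels, radius, threshold):
    """Seed at the minimum-depth pixel, collect the circle area of the
    given radius around it, and accept it only if its pixel-count ratio
    exceeds the threshold. Returns (candidate_pixels, accepted)."""
    seed = min(pixels, key=lambda p: p[2])           # first pixel: min depth
    candidate = [p for p in pixels
                 if math.hypot(p[0] - seed[0], p[1] - seed[1]) <= radius]
    ratio = len(candidate) / len(pixels)
    return candidate, ratio > threshold              # reject if ratio <= threshold

# Toy object: a tight head-like cluster near the camera plus a body below it.
obj = [(0, 0, 1.0), (1, 0, 1.1), (0, 1, 1.1), (1, 1, 1.2),
       (0, 5, 2.0), (1, 6, 2.1), (0, 7, 2.2), (1, 8, 2.3)]
cand, ok = head_candidate(obj, radius=2.0, threshold=0.3)
```

Here four of the eight pixels fall inside the circle, so the ratio (0.5) passes the threshold and the candidate is kept; in the failing case the claim proceeds to re-seed from the remaining area.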

26-03-2020 publication date

MULTI-STATE MAGNETIC RESONANCE FINGERPRINTING

Number: US20200096589A1

The invention provides for a magnetic resonance imaging system for acquiring magnetic resonance data from a subject within a measurement zone. The magnetic resonance imaging system comprises: a processor for controlling the magnetic resonance imaging system and a memory storing machine executable instructions, pulse sequence commands and a dictionary. The pulse sequence commands are configured for controlling the magnetic resonance imaging system to acquire the magnetic resonance data of multiple steady state free precession (SSFP) states per repetition time. The pulse sequence commands are further configured for controlling the magnetic resonance imaging system to acquire the magnetic resonance data of the multiple steady state free precession (SSFP) states according to a magnetic resonance fingerprinting protocol. The dictionary comprises a plurality of tissue parameter sets. Each tissue parameter set is assigned with signal evolution data pre-calculated for multiple SSFP states. 1.
A magnetic resonance imaging system for acquiring magnetic resonance data from a subject within a measurement zone , wherein the magnetic resonance imaging system comprises:a processor for controlling the magnetic resonance imaging system;a memory configured to store machine executable instructions, pulse sequence commands and a dictionary, wherein the pulse sequence commands are configured for controlling the magnetic resonance imaging system to acquire magnetic resonance data of multiple different steady state free precession (SSFP) states, wherein the pulse sequence commands are further configured for controlling the magnetic resonance imaging system to acquire the magnetic resonance data of the multiple different steady state free precession (SSFP) states according to a magnetic resonance fingerprinting protocol, the dictionary comprising a plurality of tissue parameter sets, each tissue parameter set being assigned with signal ...
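The dictionary lookup at the heart of magnetic resonance fingerprinting is commonly done by comparing the measured signal evolution against each pre-computed entry with a normalized inner product and taking the best match. The sketch below illustrates that general matching step; the dictionary values, parameter labels, and function names are invented for illustration and are not from this patent.

```python
import math

# Generic MR-fingerprinting dictionary matching: pick the tissue parameter
# set whose pre-calculated signal evolution is most parallel (highest
# normalized inner product) to the measured evolution.

def norm_dot(a, b):
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def match(signal, dictionary):
    """dictionary: {tissue_params: simulated signal evolution}."""
    return max(dictionary, key=lambda k: norm_dot(signal, dictionary[k]))

dictionary = {
    ("T1=800ms", "T2=60ms"):  [1.0, 0.7, 0.5, 0.35],
    ("T1=1200ms", "T2=90ms"): [1.0, 0.8, 0.65, 0.55],
}
measured = [0.98, 0.79, 0.66, 0.54]
best = match(measured, dictionary)
```

The measured curve is nearly parallel to the second entry, so that tissue parameter set is returned.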

29-04-2021 publication date

RECOGNITION FOR OVERLAPPED PATTERNS

Number: US20210124907A1

In an approach, data of a plurality of points is sampled in a target area, wherein the data of each point of the plurality of points comprises position information and a height value. A first area of a target area is determined, wherein the height value of each point of the plurality of points in the first area complies with a first range. A second area of the target area is determined, wherein the height value of each point of the plurality of points in the second area complies with a second range. A third area of the target area is determined, wherein the height value of each point of the plurality of points in the third area complies with a third range. A first pattern is generated, wherein the first pattern is a combination of the first area and the third area. 1. A computer-implemented method comprising:obtaining, by one or more processors, data of a plurality of points sampled in a target area, wherein the data of each point of the plurality of points comprises position information and a height value, and wherein the position information indicates a position of a respective point in a reference plane of the target area and the height value indicates a vertical distance of the respective point to the reference plane;determining, by one or more processors, a first area of the target area, wherein the height value of each point of the plurality of points in the first area complies with a first range;determining, by one or more processors, a second area of the target area, wherein the height value of each point of the plurality of points in the second area complies with a second range;determining, by one or more processors, a third area of the target area, wherein the height value of each point of the plurality of points in the third area complies with a third range; andgenerating, by one or more processors, a first pattern that is a combination of the first area and the third area.2. The computer-implemented method of claim 1 , wherein the third range is ...
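The area-splitting step above reduces to binning sampled points by their height value and combining two of the bins into a pattern. A minimal sketch, with arbitrary example ranges and made-up point data:

```python
# Split sampled points into areas by height range, then combine the first
# and third areas into the first pattern, as in the claimed method.

def split_areas(points, ranges):
    """points: list of (x, y, h); ranges: list of (lo, hi) height ranges.
    Returns one area (list of (x, y) positions) per range."""
    areas = [[] for _ in ranges]
    for x, y, h in points:
        for i, (lo, hi) in enumerate(ranges):
            if lo <= h < hi:
                areas[i].append((x, y))
                break
    return areas

points = [(0, 0, 0.1), (1, 0, 0.2), (0, 1, 1.5), (1, 1, 1.6), (2, 2, 3.0)]
first, second, third = split_areas(points, [(0.0, 1.0), (1.0, 2.0), (2.0, 4.0)])
pattern = first + third   # first pattern = combination of first and third areas
```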

02-04-2020 publication date

INTELLIGENT ASSISTANT WITH INTENT-BASED INFORMATION RESOLUTION

Number: US20200104653A1
Assignee: Microsoft Technology Licensing, LLC

A method for use with a computing device is provided. The method may include executing one or more programs of an intelligent digital assistant system at a processor and presenting a user interface to a user. At the processor, the method may include receiving natural language user input from the user, parsing the user input at an intent handler to determine an intent template with slots, populating the slots in the intent template with information from user input, and performing resolution on the intent template to partially resolve unresolved information. If a slot with missing slot information exists in the partially resolved intent template, a loop may be executed at the processor to fill the slots. The method may include, at the processor, determining that all required information is available and resolved and generating a rule based upon the intent template with all required information being available and resolved. 1-20. (canceled) 21. A method executed by a computing system of one or more computing devices, the method comprising: receiving user input via an interface of the computing device, the user input including natural language user input; populating one or more slots of a set of slots of an intent template with information based on the user input; if the intent template is partially resolved in which a subject slot of the set of slots is not both filled and resolved, performing the following additional actions as part of a loop: determining a state of the subject slot as at least one of unfilled or unresolved, presenting a query for a user to fill or resolve the subject slot based on query selection criteria, receiving a user response to the query, altering the state of the subject slot based on the user response to the query, and re-executing the loop with the user response to the query being incorporated into the user input; and exiting the loop upon determining that the set of slots of the intent template are both filled and ...
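The slot-filling loop described above can be sketched in a few lines: query for each slot that is unfilled or unresolved, incorporate the user's answer, and exit once every slot is both filled and resolved. The template dictionary, resolver, and answer sequence below are stand-ins, not the patent's data model.

```python
# Minimal sketch of the intent-template slot-filling loop.

def fill_intent(template, resolver, answers):
    """template: {slot: raw value or None (unfilled)}.
    resolver: maps a raw value to a resolved one, or None if unresolvable.
    answers: iterable of user replies, consumed one per query."""
    answers = iter(answers)
    while True:
        unresolved = [s for s, v in template.items()
                      if v is None or resolver(v) is None]
        if not unresolved:                    # all slots filled and resolved
            return {s: resolver(v) for s, v in template.items()}
        slot = unresolved[0]                  # present a query for this slot
        template[slot] = next(answers)        # user response alters the slot

# Toy resolver: any string "resolves" to its lowercase form.
resolver = lambda v: v.lower() if isinstance(v, str) else None
template = {"action": "Remind", "who": None, "when": None}
rule = fill_intent(template, resolver, answers=["Bob", "9am"])
```

Two queries are issued (for `who` and `when`), after which the loop exits and the fully resolved template becomes the generated rule.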

13-05-2021 publication date

RADAR HEAD POSE LOCALIZATION

Number: US20210141076A1
Assignee: Magic Leap, Inc.

An augmented reality device has a radar system that generates radar maps of locations of real world objects. An inertial measurement unit detects measurement values such as acceleration, gravitational force and inclination ranges. The values from the measurement unit drift over time. The radar maps are processed to determine fingerprints and the fingerprints are combined with the values from the measurement unit to store a pose estimate. Pose estimates at different times are compared to determine drift of the measurement unit. A measurement unit filter is adjusted to correct for the drift. 1. An augmented reality device comprising:a head-mountable frame;a radar system that generates first and second sets of radar fingerprints of locations of real-world objects relative to the user at first and second times;a measurement unit, secured to the frame, and detecting first and second measurement values at the first and second times, each measurement value being indicative of at least one of position and movement of the measurement unit;a measurement unit filter connected to the measurement unit;a sensor fusion module connected to the radar system and the measurement unit and operable to (i) determine first and second pose estimates, the first pose estimate being based on the first set of radar fingerprints relative to the first measurement value and the second pose estimate being based on the second set of radar fingerprints relative to the second measurement value, (ii) determine a drift of the measurement unit by comparing the first pose estimate with the second pose estimate, and (iii) adjust the measurement unit filter to correct for the drift;a rendering module to determine a desired position of a rendered object based on the second pose;an eyepiece secured to the frame; anda projector secured to the frame and operable to convert data into light to generate the rendered object and displaying the rendered object in the desired position to the user through the 
eyepiece ...
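The drift-correction idea can be illustrated numerically: compare the pose change derived from the radar fingerprints with the pose change integrated from the measurement unit over the same interval, and treat the per-axis discrepancy as the drift to subtract out. This toy sketch uses plain 3-vectors and invented numbers; the real device works on full 6-DOF poses and a proper filter.

```python
# Toy illustration: estimate IMU drift by comparing radar-derived motion
# with IMU-integrated motion, then fold the drift back into the pose.

def estimate_drift(radar_pose_t1, radar_pose_t2, imu_pose_t1, imu_pose_t2):
    """Drift = how much the IMU-integrated motion disagrees with the
    radar-fingerprint motion over the same interval (per axis)."""
    radar_delta = [b - a for a, b in zip(radar_pose_t1, radar_pose_t2)]
    imu_delta = [b - a for a, b in zip(imu_pose_t1, imu_pose_t2)]
    return [i - r for r, i in zip(radar_delta, imu_delta)]

def corrected(imu_pose, drift):
    """Adjust the filtered pose by subtracting the estimated drift."""
    return [p - d for p, d in zip(imu_pose, drift)]

drift = estimate_drift([0, 0, 0], [1.0, 0, 0],   # radar: moved 1.0 m in x
                       [0, 0, 0], [1.2, 0, 0])   # IMU claims 1.2 m in x
pose = corrected([1.2, 0, 0], drift)
```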

18-04-2019 publication date

METHOD AND SYSTEM FOR DISPLAYING AND NAVIGATING AN OPTIMAL MULTI-DIMENSIONAL BUILDING MODEL

Number: US20190114834A1
Assignee: Hover Inc.

A method and system are provided for automatic generation and navigation of optimal views of facades of multi-dimensional building models based on where and how the original images were captured. The system and method allow for navigation and visualization of facades of individual or multiple building models in a multi-dimensional building model visualization system. 1. (canceled) 2. A method of calculating an optimal camera position within a multi-dimensional building model, the method comprises: defining a look angle based at least partially on information obtained from a camera used during image capture of a building; defining a field of view by defining up, down, left and right angles which define an extent of the building; calculating a camera first main axis and a second main axis of the building to define camera orientation within the multi-dimensional building model; calculating the optimal camera position based on the look angle, the field of view and the camera orientation; and storing the optimal camera position in computer storage. 3. The method of claim 2, wherein the information obtained from a camera used during image capture of a building includes camera position. 4. The method of claim 3, wherein the camera position is represented by camera metadata. 5. The method of claim 4, wherein the camera metadata includes one or more instances of contextual information about a sequence of the image capture of the building, the one or more instances of contextual information including any of: GPS Latitude Ref; GPS Longitude Ref; GPS Altitude; GPS Dilution of Precision; GPS Img (image) Direction Ref (reference); GPS Img (image) Direction; metadata for gravity; or metadata for the camera orientation. 6. The method of claim 5, wherein the sequence of the image capture of the building includes image capture starting at a front side, with a counter-clockwise or clockwise capture to sequentially capture images of one or more of: front right corner, ...
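One piece of geometry underlying an "optimal" camera position is fixed by the field of view: to fit a facade of a given width inside a horizontal FOV, the camera must stand at least `d = (width / 2) / tan(fov / 2)` away. This is basic pinhole geometry offered as background, not a formula quoted from the patent; the function name and numbers are illustrative.

```python
import math

# Minimum viewing distance so a facade of the given extent fits inside
# the camera's horizontal field of view.

def distance_to_fit(extent, fov_deg):
    half = math.radians(fov_deg) / 2.0
    return (extent / 2.0) / math.tan(half)

d = distance_to_fit(extent=20.0, fov_deg=90.0)   # 20 m facade, 90-degree FOV
```

With a 90-degree FOV, `tan(45°) = 1`, so the camera sits 10 m from a 20 m facade.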

13-05-2021 publication date

APPARATUS AND METHOD FOR IDENTIFYING AN ARTICULATABLE PART OF A PHYSICAL OBJECT USING MULTIPLE 3D POINT CLOUDS

Number: US20210142039A1

An apparatus comprises an input interface configured to receive a first 3D point cloud associated with a physical object prior to articulation of an articulatable part, and a second 3D point cloud after articulation of the articulatable part. A processor is operably coupled to the input interface, an output interface, and memory. Program code, when executed by the processor, causes the processor to align the first and second point clouds, find nearest neighbors of points in the first point cloud to points in the second point cloud, eliminate the nearest neighbors of points in the second point cloud such that remaining points in the second point cloud comprise points associated with the articulatable part and points associated with noise, generate an output comprising at least the remaining points of the second point cloud associated with the articulatable part without the noise points, and communicate the output to the output interface. 1. A computer-implemented method , comprising:obtaining a first three-dimensional point cloud associated with a physical object having at least one articulatable part, the first point cloud associated with the physical object prior to articulation of the articulatable part;obtaining a second three-dimensional point cloud associated with the physical object after articulation of the articulatable part;coarsely aligning the first and second point clouds;finely aligning the first and second point clouds after coarsely aligning the first and second point clouds;eliminating, after finely aligning the first and second point clouds, points in the second point cloud such that remaining points in the second point cloud comprise at least points associated with the articulatable part; andgenerating an output comprising at least the remaining points of the second point cloud associated with the articulatable part.2. 
The method of claim 1 , wherein eliminating points in the second point cloud comprises:finding nearest neighbors of points in the ...
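The elimination step above can be sketched with a brute-force nearest-neighbor search: after alignment, any point in the second cloud that has a close neighbor in the first cloud belongs to the static body and is removed, leaving the articulated part (plus noise). The tolerance and toy "door" data are assumptions; in practice a k-d tree would replace the brute-force search.

```python
import math

# Sketch of the nearest-neighbor elimination: keep only points of the
# second cloud that moved relative to the (aligned) first cloud.

def moved_points(cloud1, cloud2, tol=0.1):
    """Return points of cloud2 with no neighbor in cloud1 within tol."""
    def near(p):
        return any(math.dist(p, q) <= tol for q in cloud1)
    return [p for p in cloud2 if not near(p)]

static = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
door_open = [(2.0, 1.5, 0.0), (2.0, 2.0, 0.0)]   # the articulated part, moved
cloud2 = static + door_open
remaining = moved_points(static, cloud2, tol=0.1)
```

The static points are eliminated and only the displaced "door" points remain, matching the output described in the claim.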

13-05-2021 publication date

FALL DETECTION AND ASSISTANCE

Number: US20210142057A1

A method for controlling a robotic device based on observed object locations is presented. The method includes observing objects in an environment. The method also includes generating a probability distribution for locations of the observed objects. The method further includes controlling the robotic device to perform an action when an object is at a location in the environment with a location probability that is less than a threshold. 1. A method for controlling a robotic device based on observed object locations, comprising: observing objects in an environment; generating a probability distribution for locations of the observed objects; and controlling the robotic device to perform an action when an object is at a location in the environment with a location probability that is less than a threshold. 2. The method of claim 1, further comprising observing the objects over a period of time. 3. The method of claim 2, further comprising estimating a continuous distribution using the observations of the objects over the period of time. 4. The method of claim 3, in which the probability distribution is based on the continuous distribution. 5. The method of claim 1, further comprising: generating a cost map from the probability distribution; overlaying the cost map on the environment; and controlling the robotic device based on the cost map. 6. The method of claim 5, in which the action comprises at least one of providing assistance to the object, contacting emergency services, or a combination thereof. 7. The method of claim 1, in which the object is a human. 8. An apparatus for controlling a robotic device based on observed object locations, the apparatus comprising: a memory; and a processor configured: to observe objects in an environment; to generate a probability distribution for locations of the observed objects; and to control the robotic device to perform an action when an object is at a location in the environment with a location probability that is less than ...
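The thresholding idea can be sketched with a discrete distribution: count how often each location is observed, convert counts to probabilities, and flag an object seen somewhere with probability below the threshold (e.g. a person on the floor). The grid of locations, observation history, and threshold are all illustrative assumptions.

```python
from collections import Counter

# Sketch: probability distribution over observed locations, plus the
# low-probability trigger for a robot action.

def location_probabilities(observations):
    counts = Counter(observations)
    total = len(observations)
    return {loc: n / total for loc, n in counts.items()}

def needs_action(probs, location, threshold):
    """True when the object's current location is unusually improbable."""
    return probs.get(location, 0.0) < threshold

# 100 past observations of where the person usually is.
history = ["couch"] * 45 + ["bed"] * 50 + ["floor"] * 5
probs = location_probabilities(history)
alert = needs_action(probs, "floor", threshold=0.1)
```

"Floor" was observed only 5% of the time, which falls below the 10% threshold, so the robot would be controlled to perform an action (e.g. provide assistance or contact emergency services).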

05-05-2016 publication date

METHOD AND SYSTEM FOR AUTOMATICALLY OPTIMIZING QUALITY OF POINT CLOUD DATA

Number: US20160125226A1
Author: Huang Hui

Disclosed is a method for automatically optimizing point cloud data quality, including the following steps of: acquiring initial point cloud data for a target to be reconstructed, to obtain an initial discrete point cloud; performing preliminary data cleaning on the obtained initial discrete point cloud to obtain a Locally Optimal Projection operator (LOP) sampling model; obtaining a Possion reconstruction point cloud model by using a Possion surface reconstruction method on the obtained initial discrete point cloud; performing iterative closest point algorithm registration on the obtained Possion reconstruction point cloud model and the obtained initial discrete point cloud; and for each point on a currently registered model, calculating a weight of a surrounding point within a certain radius distance region of a position corresponding to the point for the point on the obtained LOP sampling model, and comparing the weight with a threshold, to determine whether a region where the point is located requires repeated scanning. Further disclosed is a system for automatically optimizing point cloud data quality. 1. A method for automatically optimizing point cloud data quality , comprising the following steps of:a. acquiring initial point cloud data for a target to be reconstructed, to obtain an initial discrete point cloud;b. performing preliminary data cleaning on the obtained initial discrete point cloud to obtain a Locally Optimal Projection operator (LOP) sampling model;c. obtaining a Possion reconstruction point cloud model by using a Possion surface reconstruction method on the obtained initial discrete point cloud;d. performing iterative closest point algorithm registration on the obtained Possion reconstruction point cloud model and the obtained initial discrete point cloud; ande. for each point on a currently registered model, calculating a weight of a surrounding point within a certain radius distance region of a position corresponding to the point for the ...
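Step e above, deciding which regions need repeated scanning, can be sketched as a coverage-weight test: for each point on the registered model, weigh the sample points falling inside a radius around it, and flag the point when the weight falls below a threshold. The Gaussian weight, radius, and threshold here are common choices made for illustration, not necessarily the patent's.

```python
import math

# Sketch of the rescan decision: low sample coverage near a model point
# marks its region for repeated scanning.

def coverage_weight(point, samples, radius):
    """Gaussian-weighted count of samples within the radius of the point."""
    return sum(math.exp(-((math.dist(point, s) / radius) ** 2))
               for s in samples if math.dist(point, s) <= radius)

def regions_to_rescan(model_points, samples, radius, threshold):
    return [p for p in model_points
            if coverage_weight(p, samples, radius) < threshold]

samples = [(0.0, 0.0, 0.0), (0.2, 0.0, 0.0), (0.0, 0.2, 0.0)]
model = [(0.1, 0.1, 0.0),     # densely covered region
         (5.0, 5.0, 0.0)]     # no samples nearby -> needs rescanning
rescan = regions_to_rescan(model, samples, radius=1.0, threshold=0.5)
```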

10-05-2018 publication date

APPARATUS, SYSTEMS, AND METHODS FOR PROVIDING THREE-DIMENSIONAL INSTRUCTION MANUALS IN A SIMPLIFIED MANNER

Number: US20180129656A1
Assignee: The Parari Group, LLC

Interactive, electronic guides for an object may include one or more 3D models, and one or more associated tasks, such as how to assemble, operate, or repair an aspect of the object. A user electronic device may scan an encoded tag on the object, and transmit the scan data to an electronic guide distribution server. The server may receive an electronic guide generated by an electronic guide generator having a 3D model repository and a task repository, the guide associated with the encoded tag. Guide managers may add or modify 3D models and/or tasks to broaden the available guides, and tag producers may generate encoded tags using new and/or modified 3D models and tasks and apply tags to objects. 1. A system for disseminating electronic guides , the system comprising:an electronic guide generator having (1) a 3D model repository with a plurality of 3D models, each 3D model associated with an encoded tag, and (2) a task repository having a plurality of tasks, each task associated with the encoded tag, the electronic guide generator configured to generate an electronic guide having at least one 3D model from the 3D model repository and at least one task from the task repository in response to receiving a scan data;a plurality of user electronic devices, each user electronic device having (1) a scanner configured to scan an encoded tag on an object to generate the scan data including data associated with the encoded tag, (2) a scan data transmitter configured to transmit scan data, (3) an electronic guide receiver configured to receive an electronic guide, and (4) a display configured to display the received electronic guide;an electronic guide distribution server in electronic communication with the electronic guide generator and the plurality of user electronic devices, and configured to (1) receive a scan data from a user electronic device, (2) transmit the scan data to the electronic guide generator, (3) receive a generated electronic guide from the electronic 
guide ...

11-05-2017 publication date

SYSTEMS AND METHODS FOR YAW ESTIMATION

Number: US20170132453A1
Assignee: Amrita E-Learning Research lab

Systems and methods of automatic detection of a facial feature are disclosed. Moreover, methods and systems of yaw estimation of a human head based on a geometrical model are also disclosed. 1. A method of automatic detection of a facial feature , comprising:a. obtaining a human image comprising a head, a neck, and a face;b. modeling the human head as an ellipse in vertical projection;c. fixing the center of rotation of the ellipse as the center of the neck;d. truncating the ellipse along the major axis representing a region of the face to an arc of ±60°;e. mapping a center of the face as the projection of a nose;f. locating boundaries of the face on the ellipse by identifying coordinates on the elliptic arc subtending ±60°; andg. computing a yaw angle of head rotation using position of the nose and the boundaries of the face.5. The method of claim 1 , the center of the ellipse and the left and right side boundaries of the ellipse satisfy the relation x−x=(x−x)+(x−x).9. The method of claim 6 , the center of the ellipse and the left and right side boundaries of the ellipse satisfy the relation x−x=(x−x)+(−x).10. An operator inattention monitoring system claim 6 , comprising:a processing device configured to determine an attention level of the operator, said device comprising a processor, memory and at least one communication interface;wherein the processing device is configured to obtain frontal images of the operator's head over a period of time, wherein the images comprise the head, the neck, and the face of the operator; andwherein the processing device is configured to analyze a plurality of yaw angles to determine the attention level of the operator.11. The system of claim 10 , wherein the processor is configured to: a) modeling the head as an ellipse in vertical projection;', 'b) fixing the center of rotation of the ellipse as the center of the neck;', 'c) truncating the ellipse along the major axis representing a region of the face to an arc of ±60°;', 'd) ...
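The geometric idea of step g, computing yaw from the nose position and the face boundaries, can be illustrated with a simplified landmark-based mapping: with the face modelled as an elliptic arc spanning ±60° and the nose at the arc's centre, the nose's horizontal position between the detected boundaries indicates the yaw. The linear interpolation below is a deliberate simplification of the elliptic model; function names and pixel coordinates are invented.

```python
# Hedged sketch: map the nose position between the face boundaries
# linearly onto the +/-60 degree arc to approximate the yaw angle.

def yaw_from_landmarks(x_left, x_right, x_nose, arc_deg=60.0):
    """x_left/x_right: face boundary columns; x_nose: nose column."""
    center = (x_left + x_right) / 2.0
    half_width = (x_right - x_left) / 2.0
    return arc_deg * (x_nose - center) / half_width

frontal = yaw_from_landmarks(x_left=40, x_right=120, x_nose=80)   # nose centred
turned = yaw_from_landmarks(x_left=40, x_right=120, x_nose=100)   # nose offset
```

A centred nose yields 0° (frontal view); a nose halfway toward the right boundary yields 30° under this linear approximation.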

23-04-2020 publication date

OBJECT RECOGNITION

Number: US20200125830A1
Author: Fan Jian, Lei Yang, Liu Jerry

A method of recognizing an object includes comparing a three-dimensional point cloud of the object to a three-dimensional candidate from a dataset to determine a first confidence score, and comparing color metrics of a two-dimensional image of the object to a two-dimensional candidate from the dataset to determine a second confidence score. The point cloud includes a color appearance calibrated from a white balance image, and the color appearance of the object is compared with the three-dimensional candidate. The first or second confidence score is selected to determine which of the three-dimensional candidate or the two-dimensional candidate corresponds with the object. 1. A method of recognizing an object , comprising:comparing a three-dimensional point cloud of the object to a three-dimensional candidate from a dataset to determine a first confidence score, the point cloud including a color appearance calibrated from a white balance image and the comparing including comparing the color appearance of the object with the three-dimensional candidate;comparing color metrics of a two-dimensional image of the object to a two-dimensional candidate from the dataset to determine a second confidence score; andselecting one of the first and second confidence scores to determine which of the three-dimensional candidate or the two-dimensional candidate corresponds with the object.2. The method of wherein the selecting includes selecting one of the first and second confidence scores if the three-dimensional candidate and the two-dimensional candidate do not both correspond with the object.3. The method of wherein the selected one of the first and second confidence scores at least meets a threshold.4. The method of wherein the comparing color metrics includes comparing local color keypoints.5. The method of wherein the first and second confidence scores are based on keypoints.6. 
The method of wherein the comparing the three-dimensional point cloud of the object and comparing ...
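The selection step can be sketched in a few lines: keep whichever candidate's confidence score is higher, but accept it only if the score meets a threshold. Scores, candidate labels, and the threshold below are illustrative assumptions.

```python
# Sketch of selecting between the 3D and 2D candidates by confidence score.

def select_candidate(score_3d, candidate_3d, score_2d, candidate_2d,
                     threshold=0.6):
    """Pick the higher-scoring candidate; return None if even that score
    does not meet the threshold."""
    score, cand = max((score_3d, candidate_3d), (score_2d, candidate_2d))
    return cand if score >= threshold else None

accepted = select_candidate(0.82, "mug_3d_model", 0.55, "mug_photo")
rejected = select_candidate(0.40, "mug_3d_model", 0.35, "mug_photo")
```

The first call selects the 3D candidate (its score wins and meets the threshold); the second returns no match because neither score is high enough.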

09-05-2019 publication date

Method for monitoring an orthodontic treatment

Number: US20190133717A1
Assignee: Dental Monitoring SAS

A method including an acquisition, with an acquisition apparatus, of at least one two-dimensional image of arches of a patient, called "updated image", in actual acquisition conditions, a separator separating lips of the patient in order to improve the visibility of patient's teeth, said separator including a register mark, wherein the representation of the register mark on the updated image is used to recut the updated image and/or to roughly assess the actual acquisition conditions and/or to guide the positioning of the acquisition apparatus at the moment of the acquisition of the updated image and/or to identify a dental situation and/or an action to be achieved by the acquisition apparatus.

30-04-2020 publication date

Information processing apparatus, information processing method, and storage medium

Number: US20200133388A1
Author: Kazuki Takemoto
Assignee: Canon Inc

An information processing apparatus supplies, to an image display apparatus including an image capturing unit configured to capture an image of a real space and a display unit configured to display an image generated using the image captured by the image capturing unit, an image generated using the image captured by the image capturing unit. The information processing apparatus includes a generation unit configured to generate an image depicting a specific object at a position at which the specific object is estimated to be present after a predetermined time from a time when the image display apparatus starts to move in the captured image of the real space including the specific object, and a control unit configured to shift a position at which the image generated by the generation unit is displayed on the display unit based on a change in a position and/or an orientation of the image display apparatus.

09-05-2019 publication date

Method and System for Assessing Vessel Obstruction Based on Machine Learning

Number: US20190139219A1
Assignee: Pie Medical Imaging B.V.

Methods and systems are provided for assessing the presence of functionally significant stenosis in one or more coronary arteries, also known as the severity of vessel obstruction. The methods and systems can implement a prediction phase that comprises segmenting at least a portion of a contrast enhanced volume image data set into data segments corresponding to wall regions of the target organ, and analysing the data segments to extract features that are indicative of an amount of perfusion experienced by wall regions of the target organ. The methods and systems can obtain a feature-perfusion classification (FPC) model derived from a training set of perfused organs, classify the data segments based on the features extracted and based on the FPC model, and provide, as an output, a prediction indicative of a severity of vessel obstruction based on the classification of the features. 1-30. (canceled) 31. A method for assessing a severity of vessel obstruction, comprising: a) obtaining a contrast enhanced volume image dataset for a target organ that includes at least one of a myocardium or a coronary artery, wherein at least a portion of the volume image data set is segmented into data segments; b) obtaining features indicative of a characteristic experienced by the data segments of the target organ; c) obtaining a feature-perfusion classification (FPC) model derived from a training set of perfused organs, wherein the FPC model includes a relationship between training features and a reference fluid dynamic parameter for corresponding data segments of the training set of perfused organs, wherein the reference fluid dynamic parameter comprises i) an invasive fractional flow reserve measurement, ii) an index of microcirculatory resistance, iii) an instantaneous wave-free ratio measurement, or iv) a coronary flow reserve measurement; d) classifying the data segments based on the features obtained and based on the FPC model; and e) providing an output related to vessel ...

16-05-2019 publication date

ROBOTIC SYSTEM ARCHITECTURE AND CONTROL PROCESSES

Number: US20190143523A1

A system includes a first sensor having a fixed location relative to a workspace, a second sensor, at least one robotic manipulator coupled to a manipulation tool, and a control system in communication with the at least one robotic manipulator. The control system is configured to determine a location of a workpiece in the workspace based on first sensor data from the first sensor and a three-dimensional (3D) model corresponding to the workpiece. The control system is configured to map a set of 2D coordinates from a second 2D image from the second sensor to a set of 3D coordinates based on the location, and to generate one or more control signals for the at least one robotic manipulator based on the set of 3D coordinates. 1. A system , comprising:a first sensor having a fixed location relative to a workspace;a second sensor;at least one robotic manipulator coupled to a manipulation tool and configured for movement in the workspace; anda control system in communication with the at least one robotic manipulator, the control system configured to determine a location of a workpiece in the workspace based on sensor data from the first sensor and a three-dimensional (3D) model corresponding to the workpiece, the control system configured to map a set of two-dimensional (2D) coordinates from a 2D image of the workpiece from the second sensor to a set of 3D coordinates based on the location, the control system configured to generate one or more control signals for the at least one robotic manipulator to manipulate a surface of the workpiece based on the set of 3D coordinates.2. The system of claim 1 , wherein:the at least one robotic manipulator includes a first robotic manipulator and a second robotic manipulator;the first robotic manipulator is coupled to the manipulation tool;the second robotic manipulator is coupled to the second sensor; andthe second sensor is an ultraviolet (UV) image capture device configured to generate the 2D image of the workpiece.3. 
The system of ...
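The 2D-to-3D mapping step can be illustrated with ray-plane intersection: once the workpiece's surface is located from the first sensor and the 3D model, each pixel of the second sensor's image defines a ray through the camera, and the 3D coordinate is where that ray meets the surface. The sketch below assumes a pinhole camera looking down +z at a plane of known depth; the intrinsics and the planar-surface assumption are invented for illustration.

```python
# Hedged sketch: back-project a pixel onto a known plane to get a 3D
# coordinate (camera frame), the core of mapping 2D image coordinates
# to 3D workspace coordinates once the workpiece location is known.

def pixel_to_3d(u, v, fx, fy, cx, cy, plane_z):
    """(u, v): pixel; (fx, fy): focal lengths in pixels; (cx, cy):
    principal point; plane_z: depth of the workpiece surface."""
    x_dir = (u - cx) / fx
    y_dir = (v - cy) / fy
    return (x_dir * plane_z, y_dir * plane_z, plane_z)

# A pixel 100 px right of the principal point, focal length 500 px,
# workpiece surface 2 m in front of the camera:
p = pixel_to_3d(u=420, v=240, fx=500, fy=500, cx=320, cy=240, plane_z=2.0)
```

The resulting 3D point lies 0.4 m to the right of the optical axis at 2 m depth; such points would then drive the robotic manipulator's control signals.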

16-05-2019 publication date

POSE ESTIMATION AND MODEL RETRIEVAL FOR OBJECTS IN IMAGES

Number: US20190147221A1

Techniques are provided for selecting a three-dimensional model. An input image including an object can be obtained, and a pose of the object in the input image can be determined. One or more candidate three-dimensional models representing one or more objects in the determined pose can be obtained. From the one or more candidate three-dimensional models, a candidate three-dimensional model can be determined to represent the object in the input image. 1. A method of selecting a three-dimensional model , the method comprising:obtaining an input image including an object;determining a pose of the object in the input image;obtaining one or more candidate three-dimensional models representing one or more objects in the determined pose; anddetermining, from the one or more candidate three-dimensional models, a candidate three-dimensional model to represent the object in the input image.2. The method of claim 1 , further comprising generating an output image based on the candidate three-dimensional model and the input image.3. The method of claim 1 , further comprising:receiving a user input to manipulate the candidate three-dimensional model; andadjusting one or more of a pose or a location of the candidate three-dimensional model in an output image based on the user input.4. The method of claim 1 , further comprising:obtaining an additional input image, the additional input image including the object in one or more of a different pose or a different location than a pose or location of the object in the input image; andadjusting one or more of a pose or a location of the candidate three-dimensional model in an output image based on a difference between the pose or location of the object in the additional input image and the pose or location of the object in the input image.5. The method of claim 1 , wherein obtaining the one or more three-dimensional models representing the one or more objects includes:obtaining a plurality of three-dimensional models representing a ...

16-05-2019 publication date

LEARNING TO RECONSTRUCT 3D SHAPES BY RENDERING MANY 3D VIEWS

Number: US20190147642A1
Assignee:

Methods, systems, and apparatus for obtaining first image features derived from an image of an object, providing the first image features to a three-dimensional estimator neural network, and obtaining, from the three-dimensional estimator neural network, data specifying an estimated three-dimensional shape and texture based on the first image features. The estimated three-dimensional shape and texture are provided to a three-dimensional rendering engine, and a plurality of three-dimensional views of the object are generated by the three-dimensional rendering engine based on the estimated three-dimensional shape and texture. The plurality of three-dimensional views are provided to the object recognition engine, and second image features derived from the plurality of three-dimensional views are obtained from the object recognition engine. A loss is computed based at least on the first and second image features, and the three-dimensional estimator neural network is trained based at least on the computed loss. 1. 
A computer-implemented method comprising:obtaining, from an object recognition engine, data specifying first image features derived from an image of an object;providing the first image features to a three-dimensional estimator neural network;obtaining, from the three-dimensional estimator neural network, data specifying (i) an estimated three-dimensional shape and (ii) an estimated texture that are each based on the first image features;providing the data specifying (i) the estimated three-dimensional shape and (ii) the estimated texture to a three-dimensional rendering engine;obtaining, from the three-dimensional rendering engine, data specifying a plurality of three-dimensional views of the object that are each generated based on the data specifying (i) the estimated three-dimensional shape and (ii) the estimated texture;providing the data specifying each of the plurality of three-dimensional views to the object recognition engine;obtaining, from the object ...
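The loss computed from the first and second image features can be illustrated with a simple feature-consistency term: the distance between the input image's features and the features of each rendered view, averaged over views. The plain mean-squared form below is an assumption for illustration; the patent does not commit to a specific distance.

```python
def feature_loss(first_features, per_view_features):
    """Mean squared difference between the input image's features and the
    features of each rendered 3D view, averaged over the views."""
    def mse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return sum(mse(first_features, f) for f in per_view_features) / len(per_view_features)
```

Training the three-dimensional estimator network then amounts to minimizing this value over its shape and texture outputs.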

07-05-2020 publication date

CONTROL DEVICE, SYSTEM AND METHOD FOR DETERMINING THE PERCEPTUAL LOAD OF A VISUAL AND DYNAMIC DRIVING SCENE

Number: US20200143183A1
Assignee:

The invention relates to a control device for a vehicle for determining the perceptual load of a visual and dynamic driving scene. The control device is configured to: receive a sensor output of a sensor, the sensor sensing the visual driving scene; extract a set of scene features from the sensor output, the set of scene features representing static and/or dynamic information of the visual driving scene; determine the perceptual load of the set of extracted scene features based on a predetermined load model, the load model being predetermined based on reference video scenes each labelled with a load value; map the perceptual load to the sensed driving scene; and determine a spatial and temporal intensity distribution of the perceptual load across the sensed driving scene. The invention further relates to a vehicle, a system and a method. 1. A control device for a vehicle for determining the perceptual load of a visual and dynamic driving scene, the control device being configured to: receive a sensor output of a sensor, the sensor sensing the visual driving scene; extract a set of scene features from the sensor output, the set of scene features representing static and/or dynamic information of the visual driving scene; determine the perceptual load of the set of extracted scene features based on a predetermined load model, the load model being predetermined based on reference video scenes each being labelled with a load value; map the perceptual load to the sensed driving scene; and determine a spatial and temporal intensity distribution of the perceptual load across the sensed driving scene. 2. The control device according to claim 1, wherein the load model comprises a mapping function between sets of scene features extracted from the reference video scenes and the load values. 3. The control device according to claim 1, wherein the load model is configured to map a set of scene features to a perceptual load value. 4.
The control device ...

01-06-2017 publication date

METHOD AND SYSTEM OF CURVED OBJECT RECOGNITION USING IMAGE MATCHING FOR IMAGE PROCESSING

Number: US20170154204A1
Assignee:

A system, article, and method of curved object recognition using image matching for image processing. 1. A computer-implemented method of object recognition comprising: obtaining image data of at least one reference image having at least one curved reference object and from a plurality of reference images each with at least one reference object, and wherein the image data comprises at least three-dimensional coordinates of three-dimensional (3D) points on the at least one curved reference object; obtaining two-dimensional (2D) image data of a curved target object in a query image; matching the target object to at least one of the reference objects comprising pairing at least one 2D point from the target object with a corresponding 3D point on at least one of the reference objects; and using the paired 2D-3D point(s) to form a perspective projection function to determine a geometric correspondence between the target object and reference object(s) and that converts the 2D points into 3D points at the target object. 2. The method of wherein the curved objects are at least partially cylindrical. 3. The method of wherein the reference image initially includes 2D image data and the three-dimensional coordinates for the reference images in the database are generated without the use of a depth map. 4. The method of comprising generating the three-dimensional coordinates of the reference objects by using perspective projection from a virtual 3D surface selected to have a shape similar to the shape of the reference object. 5. The method of comprising selecting one or more candidate reference images from the plurality of reference images to pair the 3D points of the reference objects of the candidate reference images to the 2D points of the target object of the query image. 6. The method of claim 5 wherein selecting one or more candidate reference images comprises selecting candidate reference images with reference objects depending on, at least in part, a similarity in ...
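The perspective projection underlying the 2D-3D pairing can be sketched with a pinhole camera model: a paired 2D point and 3D point are geometrically consistent when the 3D point projects close to the 2D point. The focal length and principal point below are hypothetical values, not parameters from the patent.

```python
def project(point3d, f=800.0, cx=320.0, cy=240.0):
    """Pinhole perspective projection of a camera-space 3D point to pixels."""
    x, y, z = point3d
    return (f * x / z + cx, f * y / z + cy)

def reprojection_error(pair, f=800.0, cx=320.0, cy=240.0):
    """Pixel distance between a target 2D point and the projection of its
    paired reference 3D point; a small error indicates a consistent
    2D-3D match."""
    (u, v), p3d = pair
    pu, pv = project(p3d, f, cx, cy)
    return ((u - pu) ** 2 + (v - pv) ** 2) ** 0.5
```

Fitting the projection function to many such pairs (e.g. with a PnP solver) yields the geometric correspondence between target and reference objects.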

28-08-2014 publication date

METHOD AND DEVICE FOR PERFORMING TRANSITION BETWEEN STREET VIEW IMAGES

Number: US20140240311A1

Transitions between street view images are described. The described techniques include: obtaining an original street view image and a target street view image; constructing a three-dimensional model corresponding to the original street view image by three-dimensional modeling; obtaining matching pairs of feature points that are extracted from the original street view image and the target street view image, and simulating a virtual camera in the three-dimensional model according to matching pairs of feature points to capture street view image sequence; and switching from the original street view image to the target street view image according to the street view image sequence. Transition stability is thereby improved. 1. A method for performing transition between street view images , comprising:obtaining an original street view image and a target street view image;constructing a three-dimensional model corresponding to the original street view image by three-dimensional modeling;obtaining matching pairs of feature points that are extracted from the original street view image and the target street view image, and simulating a virtual camera in the three-dimensional model according to the matching pairs of feature points to capture a street view image sequence; andswitching from the original street view image to the target street view image according to the street view image sequence.2. The method of claim 1 , wherein obtaining the original street view image and the target street view image comprises:obtaining a first panoramic image where the original street view image is located, and a second panoramic image where the target street view image is located; andobtaining the original street view image and the target street view image by capturing the first and the second panoramic images respectively.3. The method of claim 2 , wherein obtaining the original street view image and the target street view image by capturing the first and the second panoramic images ...

14-05-2020 publication date

Laser Speckle System and Method for an Aircraft

Number: US20200150217A1
Assignee:

A system for registering multiple point clouds captured by an aircraft is disclosed. The system includes a speckle generator, at least one three-dimensional (3D) scanner, and a processor coupled thereto. In operation, the speckle generator projects a laser speckle pattern onto a surface (e.g., a featureless surface). The at least one 3D scanner scans the featureless surface to generate a plurality of point clouds of the featureless surface and to image at least a portion of the laser speckle pattern. The processor, which is communicatively coupled with the at least one 3D scanner, registers the plurality of point clouds to generate a complete 3D model of the featureless surface based at least in part on the laser speckle pattern. 1. A method for registering multiple point clouds via an aircraft , the method comprising:projecting, via a speckle generator, a first laser speckle pattern onto a featureless surface for inspection by the aircraft;scanning, via at least one three-dimensional (3D) scanner coupled to the aircraft, the featureless surface;generating, via the at least one 3D scanner, a plurality of point clouds of the featureless surface;imaging, via the at least one 3D scanner, at least a portion of the first laser speckle pattern;performing, via a processor communicatively coupled with the at least one 3D scanner, a rough registration of the plurality of point clouds; andgenerating, via the processor, a 3D model of the featureless surface based at least in part on the first laser speckle pattern.2. The method of claim 1 , further comprising the step of performing claim 1 , via the processor claim 1 , a fine registration of the plurality of point clouds using one or more algorithms.3. The method of claim 1 , further comprising the step of identifying claim 1 , via the processor claim 1 , a speckle pattern by selecting from the first laser speckle pattern a plurality of random nearest neighbor dots.4. The method of claim 3 , further comprising the step of ...
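The rough-registration step can be illustrated by a centroid alignment, i.e. a translation-only fit between two point clouds; a real pipeline would follow with a rotation estimate (e.g. Kabsch) and fine registration such as ICP, which are omitted from this sketch.

```python
def centroid(points):
    """Centroid of a list of (x, y, z) tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def rough_register(source, target):
    """Rough registration: translate the source cloud so its centroid
    coincides with the target cloud's centroid."""
    cs, ct = centroid(source), centroid(target)
    shift = tuple(ct[i] - cs[i] for i in range(3))
    return [tuple(p[i] + shift[i] for i in range(3)) for p in source]
```

On a featureless surface, the imaged speckle pattern supplies the correspondences that the fine-registration stage needs after this coarse alignment.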

14-05-2020 publication date

AUTONOMOUS SEGMENTATION OF THREE-DIMENSIONAL NERVOUS SYSTEM STRUCTURES FROM MEDICAL IMAGES

Number: US20200151507A1
Assignee:

A method for autonomous segmentation of three-dimensional nervous system structures from raw medical images, the method including: receiving a 3D scan volume with a set of medical scan images of a region of the anatomy; autonomously processing the set of medical scan images to perform segmentation of a bony structure of the anatomy to obtain bony structure segmentation data; autonomously processing a subsection of the 3D scan volume as a 3D region of interest by combining the raw medical scan images and the bony structure segmentation data, wherein the 3D ROI contains a subvolume of the bony structure with a portion of surrounding tissues, including the nervous system structure; autonomously processing the ROI to determine the 3D shape, location, and size of the nervous system structures by means of a pre-trained convolutional neural network (CNN). 1. A method for autonomous segmentation of three-dimensional nervous system structures from raw medical images , the method comprising:receiving a 3D scan volume comprising a set of medical scan images of a region of the anatomy;autonomously processing the set of medical scan images to perform segmentation of a bony structure of the anatomy to obtain bony structure segmentation data;autonomously processing a subsection of the 3D scan volume as a 3D region of interest (ROI) by combining the raw medical scan images and the bony structure segmentation data, wherein the 3D ROI contains a subvolume of the bony structure with a portion of surrounding tissues, including a nervous system structure;autonomously processing the ROI to determine a 3D shape, location, and size of the nervous system structure by means of a pre-trained convolutional neural network (CNN).2. The method according to claim 1 , further comprising 3D resizing of the ROI.3. The method according to claim 1 , further comprising visualizing the output including the segmented nervous system structure.4. The method according to claim 1 , further comprising ...
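Combining the bony-structure segmentation with the raw volume to crop the 3D region of interest can be sketched in plain Python. Nested z/y/x lists stand in for the scan array, and the margin parameter and axis order are assumptions for illustration.

```python
def roi_bounds(mask_voxels, shape, margin=1):
    """Bounding box of segmented (z, y, x) voxels, expanded by a margin
    and clipped to the scan volume's shape."""
    lo = [max(min(v[a] for v in mask_voxels) - margin, 0) for a in range(3)]
    hi = [min(max(v[a] for v in mask_voxels) + margin + 1, shape[a]) for a in range(3)]
    return lo, hi

def extract_roi(volume, lo, hi):
    """Crop the 3D ROI subvolume from a z/y/x nested-list volume."""
    return [[row[lo[2]:hi[2]] for row in plane[lo[1]:hi[1]]]
            for plane in volume[lo[0]:hi[0]]]
```

The cropped subvolume, containing the bony structure plus a margin of surrounding tissue, is what the pre-trained CNN would then process.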

23-05-2019 publication date

3D BUILDING MODEL MATERIALS AUTO-POPULATOR

Number: US20190156570A1
Assignee: Hover Inc.

A system and method is provided for automatic building material ordering that includes directing capture of building images of the building at a location, building a scaled multi-dimensional building model based on the building images, extracting, based on the scaled multi-dimensional building model, dimensions of at least one architectural feature from the scaled multi-dimensional building model, identifying a set of possible manufacturer products matching the dimensions of the at least one architectural feature, receiving user preferences related to the set of possible manufacturer products, auto-populating, based on the user preferences, a select list of the manufacturer products, auto-ordering manufacturer products from the select list of the manufacturer products and auto-tracking the ordered manufacturer products until delivery to the location. 1. A method of automatic building material ordering comprises:receiving a plurality of captured ground-based building images;retrieving a scaled multi-dimensional building model, wherein the scaled multi-dimensional building model includes one or more architectural elements present in the captured ground-based building images;determining measurements of the one or more architectural elements located on one or more planes within the scaled multi-dimensional building model, the measurements based on the scale;determining manufacturer product information corresponding to the measurements of the one or more architectural elements;determining cost and shipping information from one or more providers of manufacturer products based on the measurements and the manufacturer product information; andordering, based on receiving user acceptance of the cost and shipping information, the manufacturer products.2. The method of further comprises tracking delivery of the manufacturer products to a location of the captured ground-based building images.3. 
The method of further comprises determining an aggregated dimension of one or more of ...

24-06-2021 publication date

SYSTEMS AND METHODS FOR GHOST OBJECT CLASSIFICATION

Number: US20210192235A1
Author: Bolduc Andrew Phillip
Assignee:

A system includes a sensor, which is configured to detect a plurality of objects within an area, and a computing device in communication with the sensor. The computing device is configured to determine that one of the plurality of objects is static, determine that one of the plurality of objects is temporary, determine a geometric relationship between the temporary object and the static object, and determine whether one of the plurality of objects is a ghost object based on the geometric relationship. 1. A system , comprisinga sensor configured to detect a plurality of objects within an area; anda computing device in communication with the sensor and configured to determine that one of the plurality of objects is static, determine that one of the plurality of objects is temporary, determine a geometric relationship between the temporary object and the static object, and determine whether one of the plurality of objects is a ghost object based on the geometric relationship.2. The system as recited in claim 1 , wherein the geometric relationship includes a first distance claim 1 , wherein the first distance is a distance from the temporary object to a point on the static object.3. The system as recited in claim 2 , wherein the computing device is configured to compare the first distance to a second distance claim 2 , wherein the second distance is a distance from the ghost object to the point.4. The system as recited in claim 3 , wherein the first distance being equal to the second distance is indicative of the ghost object not being present.5. The system as recited in claim 1 , wherein the temporary object is a moving vehicle.6. The system as recited in claim 2 , wherein the static object is a guardrail.7. The system as recited in claim 1 , wherein the sensor is a radar sensor.8. The system as recited in claim 1 , wherein the sensor is a LIDAR.9. 
A method, comprising: detecting, with a sensor, a plurality of objects within an area; determining that one of the ...
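The geometric test in the claims compares the temporary object's distance to a point on the static object (the first distance) with the candidate's distance to the same point (the second distance); per claim 4, equal distances indicate the candidate is not a ghost. The 2D coordinates and tolerance below are hypothetical.

```python
def dist(a, b):
    """Planar Euclidean distance between two (x, y) points."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def classify_candidate(temp_obj, static_point, candidate, tol=0.5):
    """Compare the first distance (temporary object to a point on the
    static object) with the second distance (candidate to the same point);
    equal distances indicate the candidate is not a ghost."""
    first = dist(temp_obj, static_point)
    second = dist(candidate, static_point)
    return "not ghost" if abs(first - second) <= tol else "ghost"
```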

25-06-2015 publication date

THREE-DIMENSIONAL OBJECT DETECTION DEVICE AND THREE-DIMENSIONAL OBJECT DETECTION METHOD

Number: US20150178575A1
Assignee:

A three-dimensional object detection device has an image capturing unit, a three-dimensional object detection unit, a host vehicle speed detection unit, a light source detection unit and a controller. The image capturing unit captures images rearward of a vehicle. The three-dimensional object detection unit detects a presence of a three-dimensional object in a detection area, based on the captured images. The host vehicle speed detection unit detects a vehicle traveling speed. The light source detection unit detects a headlight light source of a headlight of another vehicle. The controller compares the traveling speeds of the object and the vehicle upon not detecting the headlight light source, and suppresses detection of the object upon determining one of the object traveling speed being equal to or less than the vehicle traveling speed, and a difference between the object and vehicle traveling speeds being less than a predetermined value.
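The controller's suppression rule can be sketched as a speed comparison that applies only when no headlight light source has been detected. The `min_diff` threshold below stands in for the patent's "predetermined value" and is an illustrative assumption.

```python
def suppress_detection(obj_speed, host_speed, headlight_detected, min_diff=1.0):
    """Suppress a three-dimensional-object detection when no headlight
    light source is found and the object is not clearly overtaking: its
    speed is at or below the host vehicle's, or the speed gap is below
    the predetermined value."""
    if headlight_detected:
        return False
    return obj_speed <= host_speed or (obj_speed - host_speed) < min_diff
```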

01-07-2021 publication date

USING PHOTOGRAMMETRY TO AID IDENTIFICATION AND ASSEMBLY OF PRODUCT PARTS

Number: US20210199434A1
Assignee: Wayfair LLC

A user may be aided in modifying a product that is an assemblage of parts. This aid may involve a processor obtaining images of a target part captured by the user on a mobile device camera. The processor may compare, based on the captured images and a plurality of images of identified parts, the target part to the identified parts. Based on the comparison, the processor may determine an identity of the target part. This aid may also involve a processor obtaining images of a first configuration of a partial assembly of the product captured by a mobile device camera. The processor may compare, based on the captured images, the first configuration to a correct configuration of the partial assembly. Based on the comparison, the processor may determine that the first configuration does not match the correct configuration and may notify the user accordingly. 111.-. (canceled)12. A method for aiding a user in assembling a product , the method comprising: obtaining at least one first image of a partially assembled configuration of the product, the at least one first image obtained using a camera;', 'determining an identity of at least one unidentified part of the partially assembled configuration;', 'obtaining, based on the determined identity of the at least one unidentified part, at least one second image of a correct configuration of an assembly of the product;', 'comparing the at least one first image of the partially assembled configuration with the at least one second image of the correct configuration;', 'generating, based on results of the comparing, a notification for the user indicative of whether the partially assembled configuration is correct or incorrect; and', 'providing the notification to the user., 'using at least one computer hardware processor to perform13. The method of claim 12 , wherein the at least one first image comprises an image of the partially assembled configuration captured from a first angle or view and another image of the partially ...

01-07-2021 publication date

METHOD AND SYSTEM FOR DISPLAYING AND NAVIGATING AN OPTIMAL MULTI-DIMENSIONAL BUILDING MODEL

Number: US20210201579A1
Assignee: Hover Inc.

Visualizing three dimensional content is complicated by display platforms capable of more degrees of freedom to display the content than interface tools have to navigate that content. Disclosed are methods and systems for displaying select portions of the content and generating virtual camera positions with associated look angles for the select portions, such as planar geometries of a three dimensional building, thereby constraining the degrees of freedom for improved navigation through views of the content. Look angles can be associated with axes of the content and fields of view. 112-. (canceled)13. A method of visualizing three dimensional content , the method comprising:receiving a three dimensional (3D) building model;calculating a first main axis and a second main axis of the 3D building model to define an orientation of the 3D building model;defining a look angle for the 3D building model;determining architectural features of the 3D building model;calculating an optimal camera position for the 3D building model based on the defined look angle, the orientation of the 3D building model, and the determined architectural features, the optimal camera position defining an optimal view of the 3D building model; anddisplaying the optimal view of the 3D building model.14. The method of claim 13 , wherein calculating the first main axis comprises grouping lines of the 3D building model within an angular threshold of one another and selecting the group with the largest sum of weighted edge length for the grouped lines.15. The method of claim 13 , wherein the second main axis is perpendicular to the first main axis.16. The method of claim 13 , wherein defining the look angle comprises defining the look angle for each of at least two planar geometries of the 3D building model.17. 
The method of claim 13 , wherein determining the architectural features comprises:identifying the architectural features of the 3D building model; andranking the identified architectural features ...
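Claim 14's main-axis calculation, grouping lines within an angular threshold and selecting the group with the largest total edge length, can be sketched in 2D. The simple unweighted length accumulation and the 10-degree threshold are assumptions for illustration.

```python
import math

def first_main_axis(lines, threshold_deg=10.0):
    """Group line segments whose orientations lie within an angular
    threshold of one another and return the orientation (degrees, mod 180)
    of the group with the largest accumulated edge length."""
    groups = []  # each entry: [representative angle, accumulated length]
    for (x1, y1), (x2, y2) in lines:
        ang = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
        length = math.hypot(x2 - x1, y2 - y1)
        for g in groups:
            if min(abs(ang - g[0]), 180.0 - abs(ang - g[0])) <= threshold_deg:
                g[1] += length
                break
        else:
            groups.append([ang, length])
    return max(groups, key=lambda g: g[1])[0]
```

The second main axis would then be taken perpendicular to this one, as in claim 15.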

28-05-2020 publication date

Automatic Body Movement Recognition and Association System

Number: US20200167555A1
Assignee: Kintrans Inc

An automatic body movement recognition and association system that includes a preprocessing component and a “live testing” engine component. The system further includes a transition posture detector module and a recording module. The system uses three dimensional (3D) skeletal joint information from a stand-alone depth-sensing capture device that detects the body movements of a user. The transition posture detector module detects the occurrence of a transition posture and the recording module stores a segment of body movement data between occurrences of the transition posture. The preprocessing component processes the segments into a preprocessed movement that is used by a classifier component in the engine component to produce text or speech associated with the preprocessed movement. An “off-line” training system that includes a preprocessing component, a training data set, and a learning system also processes 3D information, off-line from the training data set or from the depth-sensing camera, to continually update the training data set and improve a learning system that sends updated information to the classifier component in the engine component when the updated information is shown to improve accuracy.

28-05-2020 publication date

Three-dimensional Human Face Reconstruction Method

Number: US20200167990A1
Assignee:

The invention relates to a method of three-dimensional face reconstruction in which a single face image is input to reconstruct a three-dimensional face model, so that the face can be viewed from various angles by rotating the model. 1. A method of three-dimensional human face reconstruction, comprising: inputting a two-dimensional face image; positioning said two-dimensional feature points for said two-dimensional face image, obtaining a plurality of two-dimensional feature point positions for said two-dimensional face image; converting said plurality of two-dimensional feature points into a plurality of three-dimensional coordinates, and converting said plurality of two-dimensional feature points into a corresponding said plurality of three-dimensional coordinates in accordance with an approximate computing, forming said plurality of three-dimensional coordinates to a first three-dimensional face model; finely tuning a three-dimensional face shape of said first three-dimensional face model, in order to obtain a second three-dimensional face model; compensating a face color of said second three-dimensional face model, in order to obtain a third three-dimensional face model; and outputting a three-dimensional face image in accordance with said third three-dimensional face model. 2. The three-dimensional human face reconstruction method according to claim 1, wherein the positioning method of the two-dimensional feature points comprises a neural network model. 3. The three-dimensional human face reconstruction method according to claim 1, wherein the three-dimensional face model comprises a color three-dimensional face model. 4. The three-dimensional human face reconstruction method according to claim 1, wherein upon converting the plurality of two-dimensional feature points into a plurality of three-dimensional coordinates and finely tuning a three-dimensional face shape of the first three-dimensional face model, the ...

04-06-2020 publication date

METHOD FOR INTRAORAL SCANNING DIRECTED TO A METHOD OF PROCESSING AND FILTERING SCAN DATA GATHERED FROM AN INTRAORAL SCANNER

Number: US20200170760A1
Author: Dawood Andrew
Assignee:

A method and apparatus for generating and displaying a 3D representation of a portion of an intraoral scene is provided. The method includes determining 3D point cloud data representing a part of an intraoral scene in a point cloud coordinate space. A colour image of the same part of the intraoral scene is acquired in camera coordinate space. Colour image elements are labelled that lie within a region of the image representing a surface of said intraoral scene which should preferably not be included in said 3D representation. A labelled and applicably transformed colour image is then mapped onto the 3D point cloud data, whereby the 3D point cloud data points that map onto labelled colour image elements are removed or filtered out. A 3D representation is generated from said filtered 3D point cloud data, which does not include any of the surfaces represented by the labelled colour image elements. 1. An intraoral scanning method for generating a 3D representation of at least a portion of an intraoral scene, the method comprising: obtaining a scanning dataset, which comprises 3D point cloud data representing a part of the intraoral scene in a point cloud coordinate space and a colour image of said part of said intraoral scene in a camera coordinate space; labelling image elements of said colour image within a region having a colour or colour pattern corresponding either to (i) a surface colour or surface colour pattern of a utensil used intraorally while obtaining said scanning dataset or (ii) to a colour pattern corresponding to a colour pattern of a tooth surface area comprising undesired stains or particles; filtering out of said 3D point cloud data, data points that map to labelled image elements of said colour image; and generating a 3D representation from said filtered 3D point cloud data. 2.
The intraoral scanning method as in claim 1 , comprising obtaining a plurality of scanning datasets wherein at least some of said scanning datasets comprise overlapping spatial ...
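The filtering step, dropping 3D points that map onto labelled colour-image elements, can be sketched with an abstract point-to-pixel mapping. A real scanner would use the transform between the camera and point-cloud coordinate spaces; the trivial mapping used in the example is illustrative only.

```python
def filter_points(points, point_to_pixel, labeled):
    """Keep only the 3D points whose mapped colour-image pixel is not
    labelled as an unwanted surface (e.g. utensil colour or a stained
    tooth area)."""
    return [p for p in points if point_to_pixel(p) not in labeled]
```

The surviving points then feed the 3D representation, which by construction excludes the labelled surfaces.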

30-06-2016 publication date

USER AUTHENTICATION SYSTEM AND METHOD

Number: US20160188861A1
Author: Todeschini Erik
Assignee:

A user authentication system includes an augmented reality device with a gesture analyzer configured for recognizing a user's gestures. The augmented reality device also includes an object renderer in communication with the gesture analyzer. The object renderer is configured for (i) rendering a virtual three-dimensional object for display to the user (ii) modifying the shape of the virtual three-dimensional object based upon the recognized gestures. 1. A user authentication system for authenticating a user in a computer environment , comprising: ["a gesture analyzer configured for recognizing a user's gestures; and", 'an object renderer in communication with the gesture analyzer, the object renderer configured for (i) rendering a virtual three-dimensional object for display to the user (ii) modifying the shape of the virtual three-dimensional object based upon the recognized gestures;, 'an augmented reality device, comprisingan authentication database for storing an authentication object; anda verification subsystem in communication with the augmented reality device and the authentication database, the verification subsystem configured for (i) receiving the virtual three-dimensional object having a modified shape from the object renderer, (ii) receiving the authentication object from the authentication database, (iii) comparing the virtual three-dimensional object having a modified shape to the authentication object, and (iv) authenticating the user if the virtual three-dimensional object's modified shape matches the authentication object's shape.2. The user authentication system of claim 1 , wherein the gesture analyzer is a three-dimensional depth sensor configured for converting a user's hand gesture into an associated change in the shape of the virtual three-dimensional object.3. The user authentication system of claim 1 , wherein the gesture analyzer is configured to allow the user to make a plurality of modifications to the shape of the virtual three- ...

30-06-2016 publication date

IDENTIFYING MATCHING PROPERTIES BETWEEN A GROUP OF BODIES REPRESENTING A GEOLOGICAL STRUCTURE AND A TABLE OF PROPERTIES

Number: US20160188956A1
Assignee:

Systems and methods for identifying matching properties between a group of bodies representing a geological structure and a table of properties by performing property matching on the group of bodies to convert each body to a respective compartment represented by a triangulated mesh of the bounding body. 1. A method for identifying matching properties between a group of bodies representing a geological structure and a table of properties , which comprises:identifying each inherent property in the table with a value that is identical to a value for an inherent property of one of the bodies in the group of bodies using a computer processor, wherein each body with an inherent property value that is identical to an inherent property value in the table represents a matching body;identifying each inherent property in the table with a value that is within a predefined tolerance of a value for an inherent property of one of the bodies in the group of bodies that is not a matching body using the computer processor, wherein each body with an inherent property value that is within the predefined tolerance of an inherent property value in the table and is not a matching body represents a related body;associating each inherent property value in the table that is identical to an inherent property value of one of the bodies in the group of bodies with the respective body representing a matching body; andassociating each inherent property value in the table that is within the predefined tolerance of an inherent property value of one of the bodies in the group of bodies with the respective body representing a related body.2. 
The method of claim 1 , further comprising identifying each inherent property in the table with a value that is not within a predefined tolerance of a value for an inherent property of one of the bodies in the group of bodies claim 1 , wherein each body with an inherent property value that is not within a predefined tolerance of an inherent property value in the ...
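The matching/related/unmatched classification described in the claims reduces to comparing each body's inherent property value against the table, first for identity and then for closeness within a predefined tolerance. The dictionary representation and tolerance value below are illustrative assumptions.

```python
def match_properties(table_values, body_values, tol=0.05):
    """Classify each body against the table of properties: 'matching' on
    an identical value, 'related' when within the predefined tolerance of
    some table value, otherwise 'unmatched'."""
    result = {}
    for name, v in body_values.items():
        if any(v == t for t in table_values):
            result[name] = "matching"
        elif any(abs(v - t) <= tol for t in table_values):
            result[name] = "related"
        else:
            result[name] = "unmatched"
    return result
```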

30-06-2016 publication date

Method and arrangement for identifying a difference between a first 3d model of an environment and a second 3d model of the environment

Number: US20160188957A1
Assignee: Vricon Systems AB

The invention relates to a method for identifying a difference between a first 3D model of an environment and a second 3D model of the environment. The first and second 3D model each comprise a plurality of points or parts, wherein each point or part of the first and second model comprises geometrical information and texture information. Corresponding points or parts of the first and second 3D model are matched based on the geometrical information and/or the texture information. The matched points or parts of the first and second model are compared to determine at least one difference value based on the geometrical information and the texture information of the first and second model. A difference between the first and second model is identified if the at least one difference value exceeds a predetermined value. The invention also relates to an arrangement, a computer program, and a computer program product.
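The per-point comparison can be sketched as a combined geometric and texture difference over matched points, flagged where it exceeds the predetermined value. Points are given as `((x, y, z), (r, g, b))`; the equal weighting of the two terms and the index-based matching are assumptions for illustration.

```python
def point_difference(p1, p2, w_geom=1.0, w_tex=1.0):
    """Weighted sum of the geometric and texture distances between two
    matched model points, each given as ((x, y, z), (r, g, b))."""
    g = sum((a - b) ** 2 for a, b in zip(p1[0], p2[0])) ** 0.5
    t = sum((a - b) ** 2 for a, b in zip(p1[1], p2[1])) ** 0.5
    return w_geom * g + w_tex * t

def changed_points(model1, model2, threshold):
    """Indices of matched points whose difference value exceeds the
    predetermined threshold, i.e. where the two 3D models of the
    environment differ."""
    return [i for i, (p1, p2) in enumerate(zip(model1, model2))
            if point_difference(p1, p2) > threshold]
```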
