Total found: 4617. Displayed: 100.

Publication date: 16-02-2012

System for adaptive displays

Number: US20120038751A1
Authors: Chang Yuan, Scott J. Daly
Assignee: Sharp Laboratories of America Inc

A display includes an integrated imaging sensor and a plurality of pixels. The imaging sensor integrated within the display includes a plurality of individual sensors each of which provides an output. The content of the display is modified based upon the sensed content.

Publication date: 06-12-2012

Online environment mapping

Number: US20120306847A1
Assignee: Honda Motor Co Ltd

A system and method are disclosed for online mapping of large-scale environments using a hybrid representation of a metric Euclidean environment map and a topological map. The system includes a scene flow module, a location recognition module, a local adjustment module and a global adjustment module. The scene flow module is for detecting and tracking video features of the frames of an input video sequence. The scene flow module is also configured to identify multiple keyframes of the input video sequence and add the identified keyframes into an initial environment map of the input video sequence. The location recognition module is for detecting loop closures in the environment map. The local adjustment module enforces local metric properties of the keyframes in the environment map, and the global adjustment module is for optimizing the entire environment map subject to global metric properties of the keyframes in the keyframe pose graph.

Publication date: 07-02-2013

Method and device for calculating a depth map from a single image

Number: US20130034297A1

A method for calculating a depth map from an original matrix image, comprising the steps of: calculating a first matrix image corresponding to the original matrix image with a low resolution and in which the depth of field is similar to that of the original matrix image, calculating a second matrix image corresponding to the original matrix image with a low resolution, comprising a number of pixels similar to that of the first matrix image and in which the depth of field is greater than that of the original matrix image, implementing a DFD type three-dimensional reconstruction algorithm from the first and the second matrix images, outputting the depth map.
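
The core DFD comparison here (a shallow depth-of-field image against a deeper depth-of-field counterpart at the same resolution) can be illustrated with a short numpy sketch. This is only the relative-sharpness signal a DFD reconstruction starts from, not the patented algorithm; the function names and the gradient-energy focus measure are assumptions.

```python
import numpy as np

def focus_measure(img):
    """Per-pixel gradient energy: a crude local sharpness measure."""
    gy, gx = np.gradient(img.astype(np.float64))
    return gx ** 2 + gy ** 2

def dfd_relative_blur(shallow_dof, deep_dof, eps=1e-6):
    """Relative blur between the two low-resolution images: the
    shallow-DOF image loses sharpness away from the focal plane, so a
    small ratio marks pixels far from focus (a depth proxy)."""
    ratio = (focus_measure(shallow_dof) + eps) / (focus_measure(deep_dof) + eps)
    return 1.0 - np.clip(ratio, 0.0, 1.0)
```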

Publication date: 14-03-2013

Non-contact scanning system

Number: US20130063731A1
Assignee: Gaspardo & Associates Inc

A non-contact scanning system for three dimensional non-contact scanning of a work piece is disclosed for use in various applications including reverse engineering, metrology, dimensional verification and inspection. The scanning system includes a scanner carried by an arcuately configured gantry assembly and a fixture for carrying a work piece. The gantry assembly includes a fixed arcuately shaped gantry member and a telescopic arm that is movable in an arcuate direction relative to a rotary table that carries the object to be scanned. A scanner is mounted on the end of the telescopic member and is movable in a radial direction. Objects to be scanned are mounted on a rotary table that is also movable in an X-Y direction or alternatively in the X, Y and Z directions under the control of a motion control subsystem, a machine control user interface subsystem and an image capture subsystem. The configuration of the scanning system in accordance with the present invention provides a spherically shaped scanning envelope which facilitates three dimensional modeling of the work piece.

Publication date: 30-05-2013

Method for adjusting moving depths of video

Number: US20130135430A1
Assignee: NOVATEK MICROELECTRONICS CORP

A method for adjusting moving depths for a video is provided, which is adapted for 2D to 3D conversion. The method includes receiving a plurality of frames at a plurality of time points and calculating a plurality of local motion vectors and a global motion vector in each of the frames. The method also includes determining a first difference degree between the local motion vectors and the global motion vector in the frames. The method further includes determining a second difference degree between a current frame and the other frames. The method also includes calculating a gain value according to the first difference degree and the second difference degree. The method further includes adjusting original moving depths of the current frame according to the gain value. Accordingly, a phenomenon of depth inversion can be avoided or mitigated.
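
A minimal sketch of the two difference degrees and the gain computation the abstract walks through. The way the two degrees are combined (a weighted sum with weight `alpha`) and the output depth range are assumptions, since the abstract leaves both open.

```python
import numpy as np

def depth_gain(local_mvs, global_mv, cur_frame, other_frames, alpha=0.5):
    """Gain from two difference degrees: (1) spread of the local motion
    vectors around the global motion vector, and (2) how much the
    current frame differs from the other frames."""
    d1 = np.mean(np.linalg.norm(local_mvs - global_mv, axis=-1))
    d2 = np.mean([np.mean(np.abs(cur_frame - f)) for f in other_frames])
    return alpha * d1 + (1.0 - alpha) * d2   # combination rule is assumed

def adjust_moving_depths(depths, gain):
    """Scale the original moving depths of the current frame by the gain."""
    return np.clip(depths * gain, 0.0, 255.0)
```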

Publication date: 18-07-2013

System and method for monitoring a retail environment using video content analysis with depth sensing

Number: US20130182114A1
Assignee: Objectvideo Inc

A method and system for monitoring a retail environment by performing video content analysis based on two-dimensional image data and depth data are disclosed. Accuracy in detecting customer actions, for example to provide assistance, change marketing behavior, or address safety and theft, is increased by analyzing video containing two-dimensional image data and associated depth data. Height data may be obtained from depth data to assist in object detection, object classification (e.g., detecting a customer or inventory) and/or event detection.

Publication date: 18-07-2013

System and method for video content analysis using depth sensing

Number: US20130182904A1
Assignee: Objectvideo Inc

A method and system for performing video content analysis based on two-dimensional image data and depth data are disclosed. Video content analysis may be performed on the two-dimensional image data, and then the depth data may be used along with the results of the video content analysis of the two-dimensional data for tracking and event detection.

Publication date: 10-10-2013

Nonlinear Self-Calibration for Structure From Motion (SFM) Techniques

Number: US20130265443A1
Author: Hailin Jin
Assignee: Adobe Systems Inc

A nonlinear self-calibration technique that may, for example, be used to convert a projective reconstruction to a metric (Euclidean) reconstruction. The self-calibration technique may use a nonlinear least squares optimization technique to infer the parameters. N input images and a projective reconstruction for each image may be obtained. At least two sets of initial values may be determined for an equation to be optimized according to the nonlinear optimization technique to generate a metric reconstruction for the set of N images. The equation may then be optimized using each set of initial values according to the nonlinear optimization technique. The result with a smaller cost may be selected. The metric reconstruction is output. The output may include, but is not limited to, focal length, rotation, and translation values for the N images.
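
The optimization scaffolding described here (several initial value sets, keep the lower-cost result) is easy to sketch with scipy. The residual below is a toy stand-in, not the actual self-calibration equation from the patent.

```python
import numpy as np
from scipy.optimize import least_squares

def best_of_initializations(residual_fn, inits, data):
    """Optimize from each set of initial values and keep the result
    with the smaller cost, as the abstract describes."""
    best = None
    for x0 in inits:
        res = least_squares(residual_fn, x0, args=(data,))
        if best is None or res.cost < best.cost:
            best = res
    return best

# Toy residual standing in for the self-calibration equation:
# recover a scale s from noisy observations y ~= s * x.
def residuals(params, data):
    x, y = data
    return y - params[0] * x

rng = np.random.default_rng(0)
x = rng.uniform(1.0, 2.0, 50)
y = 1.8 * x + rng.normal(0.0, 0.01, 50)
best = best_of_initializations(residuals, [np.array([0.5]), np.array([5.0])], (x, y))
print(best.x, best.cost)   # s close to 1.8
```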

Publication date: 10-10-2013

Keyframe Selection for Robust Video-Based Structure from Motion

Number: US20130266180A1
Author: Hailin Jin
Assignee: Adobe Systems Inc

An adaptive technique is described for iteratively selecting and reconstructing keyframes to fully cover an image sequence that may, for example, be used in an adaptive reconstruction algorithm implemented by a structure from motion (SFM) technique. A next keyframe to process may be determined according to an adaptive keyframe selection technique. The determined keyframe may be reconstructed and added to the current reconstruction. A global optimization may be performed on the current reconstruction. One or more outlier points may be determined and removed from the reconstruction. One or more inlier points may be determined and recovered. If the number of inlier points that were added exceeds a threshold, then global optimization may again be performed. If the current reconstruction is a projective reconstruction, self-calibration may be performed to upgrade the projective reconstruction to a Euclidean reconstruction.

Publication date: 12-12-2013

Image processing apparatus that estimates distance information, method of controlling the same, and storage medium

Number: US20130329019A1
Author: Masaaki Matsuoka
Assignee: Canon Inc

An image processing apparatus free from the inconvenience of an increase in the number of focal positions for image pickup or reduction of the distance accuracy. An optical imaging system forms an object image and an image pickup device picks up the formed object image. A first range image is generated from a plurality of images picked up by the image pickup device and having parallax. A second range image is generated from a plurality of images picked up by the image pickup device and having different degrees of focus at respective corresponding locations therein. A synthesis coefficient is calculated according to the depth of field of a main object selected from objects shown in the object image. A synthesized range image is generated by synthesizing the generated first range image and second range image using the calculated synthesis coefficient.
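
A minimal sketch of the synthesis step: blend the parallax-based range image with the defocus-based one under a coefficient derived from the main object's depth of field. The mapping from depth of field to weight is an assumption; the abstract only says the coefficient is calculated from it.

```python
import numpy as np

def synthesize_range(stereo_range, dfd_range, main_object_dof):
    """Blend the parallax-based range image with the defocus-based one.
    The weight grows with the main object's depth of field (assumed
    monotone mapping; the exact formula is not given in the abstract)."""
    w = main_object_dof / (main_object_dof + 1.0)   # in (0, 1)
    return w * stereo_range + (1.0 - w) * dfd_range
```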

Publication date: 04-01-2018

Sparse simultaneous localization and matching with unified tracking

Number: US20180005015A1
Authors: Craig Cambias, Xin Hou
Assignee: VanGogh Imaging Inc

Described herein are methods and systems for tracking a pose of one or more objects represented in a scene. A sensor captures a plurality of scans of objects in a scene, each scan comprising a color and depth frame. A computing device receives a first one of the scans, determines two-dimensional feature points of the objects using the color and depth frame, and retrieves a key frame from a database that stores key frames of the objects in the scene, each key frame comprising map points. The computing device matches the 2D feature points with the map points, and generates a current pose of the objects in the color and depth frame using the matched 2D feature points. The computing device inserts the color and depth frame into the database as a new key frame, and tracks the pose of the objects in the scene across the scans.
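
The tracking step (match the frame's 2D feature points against a key frame's map points, then generate a current pose) can be sketched with OpenCV. ORB features and RANSAC PnP are assumptions; the abstract does not name a feature type or pose solver.

```python
import cv2
import numpy as np

def track_pose(gray, key_desc, key_points3d, K):
    """Match 2D ORB features in the current frame against the map points
    stored with a retrieved key frame, then recover the camera pose with
    RANSAC PnP. K is the pinhole intrinsic matrix."""
    orb = cv2.ORB_create()
    kps, desc = orb.detectAndCompute(gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc, key_desc)        # 2D features <-> map points
    pts2d = np.float32([kps[m.queryIdx].pt for m in matches])
    pts3d = np.float32([key_points3d[m.trainIdx] for m in matches])
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d, pts2d, K, None)
    return (rvec, tvec) if ok else None
```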

Publication date: 07-01-2021

DEPTH FROM MOTION FOR AUGMENTED REALITY FOR HANDHELD USER DEVICES

Number: US20210004979A1

A handheld user device includes a monocular camera to capture a feed of images of a local scene and a processor to select, from the feed, a keyframe and perform, for a first image from the feed, stereo matching using the first image, the keyframe, and a relative pose based on a pose associated with the first image and a pose associated with the keyframe to generate a sparse disparity map representing disparities between the first image and the keyframe. The processor further is to determine a dense depth map from the disparity map using a bilateral solver algorithm, and process a viewfinder image generated from a second image of the feed with occlusion rendering based on the depth map to incorporate one or more virtual objects into the viewfinder image to generate an AR viewfinder image. Further, the processor is to provide the AR viewfinder image for display. 1. A method for providing an augmented reality (AR) experience at a handheld user device , the method comprising:capturing, via a monocular camera of the handheld user device, a feed of images of a local scene;selecting, from the feed, a keyframe;performing, for a first image from the feed of images, stereo matching using the first image, the keyframe, and a relative pose based on a pose associated with the first image and a pose associated with the keyframe to generate a sparse disparity map representing disparities between the first image and the keyframe;determining a dense depth map from the disparity map using a bilateral solver algorithm;processing a viewfinder image generated from a second image of the feed with occlusion rendering based on the depth map to incorporate one or more virtual objects into the viewfinder image to generate an AR viewfinder image; anddisplaying, at the handheld user device, the AR viewfinder image.2. The method of claim 1 , further comprising:polar rectifying the keyframe and the first image; andwherein performing stereo matching comprising performing stereo matching using ...

Publication date: 03-01-2019

Method And Apparatus For Map Constructing And Map Correcting

Number: US20190005669A1
Authors: Beichen Li, Yujie JIANG
Assignee: Guangzhou Airob Robot Technology Co ltd

A method for map constructing, applicable for real-time mapping of a to-be-localized area provided with at least one laser device, includes taking a position of a mobile electronic device as a coordinate origin of a map coordinate system, when a center of a mark projected by a first laser device coincides with the central point of the CCD/CMOS; moving the mobile electronic device with the coordinate origin as a starting point to traverse the entire to-be-localized area, calculating and recording coordinate values of a position of one of the obstacles each time one is detected by the mobile electronic device; and constructing a map based on recorded information of the mark and corresponding coordinate values and the coordinate values of the position of each said obstacle after the traversing process is finished.

Publication date: 20-01-2022

3D IMAGING

Number: US20220020212A1

Aspects of the disclosure are directed to methods and apparatuses involving 3D imaging. In accordance with certain aspects, a plurality of images of a 3D object are captured while movement of the 3D object is sensed by a sensor physically coupled thereto. The plurality of images include complementary portions and a feature of the 3D object. Pose data is generated for one of the plurality of images, based on data corresponding to the sensed movement and on the feature. The complementary portions are combined to generate a visual 3D image of the 3D object, based on the pose data. 1. An apparatus comprising:imaging circuitry to generate a plurality of images, including complementary portions and a feature, of a 3D object having a sensor physically coupled thereto to sense movement of the 3D object; and for one of the plurality of images, generate pose data based on data corresponding to the sensed movement and on the feature, and', 'combining the complementary portions to generate a visual 3D image of the 3D object, based on the pose data., 'logic circuitry to2. The apparatus of claim 1 , wherein the complementary portions include surfaces of the 3D object claim 1 , and wherein the plurality of images include respective images depicting related ones of the complementary portions of the 3D object claim 1 , each of the respective images including an image of at least one surface portion not in another one of the respective images.3. The apparatus of claim 1 , wherein the logic circuitry is to output the visual 3D image to illustrate that the 3D object has planar surfaces claim 1 , with respective ones of the images depicting different planar surfaces and each having a complementary portion where the planar surfaces meet.4. The apparatus of claim 1 , wherein the logic circuitry is to omit the sensor from the visual 3D image claim 1 , based on physical characteristics of the sensor and the pose data.5. The apparatus of claim 1 , wherein the logic circuitry is to:identify ...

Publication date: 03-01-2019

APPARATUS AND METHODS FOR DISTANCE ESTIMATION USING STEREO IMAGERY

Number: US20190007695A1
Author: Richert Micah

Frame sequences from multiple image sensors may be combined in order to form, for example, an interleaved frame sequence. Individual frames of the combined sequence may be configured a by combination (e.g., concatenation) of frames from one or more source sequences. The interleaved/concatenated frame sequence may be encoded using a motion estimation encoder. Output of the video encoder may be processed (e.g., parsed) in order to extract motion information present in the encoded video. The motion information may be utilized in order to determine a depth of visual scene, such as by using binocular disparity between two or more images by an adaptive controller in order to detect one or more objects salient to a given task. In one variant, depth information is utilized during control and operation of mobile robotic devices. 1. A method of determining motion information within a visual scene , the method comprising:producing a first composite frame and a second composite frame by combining images from a first plurality of images and a second plurality of images of the visual scene;producing an interleaved sequence of composite frames comprising the first and the second composite frames; andevaluating the interleaved sequence to determine the motion information;wherein individual images of the first and second pluralities of images are provided by first and second sensing apparatus, respectively, the second sensing apparatus being separated spatially from the first sensing apparatus.2. The method of claim 1 , wherein:the first composite frame is characterized by a first placement configuration of (i) an image from the first plurality of images, and (ii) an image the second plurality of images; andthe second composite frame is characterized by a second placement configuration of (i) an image from the first plurality of images and (ii) an image the second plurality of images;wherein the second placement is different from the first placement.3. The method of claim 2 , ...

Publication date: 14-01-2021

System and method of personalized navigation inside a business enterprise

Number: US20210010813A1
Author: Edward L. Hill
Assignee: Position Imaging Inc

Systems and methods for tracking movement of individuals through a building receive, by one or more RF nodes disposed near an entrance to the building, RF signals from RF-transmitting mobile devices carried by persons near the entrance, capture an image of the persons while they are near the entrance, determine an identity and relative distance of each RF-transmitting mobile device from each RF node based on information associated with the RF signals received by that RF node, detect humans in the image, determine a relative depth of each human in the image, and assign the identity of each RF-transmitting mobile device to one of the humans detected in the image based on the relative distance of each RF-transmitting mobile device from each RF node and the relative depth of each human in the image, thereby identifying each individual who is to be tracked optically as that individual moves throughout the building.
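
The final assignment step (pair each RF-derived device range with one detected person by depth) is a classic linear assignment problem. A sketch for a single RF node; with several nodes the cost would aggregate the per-node distances.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_ids(rf_ranges, person_depths):
    """Assign each RF device (by its range from the node) to one person
    (by visual depth) minimising the total range/depth mismatch."""
    cost = np.abs(rf_ranges[:, None] - person_depths[None, :])
    device_idx, person_idx = linear_sum_assignment(cost)
    return dict(zip(device_idx.tolist(), person_idx.tolist()))

# three phones at 2.1 m, 4.8 m, 3.3 m; three people at depths 3.2, 2.0, 5.0
print(assign_ids(np.array([2.1, 4.8, 3.3]), np.array([3.2, 2.0, 5.0])))
# -> {0: 1, 1: 2, 2: 0}
```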

Publication date: 09-01-2020

SIMULTANEOUS LOCATION AND MAPPING (SLAM) USING DUAL EVENT CAMERAS

Number: US20200011668A1

A method for simultaneous localization and mapping (SLAM) employs dual event-based cameras. Event streams from the cameras are processed by an image processing system to stereoscopically detect surface points in an environment, dynamically compute pose of a camera as it moves, and concurrently update a map of the environment. A gradient descent based optimization may be utilized to update the pose for each event or for each small batch of events. 1. A method for simultaneous localization and mapping (SLAM) , comprising:receiving, from first and second image sensors, first and second event streams, respectively, of asynchronous events representing points of surfaces in an environment, wherein the first and second image sensors are arranged with overlapping fields of view;computing depths of the points represented by the first event stream based on relative pixel locations of common points represented by the second event stream; anddynamically computing a pose of at least the first image sensor with respect to a reference element in the environment, and updating a map of the environment, based at least on the points represented by the first event stream and the computed depths thereof.2. The method of claim 1 , wherein dynamically computing a pose comprises updating the pose for each K events of the first event stream claim 1 , where K is a predefined integer ≥1 and is at least one order of magnitude less than a total number of image capture elements of the first or second image sensor.3. The method of claim 2 , wherein K=1 claim 2 , whereby the pose is updated for each event of the first event stream.4. The method of claim 1 , further comprising running an optimization routine to optimize map locations.5. The method of claim 4 , wherein the optimization routine minimizes an error function using a first set of points represented by events of the first image sensor and a second set of points represented by events of the second image sensor.6. The method of claim 1 , ...

Publication date: 14-01-2021

DISTANCE MEASURING METHOD AND DEVICE

Number: US20210012520A1
Authors: Liu Jie, Yan Jiaqi, ZHOU You

A method for measuring distance using an unmanned aerial vehicle (UAV) includes: identifying a target object to be measured; receiving a plurality of images captured by a camera of the UAV when the UAV is moving and the camera is tracking the target object; collecting movement information of the UAV corresponding to capturing moments of the plurality of images; and calculating a distance between the target object and the UAV based on the movement information and the plurality of images. 1. A method for measuring distance using an unmanned aerial vehicle (UAV) , comprising:identifying a target object to be measured;receiving a plurality of images captured by a camera of the UAV when the UAV is moving and the camera is tracking the target object;collecting movement information of the UAV corresponding to capturing moments of the plurality of images; andcalculating a distance between the target object and the UAV based on the movement information and the plurality of images.2. The method of claim 1 , wherein identifying the target object comprises:receiving an initial image containing the target object captured by the camera of the UAV; andidentifying the target object in the initial image.3. The method of claim 2 , wherein identifying the target object further comprises:displaying the initial image on a graphical user interface;obtaining a user selection of a target area in the initial image; andobtaining the target object based on the target area.4. The method of claim 3 , wherein the user selection comprises a single tap at a center of the target area claim 3 , a double tap at the center of the target area claim 3 , or a dragging operation having a starting point and an ending point that define a bounding box of the target area.5. The method of claim 3 , wherein identifying the target object comprises:obtaining super-pixels of the initial image by clustering pixels of the initial image based on image features of the pixels; obtaining a super-pixel partially located ...

Publication date: 09-01-2020

OBJECT DETECTION AND AVOIDANCE FOR AERIAL VEHICLES

Number: US20200012842A1
Author: Klaus Andreas

Aerial vehicles that are equipped with one or more imaging devices may detect obstacles that are small in size, or obstacles that feature colors or textures that are consistent with colors or textures of a landing area, using pairs of images captured by the imaging devices. Disparities between pixels corresponding to points of the landing area that appear within each of a pair of the images may be determined and used to generate a reconstruction of the landing area and a difference image. If either the reconstruction or the difference image indicates the presence of one or more obstacles, a landing operation at the landing area may be aborted or an alternate landing area for the aerial vehicle may be identified accordingly. 1. A method for operating an unmanned aerial vehicle , the method comprising:locating, by at least one imaging device provided on the unmanned aerial vehicle, a target marker on a surface beneath the unmanned aerial vehicle;defining a landing area based at least in part on at least a portion of the target marker, wherein the landing area comprises a geometric shape defined with respect to the portion of the target marker;causing the unmanned aerial vehicle to descend toward the surface beneath the unmanned aerial vehicle;capturing, by the at least one imaging device, a first image including at least a portion of the surface beneath the unmanned aerial vehicle, wherein the first image is captured while the unmanned aerial vehicle is descending toward the surface beneath the unmanned aerial vehicle;capturing, by the at least one imaging device, a second image including at least a portion of the surface beneath the unmanned aerial vehicle, wherein the second image is captured while the unmanned aerial vehicle is descending toward the surface beneath the unmanned aerial vehicle;determining disparities between pixels corresponding to at least a plurality of points depicted in the first image and pixels corresponding to at least the plurality of points ...

Publication date: 09-01-2020

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

Number: US20200013178A1

An image processing apparatus comprises: an obtaining unit configured to obtain an image and distance information concerning a distance from an in-focus plane, which corresponds to each pixel included in the image; a setting unit configured to set an image processing condition according to the distance information based on an output characteristic of an output apparatus concerning a sharpness; and a processing unit configured to perform image processing for the image using the distance information obtained by the obtaining unit and the image processing condition set by the setting unit, wherein the processing unit changes, in accordance with the distance information, a band of a spatial frequency of the image to which the image processing is applied. 1. An image processing apparatus comprising:an obtaining unit configured to obtain an image and distance information concerning a distance from an in-focus plane, which corresponds to each pixel included in the image;a setting unit configured to set an image processing condition according to the distance information based on an output characteristic of an output apparatus concerning a sharpness; anda processing unit configured to perform image processing for the image using the distance information obtained by the obtaining unit and the image processing condition set by the setting unit,wherein the processing unit changes, in accordance with the distance information, a band of a spatial frequency of the image to which the image processing is applied.2. The apparatus according to claim 1 , wherein in a case in which the distance information indicates a first value claim 1 , the processing unit widens the band of the spatial frequency to which the image processing is applied as compared to a case in which the distance information indicates a second value larger than the first value.3. The apparatus according to claim 2 , whereinthe first value is a value within a range up to a predetermined distance from the in-focus ...

Publication date: 14-01-2021

PIXEL CIRCUIT AND METHOD OF OPERATING THE SAME IN AN ALWAYS-ON MODE

Number: US20210013257A1
Author: Dutton Neale

An embodiment method of operating an imaging device including a sensor array including a plurality of pixels, includes: capturing a first low-spatial resolution frame using a subset of the plurality of pixels of the sensor array; generating, using a processor coupled to the sensor array, a first depth map using raw pixel values of the first low-spatial resolution frame; capturing a second low-spatial resolution frame using the subset of the plurality of pixels of the sensor array; generating, using the processor, a second depth map using raw pixel values of the second low-spatial resolution frame; and determining whether an object has moved in a field of view of the imaging device based on a comparison of the first depth map to the second depth map. 1. An imaging device , comprising:a sensor array comprising an array of pixels;a row driver circuit coupled to the array of pixels and configured to select at least one row of the array of pixels;a column driver circuit coupled to the array of pixels and configured to select at least one column of the array of pixels; provide a first timing signal to the row driver circuit and the column driver circuit to select a subset of the array of pixels to capture a first low-spatial resolution frame; and', 'provide a second timing signal to the row driver circuit and the column driver circuit to select the same subset of the array of pixels to capture a second low-spatial resolution frame; and, 'a controller coupled to the row driver circuit and the column driver circuit, the controller being configured to generate a first depth map using raw pixel values of the first low-spatial resolution frame;', 'generate a second depth map using raw pixel values of the second low-spatial resolution frame;', 'determine whether an object has moved in a field of view of the imaging device based on a comparison of the first depth map to the second depth map., 'a processor coupled to receive an output of the array of pixels, wherein the processor ...
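
A minimal sketch of the always-on decision: compare two low-spatial-resolution depth maps built from the pixel subset and report motion when enough pixels changed. The thresholds are illustrative.

```python
import numpy as np

def object_moved(depth_a, depth_b, delta=0.05, min_pixels=20):
    """Compare two low-spatial-resolution depth maps built from the
    always-on pixel subset; report motion in the field of view when
    enough valid pixels changed depth by more than `delta` metres."""
    valid = (depth_a > 0) & (depth_b > 0)     # ignore pixels with no return
    changed = np.abs(depth_a - depth_b) > delta
    return int(np.count_nonzero(changed & valid)) >= min_pixels
```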

Publication date: 03-02-2022

VEHICLE POSITION IDENTIFICATION

Number: US20220032982A1

A method for determining a location of a vehicle driving on a track is provided, the method comprising the step of obtaining a real-time image from an imaging device located on the vehicle and deriving information from the obtained real-time image. The derived information is compared with a database comprising derived information from a plurality of images, each of the plurality of images being associated with a specific track segment. The closest match is then determined between a sequence of the real-time images and a sequence of the plurality of images, and the location of the vehicle on a track segment is identified based on the specific track segment associated with the closest matched sequence of images. The track segments are associated with a specific track amongst a set of parallel or closely spaced tracks. 1. A method for determining a location of a vehicle driving on a track, the method comprising: obtaining a real-time image from an imaging device located on the vehicle; deriving information from the obtained real-time image; comparing the derived information with a database comprising derived information from a plurality of images, each of the plurality of images being associated with a specific track segment; determining the closest match between a sequence of the real-time images and a sequence of the plurality of images; identifying the location of the vehicle on a track segment based on the specific track segment associated with the closest matched sequence of images; wherein the track segments are associated with a specific track amongst a set of parallel or closely spaced tracks. 2. The method of claim 1, wherein the vehicle is a train. 3. The method of claim 1, the method further comprising: establishing the database comprising derived information from the plurality of images. 4. The method of claim 3, wherein the establishing the database comprising derived information from the plurality of images further comprises: providing a video feed from the ...

Publication date: 10-01-2019

IMAGING PROCESSING APPARATUS, DISTANCE MEASURING APPARATUS AND PROCESSING SYSTEM

Number: US20190014262A1
Assignee: KABUSHIKI KAISHA TOSHIBA

According to one embodiment, an image processing apparatus includes a memory and one or more processors. The one or more processors are electrically connected to the memory, and calculate blur correction information to make a blur of a first shape of an object approach a blur of a second shape of the object. The first shape of the object is contained in a first component image of one image. The second shape of the object is contained in a second component image of the one image. The one or more processors calculate a distance between an imaging device and the object based on an image distance when the one image is captured and the blur correction information. The image distance is a distance from a lens up to an image forming surface of the object. 1. An image processing apparatus comprising:a memory; and calculate blur correction information to make a blur of a first shape of an object approach a blur of a second shape of the object, the first shape of the object being contained in a first component image of one image, the second shape of the object being contained in a second component image of the one image; and', 'calculate a distance between an imaging device and the object based on an image distance when the one image is captured and the blur correction information, the image distance being a distance from a lens up to an image forming surface of the object., 'one or more processors electrically connected to the memory and configured to'}2. The image processing apparatus of claim 1 , wherein the one or more processors are configured to: 'select one temporary image distance from among two or more temporary image distances; and', 'select one shape model from among two or more shape models;'}calculate the distance between the imaging device and the object based on the blur correction information and the selected temporary image distance at which a three-dimensional shape of the object obtained based on a distance between the imaging device and the object has a ...

Publication date: 18-01-2018

SURVEYING SYSTEM

Number: US20180017384A1
Assignee: HEXAGON TECHNOLOGY CENTER GMBH

A system is disclosed that comprises a camera module and a control and evaluation unit. The camera module is designed to be attached to the surveying pole and comprises at least one camera for capturing images. The control and evaluation unit has stored a program with program code so as to control and execute a functionality in which a series of images of the surrounding is captured with the at least one camera; a SLAM-evaluation with a defined algorithm using the series of images is performed, wherein a reference point field is built up and poses for the captured images are determined; and, based on the determined poses, a point cloud comprising 3D-positions of points of the surrounding can be computed by forward intersection using the series of images, particularly by using dense matching algorithm. 1. A surveying system adapted to determine a position of a position measuring resource being mounted on a surveying pole , the surveying system comprising:a camera module being attached to the surveying pole and comprising at least one camera for capturing images; capturing a series of images of the surrounding with the at least one camera when moving along a path through a surrounding, the series comprising an amount of images captured with different poses of the camera, the poses representing respective positions and orientations of the camera;', 'performing a SLAM-evaluation with a defined algorithm using the series of images, wherein a plurality of respectively corresponding image points are identified in each of several sub-groups of images of the series of images and, based on resection and forward intersection using the plurality of respectively corresponding image points, a reference point field comprising a plurality of reference points of the surrounding, wherein coordinates of the reference points are derived, and the poses for the images are determined;', 'retrieving determined positions from the surveying system that are adopted by the position measuring ...

Publication date: 03-02-2022

SYSTEM AND METHOD FOR OBSTACLE AVOIDANCE

Number: US20220036574A1
Authors: Han Lei, HU XIAO, ZHANG Honghui

A method for acquiring an obstacle distance including determining a detection mode for detecting an obstacle distance of an obstacle; detecting the obstacle distance using the detection mode. Detecting the obstacle distance includes in response to determining a monocular mode as the detection mode: capturing a first image and a second image using a lens of an imaging device at two different times with a predetermined monocular imaging interval at two different locations; and calculating the obstacle distance via a monocular triangulation based on the first image, the second image, and a displacement of the imaging device between the two different times. Both the first image and the second image contain the obstacle. The predetermined monocular imaging interval varies based upon an altitude of the imaging device changes. 1. A method for acquiring an obstacle distance , comprising:determining a detection mode for detecting an obstacle distance of an obstacle; capturing a first image and a second image using a lens of an imaging device at two different times with a predetermined monocular imaging interval at two different locations, both the first image and the second image containing the obstacle, where the predetermined monocular imaging interval varies based upon an altitude of the imaging device changes; and', 'calculating the obstacle distance via a monocular triangulation based on the first image, the second image, and a displacement of the imaging device between the two different times., 'detecting the obstacle distance using the detection mode, including in response to determining a monocular mode as the detection mode2. The method of claim 1 , further comprising:determining the displacement of the imaging device between the two different times via an inertial measurement unit (IMU).3. The method of claim 1 , wherein determining the detection mode includes:selecting the detection mode from a plurality of detection modes including the monocular mode and a ...
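
The monocular triangulation reduces to the stereo disparity formula, with the imaging device's displacement between the two capture times playing the role of the baseline. A sketch under that assumption:

```python
def monocular_distance(u1, u2, baseline_m, focal_px):
    """Distance to the obstacle from two images taken by the same lens at
    two positions `baseline_m` metres apart (displacement, e.g. from an
    IMU). u1, u2: horizontal pixel coordinates of the obstacle."""
    disparity = abs(u1 - u2)
    if disparity < 1e-6:
        return float("inf")               # no parallax: cannot triangulate
    return focal_px * baseline_m / disparity

# 0.5 m displacement, 800 px focal length, 20 px disparity -> 20 m
print(monocular_distance(410.0, 430.0, 0.5, 800.0))
```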

Publication date: 03-02-2022

THREE-DIMENSIONAL OBJECT RECONSTRUCTION FROM A VIDEO

Number: US20220036635A1

A three-dimensional (3D) object reconstruction neural network system learns to predict a 3D shape representation of an object from a video that includes the object. The 3D reconstruction technique may be used for content creation, such as generation of 3D characters for games, movies, and 3D printing. When 3D characters are generated from video, the content may also include motion of the character, as predicted based on the video. The 3D object construction technique exploits temporal consistency to reconstruct a dynamic 3D representation of the object from an unlabeled video. Specifically, an object in a video has a consistent shape and consistent texture across multiple frames. Texture, base shape, and part correspondence invariance constraints may be applied to fine-tune the neural network system. The reconstruction technique generalizes well—particularly for non-rigid objects. 1. A computer-implemented method of constructing a three-dimensional (3D) representation of an object , comprising:extracting, by an encoder portion of a neural network model, features from a video including images of the object captured from a camera pose;predicting, by the neural network model, a 3D shape representation of the object for a first image of the images based on a set of learned shape bases and the features;predicting, by a texture decoder portion of the neural network model, a texture flow for the first image based on the features extracted from the first image; andmapping pixels from the first image to a texture space according to the texture flow to produce a texture image that is invariant to shape deformation of the object, wherein transfer of the texture image onto the 3D shape representation constructs a 3D object corresponding to the object in the first image.2. The computer-implemented method of claim 1 , further comprising:predicting non-rigid motion deformations of the 3D shape representation for the first image; andapplying the non-rigid motion deformations to an ...

Publication date: 17-01-2019

Registration of three-dimensional coordinates measured on interior and exterior portions of an object

Number: US20190017806A1
Assignee: Faro Technologies Inc

A dimensional measuring device includes an overview camera and a triangulation scanner. A six-DOF tracking device tracks the dimensional measuring device as the triangulation scanner measures three-dimensional (3D) coordinates on an exterior of the object. Cardinal points identified by the overview camera are used to register in a common frame of reference 3D coordinates measured by the triangulation scanner on the interior and exterior of the object.

Publication date: 17-01-2019

STATIONARY-VEHICLE STRUCTURE FROM MOTION

Number: US20190019043A1

A vehicular structure from motion (SfM) system can store a number of image frames acquired from a vehicle-mounted camera in a frame stack according to a frame stack update logic. The SfM system can detect feature points, generate flow tracks, and compute depth values based on the image frames, the depth values to aid control of the vehicle. The frame stack update logic can select a frame to discard from the stack when a new frame is added to the stack, and can be changed from a first in, first out (FIFO) logic to last in, first out (LIFO) logic upon a determination that the vehicle is stationary. An optical flow tracks logic can also be modified based on the determination. The determination can be made based on a dual threshold comparison to insure robust SfM system performance. 1. An automotive system on a vehicle , comprising:a camera configured to generate a sequence of image frame; at least one processor; and', receive the sequence of image frames from the camera;', 'store, in the at least one non-transitory computer readable storage medium, a portion of the sequence of image frames in a frame stack, by selecting, according to a frame stack logic, a frame to discard from the frame stack, in response to adding a new frame to the frame stack;', 'compute depth values based on the frame stack;', 'modify the frame stack update logic from first in, first out (FIFO) logic to last in, first out (LIFO) logic, in response to determining that the vehicle is stationary; and', 'send, to a vehicle controller, the depth values; and, 'at least one non-transitory computer readable storage medium storing a program for execution by the at least one processor, the program including instructions to], 'a structure from motion (SfM) system coupled to the camera, the SfM system comprisingthe vehicle controller coupled to the SfM system, the vehicle controller configured to control the vehicle based on the depth values.2. The automotive system of claim 1 , wherein the instructions ...
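
The frame stack update logic is simple to sketch: FIFO eviction while the vehicle moves, switched to LIFO once it is judged stationary, so that older frames (which still carry baseline for depth computation) survive. Class and method names are illustrative.

```python
class FrameStack:
    """Fixed-size frame stack whose eviction policy switches between
    FIFO (vehicle moving: drop the oldest frame) and LIFO (vehicle
    stationary: replace the newest frame, keep frames with baseline)."""

    def __init__(self, size):
        self.size = size
        self.frames = []
        self.lifo = False              # FIFO by default

    def set_stationary(self, stationary):
        self.lifo = stationary         # LIFO while stationary

    def push(self, frame):
        if len(self.frames) == self.size:
            self.frames.pop(-1 if self.lifo else 0)
        self.frames.append(frame)
```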

Publication date: 21-01-2021

Method and Apparatus for Determining Relative Motion between a Time-of-Flight Camera and an Object in a Scene Sensed by the Time-of-Flight Camera

Number: US20210018627A1

A method for determining relative motion between a time-of-flight camera and an object in a scene sensed by the time-of-flight camera is provided. The method includes receiving at least two sets of raw images of the scene from the time-of-flight camera, each set including at least one raw image. The raw images are based on correlations of a modulated reference signal and measurement signals of the time-of-flight camera. The measurement signals are based on a modulated light signal emitted by the object. The method includes determining, for each set of raw images, a value indicating a respective phase difference between the modulated light and reference signals based on the respective set of raw images, and determining information about relative motion between the time-of-flight camera and object based on the values indicating the phase differences. The method includes outputting the information about relative motion between the time-of-flight camera and the object. 1. A method for determining relative motion between a time-of-flight camera and an object in a scene sensed by the time-of-flight camera, wherein the object emits a modulated light signal, the method comprising: receiving at least two sets of raw images of the scene from the time-of-flight camera, wherein the at least two sets of raw images each comprise at least one raw image, wherein the raw images are based on correlations of a modulated reference signal and measurement signals of the time-of-flight camera, and wherein the measurement signals are based on the modulated light signal emitted by the object; determining, for each set of raw images, a value indicating a respective phase difference between the modulated light signal and the modulated reference signal based on the respective set of raw images; determining information about relative motion between the time-of-flight camera and the object based on the values indicating the phase differences; and outputting the information about relative motion ...
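
A sketch of the phase arithmetic, assuming the common four-sample correlation scheme (0°, 90°, 180°, 270°) and the usual round-trip ToF convention; the patent's one-way case (the object itself emits the modulated light) changes the constant factor, as noted in the comment.

```python
import numpy as np

def phase_from_raw(a0, a1, a2, a3):
    """Phase of the received modulated signal from four correlation
    samples taken 90 degrees apart (one common ToF convention)."""
    return np.arctan2(a3 - a1, a0 - a2)

def radial_motion(phase_t0, phase_t1, mod_freq_hz, c=299_792_458.0):
    """Distance change between two raw-image sets. Each 2*pi of phase is
    half a modulation wavelength of round-trip path; a one-way link
    (object emits the light, as in this patent) doubles the result."""
    dphi = np.angle(np.exp(1j * (phase_t1 - phase_t0)))  # wrap to [-pi, pi]
    return dphi * c / (4.0 * np.pi * mod_freq_hz)
```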

Publication date: 21-01-2021

IMAGING DEVICE, DISTANCE MEASUREMENT METHOD, DISTANCE MEASUREMENT PROGRAM, AND RECORDING MEDIUM

Number: US20210019899A1
Author: ONO Shuji
Assignee: FUJIFILM Corporation

There are provided an imaging device, a distance measurement method, a distance measurement program, and a recording medium capable of accurately measuring a distance to a subject without depending on a color of the subject. 1. An imaging device comprising:a multifocal imaging lens having different focusing distances in a first region and a second region;an image sensor having a plurality of pixels formed of photoelectric conversion elements arranged two-dimensionally and having a first pixel and a second pixel that respectively pupil-divide and selectively receive luminous flux incident through the first region of the imaging lens and a third pixel and a fourth pixel that respectively pupil-divide and selectively receive luminous flux incident through the second region of the imaging lens;a first image acquisition unit that acquires a first image having an asymmetric blur from at least one pixel of the first pixel or the second pixel of the image sensor;a second image acquisition unit that acquires a second image having an asymmetric blur from at least one pixel of the third pixel or the fourth pixel of the image sensor;a third image acquisition unit that adds pixel values of adjacent pixels of the first pixel and the second pixel of the image sensor to acquire a third image having a symmetric blur;a fourth image acquisition unit that adds pixel values of adjacent pixels of the third pixel and the fourth pixel of the image sensor to acquire a fourth image having a symmetric blur;a first distance calculation unit that calculates a distance to a subject in an image based on the acquired first image and third image; anda second distance calculation unit that calculates a distance to a subject in an image based on the acquired second image and fourth image.2. An imaging device comprising:a multifocal imaging lens having different focusing distances in a first region and a second region;an image sensor having a plurality of pixels formed of photoelectric conversion ...

Publication date: 21-01-2021

RECORDING MEDIUM, OBJECT DETECTION APPARATUS, OBJECT DETECTION METHOD, AND OBJECT DETECTION SYSTEM

Number: US20210019900A1
Author: Ohta Isao
Assignee: SQUARE ENIX CO., LTD.

An apparatus comprises: acquiring depth information regarding an object that is present in a real space, the depth information indicating a distribution of distances to points on a surface of the object in a depth direction relative to a position; developing the points on the surface of the object, within a three-dimensional space that corresponds to the real space, based on the depth information; classifying the developed points on the surface of the object into any of a plurality of cells of a volume grid, divided from the three-dimensional space and each having a predetermined size; and determining, as a space in which the object is present, a space of the three-dimensional space and corresponding to cells into which a greater number of points on the surface of the object than a threshold value are classified. 1. A non-transitory computer-readable recording medium on which a program is recorded , the program causing a computer to carry out:processing through which depth information regarding an object that is present in a real space is acquired, the depth information indicating a distribution of distances to points on a surface of the object in a depth direction relative to a predetermined position;processing through which the points on the surface of the object are developed within a three-dimensional space that corresponds to the real space, based on the acquired depth information;processing through which the developed points on the surface of the object are classified into any of a plurality of cells of a volume grid, divided from the three-dimensional space and each having a predetermined size; andprocessing through which a space included in the three-dimensional space and corresponding to cells into which a greater number of points on the surface of the object than a predetermined threshold value are classified is determined as a space in which the object is present.2. The recording medium according to claim 1 ,wherein the depth information is intermittently ...
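
The cell classification is a voxel-grid occupancy count; a compact numpy sketch (cell size and threshold are illustrative):

```python
import numpy as np

def occupied_cells(points, cell_size=0.05, threshold=10):
    """Classify surface points (N x 3, metres) into voxel-grid cells and
    return the cells holding more points than the threshold, i.e. the
    space in which the object is determined to be present."""
    idx = np.floor(points / cell_size).astype(np.int64)
    cells, counts = np.unique(idx, axis=0, return_counts=True)
    return cells[counts > threshold]      # (M, 3) integer cell indices
```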

Publication date: 17-01-2019

CONTROL APPARATUS, IMAGE CAPTURING APPARATUS, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

Number: US20190020826A1

A control apparatus includes a focus detection unit that detects a defocus amount, a control unit that automatically changes a parameter relating to a tracking operation during the tracking operation depending on an image capturing state, and a focusing unit that performs focusing based on the defocus amount and the parameter. 1. A control apparatus comprising: a focus detection unit configured to detect a defocus amount; a control unit configured to automatically change a parameter relating to a tracking operation during the tracking operation depending on an image capturing state; and a focusing unit configured to perform focusing based on the defocus amount and the parameter. 2. The control apparatus according to claim 1, wherein the image capturing state is a motion of an object. 3. The control apparatus according to claim 1, wherein the image capturing state is a motion of an image capturing apparatus. 4. The control apparatus according to claim 1, wherein the image capturing state is a relationship between a motion of an object and a motion of an image capturing apparatus. 5. The control apparatus according to claim 1, wherein the control unit is configured to change the parameter during a half-press operation of a release button. 6. The control apparatus according to claim 1, wherein the control unit is configured to change the parameter during focus detection by the focus detection unit. 7. The control apparatus according to claim 1, wherein the control unit is configured to change the parameter in real time during the tracking operation. 8. The control apparatus according to claim 1, wherein: a setting unit configured to set, as the parameter, each of set values of a plurality of items relating to the tracking operation, a single mode selection unit capable of selecting one of an automatic mode in which each of the set values of the plurality of items relating to the tracking operation is automatically set or a manual mode in which each of the ...

Publication date: 16-01-2020

IMAGING APPARATUS

Number: US20200021745A1
Author: OGURA MOTONORI

An imaging apparatus includes: an image sensor capturing an image of a subject via an optical system including a focus lens, and generating image data; a distance measuring part calculating a subject distance and a movement distance of the subject by using the generated image data; and a controller that performs an auto-focus action by using the calculated subject distance or movement distance. The distance measuring part uses: first image data when the subject lies at the first position; second image data when the subject lies at the second position; and a PSF of the optical system corresponding to a focused position of the subject lying at the first position, thereby finding a distance between the first position and the second position, and calculating the subject distance and the movement distance of the subject for an image indicated by the second image data based on the found distance. 1. An imaging apparatus comprising:an image sensor that captures an image of a subject formed via an optical system including a focus lens, to generate image data;a distance measuring part configured to calculate a subject distance representing the distance to the subject and a movement distance of the subject by using the image data generated by the image sensor; anda controller configured to perform an auto-focus action by using the calculated subject distance or movement distance of the subject,wherein the distance measuring part using: first image data generated by the image sensor when the subject lies at a first position; second image data generated by the image sensor when the subject lies at a second position; and a point spread function of the optical system corresponding to a focused position of the subject lying at the first position, thereby finding a distance between the first position and the second position, and calculating the subject distance and the movement distance of the subject for an image indicated by the second image data based on the found distance.2. ...

Publication date: 26-01-2017

System and method for determining motion and structure from optical flow

Number: US20170024900A1
Author: Davi Geiger

A method and system for extracting motion and structure from a sequence of images stored on a computer system, the method comprises obtaining images including a set of three-dimensional (3D) points over a plurality of frames, determining an instantaneous motion of the set of 3D points by an angular velocity and a translation with respect to an axis of rotation, computing an optical flow using the instantaneous motion of the set of 3D points based on a projection of velocity of the set of 3D points, computing a depth of the set of 3D points from the optical flow, and determining an epipolar line in the images using the optical flow.
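
For the translational special case the depth computation is one line: a point at depth Z moving laterally past the camera at speed Tx produces image flow u = f·Tx/Z. The patent's general formulation also includes the angular-velocity term about the axis of rotation; this sketch omits it.

```python
import numpy as np

def depth_from_lateral_flow(flow_x, t_x, focal_px):
    """Depth from optical flow under pure lateral translation:
    u = f * Tx / Z  =>  Z = f * Tx / u (per pixel, in metres)."""
    u = np.where(np.abs(flow_x) < 1e-9, np.nan, flow_x)  # guard division
    return focal_px * t_x / u
```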

Publication date: 24-01-2019

Apparatus for Three-Dimensional Measurement of an Object, Method and Computer Program with Image-based Triggering

Number: US20190025049A1

An apparatus for three-dimensional measurement of an object includes a trigger configured to obtain image information from a measurement camera and to trigger, in dependence on image content of the image information, a measurement output or an evaluation of the image information by an evaluator for determining measurement results. Further, a respective method and a respective computer program are described. 1. Apparatus for three-dimensional measurement of an object , comprising:a trigger configured to acquire image information from a measurement camera and to trigger, in dependence on image content of the image information, forwarding of the image information to an evaluator for determining measurement results or an evaluation of the image information by an evaluator for determining measurement results;wherein the trigger is configured to detect when the image content has shifted with respect to a reference image content by at least a predetermined shift or by more than a predetermined shift and to trigger, in dependence on the detection of a shift, forwarding of the image information or the evaluation of the image information by the evaluator for determining measurement results.2. Apparatus according to claim 1 , wherein the trigger is configured to trigger claim 1 , in dependence on the detection of a shift claim 1 , forwarding of the image information or the evaluation of the image information by the evaluator for determining measurement results claim 1 , in order to generate measurement results at a specific spatial distance or to obtain measurement results at equal spatial distances.3. Apparatus according to claim 1 , wherein triggering the measurement output is performed exclusively based on the image content.4. Apparatus according to claim 1 , wherein the trigger is configured to perform image analysis and to trigger the measurement output or the evaluation of the image information in dependence on the image analysis.5. Apparatus according to claim 1 , ...
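
The shift detection that drives the trigger can be sketched with phase correlation; the threshold and the shift-measurement method are assumptions, since the apparatus only requires detecting that the image content moved by at least a predetermined amount relative to a reference image content.

```python
import numpy as np

def estimate_shift(ref, cur):
    """Integer-pixel shift between two grayscale images via phase
    correlation (sign convention depends on which image is reference)."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(cur))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2: dy -= h            # unwrap negative shifts
    if dx > w // 2: dx -= w
    return dx, dy

def should_trigger(ref, cur, min_shift_px=8.0):
    """Fire the measurement/evaluation once the image content has moved
    far enough from the reference image content."""
    dx, dy = estimate_shift(ref, cur)
    return np.hypot(dx, dy) >= min_shift_px
```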

10-02-2022 publication date

Textured mesh building

Number: US20220044479A1
Assignee: Snap Inc

Systems and methods are provided for receiving a two-dimensional (2D) image comprising a 2D object; identifying a contour of the 2D object; generating a three-dimensional (3D) mesh based on the contour of the 2D object; and applying a texture of the 2D object to the 3D mesh to output a 3D object representing the 2D object.
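A rough sketch of the contour-to-mesh step, assuming OpenCV for contour extraction and a Delaunay triangulation of the contour points; a full pipeline would additionally subdivide the interior, inflate the z coordinate, and UV-map the source texture onto the triangles:

    import cv2
    import numpy as np
    from scipy.spatial import Delaunay

    def contour_to_flat_mesh(mask):
        """From a binary object mask (uint8), extract the outer contour and
        triangulate its points into a flat z=0 mesh."""
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        pts = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(np.float32)
        tri = Delaunay(pts)
        # Delaunay triangulates the convex hull; drop triangles whose
        # centroid falls outside the (possibly concave) contour.
        keep = [cv2.pointPolygonTest(pts.reshape(-1, 1, 2),
                                     (float(c[0]), float(c[1])), False) >= 0
                for c in pts[tri.simplices].mean(axis=1)]
        vertices = np.column_stack([pts, np.zeros(len(pts))])  # (N, 3), z = 0
        return vertices, tri.simplices[np.array(keep)]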

24-01-2019 publication date

THREE DIMENSIONAL SYMBOL FROM TWO DIMENSIONAL STILL IMAGES

Number: US20190026913A1
Author: Courtelis Kiki Lisa

The present invention provides a method and device for creating and displaying a symbol comprising a two dimensional representation of a point of light that simulates depth and movement in three dimensions. The symbol is printed onto a substrate and applied as patches, magnets, stickers, cards and posters, and affixed to other surfaces such as clothing and accessories, ornaments, vehicles, building structures and other surfaces to be representative of hope and optimism. Multiple copies of the symbol are applied to a three dimensional ornament to simulate a point of light in three dimensions. 1. A symbol comprising a flat representation of a point of light with rays emanating outwardly simulating depth and movement in three dimensions comprising: multiple two dimensional images of a point of light with rays emanating outwardly printed on a top side of a substrate; multiple lenses through which said two dimensional images are visible to a viewer; wherein slight relative movement between said viewer and said symbol alters the perception to said viewer to create the appearance of depth and movement in three dimensions. 2. The symbol as set forth in wherein said multiple lenses further comprise means for altering refraction and reflection of light passing through said lenses to create the illusion of glowing perceived by the viewer. 3. The symbol as set forth in wherein said two dimensional images and said multiple lenses are mounted on a top layer of a substrate. 4. The symbol as set forth in wherein said substrate has an underside opposite said top layer and means for affixing said symbol to an article. 5. The symbol as set forth in wherein said means for affixing comprises chemical adhesive. 6. The symbol as set forth in wherein said chemical adhesive comprises adhesive that adheres to an article and is removable and reusable upon application of force. 7. The symbol as set forth in wherein said means for affixing comprises a magnet for affixation to a ferromagnetic article. 8. The ...

24-01-2019 publication date

DENSE VISUAL SLAM WITH PROBABILISTIC SURFEL MAP

Number: US20190026943A1
Author: Ren Liu, Yan Zhixin, Ye Mao

A novel map representation called Probabilistic Surfel Map (PSM) for dense visual SLAM is disclosed. The PSM maintains a globally consistent map with both photometric and geometric uncertainties encoded in order to address inconsistency and sensor noise. A key aspect of the visual SLAM method disclosed herein is the proper modeling and updating of the photometric and geometric uncertainties encoded in the PSM. Strategies for applying the PSM for improving both the front-end pose estimation and the back-end optimization are disclosed. Moreover, the PSM enables generation of a high quality dense point cloud with high accuracy. 1. A method for visual simultaneous localization and mapping , the method comprising:storing, in a memory, a first data structure having a plurality of surfels representing a 3D environment, each surfel having a 3D position, an uncertainty of the 3D position, an intensity, an uncertainty of the intensity, and a surface normal, the 3D position of each surfel being in a first coordinate space;receiving, with a processor, a first image of the 3D environment from a camera, the first image having an array of pixels, each pixel having an intensity and a depth;estimating, with the processor, a first camera pose from which the camera captured the first image based on the first image and the first data structure; andupdating, with the processor, at least one surfel in the plurality of surfels of the first data structure based on the first image and the first camera pose.2. The method according to claim 1 , the estimating the first camera pose further comprising:calculating the first camera pose that transforms surfels in the plurality of surfels of the first data structure from the first coordinate space to a coordinate space of the first image with minimal photometric error and geometric error,wherein the geometric error is a difference between the 3D positions of respective surfels in the plurality of surfels of the first data structure and the depth ...
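The data structure is the interesting part: each map element carries not just geometry but explicit uncertainties. A minimal sketch of a probabilistic surfel and an uncertainty-weighted fusion step (the paper's exact update rule may differ from this Kalman-style stand-in):

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Surfel:
        position: np.ndarray   # 3D position, shape (3,)
        pos_var: float         # uncertainty of the 3D position
        intensity: float
        int_var: float         # uncertainty of the intensity
        normal: np.ndarray     # surface normal, shape (3,)

    def fuse(s, meas_pos, meas_pos_var, meas_int, meas_int_var):
        """Fuse a new observation into an existing surfel: low-variance
        information dominates, and variances shrink after each update."""
        k = s.pos_var / (s.pos_var + meas_pos_var)
        s.position = s.position + k * (meas_pos - s.position)
        s.pos_var = (1.0 - k) * s.pos_var
        ki = s.int_var / (s.int_var + meas_int_var)
        s.intensity += ki * (meas_int - s.intensity)
        s.int_var = (1.0 - ki) * s.int_var
        return s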

24-01-2019 publication date

IMAGE PROCESSING METHOD, DISPLAY DEVICE, AND INSPECTION SYSTEM

Number: US20190026955A1

A coefficient to transform a three-dimensional mesh approximating at least a part of a three-dimensional model including at least a part of a target object and generated from a plurality of two-dimensional images, into two-dimensional panoramic coordinates is determined. A first position on a first image determined from a plurality of two-dimensional images corresponding to a portion of the two-dimensional panoramic coordinates, and an annotation to be projected onto the two-dimensional panoramic coordinates are specified according to the first image. A second position corresponding to projection of the annotation onto the two-dimensional panoramic coordinates is determined. The annotation is superimposed on a second image obtained by projecting the first image onto the two-dimensional panoramic coordinates. A third position corresponding to projection of the first position onto a third image is determined, and the annotation is projected and superimposed at the third position on the third image. 1.-25. (canceled) 26. An image processing method comprising: generating a three-dimensional model including at least a part of a target object, from a plurality of two-dimensional images; approximating at least a part of the three-dimensional model by a three-dimensional mesh; determining a coefficient to transform the three-dimensional mesh into two-dimensional panoramic coordinates; determining a first image from a plurality of two-dimensional images corresponding to a portion of the two-dimensional panoramic coordinates; specifying contents to be projected onto the two-dimensional panoramic coordinates, and a first position on the first image, according to the first image; determining a second position corresponding to projection of the contents onto the two-dimensional panoramic coordinates; storing the second position and the contents in association with each other; superimposing the contents, as an annotation, on a second image obtained by projecting the first image onto the ...

23-01-2020 publication date

AUTOMATED SPATIAL INDEXING OF IMAGES BASED ON FLOORPLAN FEATURES

Number: US20200027267A1

A spatial indexing system receives a sequence of images depicting an environment, such as a floor of a construction site, and performs a spatial indexing process to automatically identify the spatial locations at which each of the images were captured. The spatial indexing system also generates an immersive model of the environment and provides a visualization interface that allows a user to view each of the images at its corresponding location within the model. 1. A method comprising:receiving a sequence of images from an image capture system, the sequence of images captured by a camera of the image capture system as the image capture system is moved along a camera path through an environment;generating a first estimate of the camera path, the first estimate of the camera path specifying, for images in the sequence of images, a position of the image relative to a reference point;obtaining a floorplan of the environment, the floorplan specifying positions of a plurality of physical features in the environment;generating a combined estimate of the camera path based on the first estimate of the camera path and the positions of the plurality of physical features specified in the floorplan at least in part by generating a grid map based on the floorplan, the grid map comprising a plurality of nodes and edges, each of the edges connecting a first node and a second node of the plurality of nodes; andautomatically generating an immersive model of the environment based on the combined estimate of the camera path and received sequence of images, the immersive model specifying, for each image of a plurality of the images, a location of the image within the floorplan and at least one route vector defining a spatial distance between the image and at least one of the other images of the plurality of images.2. The method of claim 1 , wherein the camera is a 360-degree camera and the images are 360-degree images.3. The method of claim 1 , wherein the first estimate of the camera ...
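The grid map mentioned in the claim can be pictured as a graph laid over the floorplan's free space, against which camera-path hypotheses are scored. A toy construction, assuming a boolean occupancy array derived from the floorplan (the real system's graph construction is not detailed in the abstract):

    import numpy as np

    def floorplan_to_grid(free, step=1):
        """Build nodes and 4-connected edges over the free cells of a
        boolean floorplan array (True = traversable)."""
        rows, cols = free.shape
        nodes = {(r, c) for r in range(0, rows, step)
                        for c in range(0, cols, step) if free[r, c]}
        edges = [((r, c), (r + dr, c + dc))
                 for (r, c) in nodes
                 for dr, dc in ((0, step), (step, 0))
                 if (r + dr, c + dc) in nodes]
        return nodes, edges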

10-02-2022 publication date

Deep Learning Model for Auto-Focusing Microscope Systems

Number: US20220046180A1
Assignee: Nanotronics Imaging Inc

A computing system receives, from an image sensor, at least two images of a specimen positioned on a specimen stage of a microscope system. The computing system provides the at least two images to an autofocus model for detecting at least one distance to a focal plane of the specimen. The computing system identifies, via the autofocus model, the at least one distance to the focal plane of the specimen. Based on the identifying, the computing system automatically adjusts a position of the specimen stage with respect to an objective lens of the microscope system.
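The control flow reduces to a short closed loop. In the sketch below every interface (camera, stage, model) is hypothetical, standing in for whatever hardware drivers and learned model a given system uses:

    def autofocus_step(camera, stage, model, probe_um=1.0):
        """One autofocus iteration: capture two images a known z-offset
        apart, let the learned model predict the signed distance to the
        focal plane, then move the stage there."""
        img_a = camera.capture()
        stage.move_z(probe_um)            # small known offset between the two shots
        img_b = camera.capture()
        dz = model.predict(img_a, img_b)  # signed distance to the focal plane
        stage.move_z(-probe_um + dz)      # undo the probe offset, jump to focus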

10-02-2022 publication date

Method for Improved Acquisition of Images for Photogrammetry

Number: US20220046189A1
Author: Aaron M. Benzel
Assignee: Individual

A method for improved image acquisition for photogrammetry includes focusing a camera on one end of an object, capturing one or more images of the object, incrementally adjusting the focal length of the camera toward the opposite end of the object, and capturing images at each new focal length. Once the object has been photographed at varying focal lengths that run the entire length of the object, the multitude of images are then combined using focus stacking to create a singular image that is more in focus for the entire length of the object. A method for utilizing thermographic cameras to aid in the acquisition of images for photogrammetry includes applying thermal textures to the object and isolating an object from the background due to thermal differences.
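The focus-stacking step is standard: per pixel, keep the frame in which local contrast is highest. A compact sketch using the Laplacian as the focus measure, assuming the sweep images are already aligned:

    import cv2
    import numpy as np

    def focus_stack(images):
        """Merge a focal sweep (list of aligned BGR frames) into a single
        all-in-focus image by per-pixel sharpest-frame selection."""
        sharpness = [np.abs(cv2.Laplacian(
                         cv2.GaussianBlur(cv2.cvtColor(im, cv2.COLOR_BGR2GRAY),
                                          (5, 5), 0), cv2.CV_64F))
                     for im in images]
        best = np.argmax(np.stack(sharpness), axis=0)   # (H, W) frame indices
        stack = np.stack(images)                        # (N, H, W, 3)
        h, w = best.shape
        return stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]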

23-01-2020 publication date

TAXI STRIKE ALERT SYSTEM

Number: US20200027361A1

A system includes a first smart sensor, a second smart sensor, and at least one image processor. The first smart sensor is configured to sense light in a forward direction and to capture an image during a first time period. The second sensor is configured to sense light in the forward direction and to capture a second image during the first time period. The at least one image processor is configured to identify at least one object in the first and second image, to determine a first size of the at least one object in the first image and a second size of the at least one object in the second image, and to determine a distance of the at least one object from the aircraft based upon the first size and the second size. 1. A system comprising:a first smart sensor attachable to a front portion of an aircraft along a longitudinal axis of the aircraft, the first smart sensor configured to sense light in a forward direction when attached to the aircraft and to capture an image during a first time period;a second smart sensor attachable to a rear portion of an aircraft along the longitudinal axis of the aircraft, the second smart sensor configured to sense light in the forward direction and to capture a second image during the first time period; andat least one image processor configured to identify at least one object in the first and second images, to determine a first size of the at least one object in the first image and a second size of the at least one object in the second image, and to determine a distance of the at least one object from the aircraft based upon the first size and the second size.2. The system of claim 1 , wherein the first and second smart sensors are calibrated such that the first size and the second size are equal when the at least one object is at a calibration distance from the system claim 1 , the first size is larger than the second size when the at least one object is closer than the calibration distance from the system claim 1 , and the first ...
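The geometry is plain similar triangles: with equal focal lengths, apparent size scales as 1/range, so size_front/size_rear = (d + B)/d for sensors a baseline B apart along the longitudinal axis, giving d = B/(ratio − 1). A sketch:

    def object_distance(size_front, size_rear, baseline_m):
        """Distance from the front sensor to the object, from the apparent
        sizes measured by two forward-looking sensors mounted baseline_m
        apart (pinhole model, equal focal lengths; valid only while
        size_front > size_rear)."""
        ratio = size_front / size_rear
        return baseline_m / (ratio - 1.0)

The calibration variant in claim 2 rescales the two measurements so that the ratio is exactly 1 when the object sits at a chosen calibration distance.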

28-01-2021 publication date

VIDEO DEPTH ESTIMATION BASED ON TEMPORAL ATTENTION

Number: US20210027480A1

A method of depth detection based on a plurality of video frames includes receiving a plurality of input frames including a first input frame, a second input frame, and a third input frame respectively corresponding to different capture times, convolving the first to third input frames to generate a first feature map, a second feature map, and a third feature map corresponding to the different capture times, calculating a temporal attention map based on the first to third feature maps, the temporal attention map including a plurality of weights corresponding to different pairs of feature maps from among the first to third feature maps, each weight of the plurality of weights indicating a similarity level of a corresponding pair of feature maps, and applying the temporal attention map to the first to third feature maps to generate a feature map with temporal attention. 1. A method of depth detection based on a plurality of video frames , the method comprising:receiving a plurality of input frames comprising a first input frame, a second input frame, and a third input frame respectively corresponding to different capture times;convolving the first to third input frames to generate a first feature map, a second feature map, and a third feature map corresponding to the different capture times;calculating a temporal attention map based on the first to third feature maps, the temporal attention map comprising a plurality of weights corresponding to different pairs of feature maps from among the first to third feature maps, each weight of the plurality of weights indicating a similarity level of a corresponding pair of feature maps; andapplying the temporal attention map to the first to third feature maps to generate a feature map with temporal attention.2. The method of claim 1 , wherein the plurality of weights are based on a learnable value.5. The method of claim 1 , wherein the input frames are video frames of an input video sequence.6. The method of claim 1 , wherein ...
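In spirit, the temporal attention weights are similarities between feature maps of different frames, used to reweight them before fusion. A simplified numpy stand-in for the learned mechanism (cosine similarity against the latest frame, softmax over frames):

    import numpy as np

    def temporal_attention(f1, f2, f3):
        """f1..f3: feature maps of shape (C, H, W) for three capture times.
        Returns a fused feature map with temporal attention applied."""
        feats = [f1, f2, f3]
        ref = f3.reshape(-1)
        sims = np.array([f.reshape(-1) @ ref /
                         (np.linalg.norm(f) * np.linalg.norm(ref) + 1e-8)
                         for f in feats])
        w = np.exp(sims) / np.exp(sims).sum()   # softmax over the three frames
        return sum(wi * fi for wi, fi in zip(w, feats))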

31-01-2019 publication date

IDENTIFICATION OF AREAS OF INTEREST DURING INTRAORAL SCANS

Number: US20190029524A1

A processing device performs image registration between a plurality of intraoral images of a dental site. The processing device identifies a candidate intraoral area of interest from a first intraoral image of the plurality of intraoral images. The processing device verifies the candidate intraoral area of interest as an intraoral area of interest based on a comparison of a second intraoral image to the first intraoral image. The processing device displays a view of the dental site where the intraoral area of interest is hidden and an indication of the hidden intraoral area of interest is visible. 1. A computer readable medium comprising instructions that , when executed by a processing device , cause the processing device to perform operations comprising:performing image registration between a plurality of intraoral images of a dental site;identifying a candidate intraoral area of interest from a first intraoral image of the plurality of intraoral images;verifying the candidate intraoral area of interest as an intraoral area of interest based on a comparison of a second intraoral image to the first intraoral image; anddisplaying a view of the dental site where the intraoral area of interest is hidden and an indication of the hidden intraoral area of interest is visible.2. The computer readable medium of claim 1 , the operations further comprising:determining a value associated with the candidate intraoral area of interest;determining a threshold; andverifying the candidate intraoral area of interest as an intraoral area of interest responsive to determining that a) a second intraoral image confirms the candidate intraoral area of interest and b) the value is above the threshold.3. The computer readable medium of claim 2 , wherein determining the threshold comprises selecting the threshold based on one or more patient case details claim 2 , wherein the one or more patient case details comprise a procedure for the dental site.4. The computer readable medium of claim ...

01-02-2018 publication date

IMAGE PROCESSING METHOD AND APPARATUS FOR DETERMINING DEPTH WITHIN AN IMAGE

Number: US20180033157A1

An image processing method and apparatus for determining depth in an original image captured by a light field image capture device, in which a light field analysis algorithm is applied a plurality of times to the original image, changing the focus setting each time, so as to generate a respective plurality of scene images focused at different depths; edge detection is performed in respect of each of the scene images to generate a respective plurality of edge detected images; area identification is performed in respect of each edge detected image to generate a respective plurality of area identification images indicative of areas of respective edge detected images in which edges have been detected; and the area identification images are applied to respective scene images so as to extract from the scene images respective image segments corresponding to the areas in which edges have been detected. 1. An image processing method for determining depth in an original image captured by a light field image capture device , the method comprising:applying a light field analysis algorithm a plurality of times to said original image, changing the focus setting each time, so as to generate a respective plurality of scene images focused at different depths;performing edge detection in respect of each of said scene images to generate a respective plurality of edge detected images;performing area identification in respect of each edge detected image to generate a respective plurality of area identification images indicative of areas of respective edge detected images in which edges have been detected; andapplying said area identification images to respective scene images so as to extract from said scene images respective image segments corresponding to said areas in which edges have been detected;wherein said plurality of scene images is generated by applying a different respective focus setting to said original image and storing the image thus generated, wherein the focus setting ...
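A condensed sketch of that pipeline: refocus the light field at several depths, detect edges in each slice (edges are sharp where the slice is in focus), dilate them into areas, and label those areas with the slice depth. The dilation kernel size is an illustrative choice, and later slices simply overwrite earlier ones where areas overlap:

    import cv2
    import numpy as np

    def depth_from_focus_sweep(scene_images, depths, lo=50, hi=150):
        """scene_images: grayscale uint8 slices refocused at the given
        depths. Returns a coarse depth map built from per-slice edge
        areas."""
        depth_map = np.zeros(scene_images[0].shape[:2], np.float32)
        kernel = np.ones((15, 15), np.uint8)
        for img, z in zip(scene_images, depths):
            edges = cv2.Canny(img, lo, hi)       # edge detected image
            area = cv2.dilate(edges, kernel)     # area identification image
            depth_map[area > 0] = z
        return depth_map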

05-02-2015 publication date

Impact time from image sensing

Number: US20150035990A1
Assignee: Sick IVP AB

Impact time between an image sensing circuitry and an object relatively moving at least partially towards, or away from, the image sensing circuitry can be computed. Image data associated with a respective image frame of a sequence (1 . . . N) of image frames sensed by said image sensing circuitry and which image frames are imaging said object can be received. For each one (i) of multiple pixel positions, a respective duration value (f(i)) indicative of a largest duration of consecutively occurring local extreme points in said sequence (1 . . . N) of image frames can be computed. A local extreme point is present in a pixel position (i) when an image data value of the pixel position (i) is a maxima or minima in relation to image data values of those pixel positions that are closest neighbours to said pixel position (i).
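The duration values f(i) can be computed in a single pass over the frames: a per-pixel run counter increments while the pixel stays a local extreme point and resets otherwise, with the maximum run retained. A numpy sketch over an (N, H, W) stack using the 8-neighbourhood:

    import numpy as np

    def duration_values(frames):
        """frames: (N, H, W) stack. Returns f(i): the longest run of
        consecutive frames in which each pixel is a local max or min
        relative to its 8 closest neighbours."""
        n, h, w = frames.shape
        run = np.zeros((h, w), dtype=int)
        best = np.zeros((h, w), dtype=int)
        for t in range(n):
            f = frames[t]
            pad = np.pad(f, 1, mode="edge")
            neigh = np.stack([pad[dr:dr + h, dc:dc + w]
                              for dr in range(3) for dc in range(3)
                              if (dr, dc) != (1, 1)])
            extreme = (f >= neigh.max(axis=0)) | (f <= neigh.min(axis=0))
            run = np.where(extreme, run + 1, 0)
            best = np.maximum(best, run)
        return best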

04-02-2021 publication date

SYSTEM AND METHOD FOR AUTONOMOUS EXPLORATION FOR MAPPING UNDERWATER ENVIRONMENTS

Number: US20210031891A1

Embodiments of the present disclosure are directed towards a system and method for performing an inspection of an underwater environment. Embodiments may include providing an autonomous underwater vehicle (“AUV”) and performing an inspection of an underwater environment using the AUV. Embodiments may further include acquiring real-time sensor data during the inspection of the underwater environment and applying an active simultaneous localization and mapping (“SLAM”) algorithm during the inspection, wherein applying includes estimating one or more virtual landmarks based upon, at least in part, at least one past measurement and a current estimate of AUV activity. 1. A method for performing an inspection of an underwater environment comprising:providing an autonomous underwater vehicle (“AUV”);performing an inspection of an underwater environment using the AUV;acquiring real-time sensor data during the inspection of the underwater environment; andapplying an active simultaneous localization and mapping (“SLAM”) algorithm during the inspection, wherein applying includes estimating one or more virtual landmarks based upon, at least in part, at least one past measurement and a current estimate of AUV activity.2. The method for performing an inspection of an underwater environment of claim 1 , wherein the AUV includes a sensor configuration consisting of one or more multibeam sonars claim 1 , lidars claim 1 , and cameras.3. The method for performing an inspection of an underwater environment of claim 1 , further comprising:using one or more fiducial markers to facilitate localization and mapping of an infrastructure by the AUV.4. The method for performing an inspection of an underwater environment of claim 1 , further comprising:segmenting three-dimensional (“3D”) data associated with the real-time sensor data for at least one of segment, object, and place recognition.5. The method for performing an inspection of an underwater environment of claim 1 , further comprising: ...

01-05-2014 publication date

Systems and Methods of Merging Multiple Maps for Computer Vision Based Tracking

Number: US20140119598A1
Assignee: Qualcomm Inc

Method, apparatus, and computer program product for merging multiple maps for computer vision based tracking are disclosed. In one embodiment, a method of merging multiple maps for computer vision based tracking comprises receiving a plurality of maps of a scene in a venue from at least one mobile device, identifying multiple keyframes of the plurality of maps of the scene, and merging the multiple keyframes to generate a global map of the scene.

02-02-2017 publication date

Method and apparatus for obtaining an image with motion blur

Number: US20170034429A1
Assignee: Alcatel Lucent SAS

Method for obtaining an image containing a portion with motion blur, comprising: controlling at least one camera to take a first, second and third picture in a determined order of an object and a background, such that said first picture is taken with a first exposure time, said second picture with a second exposure time, and said third picture with a third exposure time, said second exposure time being longer than said first and said third exposure time, such that said second picture contains a blurred image of the background and/or the object if said object and/or said background is moving with respect to said at least one camera; generating a final image containing at least a portion of said blurred image of the second picture as well as a portion derived from said first and/or third picture, using said first, second and third picture.
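Composing the final image is then a masked blend: sharp object pixels from the short exposures, blurred background from the long one. A sketch, where mask is a hypothetical object segmentation in [0, 1] rather than anything specified by the source:

    import numpy as np

    def blend_motion_blur(short1, long_blurred, short2, mask):
        """Keep the sharp object (mask == 1) from the short exposures and
        the motion-blurred background (mask == 0) from the long exposure."""
        sharp = 0.5 * (short1.astype(np.float64) + short2.astype(np.float64))
        out = mask[..., None] * sharp + (1.0 - mask[..., None]) * long_blurred
        return np.clip(out, 0, 255).astype(np.uint8)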

17-02-2022 publication date

Providing a scene with synthetic contrast

Number: US20220051401A1
Author: Tobias Lenich
Assignee: Siemens Healthcare GmbH

A computer-implemented method for providing a scene with synthetic contrast includes receiving preoperative image data of an examination region containing a hollow organ, wherein the preoperative image data images a contrast agent flow in the hollow organ; receiving intraoperative image data of the examination region of the examination subject, wherein the intraoperative image data images a medical object at least partially disposed in the hollow organ; generating the scene with synthetic contrast by applying a trained function to input data, wherein the input data is based on the preoperative image data and the intraoperative image data, wherein the scene with synthetic contrast images a virtual contrast agent flow in the hollow organ taking into account the medical object disposed therein, wherein at least one parameter of the trained function is based on a comparison between a training scene and a comparison scene; and providing the scene with synthetic contrast.

31-01-2019 publication date

COLLECTING AND VIEWING THREE-DIMENSIONAL SCANNER DATA IN A FLEXIBLE VIDEO FORMAT

Number: US20190035053A1

A method interactively displays panoramic images of a scene. The method includes measuring 3D coordinates with a 3D measuring instrument at a first position and a second position. The 3D coordinates are registered into a common frame of reference. Within the scene, a trajectory includes a plurality of trajectory points. Along the trajectory, 2D images are generated from the commonly registered 3D coordinates. A trajectory display mode sequentially displays a collection of 2D images at the trajectory points. A rotational display mode allows a user to select a desired view direction at a given trajectory point. The user selects the trajectory display mode or the rotational display mode and sees the result shown on the display device. 1. A method of interactively displaying panoramic images of a scene, the method comprising: measuring a first plurality of 3D coordinates with a 3D measuring instrument at a first position; measuring a second plurality of 3D coordinates with the 3D measuring instrument at a second position different than the first position; registering the first plurality of 3D coordinates and the second plurality of 3D coordinates together in a common frame of reference; providing a trajectory within the scene, the trajectory including a collection of trajectory points; generating along the trajectory a plurality of two-dimensional (2D) images at each trajectory point; and displaying the 2D image on a display device in one of a rotational display mode or a trajectory display mode in response to an input from a user, the trajectory display mode being configured to display the 2D images sequentially along the trajectory points, the rotational display mode being configured to display the 2D images at a single trajectory point from a user-defined view direction. 2. The method of claim 1, further comprising providing a user control. 3. The method of claim 2, wherein the user-defined view direction is selected from a plurality of observer view directions. 4. The ...

04-02-2021 publication date

METHOD FOR AUTONOMOUS DETECTION OF CROP LOCATION BASED ON TOOL DEPTH AND LOCATION

Number: US20210034057A1

A method for detecting real lateral locations of target plants includes: recording an image of a ground area at a camera; detecting a target plant in the image; accessing a lateral pixel location of the target plant in the image; for each tool module in a set of tool modules arranged behind the camera and in contact with a plant bed: recording an extension distance of the tool module; and recording a lateral position of the tool module relative to the camera; estimating a depth profile of the plant bed proximal the target plant based on the extension distance and the lateral position of each tool module; estimating a lateral location of the target plant based on the lateral pixel location of the target plant and the depth profile of the plant bed surface proximal the target plant; and driving a tool module to a lateral position aligned with the lateral location of the target plant. 1. A method comprising , at an autonomous machine:capturing a first image of a ground area of a plant bed surface via a ground-facing camera arranged on the autonomous machine;detecting a first target plant in the first image;accessing a lateral pixel location of the first target plant in the first image;via a depth sensor arranged on the autonomous machine, estimating a depth of a subregion of the ground area; capturing an extension distance of the tool module; and', 'capturing a lateral position of the tool module relative to the ground-facing camera;, 'for each tool module in a set of tool modules in contact with the plant bed surface and arranged behind the ground-facing camera relative to a direction of forward motion of the autonomous machineestimating a surface profile of the plant bed surface based on the extension distance of each tool module in the set of tool modules and the lateral position of each tool module in the set of tool modules;estimating a depth profile based on the surface profile and the depth of the subregion of the ground area;estimating a real lateral location ...

04-02-2021 publication date

Augmented reality system capable of manipulating an augmented reality object

Number: US20210034870A1
Author: Tae Jin HA
Assignee: Virnect Inc

An augmented reality system according to the present invention comprises a mobile terminal which, in displaying a 3D virtual image on a display, displays a dotted guide along the boundary of characters displayed on the display and when handwriting is detected along the dotted guide, recognizes the characters and displays a virtual object corresponding to the content of the characters, wherein, if the virtual object is touched, a pre-configured motion of the virtual object corresponding to the touched area is reproduced.

31-01-2019 publication date

FAST FOCUS USING DUAL CAMERAS

Number: US20190037128A1
Author: Shan Jizhang, Wang Chao

A method of focusing dual cameras including receiving a fixed focus image from a fixed focus camera module, receiving an auto focus image from an auto focus camera module, calibrating a lens distortion of the auto focus camera module, calibrating a geometric relation of the auto focus camera module and the fixed focus camera module, calculating a depth of focus difference between the fixed focus image and the auto focus image, estimating an auto focus position based on the depth of focus difference and setting the auto focus position based on the estimation. 1. A method of focusing dual cameras , comprising:receiving a fixed focus image from a fixed focus camera module;receiving an auto focus image from an auto focus camera module;calculating a depth of focus difference between the fixed focus image and the auto focus image;estimating an auto focus position based on the depth of focus difference; andsetting the auto focus position based on the estimation.2. The method of focusing dual cameras of claim 1 , further comprising calibrating a lens distortion of the auto focus camera module.3. The method of focusing dual cameras of claim 1 , further comprising calibrating a geometric relation of the auto focus camera module and the fixed focus camera module.4. The method of focusing dual cameras of claim 3 , wherein the geometric relation comprises a focal length ratio between the auto focus camera module and the fixed focus camera module.5. The method of focusing dual cameras of claim 3 , wherein the geometric relation comprises a set of rotation angles between the auto focus camera module and the fixed focus camera module.6. The method of focusing dual cameras of claim 3 , wherein the geometric relation comprises an optical center shift between the auto focus camera module and the fixed focus camera module.7. The method of focusing dual cameras of claim 1 , further comprising window matching between the auto focus camera module and the fixed focus camera module.8. A ...
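Once a depth estimate exists, the focus position follows from first-order optics. An illustrative mapping via the thin-lens equation (the focal length below is an arbitrary example, and a real module converts the image distance to a motor position through calibration):

    def image_distance_for_focus(depth_m, focal_length_mm=4.0):
        """Thin-lens equation 1/f = 1/d + 1/v, solved for the image
        distance v that brings a subject at depth_m into focus."""
        f = focal_length_mm / 1000.0
        return 1.0 / (1.0 / f - 1.0 / depth_m)   # image distance v, metres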

31-01-2019 publication date

Method and apparatus for processing image

Number: US20190037196A1
Assignee: SAMSUNG ELECTRONICS CO LTD

An apparatus for processing an image includes: a first display of which an optical focal distance is a first distance; a second display of which an optical focal distance is a second distance; a processor configured to determine a first value of a first pixel of the first display and a second value of a second pixel of the second display according to a depth value of a first image to be output; and an image converging member configured to overlap the first pixel and the second pixel and output the first image corresponding to the depth value.

30-01-2020 publication date

CONSTRUCTING A USER'S FACE MODEL USING PARTICLE FILTERS

Number: US20200036961A1
Author: Surkov Sergey

Constructing a user's face model using particle filters is disclosed, including: using a first particle filter to generate a new plurality of sets of extrinsic camera information particles corresponding to respective ones of a plurality of images based at least in part on a selected face model particle; selecting a subset of the new plurality of sets of extrinsic camera information particles corresponding to respective ones of the plurality of images; and using a second particle filter to generate a new plurality of face model particles corresponding to the plurality of images based at least in part on the selected subset of the new plurality of sets of extrinsic camera information particles. 1. (canceled)2. A system , comprising: receive a plurality of face images of a face at various orientations;', 'apply an extrinsic camera particle filter to determine a first plurality of sets of extrinsic camera information particles corresponding to respective ones of the plurality of face images based at least in part on a first selected face model particle and a previous set of extrinsic camera information particles;', 'select a subset of the first plurality of sets of extrinsic camera information particles corresponding to respective ones of the plurality of face images;', 'apply a face model particle filter to determine a first plurality of face model particles corresponding to the plurality of face images based at least in part on the selected subset of the first plurality of sets of extrinsic camera information particles and a previous set of face model particles;', 'select a second face model particle included in the first plurality of face model particles; and', 'determine a 3D face model based at least in part on the first plurality of face model particles; and', 'a memory coupled to the processor and configured to provide the processor with instructions., 'a processor configured to3. The system of claim 2 , wherein the first selected face model particle comprises a ...

12-02-2015 publication date

Depth calculation device, imaging apparatus, and depth calculation method

Number: US20150043783A1
Author: Keiichiro Ishihara
Assignee: Canon Inc

A depth calculation device for calculating depth information on an object from captured first image and second image with different blur, the depth calculation device comprising: an extraction unit configured to extract a first frequency component and a second frequency component from each of the first image and the second image, the first frequency component being a component of a first frequency band, the second frequency component being a component of a second frequency band, the second frequency band being lower than the first frequency band; and a depth calculation unit configured to calculate the depth information from the frequency components extracted by the extraction unit.
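The two-band idea can be prototyped with differences of Gaussians: compare how strongly each band is attenuated between the two differently blurred shots, and the ratio of attenuations becomes a defocus (hence depth) cue. A sketch with illustrative band scales, not values from the application:

    import numpy as np
    from scipy import ndimage

    def band(im, s_lo, s_hi):
        """Band-pass component between blur scales s_lo < s_hi (difference
        of Gaussians); s_lo == 0 means no pre-blur."""
        low = ndimage.gaussian_filter(im, s_lo) if s_lo > 0 else im
        return low - ndimage.gaussian_filter(im, s_hi)

    def defocus_cue(img1, img2, s1=1.0, s2=4.0):
        """Per-pixel ratio of high-band to low-band attenuation between two
        shots with different blur; mapped to depth by calibration."""
        eps = 1e-6
        att_hi = (np.abs(band(img2, 0, s1)) + eps) / (np.abs(band(img1, 0, s1)) + eps)
        att_lo = (np.abs(band(img2, s1, s2)) + eps) / (np.abs(band(img1, s1, s2)) + eps)
        return att_hi / att_lo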

08-02-2018 publication date

METHOD FOR PERFORMING OUT-FOCUS USING DEPTH INFORMATION AND CAMERA USING THE SAME

Number: US20180041748A1

A camera and a method for extracting depth information by the camera having a first lens and a second lens are provided. The method includes photographing, by the first lens, a first image; photographing, by the second lens, a second image of a same scene; down-sampling the first image to a resolution of the second image if the first image is an image having a higher resolution than a resolution of the second image; correcting the down-sampled first image to match the down-sampled first image to the second image; and extracting the depth information from the corrected down-sampled first image and the second image. 1. A method for performing out-focus of a camera having a first lens and a second lens , comprising:photographing a first image with the first lens and photographing a second image with the second lens of a same scene;extracting depth information of the photographed first image and the photographed second image using the photographed first image and the photographed second image; andperforming out-focus on the first image or the second image using the extracted depth information.2. The method as claimed in claim 1 , wherein the depth information includes information on depth of respective pixels of at least one of the first image and the second image.3. The method as claimed in claim 1 , wherein the first image is an image having a higher resolution than a resolution of the second image.4. The method as claimed in claim 3 , wherein extracting the depth information comprises: down-sampling the first image to a resolution of the second image; correcting the first down-sampled image to match the second image; and extracting a depth map from the first corrected image and the second image.5. The method as claimed in claim 4 , wherein extracting the depth information further comprises up-sampling the extracted depth map.6. The method as claimed in claim 1 , wherein the first image has a higher optical magnification than an optical magnification of the second ...
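After down-sampling and correction, depth extraction is ordinary stereo matching. A sketch using OpenCV's block matcher on an already-corrected grayscale pair (which of the two lenses is "left" depends on the actual camera layout):

    import cv2

    def depth_map_from_dual_lens(img_hi_gray, img_lo_gray):
        """Down-sample the higher-resolution image to the second image's
        size and compute a disparity (depth) map; assumes both images are
        8-bit grayscale and already matched/rectified as described above."""
        h, w = img_lo_gray.shape[:2]
        small = cv2.resize(img_hi_gray, (w, h), interpolation=cv2.INTER_AREA)
        matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        return matcher.compute(small, img_lo_gray)   # fixed-point disparity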

07-02-2019 publication date

Method and system of recurrent semantic segmentation for image processing

Number: US20190043203A1
Assignee: Intel Corp

A system, article, and method of recurrent semantic segmentation for image processing by factoring historical semantic segmentation.

24-02-2022 publication date

AUGMENTED REALITY SYSTEM

Number: US20220060481A1

A computer-implemented method for an augmented-reality system is provided. The computer-implemented method comprises obtaining sensed data, representing an environment in which the AR system is located, determining that the AR system is in a location associated with a first authority characteristic, and controlling access to the sensed data for one or more applications operating in the AR system. Each of the one or more applications is associated with a respective authority characteristic. Controlling access to the sensed data for a said application is performed in dependence on the first authority characteristic and a respective authority characteristic associated with the said application. An AR system comprising one or more sensors, storage for storing sensed data, one or more application modules, and one or more processors arranged to perform the computer-implemented method is provided. A non-transitory computer-readable storage medium comprising computer-readable instructions for performing the computer-implemented method is also provided. 1. A computer-implemented method for an augmented reality , AR , system , the method comprising:obtaining sensed data representing an environment in which an AR system is located;determining that the AR system is in a location associated with a first authority characteristic; andcontrolling access to the sensed data for one or more applications operating in the AR system,wherein each of the one or more applications is associated with a respective authority characteristic and controlling access to the sensed data for a said application is performed in dependence on the first authority characteristic and a respective authority characteristic associated with the said application.2. The computer-implemented method of claim 1 , wherein controlling access to the sensed data for the said application comprises:securing the sensed data within a secure environment in the AR system;if the respective authority characteristic associated ...

24-02-2022 publication date

ACTIVE GIMBAL STABILIZED AERIAL VISUAL-INERTIAL NAVIGATION SYSTEM

Number: US20220060628A1
Author: Rawal Naman

A vehicle navigation system can acquire a plurality of images with a camera; determine at least one feature in one or more image of the plurality of images; reduce, via image feature tracking, a rotational noise associated with a motion of the camera in the one or more images; determine one or more keyframes based on the one or more images with reduced rotational noise; determine an optical flow of one or more of the plurality of images based on the one or more keyframes; determine a predicted depth of the at least one feature based on the optical flow; determine a pose and a motion of the camera based on the optical flow and the predicted depth of the at least one feature; and determine a first pose and a first motion of the vehicle based on the determined pose and motion of the camera and gimbal encoder information. 1. A method of vehicle navigation , the method comprising:acquiring a plurality of images with a camera while a vehicle is operating, wherein the camera is mounted to a gimbal mounted to the vehicle;determining, using processing circuitry, at least one feature in one or more image of the plurality of images;tracking, via the gimbal, the at least one feature, wherein tracking the at least one feature comprises causing, by the processing circuitry, the gimbal to move the camera such that rotational noise associated with motion of the vehicle in one or more of the plurality of images is reduced;determining, using the processing circuitry, an optical flow of one or more of the plurality of images based on the one or more images having reduced rotational noise;determining, using the processing circuitry, a pose and a motion of the camera for each of the one or more images of the plurality of images based on the determined optical flow;determining, using the processing circuitry, a first pose and a first motion of the vehicle based on the determined pose and motion of the camera and gimbal encoder information; andcausing, using the processing circuitry, the ...

19-02-2015 publication date

System and method for focusing imaging devices

Number: US20150049238A1
Assignee: Navigate Surgical Technologies Inc

A system and method for automatically focusing imaging devices on an imaging set employs at least one tracker and two or more tracking markers, each tracking marker having an identification means and a tracking pattern. The tracking markers are configured for attaching to the imaging devices and to corresponding subjects to be imaged. A tracker gathers image information of the imaging set and provides it to a controller, which compares the image information to predetermined stored information about the tracking patterns of the various tracking markers. The tracking markers are identified and their three-dimensional positions determined. The distances between the imaging devices and the subjects are then calculated. This provides the focus setting information for communication to the imaging devices. The tracking patterns may have no rotational symmetry, allowing the orientation of subjects to be determined.

06-02-2020 publication date

COMMUNICATION TERMINAL, COMMUNICATION SYSTEM, COMMUNICATION CONTROL METHOD, AND RECORDING MEDIUM

Number: US20200043188A1
Author: KATO Yoshinaga

A communication terminal communicably connected with a server, including circuitry to: obtain first image data of at least a part of a first object detected at a first point of time; transmit the first image data to the server to request verification of the first image data; obtain second image data of at least a part of a second object detected at a second point of time, the second point of time being a time later than the first point of time; calculate a distance between a first position indicating a position of the first object at the first point of time, and a second position indicating a position of the second object at the second point of time; and control not to transmit the second image data based on a determination indicating that the calculated distance is equal to or less than a threshold. 1. A communication terminal communicably connected with a server, comprising circuitry configured to: obtain first image data of at least a part of a first object detected at a first point of time; transmit the first image data to the server to request verification of the first image data; obtain second image data of at least a part of a second object detected at a second point of time, the second point of time being a time later than the first point of time; calculate a distance between a first position indicating a position of the first object at the first point of time, and a second position indicating a position of the second object at the second point of time; and control not to transmit the second image data based on a determination indicating that the calculated distance is equal to or less than a threshold. 2. The communication terminal of claim 1, wherein the circuitry determines that the first object and the second object are a same object when the calculated distance is equal to or less than the threshold, and determines that the first object and the second object are different objects when the calculated distance is greater than the threshold. 3. The communication ...

06-02-2020 publication date

INCIDENT SITE INVESTIGATION AND MANAGEMENT SUPPORT SYSTEM BASED ON UNMANNED AERIAL VEHICLES

Number: US20200043229A1

Systems and methods allow for an incident data collection and management system based on unmanned aerial vehicles (UAVs), that is, drones, to help accelerate data collection and analytics, information dissemination, and decision support at incident sites. The system architecture may include onsite, server, and offline components including a flight planning subsystem, a flight execution and mission control subsystem, an information dissemination subsystem to travelers and traveler information services, the interface with the traffic management center, and the data analytic, visualization, and training subsystems. Other embodiments include the video-based 3D incident site reconstruction methods, site positioning and scaling methods with pre-collected static background infrastructure data, data management and user charging methods, and training methods with the generated 3D model. 1. A method of controlling access of 3-D model data comprising: a) defining different tiers of users to include: i) Tier 1 users including Public Safety Department, Transportation Safety/Management Agencies, and Incident Response Teams who can get access to high-resolution 3-D models with full details for reporting, analysis, and training; ii) Tier 2 users including Insurance/Medical Companies who can get access to a report and a 3-D model of detailed damaged vehicle parts and site infrastructures for damage-liability assessment; iii) Tier 3 users including travelers involved in an incident who can get access to detailed damage reports, a 3D view of an incident site, and one or more images for insurance claims and legal disputes; iv) Tier 4 users including other travelers, data analytic agency departments and consulting companies who can get access to anonymized, aggregated data and crash reports, per request/purchase; b) delivering data between the different tiers of users according to data limitations regarding accessible level of details; and c) determining a charged cost to the users ...

07-02-2019 publication date

VIRTUAL REALITY VIDEO PROCESSING

Number: US20190045125A1

A method is disclosed, comprising providing video data representing a plurality of frames of virtual reality content captured by a camera. A further step comprises determining a reference depth or object within the content. A further step comprises adjusting the position of content in one or more frames, based on the reference depth or object, to compensate for the movement of the camera during capture. 1. A method comprising:providing video data representing a plurality of frames of virtual reality content captured by a camera;determining a reference depth within the virtual reality content; andadjusting position of the virtual reality content in one or more frames, based on the reference depth, to compensate for movement of the camera during capture.2. The method of claim 1 , wherein the adjusting the position of the virtual reality content comprises panning the one or more frames in substantially the same direction as the movement of the camera.3. The method of claim 2 , wherein the adjusting the position of the content further comprises panning the one or more frames in a direction substantially opposite to the movement of the camera claim 2 , wherein panning the one or more frames in substantially the same direction as the movement of the camera is based on the reference depth claim 2 , and wherein panning the one or more frames in a direction substantially opposite to the movement of the camera is based on an amount of the movement of the camera.3. A non-transitory computer-readable medium comprising stored thereon computer-readable code claim 2 , which claim 2 , when executed by at least one processor claim 2 , causes the at least one processor to perform:provide video data representing a plurality of frames of virtual reality content captured by a camera;determine a reference depth within the virtual reality content; andadjust position of the virtual reality content in one or more frames, based on the reference depth, to compensate for movement of the camera ...

18-02-2021 publication date

OPTICAL DISPLAY, IMAGE CAPTURING DEVICE AND METHODS WITH VARIABLE DEPTH OF FIELD

Number: US20210051315A1
Assignee: EVERYSIGHT LTD.

An optical display, suitable for use in a wearable device such as a headset, comprises a pixelated illumination array and a fiber bundle, optionally in the form of a plate. The plate is formed of a plurality of parallel optical fibers extending in the direction of the thickness of the plate, the fibers are of at least two different lengths and are arranged in a spatially sequential pattern over at least a part of the plate, and the array is arranged to provide illumination out of the fibers. Different length fibers may be illuminated to present images at different distances from the user's eyes. The fiber bundle plate may also form part of an image capturing device and be used in the creation of a depth map for a captured image. 1.-29. (canceled) 30. An optical projection system comprising: a pixelated illumination image source; at least one fiber bundle having backward ends coupled to the pixelated illumination image source and forward ends; and an optical arrangement comprising lenses and reflectors having a back focal plane which is located between the forward ends and a viewer, wherein the at least one fiber bundle is formed of a plurality of parallel optical fibers extending forward from the pixelated illumination image source, wherein the fibers are of at least two different lengths and are arranged in a spatially sequential pattern over at least a part of the fiber bundle, and wherein the pixelated illumination image source is arranged to provide illumination out of the fibers. 31. The optical projection system of claim 30, wherein each fiber has a first and second end, wherein the fibers are arranged such that the first ends of the fibers are aligned and flat and the second ends of the fibers are not aligned as a result of said difference in length, and wherein the aligned ends of the fibers face towards the pixelated illumination image source. 32. The optical projection system of claim 30, wherein the fibers are arranged in at least two different spatially sequential ...

13-02-2020 publication date

SYSTEM AND METHOD OF PERSONALIZED NAVIGATION INSIDE A BUSINESS ENTERPRISE

Number: US20200049509A1
Author: Hill Edward L.

Systems and methods for tracking movement of individuals through a building receive, by one or more RF nodes disposed near an entrance to the building, RF signals from RF-transmitting mobile devices carried by persons near the entrance, capture an image of the persons while they are near the entrance, determine an identity and relative distance of each RF-transmitting mobile device from each RF node based on information associated with the RF signals received by that RF node, detect humans in the image, determine a relative depth of each human in the image, and assign the identity of each RF-transmitting mobile device to one of the humans detected in the image based on the relative distance of each RF-transmitting mobile device from each RF node and the relative depth of each human in the image, thereby identifying each individual who is to be tracked optically as that individual moves throughout the building. 1. A system for tracking locations of individuals in a building, the system comprising: at least one radiofrequency (RF) node disposed near an entrance to the building, the at least one RF node having an RF receiver to receive RF signals from RF-transmitting devices near the entrance to the building; at least one optical device disposed near the entrance to the building, the at least one optical device capturing an image of a plurality of persons while the plurality of persons is near the entrance to the building; and a controller in communication with the at least one RF node to obtain therefrom information associated with the RF signals received by the RF receiver of that at least one RF node and in communication with the at least one optical device to obtain therefrom the captured image, the controller being configured to determine an identity of each RF-transmitting device and an angular position of that RF-transmitting device with respect to each RF node of the at least one RF node based on the information associated with the RF signals obtained by the ...
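The assignment step, matching each RF-ranged device to one optically detected person, is a small bipartite matching problem. A sketch using the Hungarian algorithm over the mismatch between RF distances and image depths, assuming both are expressed in comparable units:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def assign_devices_to_people(rf_distances, person_depths):
        """rf_distances: (D,) per-device range estimates from the RF nodes.
        person_depths: (P,) relative depths of humans detected in the image.
        Returns {device_index: person_index} minimising total mismatch."""
        cost = np.abs(np.asarray(rf_distances)[:, None] -
                      np.asarray(person_depths)[None, :])
        dev_idx, person_idx = linear_sum_assignment(cost)
        return dict(zip(dev_idx, person_idx))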

14-02-2019 publication date

METHOD OF IMAGE PROCESSING AND IMAGE PROCESSING DEVICE

Number: US20190051016A1

A method of image processing is provided. The method may include: determining a candidate tuple from at least two images that are taken at different times, wherein the candidate tuples are determined using at least odometry sensor information. The couple of subsequent images have been detected by a moving image sensor moved by a vehicle. The odometry sensor information is detected by a sensor moved by the vehicle. The method may further include classifying the candidate tuples into a static tuple or a dynamic tuple. The static tuple represents a static object within the couple of subsequent images, and the dynamic tuple represents a moving object within the couple of subsequent images. 1. A method of image processing, the method comprising: determining a candidate tuple from at least two images that are taken at different times, wherein the candidate tuples are determined using at least odometry sensor information, and wherein the couple of subsequent images have been detected by a moving image sensor moved by a vehicle, wherein the odometry sensor information is detected by a sensor moved by the vehicle; classifying the candidate tuples into a static tuple or a dynamic tuple, wherein the static tuple represents a static object within the couple of subsequent images, and the dynamic tuple represents a moving object within the couple of subsequent images. 2. The method of claim 1, wherein the candidate tuple is classified into a static tuple or a dynamic tuple by comparing a first angle of translation of the vehicle as estimated from the odometry sensors in the vehicle with a second angle of translation of the vehicle as estimated from the candidate tuples. 3. The method of claim 1, wherein a candidate tuple is a static tuple if the candidate tuple satisfies Y1*(TcameraX*cos(θ) − TcameraY*sin(θ)) − Y2*(TcameraY + TcameraX*X1) + Y1*X2*(TcameraX*cos(θ) + TcameraY*sin(θ)) = 0 ...

Подробнее
25-02-2021 publication date

Method for processing image in virtual reality display device and related virtual reality display device

Number: US20210056719A1

A method for processing an image in a virtual reality display device and a related virtual reality display device are provided. A gaze area and a non-gaze area are determined in a display area of a virtual reality display device. At a first time instant, a rendering process is performed in the gaze area and the non-gaze area to generate a first image. Based on attitude information of the virtual reality display device at a second time instant, a time warping process is performed on the first image to generate a second image. Based on movement information and attribute information of a motion object in the gaze area, the second image is modified. By modifying the second image, problems caused by the time warping process can be avoided, delays can be reduced, and smearing or ghosting can be avoided, so that a good display effect is obtained.

13-02-2020 publication date

COLLECTING AND VIEWING THREE-DIMENSIONAL SCANNER DATA WITH USER DEFINED RESTRICTIONS

Number: US20200051205A1
Assignee:

A method displays images of a scene with restrictions. The method includes measuring a first plurality of 3D coordinates and a second plurality of 3D coordinates with a 3D measuring instrument at a first position and a second position. The first plurality of 3D coordinates and the second plurality of 3D coordinates are registered together in a common frame of reference. A trajectory is defined within the scene that includes a plurality of trajectory points, the plurality of trajectory points including a first trajectory point and a second trajectory point. At least one restriction is defined at the first trajectory point. A plurality of 2D images are generated at each trajectory point, a first 2D image is associated with the first trajectory point. The first 2D image is changed based on the at least one restriction. The first 2D image is displayed on a display device. 1. A method of interactively displaying panoramic images of a scene , the method comprising:measuring a first plurality of 3D coordinates with a 3D measuring instrument at a first position;measuring a second plurality of 3D coordinates with the 3D measuring instrument at a second position different than the first position;registering the first plurality of 3D coordinates and the second plurality of 3D coordinates together in a common frame of reference;defining a trajectory within the scene, the trajectory including a plurality of trajectory points, the plurality of trajectory points including a first trajectory point and a second trajectory point;defining at least one restriction at the first trajectory point;generating along the trajectory a plurality of two-dimensional (2D) images at each trajectory point, the plurality of 2D images includes a first 2D image associated with the first trajectory point;changing the first 2D image based on the at least one restriction; anddisplaying the first 2D image on a display device of the first trajectory point in response to an input from a user.2. The method of ...

13-02-2020 publication date

Method and system for determining spatial coordinates of a 3D reconstruction of at least part of a real object at absolute spatial scale

Number: US20200051267A1
Assignee:

Determining spatial coordinates of a 3D reconstruction includes obtaining, from a first camera system, a first image comprising a first real object, obtaining, from a second camera system, a second image comprising a second real object associated with known geometric properties, wherein the first camera system and the second camera system have a known spatial relationship, and determining a scale of the face based on the second image and the known geometric properties of the at least part of the second real object. Determining the spatial coordinates of the 3D reconstruction also includes determining a pose of the first and second camera systems, and determining the spatial coordinates based on the pose of the first camera system and the scale of the at least part of the second real object. 1. A method of determining spatial coordinates of a 3D reconstruction of at least part of a first real object comprising:obtaining, from a first camera system, a first image comprising at least part of a first real object;obtaining, from a second camera system, a second image comprising at least part of a second real object associated with known geometric properties, wherein the first camera system and the second camera system have a known spatial relationship;determining a scale of at least part of the face based on the second image and the known geometric properties of the at least part of the second real object;determining a pose of the second camera system based on the determined scale of the at least part of the second real object;determining a pose of the first camera system according to the pose of the second camera system and the known spatial relationship; anddetermining, based on the pose of the first camera system and the scale of the at least part of the second real object, spatial coordinates of a 3D reconstruction of the first real object.2. The method of claim 1 , wherein the second real object comprises at least part of a human face.3. The method of claim 2 , ...
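
The scale-determination step can be illustrated for the case, named in claim 2, where the second real object is part of a human face. A minimal Python sketch, assuming the known geometric property is an interpupillary distance known_ipd_m (an illustrative choice) and that the second camera's focal length in pixels, the eyes' pixel positions, and their depth in the reconstruction's arbitrary units are available:

    import math

    def metric_scale_from_face(eye_l_px, eye_r_px, depth_units, focal_px,
                               known_ipd_m=0.063):
        # Pixel distance between the eyes in the second image.
        d_px = math.dist(eye_l_px, eye_r_px)
        # Back-project to the reconstruction's units via the pinhole model.
        ipd_units = d_px * depth_units / focal_px
        # Ratio of known metric size to reconstructed size rescales everything.
        return known_ipd_m / ipd_units

Multiplying every coordinate of the up-to-scale 3D reconstruction by the returned factor expresses it at absolute spatial scale.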

23-02-2017 publication date

Virtual and augmented reality systems and methods

Number: US20170053450A1
Assignee: Magic Leap Inc

A virtual or augmented reality display system that controls a display using control information included with the virtual or augmented reality imagery that is intended to be shown on the display. The control information can be used to specify one of multiple possible display depth planes. The control information can also specify pixel shifts within a given depth plane or between depth planes. The system can also enhance head pose measurements from a sensor by using gain factors which vary based upon the user's head pose position within a physiological range of movement.

10-03-2022 publication date

CAMERA SYSTEM WITH HIGH UPDATE RATE

Number: US20220075064A1
Assignee: ZF FRIEDRICHSHAFEN AG

A device comprising a processor designed to execute a motion estimation based on intensity images (A_Q+B_Q, A_I+B_I) from a time-of-flight camera to generate motion vectors. 1. A device comprising: a processor configured to execute a motion estimation based on intensity images (A_Q+B_Q, A_I+B_I) from a time-of-flight camera to generate motion vectors. 2. The device according to claim 1, wherein the processor is configured to reconstruct a depth image with compensation for movement based on phase images (A_Q−B_Q, A_I−B_I) and the motion vectors. 3. The device according to claim 2, wherein the processor is configured to obtain distance information (d) from two corresponding pixels ((x, y), (x′, y′)) from the phase data in the phase images (A_Q−B_Q, A_I−B_I). 5. The device according to claim 1, wherein the intensity images (A_Q+B_Q, A_I+B_I) are at least one of obtained from a sensor in a time-of-flight camera, or calculated by combining raw images (A_Q, B_Q, A_I, B_I). 6. The device according to claim 1, wherein the raw images (A_Q, B_Q, A_I, B_I) comprise first raw images (A_Q, A_I) obtained with modulation signals (Φ_Q, Φ_I) and second raw images (B_Q, B_I) obtained with inverted modulation signals (Φ̄_Q, Φ̄_I). 7. The device according to claim 1, wherein the intensity images (A_Q+B_Q, A_I+B_I) comprise one or more first intensity images (A_Q+B_Q) and second intensity images (A_I+B_I), wherein the first intensity images (A_Q+B_Q) and second intensity images (A_I+B_I) are obtained in different modulation periods. 8. The device according to claim 1, wherein the depth images are used to record the interior of a vehicle. 9. The device according to claim 1, wherein the processor is configured to generate a first depth image based on intensity images (A+B, A+B, A+B, A+B) and phase images (A−B, A−B, A−B, A−B) in a first ...
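
The role of the intensity and phase images can be sketched with the standard continuous-wave time-of-flight model, in which A and B are the two taps of a pixel, (A+B) is an illumination-robust intensity usable for motion estimation, and the (A−B) values from the Q and I modulation phases yield distance. A minimal Python sketch; the modulation frequency f_mod is an assumed parameter, and the formula is the textbook CW-ToF relation rather than anything specific to this application:

    import math

    C = 299_792_458.0  # speed of light, m/s

    def tof_distance(a_q, b_q, a_i, b_i, f_mod=20e6):
        # Phase-image values for one pixel: the quadrature and in-phase samples.
        phase_q = a_q - b_q
        phase_i = a_i - b_i
        # Phase of the returned modulation, wrapped into [0, 2*pi).
        phi = math.atan2(phase_q, phase_i) % (2 * math.pi)
        # d = c * phi / (4 * pi * f_mod) for continuous-wave ToF.
        return C * phi / (4 * math.pi * f_mod)

    def intensity(a, b):
        # Intensity-image value (A+B): independent of modulation phase,
        # hence usable for motion estimation between exposures.
        return a + b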

04-03-2021 publication date

METHOD FOR THE THREE DIMENSIONAL MEASUREMENT OF MOVING OBJECTS DURING A KNOWN MOVEMENT

Number: US20210063144A1
Author: Harendt Bastian
Assignee: Cognex Corporation

A 3D measurement method including: projecting a pattern sequence onto a moving object; capturing a first image sequence with a first camera and a second image sequence synchronously to the first image sequence with a second camera; determining corresponding image points in the two sequences; computing a trajectory of a potential object point from imaging parameters and from known movement data for each pair of image points that is to be checked for correspondence. The potential object point is imaged by both image points in case they correspond. Imaging object positions derived therefrom at each of the capture points in time into image planes respectively of the two cameras. Corresponding image point positions are determined as trajectories in the two cameras and the image points are compared with each other along predetermined image point trajectories and examined for correspondence; lastly performing 3D measurement of the moved object by triangulation. 1.-6. (canceled) 7. A method for three-dimensional measurement of a moving object, the method comprising: computing a trajectory of a potential object point of the moving object based on imaging parameters associated with a first image sequence of N patterns projected onto the moving object and a second image sequence of the N patterns and movement data for the moving object; determining, based on the trajectory of the potential object point and the imaging parameters associated with the first image sequence and the second image sequence, a first image point trajectory in the first image sequence and a second image point trajectory in the second image sequence; comparing image points along the first image point trajectory and image points along the second image point trajectory to determine that there is correspondence between the first image point trajectory and the second image point trajectory; and performing three-dimensional measurement of the moving object based on the corresponding first and second image point ...
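
The correspondence check can be sketched as follows: a hypothesized object point, moved along its known trajectory, projects to one image-point track per camera, and the pixel sequences sampled along the two tracks should agree only if the two image points really correspond. A minimal Python sketch, where project_cam1/project_cam2 stand in for the calibrated camera projections (assumed available) and normalized cross-correlation is one plausible similarity measure, not necessarily the application's:

    import numpy as np

    def sample_track(frames, track):
        # Pixel value of each frame at its predicted image point (nearest pixel).
        return np.array([frames[t][int(round(v)), int(round(u))]
                         for t, (u, v) in enumerate(track)])

    def is_correspondence(frames1, frames2, point_positions,
                          project_cam1, project_cam2, min_ncc=0.9):
        # point_positions: hypothesized 3D object-point position at each capture time.
        track1 = [project_cam1(p) for p in point_positions]
        track2 = [project_cam2(p) for p in point_positions]
        s1 = sample_track(frames1, track1).astype(float)
        s2 = sample_track(frames2, track2).astype(float)
        s1 = (s1 - s1.mean()) / (s1.std() + 1e-9)
        s2 = (s2 - s2.mean()) / (s2.std() + 1e-9)
        ncc = float(np.mean(s1 * s2))  # normalized cross-correlation of the tracks
        return ncc >= min_ncc          # accept, then triangulate the matched points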

01-03-2018 publication date

METHOD OF MULTI-VIEW DEBLURRING FOR 3D SHAPE RECONSTRUCTION, RECORDING MEDIUM AND DEVICE FOR PERFORMING THE METHOD

Number: US20180061018A1
Assignee:

A method of multi-view deblurring for 3-dimensional (3D) shape reconstruction includes: receiving images captured by multiple synchronized cameras at multiple viewpoints; performing iteratively estimation of depth map, latent image, and 3D motion at each viewpoint for the received images; determining whether image deblurring at each viewpoint is completed; and performing 3D reconstruction based on final depth maps and latent images at each viewpoint. Accordingly, it is possible to achieve accurate deblurring and 3D reconstruction even from any motion blurred images. 1. A method of multi-view deblurring for 3-dimensional (3D) shape reconstruction , comprising:receiving images captured by multiple synchronized cameras at multiple viewpoints;performing iteratively estimation of depth map, latent image, and 3D motion at each viewpoint, for the received images;determining whether image deblurring at each viewpoint is completed; andperforming 3D reconstruction based on final depth maps and latent images at each viewpoint.2. The method of multi-view deblurring for 3D shape reconstruction according to claim 1 , wherein the determining whether image deblurring at each viewpoint is completed comprises determining whether estimation of depth map claim 1 , latent image claim 1 , and 3D motion at each viewpoint is performed a preset number of times.3. The method of multi-view deblurring for 3D shape reconstruction according to claim 1 , wherein the performing iteratively estimation of depth map claim 1 , latent image claim 1 , and 3D motion at each viewpoint claim 1 , for the received images claim 1 , is based on one reference view from the multiple viewpoints claim 1 , for the depth map and the 3D motion.4. The method of multi-view deblurring for 3D shape reconstruction according to claim 3 , wherein the 3D motion is represented as a 3D vector.5. The method of multi-view deblurring for 3D shape reconstruction according to claim 1 , wherein the performing iteratively estimation ...

01-03-2018 publication date

SYSTEMS AND METHODS FOR SIMULTANEOUS LOCALIZATION AND MAPPING

Number: US20180061072A1
Assignee:

Various embodiments provide systems, methods, devices, and instructions for performing simultaneous localization and mapping (SLAM) that involve initializing a SLAM process using images from as few as two different poses of a camera within a physical environment. Some embodiments may achieve this by disregarding errors in matching corresponding features depicted in image frames captured by an image sensor of a mobile computing device, and by updating the SLAM process in a way that causes the minimization process to converge to global minima rather than fall into a local minimum. 1. A method comprising:continuously capturing, by an image sensor, new image frames of a physical environment and adding the new image frames to a set of captured image frames;continuously capturing, from an inertial measurement unit (IMU), IMU data in correspondence with the image frames captured, the captured IMU data comprising degrees of freedom (DOF) parameters of the image sensor;identifying, by one or more hardware processors, a first key image frame from the set of captured image frames;identifying, by the one or more hardware processors, first IMU data, from the captured IMU data, associated with the first key image frame;detecting, by the IMU, a movement of the image sensor from a first pose, in the physical environment, to a second pose in the physical environment; identifying, by the one or more hardware processors, a second key image frame from the set of captured image frames;', 'identifying, by the one or more hardware processors, second IMU data, from the captured IMU data, associated with the second key image frame;', 'performing, by one or more hardware processors, feature matching on at least the first and second key image frames to identify a set of matching three-dimensional (3D) features in the physical environment;', 'generating, by the one or more hardware processors, a filtered set of matching 3D features by filtering out at least one erroneous feature, from the set ...

01-03-2018 publication date

VIRTUAL AND AUGMENTED REALITY SYSTEMS AND METHODS

Number: US20180061139A1
Assignee:

A virtual or augmented reality display system that controls a display using control information included with the virtual or augmented reality imagery that is intended to be shown on the display. The control information can be used to specify one of multiple possible display depth planes. The control information can also specify pixel shifts within a given depth plane or between depth planes. The system can also enhance head pose measurements from a sensor by using gain factors which vary based upon the user's head pose position within a physiological range of movement. 1. A display system comprising: a display configured to display digital image data, wherein the image data comprises a plurality of frames, each frame comprising a plurality of depth planes; and a display controller configured to receive image data, and control the display based on control information embedded in the image data, wherein the embedded control information specifies an inactivation of one or more depth planes within the plurality of depth planes. 2. The display system of claim 1, wherein the inactivation is a reduced power input to the display. 3. The display system of claim 2, wherein the inactivation shuts off power to the display. 4. The display system of claim 1, wherein the display controller is further configured to order the one or more depth planes within the plurality of depth planes based on the control information embedded in the image data. 5. A display system comprising: a display configured to display digital image data, wherein the image data comprises a plurality of frames, each frame comprising a plurality of depth planes and each depth plane comprising a plurality of color fields; and a display controller configured to receive image data, and control the display based on control information embedded in the image data, wherein the embedded control information specifies an inactivation of one or more color fields within one or more depth planes. 6. The display system ...

20-02-2020 publication date

SEMANTIC STRUCTURE FROM MOTION FOR ORCHARD RECONSTRUCTION

Number: US20200058162A1
Assignee:

A method includes constructing a three-dimensional model of a front side of a row of trees based on a plurality of images of the front side of the row of trees and constructing a three-dimensional model of a back side of the row of trees based on a plurality of images of the back side of the row of trees. The three-dimensional model of the front side of the row of trees is merged with the three-dimensional model of the back side of the row of trees by linking a trunk in the three-dimensional model of the front side to a trunk in the three-dimensional model of the back side to form a merged three-dimensional model of the row of trees. The merged three-dimensional model of the row of trees is used to determine a physical attribute of the row of trees. 1. A method comprising: constructing a three-dimensional model of a front side of a row of trees based on a plurality of images of the front side of the row of trees; constructing a three-dimensional model of a back side of the row of trees based on a plurality of images of the back side of the row of trees; merging the three-dimensional model of the front side of the row of trees with the three-dimensional model of the back side of the row of trees by linking a trunk in the three-dimensional model of the front side to a trunk in the three-dimensional model of the back side to form a merged three-dimensional model of the row of trees; and using the merged three-dimensional model of the row of trees to determine a physical attribute of the row of trees. 2. The method of claim 1, wherein merging the three-dimensional model of the front side of the row of trees with the three-dimensional model of the back side of the row of trees comprises: projecting the three-dimensional model of the front side of the row of trees onto a plane to form a first projection; projecting the three-dimensional model of the back side of the row of trees onto the plane to form a second projection; aligning the second projection with the first projection to ...
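
Once both side models are projected onto a common plane, linking trunks reduces the merge to 2D point-set registration. A minimal Python sketch of a least-squares rigid fit (Kabsch), assuming trunk base positions have already been detected in each projection and paired; the application's exact alignment procedure is truncated in this listing:

    import numpy as np

    def fit_rigid_2d(front_pts, back_pts):
        # Least-squares rotation + translation mapping back-side trunk points
        # onto front-side trunk points. Both arrays: shape (N, 2), rows paired.
        fc, bc = front_pts.mean(axis=0), back_pts.mean(axis=0)
        H = (back_pts - bc).T @ (front_pts - fc)        # 2x2 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                              # proper rotation only
        t = fc - R @ bc
        return R, t

    # Applying (R, t) to every in-plane point of the back-side model links each
    # back-side trunk to its front-side counterpart, merging the two models.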

04-03-2021 publication date

METHOD AND SYSTEM FOR TRACKING MOTION OF SUBJECTS IN THREE DIMENSIONAL SCENE

Number: US20210065377A1
Assignee: TATA CONSULTANCY SERVICES LIMITED

This disclosure relates generally to a method and system for tracking motion of subjects in three dimensional space. The method includes receiving a video of the environment using a scene capturing device positioned in the environment. A motion intensity of subjects in the plurality of image frames is detected for segregating the motion of subjects present in each image frame from the plurality of image frames into a plurality of categories. Further, a three dimensional (3D) scene is constructed from the plurality of image frames using the multi focused view based depth calculation technique. The subjects categorized under the significant motion category are tracked based on their position in the three dimensional (3D) scene. The proposed disclosure provides efficiency in tracking the new entry of subjects in the environment for adjusting the focus of the observer. 1. A processor implemented method for tracking motion of subjects in a three dimensional (3D) scene, wherein the method comprises: receiving (202), via one or more hardware processors (104), a video of the environment using a scene capturing device positioned in the environment, wherein the video comprises a plurality of image frames associated with a plurality of components of the environment and subjects present within the environment, wherein the subjects are non-stationary; detecting (204), via the one or more hardware processors (104), a motion intensity of subjects from the plurality of image frames, based on a change detected in a position of the subjects in the current image frame with reference to the previous image frame among the plurality of image frames; segregating (206), via the one or more hardware processors (104), the motion of subjects present in each image frame into a plurality of categories comprising: a no motion category, if no motion is detected for the position of the subject present in each image frame; a significant motion category, if the change detected for the position of the subject is above a predefined threshold, indicating entry or exit of the subject from the environment; ...
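
The per-frame categorization is a thresholding of frame-to-frame displacement. A minimal Python sketch; the listing names only the no-motion and significant-motion categories and a single predefined threshold, so the intermediate category and both numeric values here are assumptions:

    def categorize_motion(prev_pos, curr_pos,
                          significant_thresh=50.0, noise_thresh=2.0):
        # Classify one subject's motion between consecutive frames from the
        # change in its (x, y) position.
        dx = curr_pos[0] - prev_pos[0]
        dy = curr_pos[1] - prev_pos[1]
        displacement = (dx * dx + dy * dy) ** 0.5
        if displacement <= noise_thresh:
            return "no_motion"             # position effectively unchanged
        if displacement > significant_thresh:
            return "significant_motion"    # e.g. entry into / exit from the scene
        return "insignificant_motion"      # assumed intermediate category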

04-03-2021 publication date

PSEUDO RGB-D FOR SELF-IMPROVING MONOCULAR SLAM AND DEPTH PREDICTION

Number: US20210065391A1
Assignee:

A method for improving geometry-based monocular structure from motion (SfM) by exploiting depth maps predicted by convolutional neural networks (CNNs) is presented. The method includes capturing a sequence of RGB images from an unlabeled monocular video stream obtained by a monocular camera, feeding the RGB images into a depth estimation/refinement module, outputting depth maps, feeding the depth maps and the RGB images to a pose estimation/refinement module, the depth maps and the RGB images collectively defining pseudo RGB-D images, outputting camera poses and point clouds, and constructing a 3D map of a surrounding environment displayed on a visualization device. 1. A computer-implemented method executed on a processor for improving geometry-based monocular structure from motion (SfM) by exploiting depth maps predicted by convolutional neural networks (CNNs), the method comprising: capturing a sequence of RGB images from an unlabeled monocular video stream obtained by a monocular camera; feeding the RGB images into a depth estimation/refinement module; outputting depth maps; feeding the depth maps and the RGB images to a pose estimation/refinement module, the depth maps and the RGB images collectively defining pseudo RGB-D images; outputting camera poses and point clouds; and constructing a 3D map of a surrounding environment displayed on a visualization device. 2. The method of claim 1, wherein common tracked keypoints from neighboring keyframes are employed. 3. The method of claim 2, wherein a symmetric depth transfer loss and a depth consistency loss are imposed. 4. The method of claim 3, wherein the symmetric depth transfer loss is given as a per-pixel weighted sum, over pixels i, of the transfer errors |d^i_c→k1 − d^i_k1| and |d^i_k1→c − d^i_c| ...
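
A numpy sketch of a symmetric depth transfer loss with the shape suggested by the claim-4 fragment: depths warped from the current keyframe c into keyframe k1 (and back) are compared with the depths predicted in each frame and weighted per pixel. The exact weighting and normalization are not recoverable from this listing:

    import numpy as np

    def symmetric_depth_transfer_loss(d_c2k1, d_k1, d_k12c, d_c, w=None):
        # d_c2k1: depth of frame c warped into keyframe k1; d_k1: depth in k1.
        # d_k12c: depth of k1 warped into frame c;          d_c: depth in c.
        # w: optional per-pixel weights (defaults to uniform).
        if w is None:
            w = np.ones_like(d_c)
        forward = np.abs(d_c2k1 - d_k1)    # transfer error c -> k1
        backward = np.abs(d_k12c - d_c)    # transfer error k1 -> c
        return float(np.mean(w * forward) + np.mean(w * backward))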

04-03-2021 publication date

Display of item information in current space

Number: US20210065433A1
Assignee: Ke com Beijing Technology Co Ltd

Provided are a method and an apparatus for displaying item information in a current space, an electronic device, and a non-transitory machine-readable storage medium. The method comprises: obtaining spatial data of a current position in a current space, and obtaining position data and information data of at least one item in the current space according to the spatial data; calculating a display priority of the at least one item in the current space according to the spatial data, the position data, and the information data; and displaying the information data of the at least one item according to the display priority. Through the method, the display priority of the information data of the at least one item in the current space is calculated, and then the information data of the at least one item in the current space is displayed according to the display priority, so that it is convenient for a user to directly view the information data of the at least one item in the current space.

04-03-2021 publication date

3D ACTIVE DEPTH SENSING WITH LASER PULSE TRAIN BURSTS AND A GATED SENSOR

Number: US20210067662A1
Assignee:

This disclosure provides systems, methods, and apparatuses for sensing a scene. In one aspect, a device may illuminate the scene using a sequence of two or more periods. Each period may include a transmission portion during which a plurality of light pulses are emitted onto the scene. Each period may include a non-transmission portion corresponding to an absence of emitted light. The device may receive, during each transmission portion, a plurality of light pulses reflected from the scene. The device may continuously accumulate photoelectric charge indicative of the received light pulses during an entirety of the sequence. The device may transfer the accumulated photoelectric charge to a readout circuit after an end of the sequence. 1. A method for sensing a scene , comprising:illuminating the scene using a sequence of two or more periods, each period including a transmission portion during which a plurality of light pulses are emitted onto the scene and including a non-transmission portion corresponding to an absence of emitted light;receiving, during each transmission portion, a plurality of light pulses reflected from the scene; andcontinuously accumulating photoelectric charge indicative of the received light pulses during an entirety of the sequence.2. The method of claim 1 , further comprising:transferring the accumulated photoelectric charge to a readout circuit after an end of the sequence.3. The method of claim 1 , wherein the receiving comprises:configuring a photodiode to continuously receive photons during each transmission portion of the sequence.4. The method of claim 1 , further comprising:preventing reception of ambient light during each non-transmission portion of the sequence.5. The method of claim 4 , wherein the preventing comprises:disabling the accumulation of photoelectric charge during each non-transmission portion of the sequence.6. The method of claim 1 , wherein each of the plurality of emitted light pulses is generated by a single-mode ...

04-03-2021 publication date

EFFECTS FOR 3D DATA IN A MESSAGING SYSTEM

Number: US20210067756A1
Assignee:

The subject technology selects a set of augmented reality content generators from a plurality of available augmented reality content generators based on metadata associated with each respective augmented reality content generator. The subject technology receives, at a client device, a selection of a selectable graphical item from a plurality of selectable graphical items, the selectable graphical item comprising an augmented reality content generator including a 3D effect. The subject technology captures image data and depth data using at least one camera of the client device. The subject technology applies, to the image data and the depth data, the 3D effect based at least in part on the augmented reality content generator. 1. A method, comprising: selecting a set of augmented reality content generators from a plurality of available augmented reality content generators based on metadata associated with each respective augmented reality content generator, the metadata including information indicating a corresponding augmented reality content generator includes at least a 3D effect, the set of augmented reality content generators including at least one augmented reality content generator without a 3D effect and at least one augmented reality content generator with a 3D effect; receiving, at a client device, a selection of a selectable graphical item from a plurality of selectable graphical items, the selectable graphical item comprising an augmented reality content generator including a 3D effect; capturing image data and depth data using at least one camera of the client device; and applying, to the image data and the depth data, the 3D effect based at least in part on the augmented reality content generator. 2. The method of claim 1, further comprising: generating a 3D message based at least in part on the applied 3D effect; and rendering a view of the 3D message based at least in part on the applied 3D effect. 3. The method of claim 1, wherein the at least one camera comprises a ...

04-03-2021 publication date

DUAL LENS IMAGING MODULE AND CAPTURING METHOD THEREOF

Number: US20210067761A1
Assignee: HTC CORPORATION

A dual lens imaging module suitable for an electronic device is provided. The dual lens imaging module includes a first lens, a second lens, and a moving module. The moving module is connected to the second lens and is adapted to move or tilt the second lens, wherein the first lens is a lens having an autofocus function, and a working distance of the dual lens imaging module is adapted to be changed according to a spacing of the first lens and the second lens. 1. A dual lens imaging module suitable for an electronic device , comprising:a first lens;a second lens; anda moving module connected to the second lens and adapted to move or tilt the second lens, wherein the first lens is a lens having an autofocus function, and a working distance of the dual lens imaging module is adapted to be changed according to a spacing of the first lens and the second lens.2. The dual lens imaging module of claim 1 , further comprising:a transparent substrate disposed to cover the first lens and the second lens.3. The dual lens imaging module of claim 2 , wherein a surface of the transparent substrate is aligned with a surface of an exterior of the electronic device.4. The dual lens imaging module of claim 2 , further comprising:an optical element detachably disposed on the transparent substrate, and the second lens is adapted to be moved to an effective optical path of the optical element via the moving module.5. The dual lens imaging module of claim 4 , wherein the optical element is at least one lens having a refractive power claim 4 , a neutral grayscale filter claim 4 , a color filter claim 4 , or a polarizer.6. The dual lens imaging module of claim 1 , wherein the moving module comprises a driving element and a mounting element claim 1 , the driving element is connected to the mounting element claim 1 , the second lens is disposed on the mounting element claim 1 , and the driving element is adapted to drive the mounting element to move or tilt the second lens.7. The dual lens ...

17-03-2022 publication date

DETECTOR FOR DETERMINING A POSITION OF AT LEAST ONE OBJECT

Number: US20220084236A1
Assignee:

Described herein is a detector for determining a position of an object. The detector includes a sensor element having a matrix of optical sensors, each designed to generate a sensor signal in response to an illumination of its light-sensitive area by a light beam propagating from the object to the detector. The detector also includes an evaluation device configured to select a region of interest of the matrix, respectively determine a sensor signal of at least two optical sensors of the region of interest, and determine a longitudinal coordinate z_DPR of the object by evaluating a combined sensor signal Q. The evaluation device is also configured to determine an image of the region of interest from the sensor signals, determine a longitudinal coordinate z_DFD of the object by optimizing at least one blurring function f, and determine combined distance information z considering the longitudinal coordinates z_DPR and z_DFD. 1. A detector (110) for determining a position of at least one object (112), the detector (110) comprising: at least one sensor element (114) having a matrix (116) of optical sensors (118), the optical sensors (118) each having a light-sensitive area, wherein each optical sensor (118) is designed to generate at least one sensor signal in response to an illumination of its respective light-sensitive area by a light beam propagating from the object (112) to the detector (110), and at least one evaluation device (128), wherein the evaluation device (128) is configured for selecting at least one region of interest of the matrix (116), wherein the evaluation device (128) is configured for respectively determining at least one sensor signal of at least two optical sensors (118) of the region of interest, wherein the evaluation device (128) is configured for determining at least one longitudinal coordinate z_DPR of the object by evaluating a combined signal Q from the sensor ...

17-03-2022 publication date

Information processing apparatus, information processing method, and program

Number: US20220084244A1
Assignee: Sony Group Corp

Provided are a position detection unit configured to detect first position information of a first imaging device and a second imaging device on the basis of a physical characteristic point of a subject imaged by the first imaging device and a physical characteristic point of a subject imaged by a second imaging device, and a position estimation unit configured to estimate a moving amount of the first imaging device and estimate second position information. The physical characteristic point is detected from a joint of the subject. The subject is a person. The present technology can be applied to an information processing apparatus that detects positions of a plurality of imaging devices.

09-03-2017 publication date

Surveying system

Number: US20170067739A1
Assignee: HEXAGON TECHNOLOGY CENTER GMBH

A system is disclosed that comprises a camera module and a control and evaluation unit. The camera module is designed to be attached to a surveying pole and comprises at least one camera for capturing images. The control and evaluation unit has stored a program with program code so as to control and execute a functionality in which a series of images of the surroundings is captured with the at least one camera; a SLAM-evaluation with a defined algorithm using the series of images is performed, wherein a reference point field is built up and poses for the captured images are determined; and, based on the determined poses, a point cloud comprising 3D-positions of points of the surroundings can be computed by forward intersection using the series of images, particularly by using a dense matching algorithm.
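
Forward intersection recovers each 3D point by intersecting, in a least-squares sense, the viewing rays cast from the posed images. A minimal two-ray Python sketch, assuming camera centers and ray directions in world coordinates have already been derived from the SLAM poses and the matched image points:

    import numpy as np

    def forward_intersect(c1, d1, c2, d2):
        # Least-squares intersection (midpoint) of two viewing rays.
        # c1, c2: camera centers; d1, d2: ray directions (world frame).
        # Solves for s, t minimizing ||(c1 + s*d1) - (c2 + t*d2)||.
        d1 = d1 / np.linalg.norm(d1)
        d2 = d2 / np.linalg.norm(d2)
        A = np.stack([d1, -d2], axis=1)          # 3x2 system matrix
        b = c2 - c1
        (s, t), *_ = np.linalg.lstsq(A, b, rcond=None)
        p1, p2 = c1 + s * d1, c2 + t * d2        # closest point on each ray
        return 0.5 * (p1 + p2)                   # midpoint = triangulated point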

28-02-2019 publication date

METHOD AND IMAGE-PROCESSING DEVICE FOR DETERMINING A GEOMETRIC MEASUREMENT QUANTITY OF AN OBJECT

Number: US20190066321A1
Assignee: TESTO SE & CO. KGAA

The invention relates to a method and to an image recording device for determining a geometric measurement quantity (3) of an object (2), wherein at least one visual image (4) of the object (2) is recorded in a recording step, a true-to-scale 3-D image of the object (2) is recorded and/or calculated in a 3-D image creation step, a subset of 3-D points of the 3-D image is calculated in a point cloud calculation step, a geometric primitive is fit to the subset of 3-D points using a computer in a fitting step, a feature selection is applied to the at least one visual image in a feature detection step in order to identify at least two feature points (9, 10) in the visual image (4), the at least two feature points (9, 10) are projected onto the geometric primitive as at least two measurement points (12, 13) in a projection step, and a geometric measurement quantity (3) is calculated for the at least two measurement points (12, 13) in a calculation step. 1. A method for determining a geometric measured variable (3) of an object (2), the method comprising: recording at least one visual image (4) of the object (2) in a recording step, at least one of recording or calculating a 3D image (5) of the object (2), true to scale, using a computer in a 3D image creation step, calculating a subset (8) of 3D points (6) of the 3D image (5) with the computer in a point cloud calculation step, fitting a geometric primitive (7) with the computer for the subset (8) of 3D points (6) in a fitting step, applying a feature selection to the at least one visual image (4) in a feature detection step in order to identify at least two feature points (9, 10) in the visual image (4), projecting the at least two feature points (9, 10) with the computer onto the geometric primitive (7) as at least two measurement points (12, 13) in a projection step, and calculating a geometric measured ...

28-02-2019 publication date

SYSTEM AND METHOD FOR CENTIMETER PRECISION LOCALIZATION USING CAMERA-BASED SUBMAP AND LIDAR-BASED GLOBAL MAP

Number: US20190066329A1
Author: Luo Yi, WANG Yi, Xu Ke
Assignee:

A method of localization for a non-transitory computer readable storage medium storing one or more programs is disclosed. The one or more programs comprise instructions, which when executed by a computing device, cause the computing device to perform by one or more autonomous vehicle driving modules execution of processing of images from a camera and data from a LiDAR using the following steps comprising: constructing a 3D submap and a global map; extracting features from the 3D submap and the global map; matching features extracted from the 3D submap against features extracted from the global map; refining feature correspondence; and refining location of the 3D submap. 1. A method of localization for a non-transitory computer readable storage medium storing one or more programs , the one or more programs comprising instructions , which when executed by a computing device , cause the computing device to perform by one or more autonomous vehicle driving modules execution of processing of images from a camera and data from a LiDAR using the following steps comprising:constructing a 3D submap based on the images from the camera;constructing a global map based on the data from the LiDAR, wherein the camera and the LiDAR share a common field-of-view;extracting features from the 3D submap and the global map;matching features extracted from the 3D submap against features extracted from the global map;refining feature correspondence; andrefining location of the 3D submap.2. The method according to claim 1 , before constructing the 3D submap and the global map claim 1 , further comprising:performing data alignment; andcollecting data in an environment by using sensors including a camera, a LiDAR and an inertial navigation module.3. The method according to claim 2 , wherein constructing the 3D submap comprises:obtaining the images from the camera; andconstructing the 3D submap based on the images, using visual SLAM.4. The method according to claim 1 , wherein constructing the ...

28-02-2019 publication date

FEATURE EXTRACTION FROM 3D SUBMAP AND GLOBAL MAP SYSTEM AND METHOD FOR CENTIMETER PRECISION LOCALIZATION USING CAMERA-BASED SUBMAP AND LIDAR-BASED GLOBAL MAP

Number: US20190066330A1
Author: Luo Yi, WANG Yi, Xu Ke
Assignee:

A method of localization for a non-transitory computer readable storage medium storing one or more programs is disclosed. The one or more programs comprise instructions, which when executed by a computing device, cause the computing device to perform utilizing one or more autonomous vehicle driving modules that execute processing of images from a camera and data from a LiDAR the following steps comprising: aligning a 3D submap with a global map; extracting features from the 3D submap and the global map; classifying the extracted features in classes; and establishing correspondence of features in a same class between the 3D submap and the global map. 1. A method of localization for a non-transitory computer readable storage medium storing one or more programs , the one or more programs comprising instructions , which when executed by a computing device , cause the computing device to perform by one or more autonomous vehicle driving modules execution of processing of images from a camera and data from a LiDAR using the following steps comprising:constructing a 3D submap based on the images from the camera;constructing a global map based on the data from the LiDAR, wherein the camera and the LiDAR are with a same vehicle;aligning the 3D submap with the global map;extracting features from the 3D submap and the global map;classifying the extracted features in classes; andestablishing correspondence of features in a same class between the 3D submap and the global map.2. (canceled)3. The method according to claim 1 , wherein constructing the 3D submap comprises:obtaining the images from the camera; andconstructing the 3D submap based on the images, using visual SLAM.4. The method according to claim 1 , wherein constructing a global map comprises:obtaining the data from the LiDAR; andconstructing a city-scale 3D map based on the data from the LiDAR, using LiDAR mapping.5. The method according to claim 1 , wherein aligning the 3D submap with the global map further comprises ...

10-03-2016 publication date

Method for Mapping an Environment

Number: US20160071278A1

A method for mapping an environment comprises moving a sensor along a path from a start location (P_0) through the environment, the sensor generating a sequence of images, each image associated with a respective estimated sensor location and comprising a point cloud having a plurality of vertices, each vertex comprising an (x,y,z)-tuple and image information for the tuple. The sequence of estimated sensor locations is sampled to provide a pose graph (P) comprising a linked sequence of nodes, each corresponding to a respective estimated sensor location. For each node of the pose graph (P), a respective cloud slice (C) comprising at least a portion of the point cloud for the sampled sensor location is acquired. A drift between an actual sensor location (P_i+1) and an estimated sensor location (P_i) on the path is determined. A corrected pose graph (P′) indicating a required transformation for each node of the pose graph (P) between the actual sensor location (P_i+1) and the start location (P_0) to compensate for the determined drift is provided. The sequence of estimated sensor locations is sampled to provide a deformation graph (N) comprising a linked sequence of nodes, each corresponding to respective estimated sensor locations along the path. For at least a plurality of the vertices in the cloud slices, a closest set of K deformation graph nodes is identified and a respective blending function based on the respective distances of the identified graph nodes to a vertex is determined. Transformation coefficients for each node of the deformation graph are determined as a function of the required transformation for each node of the pose graph (P) to compensate for the determined drift. Tuple coordinates for a vertex are transformed to compensate for sensor drift as a function of the blending function and the transformation coefficients for the K deformation graph nodes closest to the vertex.
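
The final vertex correction is an embedded-deformation blend: each vertex is moved by a distance-weighted combination of the transformations stored at its K closest deformation-graph nodes. A minimal numpy sketch; the quadratic fall-off weight is one common choice of blending function, not necessarily the document's:

    import numpy as np

    def deform_vertex(v, node_pos, node_R, node_t, k=4):
        # v: (3,) vertex; node_pos: (N,3) deformation-graph node positions;
        # node_R: (N,3,3) per-node rotations; node_t: (N,3) per-node translations.
        d = np.linalg.norm(node_pos - v, axis=1)
        nearest = np.argsort(d)[:k]                # K closest graph nodes
        d_max = d[nearest].max() + 1e-9
        w = (1.0 - d[nearest] / d_max) ** 2        # blending function of distance
        w /= w.sum()
        out = np.zeros(3)
        for wi, j in zip(w, nearest):
            # Each node applies its transform relative to its own position.
            out += wi * (node_R[j] @ (v - node_pos[j]) + node_pos[j] + node_t[j])
        return out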

28-02-2019 publication date

METHODS AND SYSTEMS FOR A GESTURE-CONTROLLED LOTTERY TERMINAL

Number: US20190066434A1
Author: Alexopoulos Ilias
Assignee:

A method including providing a lottery terminal that includes a graphical user interface and a motion capture device to facilitate a user to play in a lottery. The method further includes displaying, on the graphical user interface, a first image with content that includes drawing lottery tickets, instant lottery tickets, lottery games, dynamically-generated animations, and advertisements and detecting, by the motion capture device, a gesture of the user in a three dimensional space surrounding the lottery terminal. The method may further include displaying, on the graphical user interface, a second image with content based, at least in part, on the gesture of the user; receiving, by the lottery terminal, a payment from the user for playing the lottery; and distributing, by the lottery terminal, a lottery ticket, based, at least in part, on the gesture of the user. 1. A method, comprising: providing a lottery terminal comprising a graphical user interface and a motion capture device so as to facilitate a user to play in a lottery; displaying, by the lottery terminal on the graphical user interface, at least one first image, wherein a first content of the at least one first image is selected from the group consisting of drawing lottery tickets, instant lottery tickets, lottery games, dynamically-generated animations, and advertisements; detecting, by the motion capture device, at least one first gesture of the user in a three dimensional space surrounding the lottery terminal; displaying, by the lottery terminal on the graphical user interface, at least one second image, wherein a second content of the at least one second image is based, at least in part, on the at least one first gesture of the user; receiving, by the lottery terminal, at least one payment from the user for playing the lottery; and distributing, by the lottery terminal, at least one lottery ticket, based, at least in part, on the at least one first gesture of the user. 2. The method of claim 1, wherein the ...

12-03-2015 publication date

Stereoscopic endoscope device

Number: US20150073209A1
Author: Hiromu Ikeda
Assignee: Olympus Corp

Provided is a stereoscopic endoscope device including a single objective lens that collects light from a subject and forms an image of the light; a light splitting section that splits the light collected by the objective lens; image-capturing devices that capture optical images of the subject at imaging positions of the split beams of the light; focal-position adjusting sections that give optical path lengths different from each other to the split beams of the light; a calculation section that calculates an object distance between each point on the subject and the objective lens from 2D images acquired by the image-capturing devices; and a parallax-image generating section that generates a plurality of viewpoint-images of the subject when observed from a plurality of viewpoints, by using the calculated object distance.

27-02-2020 publication date

CONTEXT-AWARE HAZARD DETECTION USING WORLD-FACING CAMERAS IN VIRTUAL, AUGMENTED, AND MIXED REALITY (xR) APPLICATIONS

Number: US20200065584A1
Assignee: Dell Products LP

Systems and methods for providing context-aware hazard detection using world-facing cameras in virtual, augmented, and mixed reality (xR) applications are described herein. In some embodiments, an Information Handling System (IHS) may include: a processor; and a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution by the processor, cause the IHS to: receive an image during execution of a xR application displayed to a user wearing a Head-Mounted Display (HMD) coupled to the IHS; detect an object in the image; associate the object with a landmark selected among a plurality of landmarks usable by the xR application to determine a position of the HMD; and provide to the user, via the HMD, a safety instruction related to the object in response to a distance between the HMD and the selected landmark meeting a distance threshold.

27-02-2020 publication date

DRIVER STATE ESTIMATION DEVICE AND DRIVER STATE ESTIMATION METHOD

Number: US20200065595A1
Author: HYUGA Tadashi, Suwa Masaki
Assignee: Omron Corporation

A driver state estimation device which can estimate a distance to a head position of a driver without detecting a center position of a face area of the driver in an image, comprises a monocular camera which can pick up an image of a driver sitting in a driver's seat, a storage section and a CPU , the storage section comprising an image storing part for storing the image picked up by the monocular camera , and the CPU comprising a head detecting section for detecting a head of the driver in the image read from the image storing part , a defocus amount detecting section for detecting a defocus amount of the head of the driver in the image detected by the head detecting section and a distance estimating section for estimating a distance from the head of the driver sitting in the driver's seat to the monocular camera with use of the defocus amount detected by the defocus amount detecting section 1. A driver state estimation device for estimating a state of a driver using a picked-up image , comprising:an imaging section which can pick up an image of a driver sitting in a driver's seat; andat least one hardware processor,the at least one hardware processor comprisinga head detecting section for detecting a head of the driver in the image picked up by the imaging section,a defocus amount detecting section for detecting a defocus amount of the head of the driver in the image detected by the head detecting section, anda distance estimating section for estimating a distance from the head of the driver sitting in the driver's seat to the imaging section with use of the defocus amount detected by the defocus amount detecting section, whereinthe distance estimating section estimates the distance from the head of the driver sitting in the driver's seat to the imaging section in consideration of changes in size of a face area of the driver detected in a plurality of images picked up by the imaging section.2. The driver state estimation device according to claim 1 , comprising:a ...
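
The size-change limitation follows from the pinhole model: apparent face size scales inversely with distance, so two frames give d_curr = d_prev * (s_prev / s_curr). A minimal Python sketch under that assumption:

    def distance_from_size_change(d_prev, size_prev, size_curr):
        # Pinhole-model update of the camera-to-head distance.
        # d_prev: previously estimated distance to the driver's head;
        # size_prev, size_curr: face-area width (or height) in pixels in the
        # previous and current frames.
        # Apparent size is inversely proportional to distance: s ~ f * W / d.
        return d_prev * (size_prev / size_curr)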

27-02-2020 publication date

RENDERING VIRTUAL OBJECTS WITH REALISTIC SURFACE PROPERTIES THAT MATCH THE ENVIRONMENT

Number: US20200066025A1
Assignee:

In one implementation, a method is disclosed for providing visual coherency between virtual objects and a physical environment. The method includes obtaining, at an electronic device, first content depicting a physical surface in the physical environment using an image sensor of the electronic device. An extrinsic property exhibited by the physical surface is determined based on the first content using a visual coherency model. Second content representing a virtual object is generated based on the extrinsic property to present on a display. 1. A method of providing visual coherency between virtual objects and a physical environment , the method comprising:at an electronic device with an image sensor:obtaining first content depicting a physical surface in the physical environment using the image sensor;determining an extrinsic property exhibited by the physical surface based on the first content using a visual coherency model; andgenerating second content representing a virtual object based on the extrinsic property to present on a display.2. The method of claim 1 , wherein the extrinsic property is dusty claim 1 , wet claim 1 , icy claim 1 , rusty claim 1 , dirty claim 1 , or worn.3. The method of claim 1 , wherein the visual coherency model is trained to determine the extrinsic property using a set of training images that depict variations of the extrinsic property.4. The method of claim 1 , wherein a value for the extrinsic property is determined based on the first content using the visual coherency model.5. The method of claim 4 , wherein the second content representing the virtual object is generated based on the value for the extrinsic property.6. The method of claim 1 , wherein the visual coherency model is trained to determine values for the extrinsic property using a set of training images that depict variations of the extrinsic property.7. The method of claim 4 , wherein the value is a probability estimate that the physical surface exhibits the extrinsic ...

27-02-2020 publication date

PIXEL CIRCUIT AND METHOD OF OPERATING THE SAME IN AN ALWAYS-ON MODE

Number: US20200066779A1
Author: Dutton Neale
Assignee:

An embodiment method of operating an imaging device including a sensor array including a plurality of pixels, includes: capturing a first low-spatial resolution frame using a subset of the plurality of pixels of the sensor array; generating, using a processor coupled to the sensor array, a first depth map using raw pixel values of the first low-spatial resolution frame; capturing a second low-spatial resolution frame using the subset of the plurality of pixels of the sensor array; generating, using the processor, a second depth map using raw pixel values of the second low-spatial resolution frame; and determining whether an object has moved in a field of view of the imaging device based on a comparison of the first depth map to the second depth map. 1. A method of operating an imaging device comprising a sensor array , the sensor array comprising a plurality of pixels , the method comprising:capturing a first low-spatial resolution frame using a subset of the plurality of pixels of the sensor array;generating, using a processor coupled to the sensor array, a first depth map using raw pixel values of the first low-spatial resolution frame;capturing a second low-spatial resolution frame using the subset of the plurality of pixels of the sensor array;generating, using the processor, a second depth map using raw pixel values of the second low-spatial resolution frame; anddetermining whether an object has moved in a field of view of the imaging device based on a comparison of the first depth map to the second depth map.2. The method of claim 1 , further comprising:activating each of the plurality of pixels of the sensor array in response to a determination that the object has moved in the field of view of the imaging device;capturing a high-spatial resolution frame using each of the plurality of pixels of the sensor array; andgenerating a third depth map using raw pixel values of the high-spatial resolution frame.3. The method of claim 2 , further comprising determining ...
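
The always-on comparison amounts to counting low-resolution depth pixels that changed significantly between frames and waking the full array when enough did. A minimal numpy sketch; both thresholds are hypothetical tuning parameters:

    import numpy as np

    def object_moved(depth_prev, depth_curr, depth_thresh=0.05, pixel_frac=0.02):
        # Compare two low-spatial-resolution depth maps (in meters). Returns
        # True when the fraction of pixels whose depth changed by more than
        # depth_thresh exceeds pixel_frac -- the cue to activate every pixel
        # of the sensor array and capture a high-spatial-resolution frame.
        changed = np.abs(depth_curr - depth_prev) > depth_thresh
        return changed.mean() > pixel_frac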

11-03-2021 publication date

METHOD OF PERFORMING SIMULTANEOUS LOCALIZATION AND MAPPING WITH RESPECT TO A SALIENT OBJECT IN AN IMAGE

Number: US20210073570A1
Assignee: LG ELECTRONICS INC.

The present disclosure relates to a method for performing simultaneous localization and mapping (SLAM) with respect to a salient object in an image, a robot and a cloud server for implementing such method. According to an embodiment of the present disclosure, a robot includes a camera sensor configured to capture one or more images for the robot to perform the SLAM with respect to a salient object for estimating a location of the robot within the space, a map storage configured to store the information for the robot to perform the SLAM, and a controller that is configured to: detect an object from the captured image; select, as a specific salient object for identifying the space, the detected object verified as corresponding to the specific salient object; and store, in the map storage, the selected specific salient object and coordinate information related to the selected specific salient object. 1. A robot comprising:a motor configured to cause the robot to move within a space;a camera sensor configured to capture one or more images for the robot to perform simultaneous localization and mapping (SLAM) with respect to a salient object for estimating a location of the robot within the space;a map storage configured to store information for the robot to perform the SLAM; and detect an object from the captured image;', 'select, as a specific salient object for identifying the space, the detected object verified as corresponding to the specific salient object; and', 'store, in the map storage, the selected specific salient object and coordinate information related to the selected specific salient object., 'a controller configured to2. The robot of claim 1 , wherein the controller is further configured to input a specific object from the captured image as a search query for performing a position estimation of the robot claim 1 , wherein the specific object is inputted based at least in part on the stored selected specific salient object from the map storage.3. The robot ...

11-03-2021 publication date

Method for applying bokeh effect to image and recording medium

Number: US20210073953A1
Author: Young Su Lee
Assignee: Nalbi Inc

A method for applying a bokeh effect on an image at a user terminal is provided. The method for applying a bokeh effect may include: receiving an image and inputting the received image to an input layer of a first artificial neural network model to generate a depth map indicating depth information of pixels in the image; and applying the bokeh effect on the pixels in the image based on the depth map indicating the depth information of the pixels in the image. The first artificial neural network model may be generated by inputting a plurality of reference images to the input layer and performing machine learning to infer the depth information included in the plurality of reference images.
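
The bokeh application step can be approximated by blending sharp and blurred copies of the image with a per-pixel weight derived from each pixel's depth distance to the focal plane. A minimal Python sketch using a Gaussian blur, a simplification of real lens bokeh and an assumed implementation rather than the application's:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def apply_bokeh(image, depth, focus_depth, depth_range=0.5, max_sigma=6.0):
        # image: (H,W) or (H,W,C) float array; depth: (H,W) depth map.
        # Pixels near focus_depth stay sharp; the blur weight grows with the
        # normalized depth distance and is clipped at 1.
        weight = np.clip(np.abs(depth - focus_depth) / depth_range, 0.0, 1.0)
        if image.ndim == 3:
            weight = weight[..., None]
            blurred = np.stack([gaussian_filter(image[..., c], max_sigma)
                                for c in range(image.shape[-1])], axis=-1)
        else:
            blurred = gaussian_filter(image, max_sigma)
        return (1.0 - weight) * image + weight * blurred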
