Total found: 918. Displayed: 100.

Publication date: 31-05-2012

Portrait Image Synthesis from Multiple Images Captured on a Handheld Device

Number: US20120133746A1
Assignee: DigitalOptics Corp Europe Ltd

A hand-held digital image capture device (digital camera) has a user-selectable mode in which upon engaging the mode the device detects a face in the field of view of the device and generates a face delimiter on a camera display screen, the delimiter surrounding the initial position of the image of the face on the screen. The device is arranged to indicate thereafter to the user if the device departs from movement along a predetermined concave path P with the optical axis of the device pointing towards the face, such indication being made by movement of the image of the face relative to the delimiter. The camera captures and stores a plurality of images at successive positions along the concave path.

Publication date: 28-06-2012

Imaging apparatus, imaging method, and computer readable storage medium

Number: US20120163659A1
Author: Mika Muto, Yasuo Asakura
Assignee: Individual

An imaging apparatus includes an imaging unit that generates a pair of pieces of image data mutually having a parallax by capturing a subject, an image processing unit that performs special effect processing, which is capable of producing a visual effect by combining a plurality of pieces of image processing, on a pair of images corresponding to the pair of pieces of image data, and a region setting unit that sets a region where the image processing unit performs the special effect processing on the pair of images.

Publication date: 05-07-2012

Image processing method and apparatus

Number: US20120169844A1
Assignee: Individual

An image processing method for creating a disparity image for 3D display from a 2D video image includes: detecting, based on a first image of the 2D video and a second image from a different time, motion vectors between the first and second images for each block of the first image; detecting, from the motion vectors, a most backward vector for the portion whose depth lies farthest back; calculating differential vectors between each motion vector and the most backward vector and giving a closer depth to the blocks of the first image whose motion vectors have larger differential vectors; and creating one or more disparity images from the first image and the depth.
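The block-to-depth mapping this abstract describes can be sketched as follows; treating the smallest-magnitude motion vector as the "most backward" (background) vector is a simplifying assumption for illustration, not the patent's exact criterion.

```python
import numpy as np

def depth_from_motion(motion_vectors):
    """Per-block depth from 2D motion vectors, in the spirit of
    US20120169844A1. Assumption: the 'most backward' vector is the one
    with the smallest magnitude (pure background motion)."""
    flat = motion_vectors.reshape(-1, 2)
    backward = flat[np.argmin(np.linalg.norm(flat, axis=1))]
    # Differential vectors against the background; a larger difference
    # means the block is closer to the viewer.
    diff = np.linalg.norm(motion_vectors - backward, axis=-1)
    peak = diff.max()
    return diff / peak if peak > 0 else diff
```

A disparity image would then shift each block horizontally in proportion to its depth value.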

Publication date: 19-07-2012

Image processing apparatus and method, and program

Number: US20120182400A1
Assignee: Sony Corp

The present invention relates to an image processing apparatus and method, and a program capable of displaying a stereoscopic image having a more appropriate parallax. An image capture apparatus 11 captures a plurality of photographic images P(1) to P(N) in a state of being turned around a center of turn C11. In response to an instruction for displaying an image in which a specific region in an area to be captured is displayed, the image capture apparatus 11 selects two photographic images between which parallax having a predetermined magnitude occurs in a subject in the specific region from among photographic images in which the specific region is displayed, and crops regions in which the subject in the specific region is displayed from these photographic images to produce right-eye and left-eye sub-images. These sub-images have an appropriate parallax and therefore are displayed simultaneously using a lenticular method or the like. Thus, a stereoscopic image with depth can be displayed. The present invention can be applied to a camera.

Publication date: 22-11-2012

Image processing system, image processing method, and program

Number: US20120293693A1
Author: Hironori Sumitomo
Assignee: KONICA MINOLTA INC

An objective of the present invention is to provide a technique capable of generating a virtual viewpoint image without causing visual discomfort. To achieve this, a first image captured from a first viewpoint at a first image capture time and a second image captured at a second, different image capture time are acquired. A first virtual viewpoint image, as it would be captured from a first virtual viewpoint different from the first viewpoint, is generated in a pseudo manner based upon the first image; to each pixel in its non-image-capture area, corresponding to a portion of the subject not captured in the first image, a pixel value is added in accordance with the second image.
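The occlusion-filling step can be sketched in a few lines, assuming the second image has already been registered to the virtual viewpoint (the hard part in practice):

```python
import numpy as np

def fill_non_capture_area(virtual, mask, second):
    """Fill the non-image-capture area of a pseudo virtual-viewpoint image
    with pixel values from a second image taken at another time (the core
    idea of US20120293693A1). Assumption: the second image is already
    registered to the virtual viewpoint, so pixels correspond 1:1."""
    out = virtual.copy()
    out[mask] = second[mask]  # mask marks pixels the first image never saw
    return out
```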

Publication date: 18-04-2013

Stereoscopic image capture device and control method of the same

Number: US20130093847A1
Assignee: Fujifilm Corp

An image for a left eye and an image for a right eye are respectively captured. A corresponding point is searched for in the image for the right eye for each of a plurality of feature points detected from the image for the left eye. The number of corresponding points found by the search is counted, and it is determined whether the count is equal to or more than a predetermined ratio of the number of pixels of the image for the right eye. When the number of corresponding points is less than the predetermined ratio, the image for the right eye is recaptured, and the corresponding point search and the ratio determination are re-executed using the new image for the right eye obtained by the recapture.
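The recapture decision described here — count corresponding points, then compare against a ratio of the right image's pixel count — can be sketched with a toy row-wise SAD matcher; the matcher, the SAD threshold, and the 1% ratio are illustrative assumptions, not the patent's values.

```python
import numpy as np

def count_correspondences(left, right, feats, patch=3, max_sad=10):
    """Toy corresponding-point search: for each left-image feature (y, x),
    slide its patch along the same row of the right image and accept the
    best match if its SAD is under max_sad (an assumed threshold)."""
    h, w = left.shape
    r = patch // 2
    found = 0
    for y, x in feats:
        if not (r <= y < h - r and r <= x < w - r):
            continue  # patch would fall off the image
        tpl = left[y - r:y + r + 1, x - r:x + r + 1].astype(int)
        best = min(
            np.abs(right[y - r:y + r + 1, cx - r:cx + r + 1].astype(int) - tpl).sum()
            for cx in range(r, w - r)
        )
        if best <= max_sad:
            found += 1
    return found

def needs_recapture(n_found, right_shape, min_ratio=0.01):
    """The patent's criterion: recapture the right image when the number of
    matches falls below a predetermined ratio of its pixel count."""
    return n_found < min_ratio * right_shape[0] * right_shape[1]
```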

Publication date: 23-05-2013

Method for stabilizing a digital video

Number: US20130127993A1
Author: Sen Wang
Assignee: Apple Inc

A method for stabilizing an input digital video. Input camera positions are determined for each of the input video frames, and an input camera path is determined representing input camera position as a function of time. A smoothing operation is applied to the input camera path to determine a smoothed camera path, and a corresponding sequence of smoothed camera positions. A stabilized video frame is determined corresponding to each of the smoothed camera positions by: selecting an input video frame having a camera position near to the smoothed camera position; warping the selected input video frame responsive to the input camera position; warping a set of complementary video frames captured from different camera positions than the selected input video frame; and combining the warped input video frame and the warped complementary video frames to form the stabilized video frame.
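The path-smoothing step can be illustrated with a simple moving average over a 1D camera path; the patent does not specify the smoothing operator, so the box filter here is an assumption.

```python
import numpy as np

def smooth_camera_path(path, radius=2):
    """Moving-average smoothing of a 1D camera path (e.g. per-frame x
    position), a minimal stand-in for the smoothing step in US20130127993A1.
    Returns the smoothed path and the per-frame correction each input frame
    would be warped by."""
    padded = np.pad(path, radius, mode="edge")   # extend ends to avoid droop
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    smoothed = np.convolve(padded, kernel, mode="valid")
    corrections = smoothed - path                # shift to apply per frame
    return smoothed, corrections
```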

Publication date: 08-08-2013

Product imaging device, product imaging method, image conversion device, image processing device, image processing system, program, and information recording medium

Number: US20130202154A1
Author: Hiromi Hirano
Assignee: RAKUTEN INC

A product imaging device (121) is provided which makes it easy for a user to capture an image sequence of the entire surroundings of a product. An image sensor unit (201) senses incident light from the external world where the product is disposed and outputs an image representing the result of the sensing. An instruction receiving unit (202) receives an image-capture instruction from the user. A memory unit (203) stores the image sensed by the image sensor unit (201) upon reception of an image-capture instruction. A finder display unit (204) synthesizes the image stored in the memory unit (203) with the image presently sensed by the image sensor unit (201) and displays the synthesized image on a finder screen.

Publication date: 14-11-2013

Stereoscopic image generating apparatus, stereoscopic image generating method, and stereoscopic image generating program

Number: US20130300737A1
Assignee: Fujifilm Corp

At least one of the parallax images for the left and right eyes, to be fusionally displayed to perform stereopsis using binocular parallax, is generated at a resolution or sharpness low enough that the subject in the parallax image is still observable as a stereoscopic image when an observer views it in an observation mode in which the two fusionally displayed parallax images are stereoscopically viewable, and also low enough that the subject is recognizable as a plane image when an observer views it in an observation mode in which the two fusionally displayed parallax images are not stereoscopically viewable.

Publication date: 05-12-2013

Apparatus including function to generate stereoscopic image, and method and storage medium for the same

Number: US20130321592A1
Author: Toshiya Kuno
Assignee: Casio Computer Co Ltd

An apparatus, a method and a storage medium including a function to generate a stereoscopic image are described. According to one implementation, the imaging apparatus includes an imaging lens; a first driving section; an obtaining section and a generating section. The first driving section rotates the imaging lens around an axis along a first direction orthogonal to an optical axis. The obtaining section obtains two image signals corresponding to two optical images which pass through the imaging lens rotated in two states to make a direction of the optical axis relatively different so that a relationship of a position of background with respect to a subject is different. The generating section generates image data of a stereoscopic image based on the two image signals.

Publication date: 19-12-2013

Method and device for obtaining a stereoscopic signal

Number: US20130335524A1
Assignee: Canon Research Center France SAS

The invention relates to a method of obtaining a stereoscopic signal from a sequence of monoscopic images. The method includes a step of obtaining a sequence of monoscopic images captured by an image acquisition apparatus in an acquisition mode enabling several images to be shot in the course of a regular movement substantially tangential to the plane of the lens of the acquisition apparatus. That step is followed by a step of forming pairs of images from the sequence, each pair being formed on the basis of a predetermined temporal distance, and then a step of calibrating the images of the pairs so formed, to improve the visual correspondence between the two images. Finally, a stereoscopic signal is constructed from the pairs so calibrated.

Publication date: 10-01-2019

METHODS, SYSTEMS, AND COMPUTER-READABLE STORAGE MEDIA FOR GENERATING THREE-DIMENSIONAL (3D) IMAGES OF A SCENE

Number: US20190014307A1

Disclosed herein are methods, systems, and computer-readable storage media for generating three-dimensional (3D) images of a scene. According to an aspect, a method includes capturing a real-time image and a first still image of a scene. Further, the method includes displaying the real-time image of the scene on a display. The method also includes determining one or more properties of the captured images. The method also includes calculating an offset in a real-time display of the scene to indicate a target camera positional offset with respect to the first still image. Further, the method includes determining that a capture device is in a position of the target camera positional offset. The method also includes capturing a second still image. Further, the method includes correcting the captured first and second still images. The method also includes generating the three-dimensional image based on the corrected first and second still images.

1. A method for generating a three-dimensional image, the method comprising: using at least one processor and at least one image capture device for: capturing a real-time image and a first still image of a scene; displaying the real-time image of the scene on a display; determining one of camera positional offset and pixel offset with respect to the first still image based on at least one of the captured images, an image sensor property, optical property, focal property, and viewing property of the captured images; determining that the at least one image capture device is in the position indicated by the camera positional offset; capturing a second still image; correcting the captured first and second still images to compensate for at least one of camera vertical shift and rotation on a predetermined axis; and generating the three-dimensional image based on the corrected first and second still images.
2. The method of claim 1, further comprising: determining guides based on one of the real-time image and the first still image; ...
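The "target camera positional offset" can be related to the desired stereo disparity through the standard relation d = f·B/Z; this is generic stereo geometry offered as an illustration, not the patent's specific calculation.

```python
def target_offset(target_disparity_px, subject_depth, focal_length_px):
    """Baseline B the camera must be translated by so that a subject at
    depth Z shows the desired disparity d, from d = f * B / Z rearranged
    to B = d * Z / f. The returned offset is in the units of subject_depth.
    Generic stereo geometry, used here purely as an illustration."""
    return target_disparity_px * subject_depth / focal_length_px
```

For example, with a 1000 px focal length, a subject 2000 mm away needs a 60 mm offset to produce 30 px of disparity.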

Publication date: 03-02-2022

ROAD SURFACE DETECTION DEVICE AND ROAD SURFACE DETECTION PROGRAM

Number: US20220036097A1
Assignee: AISIN CORPORATION

A road surface detection device according to an embodiment includes an image acquisition unit that acquires captured image data output from a stereo camera that captures an imaging area including a road surface on which a vehicle travels, a three-dimensional model generation unit that generates a three-dimensional model of the imaging area including a surface shape of the road surface from a viewpoint of the stereo camera based on the captured image data, and a correction unit that estimates a plane from the three-dimensional model, and corrects the three-dimensional model so as to match an orientation of a normal vector of the plane and a height position of the plane with respect to the stereo camera with a correct value of an orientation of a normal vector of the road surface and a correct value of a height position of the road surface with respect to the stereo camera, respectively.

1. A road surface detection device comprising: an image acquisition unit that acquires captured image data output from a stereo camera that captures an imaging area including a road surface on which a vehicle travels; a three-dimensional model generation unit that generates a three-dimensional model of the imaging area including a surface shape of the road surface from a viewpoint of the stereo camera based on the captured image data; and a correction unit that estimates a plane from the three-dimensional model, and corrects the three-dimensional model so as to match an orientation of a normal vector of the plane and a height position of the plane with respect to the stereo camera with a correct value of an orientation of a normal vector of the road surface and a correct value of a height position of the road surface with respect to the stereo camera, respectively.
2. The road surface detection device according to claim 1, wherein the correction unit matches the orientation of the normal vector of the plane with the correct value of the orientation of the normal vector of the road surface ...
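The plane-estimation step at the heart of the correction unit can be sketched as a least-squares fit; the z = ax + by + c parameterization and the height-correction helper are illustrative assumptions, not the patent's procedure.

```python
import numpy as np

def estimate_plane(points):
    """Least-squares fit of z = a*x + b*y + c to an Nx3 point cloud,
    returning the unit normal and the intercept c. A generic stand-in for
    the plane-estimation step (assumes the plane is not vertical in the
    camera frame, so the z = f(x, y) parameterization is valid)."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    (a, b, c), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    n = np.array([a, b, -1.0])
    return n / np.linalg.norm(n), c

def height_correction(measured_height, correct_height):
    """Offset to add so the estimated plane height matches the known
    camera-to-road height (the 'correct value' in the abstract)."""
    return correct_height - measured_height
```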

Publication date: 24-01-2019

Apparatus for Three-Dimensional Measurement of an Object, Method and Computer Program with Image-based Triggering

Number: US20190025049A1

An apparatus for three-dimensional measurement of an object includes a trigger configured to obtain image information from a measurement camera and to trigger, in dependence on image content of the image information, a measurement output or an evaluation of the image information by an evaluator for determining measurement results. Further, a respective method and a respective computer program are described.

1. Apparatus for three-dimensional measurement of an object, comprising: a trigger configured to acquire image information from a measurement camera and to trigger, in dependence on image content of the image information, forwarding of the image information to an evaluator for determining measurement results or an evaluation of the image information by an evaluator for determining measurement results; wherein the trigger is configured to detect when the image content has shifted with respect to a reference image content by at least a predetermined shift or by more than a predetermined shift and to trigger, in dependence on the detection of a shift, forwarding of the image information or the evaluation of the image information by the evaluator for determining measurement results.
2. Apparatus according to claim 1, wherein the trigger is configured to trigger, in dependence on the detection of a shift, forwarding of the image information or the evaluation of the image information by the evaluator for determining measurement results, in order to generate measurement results at a specific spatial distance or to obtain measurement results at equal spatial distances.
3. Apparatus according to claim 1, wherein triggering the measurement output is performed exclusively based on the image content.
4. Apparatus according to claim 1, wherein the trigger is configured to perform image analysis and to trigger the measurement output or the evaluation of the image information in dependence on the image analysis.
5. Apparatus according to claim 1, ...
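The shift-based trigger can be illustrated on 1D image rows: estimate the shift by minimizing SAD over candidate offsets, and trigger once it reaches a predetermined amount. The matcher and the threshold are assumptions for illustration.

```python
import numpy as np

def shift_between(ref, img, max_shift=3):
    """Estimate the integer shift between two 1D image rows by minimizing
    the mean absolute difference over candidate shifts (a toy stand-in for
    the apparatus's shift detection)."""
    best, best_err = 0, None
    for s in range(-max_shift, max_shift + 1):
        a = ref[max(0, s):len(ref) + min(0, s)]
        b = img[max(0, -s):len(img) + min(0, -s)]
        err = np.abs(a.astype(int) - b.astype(int)).mean()
        if best_err is None or err < best_err:
            best, best_err = s, err
    return best

def should_trigger(ref, img, min_shift=2):
    """Trigger forwarding/evaluation once the content has shifted by at
    least a predetermined amount (min_shift is an assumed threshold)."""
    return abs(shift_between(ref, img)) >= min_shift
```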

Publication date: 17-02-2022

ENDOSCOPIC IMAGE PROCESSING APPARATUS, ENDOSCOPE SYSTEM, AND METHOD OF OPERATING ENDOSCOPIC IMAGE PROCESSING APPARATUS

Number: US20220051472A1
Author: Takahashi Hideaki
Assignee: OLYMPUS CORPORATION

An endoscopic image processing apparatus is configured to create a three-dimensional shape model of an object by performing processing on an endoscopic image group of an inside of the object, and includes a processor. The processor estimates a self-position of the image pickup device based on the endoscopic image group, calculates a first displacement amount corresponding to a displacement amount of the image pickup device based on an estimation result of the self-position of the image pickup device obtained by the estimation, calculates a second displacement amount corresponding to a displacement amount in a direction parallel to a longitudinal axis direction of the insertion portion, based on a detection signal outputted from an insertion/removal state detection device that detects an insertion/removal state of an insertion portion inserted into the object, and generates scale information in which the first displacement amount and the second displacement amount are associated with each other.

1. An endoscopic image processing apparatus configured to create a three-dimensional shape model of an object by performing processing on an endoscopic image group obtained by causing an image pickup device provided at a distal end portion of an elongated insertion portion to pick up images of an inside of the object, the endoscopic image processing apparatus comprising a processor, the processor estimating a self-position of the image pickup device based on the endoscopic image group, calculating a first displacement amount corresponding to a displacement amount of the image pickup device based on an estimation result of the self-position of the image pickup device obtained by the estimation, calculating a second displacement amount corresponding to a displacement amount in a direction parallel to a longitudinal axis direction of the insertion portion, based on a detection signal outputted from an insertion/removal state detection device that detects an insertion/removal state ...

Publication date: 11-02-2016

Three-dimensional shape measurement device, three-dimensional shape measurement method, and three-dimensional shape measurement program

Number: US20160042523A1
Assignee: Toppan Printing Co Ltd

A device for measuring a three-dimensional shape includes an imaging unit which sequentially outputs a first two-dimensional image being captured and outputs a second two-dimensional image according to an output instruction, the second two-dimensional image having a setting different from that of the first; an output instruction generation unit which generates the output instruction based on a shape defect ratio obtained by generating a three-dimensional model based on the second two-dimensional image outputted by the imaging unit and viewing the three-dimensional model from the viewpoint at which the first two-dimensional image is captured; and a storage unit which stores the second two-dimensional image outputted by the imaging unit.

Publication date: 24-02-2022

CONTROL DEVICE AND MASTER SLAVE SYSTEM

Number: US20220060678A1
Assignee: Sony Group Corporation

Provided is a control device including a control unit that calculates a first positional relationship between an eye of an observer observing an object displayed on a display unit and a first point in a master-side three-dimensional coordinate system, and controls an imaging unit that images the object so that a second positional relationship between the imaging unit and a second point corresponding to the first point in a slave-side three-dimensional coordinate system corresponds to the first positional relationship.

1. A control device comprising a control unit that: calculates a first positional relationship between an eye of an observer observing an object displayed on a display unit and a first point in a master-side three-dimensional coordinate system; and controls an imaging unit that images the object so that a second positional relationship between the imaging unit and a second point corresponding to the first point in a slave-side three-dimensional coordinate system corresponds to the first positional relationship.
2. The control device according to claim 1, wherein the control unit: acquires information regarding a first observation direction of the eye of the observer with respect to the first point in the master-side three-dimensional coordinate system; and controls a second observation direction of the imaging unit with respect to the second point so that the second observation direction corresponds to the first observation direction in the slave-side three-dimensional coordinate system.
3. The control device according to claim 2, wherein the control unit controls the second observation direction so that the second observation direction and the first observation direction are substantially the same.
4. The control device according to claim 3, wherein the control unit controls the second observation direction so that an angle of the first observation direction with respect to a gravity direction and an angle of the second observation direction with respect to ...

Publication date: 06-02-2020

Image processing method and image processing device

Number: US20200043222A1
Assignee: Shenzhen Royole Technologies Co Ltd

A picture processing method and device. The method comprises: a picture processing device collecting N pictures of the same object at K positions, K being an integer greater than or equal to 1, and N being greater than or equal to K (S101); and the picture processing device horizontally splicing at least two of the N pictures to generate a 3D picture, the horizontal splicing being the splicing of two pictures with overlapping parts into one seamless picture (S102). The method and device help address the problems of high cost, poor visual effect, complicated 2D/3D display switching, and low practicality.
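The horizontal-splicing step (S102) can be sketched for the easy case where the overlap width is already known; detecting the overlap is out of scope here, and averaging the shared columns is an assumed blend, not the patent's method.

```python
import numpy as np

def hsplice(left, right, overlap):
    """Splice two images horizontally given a known overlap width in
    columns, averaging the shared columns so the seam is smooth."""
    l_main, l_ov = left[:, :-overlap], left[:, -overlap:].astype(float)
    r_ov, r_main = right[:, :overlap].astype(float), right[:, overlap:]
    blended = (l_ov + r_ov) / 2.0  # simple 50/50 blend over the overlap
    return np.hstack([l_main.astype(float), blended, r_main.astype(float)])
```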

Publication date: 16-02-2017

Systems and methods for obtaining accurate 3D modeling data using UAVs for cell sites

Number: US20170046873A1
Assignee: Etak Systems LLC

Systems and methods for developing a three-dimensional (3D) model of a cell site using an Unmanned Aerial Vehicle (UAV) to obtain photos and/or video include preparing the UAV for flight and programming an autonomous flight path about a cell tower at the cell site, wherein the autonomous flight path comprises a substantially circular flight path about the cell tower with one or more cameras on the UAV facing the cell tower; flying the UAV around the cell tower in a plurality of orbits comprising at least four orbits, each with a different set of characteristics of altitude, radius, and camera angle, wherein the flying comprises at least four orbits for a monopole cell tower and at least five orbits for a self-support/guyed cell tower; obtaining photos and/or video of the cell tower, the cell site, and cell site components during each of the plurality of orbits; and using the photos and/or video to develop the point cloud three-dimensional (3D) model of the cell site.
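A single orbit of the autonomous flight path can be generated with basic trigonometry; the waypoint format (x, y, altitude, yaw) and the camera-yaw convention are assumptions, and the patent varies altitude, radius, and camera angle across orbits.

```python
import math

def orbit_waypoints(radius_m, altitude_m, n_points=36):
    """Waypoints for one circular orbit around a tower at the origin, with
    camera yaw pointing back at the tower. A generic flight-path sketch."""
    wps = []
    for i in range(n_points):
        theta = 2 * math.pi * i / n_points
        x, y = radius_m * math.cos(theta), radius_m * math.sin(theta)
        yaw = math.degrees(math.atan2(-y, -x)) % 360  # face the tower
        wps.append((x, y, altitude_m, yaw))
    return wps
```

Several orbits at different radii, altitudes, and camera angles would be concatenated to cover the whole tower.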

Publication date: 03-03-2022

METHOD AND APPARATUS FOR 3D RECONSTRUCTION OF PLANES PERPENDICULAR TO GROUND

Number: US20220070433A1

An apparatus for three-dimensional (3D) reconstruction includes: an event trigger module that determines whether to perform a 3D reconstruction, a motion estimation module that obtains motion information, and a reconstruction module that receives a first front image having a first view point and a second front image having a second view point, and obtains 3D coordinate values of a camera coordinate system based on the first front image and the second front image. Here, each of the first front image and the second front image includes planes, and each of the planes is perpendicular to the ground and includes feature points.

Publication date: 05-03-2015

Imaging apparatus

Number: US20150061066A1
Author: Kazuaki Murayama
Assignee: Olympus Corp

Provided is an imaging apparatus having a plurality of light receiving parts for each microlens in order to capture a three-dimensional image, while being capable of obtaining a more natural image when creating a two-dimensional image. The imaging apparatus includes: a microlens array (2) having a plurality of microlenses (20) regularly aligned two-dimensionally; an imaging lens for imaging light from a subject onto the microlens array (2); and a plurality of light receiving parts (22L, 22R) disposed for each of the plurality of microlenses (20). The plurality of light receiving parts (22L, 22R) associated with each microlens (20) receive the light from the subject that has been imaged onto the microlens and subject the light to photoelectric conversion. The imaging lens has a pupil which is disposed out of conjugation with a light receiving plane of the light receiving parts (22L, 22R).

Publication date: 10-03-2022

DISPLAY SYSTEM FOR CAPSULE ENDOSCOPIC IMAGE AND METHOD FOR GENERATING 3D PANORAMIC VIEW

Number: US20220078343A1

The present disclosure relates to a display system including a capsule image view, a 3D mini-map, and a 3D panoramic view, and to a method of generating a 3D panoramic view. Specifically, by visualizing the actual movement path of the capsule endoscope, the 3D mini-map makes it possible to infer the shape of an organ and, at the same time, to identify whether the capsule endoscope captured images and what its position and posture were at the primary capture points, improving the accuracy of examination. And since multiple 2D images captured by a single capsule endoscope can be viewed as one 3D panoramic image without changing the structure of the capsule endoscope, the approach is economical and increases the viewing angle of the image, reducing the examination time and the fatigue of the examiner.

1. A display system comprising: a storage unit configured to receive images captured by a capsule endoscope; a controller configured to convert the images into image data; a manipulation unit configured to generate a manipulation command; and a display unit configured to display the image data.
2. The display system of claim 1, wherein the image captured by the capsule endoscope records any one or more pieces of information selected from the group consisting of position information and posture information of the capsule endoscope.
3. The display system of claim 1, wherein the image data comprises a capsule image view, and further comprises any one or more selected from the group consisting of a 3D mini-map and a 3D panoramic view.
4. The display system of claim 3, wherein the 3D mini-map is configured to display any one or more pieces of information selected from the group consisting of path information of the capsule endoscope, position information of the capsule endoscope, posture information of the capsule endoscope, and 3D-panoramic view ...

Publication date: 27-02-2020

NAVIGATING AMONG IMAGES OF AN OBJECT IN 3D SPACE

Number: US20200065558A1

A three-dimensional model of an object is employed to aid in navigation among a number of images of the object taken from various viewpoints. In general, an image of an object such as a digital photograph is displayed in a user interface or the like. When a user selects a point within the display that corresponds to a location on the surface of the object, another image may be identified that provides a better view of the object. In order to maintain user orientation to the subject matter while navigating to this destination viewpoint, the display may switch to a model view and a fly-over to the destination viewpoint may be animated using the model. When the destination viewpoint is reached, the display may return to an image view for further inspection, marking, or other manipulation by the user.

1. A method of navigating among a number of images taken of an object, the method comprising: displaying a first image of an object, the first image selected from a number of images taken of the object, the first image showing a surface of the object from a first viewpoint; receiving a selection of a location on the surface of the object; selecting a second image of the object from the number of images taken of the object, the second image selected to provide an improved view of the location on the surface of the object from a second viewpoint; rendering an animation of a spatial transition from the first viewpoint to the second viewpoint using a three-dimensional model of the object; displaying the animation; and displaying the second image upon reaching the second viewpoint in the animation.
2. The method of wherein receiving the selection of the location includes receiving the selection from within a graphical user interface.
3. The method of wherein receiving the selection includes at least one of a mouse input and a touch screen input.
4. The method of wherein the three-dimensional model includes a texture map that is derived from at least one image of the number of images.
5. ...

Publication date: 27-02-2020

ROBOTIC MAPPING FOR TAPE LIBRARIES

Number: US20200065952A1

A method for diagnosing problems in tape libraries is disclosed. In one embodiment, such a method includes attaching, to a robot of a tape library, one or more scanning devices to scan internal components and features of the tape library. As the robot moves within the tape library, the method captures, using the one or more scanning devices, three-dimensional (3D) data describing physical locations of the internal features and components. This 3D data is compiled to generate a map of the internal components and features. The method compares the map to a 3D model of the tape library to identify differences between the map and the 3D model. Problems within the tape library may be identified from these differences. A corresponding system and computer program product are also disclosed.

1. A method for diagnosing problems in tape libraries, the method comprising: attaching, to a robot of a tape library, at least one scanning device to scan internal components of the tape library; as the robot moves within the tape library, capturing, using the at least one scanning device, three-dimensional (3D) data describing physical locations of the internal components; compiling the 3D data to generate a map of the internal components; comparing the map to a 3D model of the tape library to identify differences between the map and the 3D model; and identifying problems within the tape library from the differences.
2. The method of claim 1, wherein the at least one scanning device includes at least one infrared (IR) scanning device.
3. The method of claim 1, wherein the at least one scanning device gathers temperature data associated with the internal components.
4. The method of claim 3, further comprising comparing temperature data associated with the map to temperature data associated with the 3D model.
5. The method of claim 4, wherein identifying problems further comprises identifying differences between the temperature data associated with the map and the temperature data ...
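The map-versus-model comparison can be sketched as a per-component distance check; the component naming, coordinate format, and tolerance are illustrative assumptions.

```python
import math

def find_displaced(map_pos, model_pos, tol=1.0):
    """Compare scanned component positions (the map) against the 3D model
    and return components that are missing from the scan or displaced by
    more than tol. Units and tol=1.0 are assumed for illustration."""
    return [name for name, expected in model_pos.items()
            if name not in map_pos or math.dist(map_pos[name], expected) > tol]
```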

27-02-2020 publication date

Aerial Image Acquisition Method and System for Investigating Traffic Accident Site by Unmanned Aerial Vehicle

Number: US20200066169A1
Author: Chen Sijia, LI Xiying
Assignee:

Disclosed are an aerial image acquisition method and a system for investigating a traffic accident site by an unmanned aerial vehicle. The method comprises selecting a corresponding unmanned aerial vehicle low-altitude shooting scheme of traffic accident site according to whether three-dimensional site reconstruction or site animation simulation is needed; selecting and calculating shooting parameters of the unmanned aerial vehicle according to the unmanned aerial vehicle low-altitude shooting scheme selected; and shooting the traffic accident site according to the unmanned aerial vehicle low-altitude shooting scheme selected and the shooting parameters of the unmanned aerial vehicle, to obtain an aerial image sequence of the traffic accident site. 1.-10. (canceled) 11. An aerial image acquisition method for investigating a traffic accident site by an unmanned aerial vehicle, comprising the steps of: selecting a corresponding unmanned aerial vehicle low-altitude shooting scheme of traffic accident site according to whether three-dimensional site reconstruction or site animation simulation is needed, the unmanned aerial vehicle low-altitude shooting scheme of traffic accident site including a global-scope “S-shaped” itinerant vertical high-angle shooting scheme and a combined shooting scheme, and the combined shooting scheme being formed by superimposing the global-scope “S-shaped” itinerant vertical high-angle shooting scheme with a partial three-dimensional site multilevel surrounding inclined shooting scheme; selecting and calculating shooting parameters of the unmanned aerial vehicle according to the unmanned aerial vehicle low-altitude shooting scheme selected; and shooting the traffic accident site according to the unmanned aerial vehicle low-altitude shooting scheme selected and the shooting parameters of the unmanned aerial vehicle, to obtain an aerial image sequence of the traffic accident site. 12. The aerial image acquisition method for investigating a traffic ...

28-02-2019 publication date

PHOTOGRAMMETRIC SYSTEM AND PHOTOGRAMMETRIC METHOD

Number: US20190068952A1
Author: NISHITA Nobuyuki
Assignee: TOPCON CORPORATION

A photogrammetric system includes a camera installed in a movable body and including a shutter unit that moves an exposed portion across an imaging surface from one side to the other side for exposure to capture an image, a measuring device configured to measure a position at which the camera captures the image, and a photogrammetry data generator configured to extract a feature point from the image captured by the camera, calculate a feature point capture position at which the feature point is captured based on a measurement result of the measuring device and a moving time of the exposed portion, and generate photogrammetry data including the feature point capture position. 1. A photogrammetric system comprising:a camera installed in a movable body, the camera including a shutter unit that unidirectionally moves an exposed portion across an imaging surface from one side to the other side, the camera being configured to capture an image by causing the shutter unit to expose the imaging surface from the one side to the other side;a capture position measuring unit configured to measure a capture position at which the camera captures the image; anda photogrammetry data generator configured to extract a feature point from the image captured by the camera, calculate a feature point capture position at which the feature point is captured based on the capture position measured by the capture position measuring unit and a moving time where the exposed portion is moved by the shutter unit, and generate data for use in photogrammetry, the data including the feature point capture position.2. The photogrammetric system of claim 1 , wherein the photogrammetry data generator extracts a feature point from the image captured by the camera claim 1 , calculates an offset time that is a time period from when the camera starts capturing the image to when a portion of the feature point is exposed claim 1 , from a relation between a position of the feature point in the image and the ...
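The offset-time idea above (a moving shutter exposes different sensor rows at different instants, so the capture position of a feature must be interpolated along the camera's motion) can be sketched as follows, assuming a constant-velocity camera; all names and values are illustrative, not from the patent:

```python
# Illustrative sketch: interpolate the camera position at the instant a
# given sensor row was exposed by the moving shutter (constant velocity).
def feature_capture_position(start_pos, velocity, row, total_rows, readout_time):
    offset_t = (row / total_rows) * readout_time  # time until this row is exposed
    return tuple(p + v * offset_t for p, v in zip(start_pos, velocity))

# Camera at (0, 0, 50) m moving 10 m/s in x; the feature lies on the middle row.
pos = feature_capture_position((0.0, 0.0, 50.0), (10.0, 0.0, 0.0),
                               row=500, total_rows=1000, readout_time=0.02)
print(pos)  # ≈ (0.1, 0.0, 50.0)
```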

19-03-2015 publication date

Provision of stereoscopic video camera views to aircraft passengers

Number: US20150077516A1
Author: Gerald Coto-Lopez
Assignee: AIRBUS OPERATIONS GMBH

A processing unit for providing stereoscopic video data on board an aircraft, a system comprising the processing unit, an aircraft comprising the system, a method for providing stereoscopic video data on board an aircraft, as well as a computer program for executing the method. The processing unit comprises: a processing component configured to process at least a part of first video data and second video data to form stereoscopic video data, wherein the first and the second video data represent video images of one or more views external to the aircraft; and a forwarding component configured to forward the formed stereoscopic video data to one or more display devices provided on board the aircraft for presentation to passengers of the aircraft.

05-03-2020 publication date

TRACKING-DISTANCE-MEASURING SYSTEM FOR TORSO TRACKING AND METHOD THEREOF

Number: US20200072972A1
Author: LIU Chien-Hung
Assignee:

A tracking-distance-measuring system capable of tracking a torso object is provided. The tracking-distance-measuring system includes: an image sensor, a controller, a distance-measuring device, and an actuator device. The image sensor is configured to capture an input image. The controller is configured to analyze the input image to recognize a torso object from the input image, and calculate an offset distance between a center of the torso object and a central axis of the input image. The actuator device is configured to carry the distance-measuring device. The controller controls the actuator device to calibrate an offset angle between the distance-measuring device and the recognized torso object according to the offset distance. In response to calibrating the offset angle, the distance-measuring device emits energy and receives reflected energy to detect an object distance of the torso object. 1. A tracking-distance-measuring system capable of tracking a torso object , comprising:an image sensor, configured to capture an input image;a controller, configured to analyze the input image to recognize a torso object from the input image, and calculate an offset distance between a center of the torso object and a central axis of the input image;a distance-measuring device, coupled to the controller; andan actuator device, coupled to the controller, and configured to carry the distance-measuring device;wherein the controller controls the actuator device to calibrate an offset angle between the distance-measuring device and the recognized torso object according to the offset distance;wherein in response to calibrating the offset angle, the distance-measuring device emits energy and receives reflected energy to detect an object distance of the torso object.2. 
The tracking-distance-measuring system as claimed in claim 1 , wherein the distance-measuring device comprises an ultrasonic emitter and an ultrasonic receiver claim 1 ,wherein the distance-measuring device controls ...
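The offset between the torso center and the image's central axis, and the corresponding offset angle, can be illustrated with a simple pinhole-camera sketch; the 60° field of view and the function names are assumptions, not from the patent:

```python
import math

# Illustrative sketch: convert the pixel offset between the torso center
# and the image's central axis into an angle, assuming a pinhole camera.
def offset_angle_deg(torso_center_x, image_width, hfov_deg=60.0):
    offset_px = torso_center_x - image_width / 2           # signed offset distance
    focal_px = (image_width / 2) / math.tan(math.radians(hfov_deg / 2))
    return math.degrees(math.atan2(offset_px, focal_px))

print(offset_angle_deg(960, 1920))   # 0.0 — torso already on the central axis
print(offset_angle_deg(1920, 1920))  # ≈ 30.0 — torso at the right image edge
```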

07-03-2019 publication date

DEVICE AND METHOD TO RECONSTRUCT IN 3D THE SURFACE OF A COMPLETE LOOP AROUND A SUBJECT

Number: US20190075285A1
Author: THIRION Jean-Philippe
Assignee: QuantifiCare

The device and method are intended for reconstructing in 3-Dimensions a complete 360° loop of a subject (). 1. A device to achieve 3D reconstruction of a surface of a 360° loop of an object or a body (), comprising: at least one passive stereovision camera 3D equipped with a double optics (), and a turn table () which is carrying the object or body to be imaged () or, alternatively, a rotating frame () carrying the at least one 3D camera () and rotating around the object or body to be imaged (), configured such that the object or body () is within the field of view of the at least one 3D camera (), and a control mean () that manage the rotation of the turn table () or, respectively, a control mean () that manage the rotation of the rotating frame (), and a control mean () for remote triggering of the at least one 3D camera () and for acquiring stereo pairs of images according to viewing angles covering a 360° loop around the object or body (), and computation means () for the 3D reconstruction of 3D surfaces from the acquired stereo pairs of images, and computation means () for the stitching in 3D of the acquired 3D surfaces into a comprehensive 3D representation of a complete 360° loop of the surface of the imaged object or body (). 2. The device of claim 1, wherein the computation means () used for stitching are comprising running a matching algorithm to match the reconstructed 3D surfaces in order to stitch these surfaces together once matched. 3. The device of claim 1, wherein the computation means () used for stitching is comprising a looping algorithm which is adjusting the relative position of the successive reconstructed 3D surfaces in order to spread evenly the matching differences between the matched successive reconstructed 3D surfaces. 4. The device of claim 1, comprising a turn table () ...

14-03-2019 publication date

INTRAORAL SCANNER WITH DENTAL DIAGNOSTICS CAPABILITIES

Number: US20190076026A1
Assignee:

Methods and apparatuses for generating a model of a subject's teeth. Described herein are intraoral scanning methods and apparatuses for generating a three-dimensional model of a subject's intraoral region (e.g., teeth) including both surface features and internal features. These methods and apparatuses may be used for identifying and evaluating lesions, caries and cracks in the teeth. Any of these methods and apparatuses may use minimum scattering coefficients and/or segmentation to form a volumetric model of the teeth. 1. A method for generating a three-dimensional (3D) volumetric model of a subject's teeth using an intraoral scanner , the method comprising:taking a plurality of images into the teeth using an infrared (IR) wavelength with the intraoral scanner as the intraoral scanner is moved over the teeth, so that multiple images of a same internal region of the teeth are imaged;determining, for each of the plurality of images into the teeth, a position of the intraoral scanner relative to the subject's teeth;using the position of the intraoral scanner to project a plurality of grid points corresponding to an inner volume of the patient's teeth on each of the plurality of images into the teeth;determining a scattering coefficient for the projected grid points from the plurality of images into the teeth; andforming the 3D volumetric model of the subject's teeth including internal features using the scattering coefficient for the projected grid points.2. The method of claim 1 , wherein determining the scatter coefficient for the projected grid points comprises determining the minimum scatter coefficient for each of the projected grid points.3. The method of claim 1 , wherein the IR wavelength comprises a near-IR wavelength.4. The method of claim 1 , wherein forming the 3D volumetric model comprises including a model of the gums relative to the teeth.5. 
The method of claim 1 , further comprising capturing a 3D color model of the teeth as the intraoral scanner is ...
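The minimum-scattering-coefficient step (claim 2: keep, for each projected grid point, the minimum coefficient observed across images) can be sketched as follows; the data layout and values are illustrative, not from the patent:

```python
# Illustrative sketch: each inner list holds the scattering coefficients
# one image assigns to the projected grid points; keep the per-point minimum.
def min_scatter(coeff_per_image):
    n_points = len(coeff_per_image[0])
    return [min(img[p] for img in coeff_per_image) for p in range(n_points)]

images = [[0.9, 0.4, 0.7, 0.2],   # three images observing
          [0.5, 0.6, 0.3, 0.8],   # the same four grid points
          [0.7, 0.2, 0.9, 0.4]]
print(min_scatter(images))  # [0.5, 0.2, 0.3, 0.2]
```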

31-03-2022 publication date

METHOD AND APPARATUS FOR SCANNING AND PRINTING A 3D OBJECT

Number: US20220103709A1
Assignee: ML Netherlands C.V.

A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured in different orientations and distances from the object and combined into a composite image representing an three-dimensional image of the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three dimensional depth map. Coordinates of the points in the depth map may be estimated with a level of certainty. The level of certainty may be used to determine which points are included in the composite image. The selected points may be smoothed and a mesh model may be formed by creating a convex hull of the selected points. The mesh model and associated texture information may be used to render a three-dimensional representation of the object on a two-dimensional display. Additional techniques include processing and formatting of the three-dimensional representation data to be printed by a three-dimensional printer so a three-dimensional model of the object may be formed. 1. 
A portable electronic device, comprising: a camera; at least one … process a plurality of images acquired with the camera to form a representation of an object depicted in the plurality of images, wherein the representation indicates a surface of the object in three-dimensional space, and wherein the processing comprises: forming multiple localized point clouds, each localized point cloud comprising probabilities associated with respective points in the localized point cloud indicative of a probability that the representation accurately indicates a location of a first feature of a plurality of features; fusing the localized point clouds into a combined point cloud; adjusting one or more points in the combined point cloud to reduce inconsistency; and smoothing the combined point cloud based at least in part on the probabilities associated with the localized point clouds.
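A toy illustration of certainty-weighted fusion of repeated observations of one point, in the spirit of the probability-based smoothing above; the weighted-average scheme is an assumption, not the patent's algorithm:

```python
# Illustrative sketch: fuse repeated (coordinate, probability) observations
# of the same 3D point into one coordinate via a certainty-weighted average.
def fuse_point(observations):
    total_w = sum(p for _, p in observations)
    return tuple(sum(c[i] * p for c, p in observations) / total_w
                 for i in range(3))

obs = [((1.0, 0.0, 0.0), 0.9),   # confident observation
       ((1.2, 0.0, 0.0), 0.1)]   # uncertain observation
print(fuse_point(obs))  # ≈ (1.02, 0.0, 0.0) — pulled toward the confident one
```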

05-05-2022 publication date

METHOD FOR PLANNING THREE-DIMENSIONAL SCANNING VIEWPOINT, DEVICE FOR PLANNING THREE-DIMENSIONAL SCANNING VIEWPOINT, AND COMPUTER READABLE STORAGE MEDIUM

Number: US20220139040A1
Assignee:

Disclosed in the embodiments of the present disclosure are a method for planning three-dimensional scanning viewpoint, a device for planning three-dimensional scanning viewpoint and a computer readable storage medium. After a low-precision digitalized model of an object to be scanned is acquired, viewpoint planning calculation is performed, on the basis of a viewpoint planning algorithm, on point cloud data in the low-precision digitalized model, and then the positions and line-of-sight directions of a plurality of viewpoints in space are calculated when a three-dimensional sensor needs to perform three-dimensional scanning on said object. Calculating viewpoints of a three-dimensional sensor by means of a viewpoint planning algorithm can effectively improve the accuracy and scientific nature of sensor posture determination, greatly improving the efficiency of viewpoint planning, and reducing the time consumed in the whole three-dimensional measurement process.

21-03-2019 publication date

Image capturing device and method thereof

Number: US20190089945A1
Assignee: Telefonaktiebolaget LM Ericsson AB

An image capturing device is provided comprising an image sensor for capturing a first image of a scene, a light source for illuminating the scene with a first flash of coded light, and a network interface for communicating with a communications network and/or a further image capturing device. The device is operative to encode information into the first flash, enabling retrieval of the first image from a first data storage, capture the first image, and store the first image in the first data storage. Optionally, the device may be operative to detect a second flash of coded light emitted by the further image capturing device, decode information enabling retrieval of a second image captured by the further image capturing device from a second data storage, retrieve the second image, and create a 3D model from the first image and the second image.

05-04-2018 publication date

Detection method and detection apparatus for detecting three-dimensional position of object

Number: US20180095549A1
Assignee: FANUC Corp

A detection apparatus for detecting a three-dimensional position of an object includes a feature point detecting unit that, with consecutive or at least alternately consecutive two images among multiple images sequentially imaged when a robot is moving being a first image and a second image, detects multiple feature points in the second image including one feature point detected in the first image; a distance calculating unit that calculates a distance between the one feature point of the first image and the multiple feature points of the second image; and a feature point determining unit that determines a feature point for which the distance is the shortest. With consecutive or at least alternately consecutive next two images being the first image and the second image, processing for determining a feature point for which the distance is the shortest is repeated, thereby tracking the feature points of the object.
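The distance-based tracking step above (pick, among the feature points of the second image, the one nearest to a feature point of the first image) can be sketched as follows; coordinates and names are illustrative, not from the patent:

```python
# Illustrative sketch: track a feature by choosing the candidate in the
# second image with the shortest (squared) distance to the first-image point.
def track_feature(prev_pt, candidates):
    return min(candidates,
               key=lambda c: (c[0] - prev_pt[0]) ** 2 + (c[1] - prev_pt[1]) ** 2)

prev = (100, 100)                          # feature point in the first image
cands = [(250, 40), (104, 98), (90, 160)]  # feature points in the second image
print(track_feature(prev, cands))  # (104, 98)
```

Repeating this over each consecutive (or at least alternately consecutive) image pair yields the track of the object's feature points described in the abstract.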

08-04-2021 publication date

Operating Method of Three-Dimensional Facial Diagnosis Apparatus

Number: US20210105452A1
Assignee: KOREA INSTITUTE OF ORIENTAL MEDICINE

The present invention relates to a three-dimensional face image capturing method and comprises the steps of: capturing a face region of a user in a direction from a right chin to a left forehead; capturing the face region in a direction from a left forehead to a left chin; capturing the face region in a direction from a left chin to a right forehead; and capturing the face region in a direction from a right forehead to a center of face. 1. An operating method of a three-dimensional (3D) face diagnosis apparatus for obtaining 3D image information including face depth information of the front and side views of a face of a user by capturing the face by elevating a camera in the direction of a vertical axis and rotating the camera about the vertical axis along a capturing trajectory around the face of the user by means of a camera moving instrument , the operating method performed at least temporarily by a computer , the operating method comprising:capturing a face region of a user in the direction from center of the face to the right chin;capturing the face region in the direction from the right chin to the right forehead;capturing the face region in the direction from the right forehead to the left chin;capturing the face region in the direction from the left chin to the left forehead; andcapturing the face region in the direction from the left forehead to center of the face.2. The operating method of claim 1 , wherein a moving path in the capturing of the face region in the direction from the right forehead to the left chin passes through center of the face of the user.3. The operating method of claim 2 , wherein a moving path in the capturing of the face region in the direction from the left chin to the left forehead and a moving path in the capturing of the face region in the direction from the right chin to the right forehead convexly move outward from the face of the user.4. The operating method of claim 3 , wherein the capturings of the face region comprise ...

26-03-2020 publication date

ROBOTIC LASER GUIDED SCANNING SYSTEMS AND METHODS OF SCANNING

Number: US20200099917A1
Author: Lee Seng Fook
Assignee:

A robotic laser guided system for scanning of an object includes a processor configured to define a laser center co-ordinate and a relative width for the object from a first shot of the object; and define an exact position for taking each of the one or more shots after the first shot. The exact position for taking the one or more shots is defined based on the laser center co-ordinate and the relative width. The system includes a feedback module for providing at least one feedback about the exact position for taking the shots; a motion-enabling module comprising at least one wheel for enabling a movement to the exact position for taking the shots one by one based on the feedback; cameras for capturing the shots. The processor may stitch and process the shots to generate at least one 3D model comprising a scanned image of the object. 2. The laser guided scanning system of further comprising a laser light configured to switch from a red color to a green color and vice versa claim 1 , wherein the laser light is further configured to indicate the exact position for taking each of the one or more shots separately by turning to the green color.3. The laser guided scanning system of claim 1 , wherein the one or more cameras takes the one or more shots of the object one by one based on the laser center co-ordinate and a relative width of the first shot.4. The laser guided scanning system of claim 2 , wherein the processor is further configured to define a new position co-ordinate for taking a next shot of the one or more shots based on the laser center co-ordinate and the relative width of the first shot.5. The laser guided scanning system of claim 1 , wherein the object comprises at least one of a symmetrical object and an unsymmetrical object.6. A scanning system for a three dimensional (3D) scanning of an object claim 1 , comprising: define a laser center co-ordinate for the object from a first shot of the object, wherein the object comprising at least one of a ...

10-07-2014 publication date

Solid-state imaging apparatus

Number: US20140191356A1
Assignee: Panasonic Corp

A solid-state imaging apparatus is disclosed in which, in a first unit cell, light is collected to maximize an amount of light received when the light is incident at a first angle-of-incidence, and in a second unit cell adjacent to the first unit cell, light is collected to maximize an amount of light received when the light is incident at a second angle-of-incidence, the amount of light received when the light is incident at a third angle-of-incidence on the first unit cell is equal to the amount of light received when the light is incident at the third angle-of-incidence on the second unit cell, the first angle-of-incidence is greater than the third angle-of-incidence by a predetermined amount, and the second angle-of-incidence is smaller than the third angle-of-incidence by the predetermined amount.

26-04-2018 publication date

System and method for generating and releasing stereoscopic video films

Number: US20180115768A1
Author: Alexander KLESZCZ

The invention relates to a system (1) for releasing a stereoscopic video film (2). The system (1) has a data processing unit (3), which is configured to receive and to process a monoscopic video film (7), and to release the stereoscopic video film (2). The monoscopic video film (7) has been recorded using a video recording device (4) having one single objective (8). The system (1) is characterized in that the data processing unit (3) is configured to receive and to evaluate a motion information (14) allocated to the monoscopic video film (7), or to determine the motion information (14) to be allocated to the monoscopic video film (7). The motion information (14) is characterized by a motion direction (27) of the video recording device (4) in regard to a filmed object (11). The data processing unit (3) is configured to generate the stereoscopic video film (2) from two content-identical and temporally delayed monoscopic video films.
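The core idea (a stereoscopic film built from two content-identical, temporally delayed copies of one monoscopic film) can be sketched by pairing each frame with a delayed one; the delay of two frames is illustrative, not from the patent:

```python
# Illustrative sketch: pair each frame of a monoscopic stream with a
# temporally delayed frame of the same stream to form left/right views.
def stereo_pairs(frames, delay):
    return [(frames[i], frames[i + delay])
            for i in range(len(frames) - delay)]

frames = ["f0", "f1", "f2", "f3", "f4"]
print(stereo_pairs(frames, delay=2))  # [('f0', 'f2'), ('f1', 'f3'), ('f2', 'f4')]
```

In practice the delay (and which copy serves as the left eye) would depend on the motion direction of the recording device relative to the filmed object, as the abstract notes.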

09-06-2022 publication date

Device for detecting a three-dimensional structure

Number: US20220180504A1
Assignee: SENSWORK GMBH

The invention relates to a device for detecting a three-dimensional structure, comprising an imaging device, especially a camera, which is adjustable in the z-direction, a control device which is designed to record an image of a first plane and, after adjustment of the imaging device in the z-direction, to record an image of a second plane, and an evaluation device which is designed to interpolate a sub-plane between the first plane and the second plane.
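The evaluation device's interpolation of a sub-plane between the two recorded planes can be sketched with per-pixel linear interpolation; that the device interpolates linearly is an assumption, as are all names and values:

```python
# Illustrative sketch: linearly interpolate a sub-plane image at height z
# between two images recorded at heights z1 and z2.
def interpolate_subplane(z1, img1, z2, img2, z):
    t = (z - z1) / (z2 - z1)
    return [[a + t * (b - a) for a, b in zip(r1, r2)]
            for r1, r2 in zip(img1, img2)]

plane_a = [[0.0, 10.0]]   # image recorded at z = 0.0
plane_b = [[10.0, 30.0]]  # image recorded at z = 1.0
print(interpolate_subplane(0.0, plane_a, 1.0, plane_b, 0.5))  # [[5.0, 20.0]]
```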

25-04-2019 publication date

Robot motion planning for photogrammetry

Number: US20190122425A1
Author: Mason E. SHEFFIELD
Assignee: Lowes Companies Inc

Described herein are systems for generating 3D models using imaging data obtained using an array of light projectors, at least one object boundary detector, and a robotic member with an end effector. A first point cloud of data for an object may be generated based on boundary information obtained by the object boundary detector(s). Dimensions for the object may be determined based on the first point cloud of data. A second point cloud of data may be generated based on the dimensions for the object and a configuration of light projectors where the second point cloud corresponds to potential coordinates for a location where the robotic member and end effector can be positioned along a path around the object to capture the image data of the object. A path may be generated to avoid collision between the object and the robotic member or end effector while optimizing the number of capture location points within the second point cloud of data.

25-04-2019 publication date

CAPTURING LIGHT-FIELD IMAGES WITH UNEVEN AND/OR INCOMPLETE ANGULAR SAMPLING

Number: US20190124318A1
Assignee:

A light-field camera may generate four-dimensional light-field data indicative of incoming light. The light-field camera may have an aperture configured to receive the incoming light, an image sensor, and a microlens array configured to redirect the incoming light at the image sensor. The image sensor may receive the incoming light and, based on the incoming light, generate the four-dimensional light-field data, which may have first and second spatial dimensions and first and second angular dimensions. The first angular dimension may have a first resolution higher than a second resolution of the second angular dimension. 1.-29. (canceled) 30. A light-field camera comprising: an aperture configured to receive incoming light and having a rectangular exit pupil; an image sensor; a microlens array disposed between the aperture and the image sensor, wherein the microlens array comprises a plurality of microlenses arranged in a plurality of rows, each row arranged at a non-zero acute angle relative to the rectangular exit pupil; and wherein the image sensor is configured to generate light-field data based on the incoming light received through the microlens array. 31. The light-field camera of claim 30, wherein the plurality of microlenses comprises a plurality of rectangular microlenses. 32. The light-field camera of claim 30, wherein the plurality of microlenses comprises a plurality of circular microlenses. 33. The light-field camera of claim 30, wherein the plurality of microlenses comprises a plurality of hexagonal microlenses. 34. The light-field camera of claim 30, wherein a width of the rectangular exit pupil is greater than a length of the rectangular exit pupil. 35. The light-field camera of claim 30, wherein the rectangular exit pupil is shaped and oriented, relative to the microlens array, such that the incoming light received through the microlens array forms a tessellated pattern at the image sensor. 36. The light-field camera of claim 30, ...

27-05-2021 publication date

MOBILE TERMINAL AND METHOD FOR CONTROLLING THE SAME

Number: US20210160478A1
Assignee: LG ELECTRONICS INC.

There are provided a mobile terminal including light emitting devices and a method for controlling the same. A mobile terminal includes a camera, a light emitting unit including a plurality of light emitting devices, the light emitting unit emitting light toward a space corresponding to an image received through the camera, and a controller for controlling light emitting devices, which emit light toward a space corresponding to a portion of the image among the plurality of light emitting devices, to be used in extracting depth information of the portion. 1. An electronic device comprising: a light emitting unit comprising a plurality of light generating elements, wherein the plurality of light generating elements are grouped into a plurality of groups; a camera; a controller operably connected to the light emitting unit and the camera and configured to: control at least one of the plurality of groups to emit light to an object, and determine depth information of the object using the at least one image captured via the camera based on the light emitted from the at least one of the plurality of groups to the object, wherein the plurality of light generating elements included in each of the plurality of groups are arranged to form different patterns such that all of the plurality of groups emit different light patterns, wherein the plurality of light generating elements included in each of the plurality of groups generate light to form the different patterns while at least one image of the object is received from the camera, wherein each pattern of the different patterns is formed based on arrangement of the plurality of light generating elements included in a corresponding group of the plurality of groups, and wherein the controller is further configured to control two or more of the plurality of groups to emit light simultaneously during at least a portion of time that the at least one of the plurality of groups are emitting light to the object. 2. The electronic device of ...

09-05-2019 publication date

ADAPTIVE THREE-DIMENSIONAL IMAGING SYSTEM AND METHODS AND USES THEREOF

Number: US20190141226A1
Author: Lee Ying Chiu Herbert
Assignee: Marvel Research Ltd.

An adaptive 3D imaging system comprising an imaging part and a lens part detachably connected thereto; the imaging part comprising a sensor and a reflector configured to transmit a plurality of captured light field images to the sensor; wherein the lens part comprising a first camera lens positioned at a first end of the lens part, a second camera lens positioned at a second end of the lens part, an entrance pupil plane and matching device positioned between the first camera lens and the second camera lens and being adaptive to different focal lengths of the second camera lens, an internal reflection unit positioned between the first camera lens and the entrance pupil plane and matching device and configured to decompose the captured light field images and refract them into a plurality of multiple secondary images with different angular offsets. Methods and uses involving the 3D imaging system are included. 1. An adaptive 3D imaging system comprising:an imaging part and a lens part detachably connected thereto, wherein the lens part has a first end and a second end;the imaging part comprising a sensor and a reflector configured to transmit a plurality of captured light field images to the sensor;wherein the lens part comprises a first camera lens positioned at the first end of the lens part, a second camera lens positioned at the second end of the lens part, an entrance pupil plane and matching device positioned between the first camera lens and the second camera lens and being adaptive to different focal lengths of the second camera lens, an internal reflection unit positioned between the first camera lens and the entrance pupil plane and matching device and configured to decompose the captured light field images and refract them into a plurality of secondary images with different angular offsets.2. 
The adaptive 3D imaging system of claim 1 , wherein the imaging part further comprises a compound eye lens configured to transmit the plurality of captured light field ...

04-06-2015 publication date

Imaging device

Номер: US20150156478A1
Автор: Shuji Ono
Принадлежит: Fujifilm Corp

An imaging device includes a multifocal main lens having different focal distances for a plurality of regions, an image sensor having a plurality of pixels configured of two-dimensionally arranged photoelectric converting elements, a multifocal lens array having a plurality of microlens groups at different focal distances disposed on an incident plane side of the image sensor, and an image obtaining device which obtains from the image sensor, a plurality of images for each of the focal distances obtained by combining the multifocal main lens and the plurality of microlens groups at different focal distances.

07-05-2020 publication date

PORTABLE 3D SCANNING SYSTEMS AND SCANNING METHODS

Number: US20200145639A1
Author: Lee Seng Fook
Assignee:

A portable 3D scanner including a camera for capturing a plurality of image shots of an object for scanning is provided. The camera is mounted on a stack structure configured to expand and close for adjusting a height and an angle of the camera for taking at least one image shot of the plurality of image shots of the object. The stack structure is mounted over a base comprising wheels for movement of the base to multiple positions. The scanner further comprises a processor for: determining a laser center of the object from a first image shot; determining a radius between the object and the center of the camera, wherein the base may move around the object based on the radius for covering a 360-Degree view of the object; and processing and stitching the plurality of image shots for generating a 3D scanned image of the object. 1. A portable 3D scanner comprising: at least one camera for capturing a plurality of image shots of an object for scanning, wherein the at least one camera is mounted on a stack structure configured to expand and close for adjusting a height and an angle of the at least one camera for taking at least one image shot of the plurality of image shots of the object, wherein the stack structure is mounted over a base comprising one or more wheels for movement of the base to one or more positions; and a processor for: determining a laser center of the object from a first image shot of the plurality of image shots; determining a radius between the object and a center of the at least one camera, wherein the base moves around the object based on the radius for covering a 360-Degree view of the object; and processing and stitching the plurality of image shots for generating a 3D scanned image of the object. 2.
The portable 3D scanner of claim 1, wherein the processor is configured to determine the one or more position coordinates for taking the plurality of image shots of the object for completing the 360-Degree view of the object; and enable a movement of ...
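The radius-based 360-Degree sweep this abstract describes can be sketched as evenly spaced base positions on a circle around the laser center; the function name, the even angular spacing, and the `num_shots` parameter are illustrative assumptions, not claim language.

```python
import math

def capture_positions(center, radius, num_shots):
    """Evenly spaced base positions on a circle around the object.

    center: (x, y) laser center of the object; radius: determined
    distance between the object and the camera center. Returns
    (x, y, heading) tuples with the heading pointing back at the object.
    """
    positions = []
    for k in range(num_shots):
        theta = 2 * math.pi * k / num_shots
        x = center[0] + radius * math.cos(theta)
        y = center[1] + radius * math.sin(theta)
        heading = math.atan2(center[1] - y, center[0] - x)
        positions.append((x, y, heading))
    return positions

# Eight shots around the object give 45-degree increments.
poses = capture_positions((0.0, 0.0), 2.0, 8)
```

Stitching the shots taken at these poses would then yield the 360-Degree 3D scan.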

09-06-2016 publication date

Tear line 3D measurement apparatus and method

Number: US20160161246A1

A non-destructive method of measuring tear lines formed in a surface of a resilient automotive trim panel configured for overlaying an inflatable safety device includes the steps of periodically selecting a trim panel for testing from a flow of in-process trim panels, mounting the selected trim panel to a mounting jig configured to support a region of the selected trim panel adjacent a tear line and to temporarily fold said selected trim panel to expose opposed edges forming at least a portion of the tear line, scanning a 3D image of the opposed edges, storing said 3D image as data in an associated processor, and removing the selected trim panel from the mounting jig. The mounting jig includes a base forming upwardly facing longitudinally elongated converging guide surfaces intersecting at a common apex, and a cover member forming downwardly facing longitudinally elongated converging support surfaces intersecting at a common apex.

28-08-2014 publication date

Laser frame tracer

Number: US20140240460A1
Assignee: Pro Fit Optix Inc

A laser frame tracer ( 12 ) including a laser measuring unit ( 20 ) with a laser ( 36 ) and one or more cameras ( 38, 40 ) for optically measuring dimensions of eyeglass frames ( 10 ). A frame carrier ( 22 ) is provided for moving the eyeglass frames ( 10 ) through a laser line emitted by the laser ( 36 ). The frame carrier ( 22 ) includes a linear carriage ( 44 ) and a rotary carriage ( 88 ). Movement of the linear carriage ( 44 ) and the rotary carriage ( 88 ) are controlled by an on-board computer ( 116 ) which collects image data from the one or more cameras ( 38, 40 ). Image data is processed to determine a 3D model from which selected dimensions for the eyeglass frames ( 10 ) may be measured. The dimensions may be stored in a cloud database for access by others in cutting lenses to fit the eyeglass frames ( 10 ).

17-06-2021 publication date

DEVICE AND METHOD FOR ASSISTING IN 3D SCANNING A SUBJECT

Number: US20210185295A1
Assignee:

A device for assisting in performing a 3D scan of a subject includes a camera structured to capture an image of the subject, an indication device structured to provide an indication, and a processing unit structured to determine a difference between a location of the camera and a desired location of the camera based on the captured image and to control the indication device to provide the indication based on the difference between the location of the camera and the desired location of the camera. 1. A device for performing a 3D scan of a subject, the device comprising: a camera structured to capture an image of the subject; an indication device structured to provide an indication; and a processing unit structured to determine a difference between a location of the camera and a desired location of the camera based on the captured image and to control the indication device to provide the indication based on the difference between the location of the camera and the desired location of the camera. 2. The device of claim 1, wherein the processing unit is structured to determine the difference between the location of the camera and the desired location of the camera by determining a difference between a location of the subject in the captured image and a desired location of the subject in the captured image. 3. The device of claim 2, wherein the processing unit is structured to determine one or more landmarks of the subject in the captured image. 4. The device of claim 1, wherein the indication is at least one of a haptic indication and an audible indication. 5. The device of claim 1, wherein the indication is a visual indication. 6. The device of claim 1, wherein the processing unit is structured to determine a rate of the indication based on a magnitude of the difference between the location of the camera and the desired location of the camera and to control the indication device to provide the indication at the determined rate. 7.
The device of claim 1, further comprising one or more sensors structured to ...
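One plausible reading of claim 6 (an indication rate driven by the magnitude of the positional difference) is a rate that rises as the camera approaches the desired location; the mapping, the cap, and the thresholds below are invented for illustration only.

```python
import math

def indication_rate(camera_pos, desired_pos, max_rate_hz=8.0, capture_radius=0.05):
    """Haptic/audible indication rate from the remaining distance.

    Hypothetical mapping: the rate grows as the camera nears the
    desired location and saturates at max_rate_hz once the camera is
    within capture_radius metres of it.
    """
    d = math.dist(camera_pos, desired_pos)
    if d <= capture_radius:
        return max_rate_hz
    return min(max_rate_hz, max_rate_hz * capture_radius / d)
```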

23-05-2019 publication date

IMAGE GENERATION APPARATUS AND IMAGE GENERATION METHOD

Number: US20190158809A1
Author: Sasaki Nobuo
Assignee: Sony Interactive Entertainment Inc.

A position and posture acquisition unit of an image generation apparatus acquires position information relating to a view point of a user. A viewscreen setting unit sets a viewscreen. An original image operation unit calculates a correction amount for a pixel from a parallax value of each pixel of an original image and an amount of movement of the view point such that an object looks fixed, and generates, for each of left and right view points, an image reference vector map in which an image reference vector that refers to the position before correction from coordinates of each pixel after correction is stored for each pixel. Along with this, depending upon the view point, the reference destination is set to an original image from a different view point. A display image generation unit refers to an original image on the basis of the image reference vector corresponding to each pixel of the viewscreen to determine a color value. An outputting unit outputs a display image. 1. An image generation apparatus that uses an original image acquired from a plurality of view points to generate an image that allows stereoscopic viewing of an object, comprising: an original image operation unit configured to calculate a displacement to be generated for each pixel of the original image in response to movement of a view point of a user and generate, for each of left and right view points, a vector map in which reference vectors for referring to positions before the displacement on the original image from pixel center positions after the displacement are lined up on an image plane after the displacement; a display image generation unit configured to determine, based on the reference vector at a position on the vector map corresponding to each pixel of a display image, a color value of the display pixel by referring to color values around a corresponding position on the original image, to generate display images corresponding to the left and right view points; and an outputting unit
...
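The per-pixel correction this entry describes (a shift proportional to the pixel's parallax and to the view-point movement) can be sketched for purely horizontal head motion; the linear disparity scaling and the array layout are assumptions, not the patent's actual mapping.

```python
import numpy as np

def reference_vector_map(disparity, dx_eye, baseline):
    """Reference positions on the original image for a shifted viewpoint.

    disparity: HxW parallax values (in pixels) of the original image;
    dx_eye: horizontal viewpoint shift from the capture position;
    baseline: separation of the original left/right viewpoints.
    Returns an HxW array giving, for each output pixel column, the
    source column to sample in the original image (a simplified model
    of the patent's image reference vector).
    """
    h, w = disparity.shape
    cols = np.tile(np.arange(w, dtype=float), (h, 1))
    shift = disparity * (dx_eye / baseline)  # nearer pixels (larger parallax) move more
    return cols - shift
```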

23-05-2019 publication date

MULTI-LENS BASED CAPTURING APPARATUS AND METHOD

Number: US20190158810A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

A multi-lens based capturing apparatus and method are provided. The capturing apparatus includes a lens array including lenses and a sensor including sensing pixels, wherein at least a portion of sensing pixels in the sensor may generate sensing information based on light entering through different lenses in the lens array, and light incident on each sensing pixel, among the portion of the plurality of sensing pixels, may correspond to different combinations of viewpoints. 1. A capturing apparatus comprising: a lens array comprising a plurality of lenses; and a sensor comprising a plurality of sensing pixels, wherein the lens array is disposed on the sensor in a misaligned condition, and a transformation matrix between a first matrix of pixel values generated by the sensing pixels and a second matrix of visual information of spots to be sensed is allowed to be full rank. 2. The capturing apparatus of claim 1, wherein at least a portion of the plurality of sensing pixels in the sensor is configured to generate at least a portion of the pixel values based on light entering through different lenses in the lens array. 3. The capturing apparatus of claim 1, wherein light incident on each sensing pixel corresponds to different combinations of spots to be sensed. 4. The capturing apparatus of claim 1, wherein a number of the plurality of sensing pixels in the sensor and a number of the plurality of lenses in the lens array are relatively prime. 5. The capturing apparatus of claim 1, wherein a ratio between a number of the plurality of sensing pixels and a number of the plurality of lenses is a real number. 6. The capturing apparatus of claim 1, further comprising: a processor configured to generate a captured image based on the first matrix and the transformation matrix. 7. The capturing apparatus of claim 6, wherein the processor is further configured to: generate the first matrix based on the pixel values generated by the sensing pixels, determine the second matrix based on the ...
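Claim 1's full-rank condition and claim 4's coprimality condition are both easy to check numerically; the helper names are mine, and the rank test is a generic least-squares solvability check rather than the patent's own procedure.

```python
import math
import numpy as np

def is_full_rank(T):
    """True if the transformation matrix T between pixel values and
    visual information has full rank, i.e. the sensed first matrix
    determines the second matrix uniquely (up to noise)."""
    T = np.asarray(T, dtype=float)
    return np.linalg.matrix_rank(T) == min(T.shape)

def counts_relatively_prime(num_pixels, num_lenses):
    """Claim 4's condition on the sensing-pixel and lens counts."""
    return math.gcd(num_pixels, num_lenses) == 1
```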

21-05-2020 publication date

METHOD AND APPARATUS FOR SCANNING AND PRINTING A 3D OBJECT

Number: US20200162629A1
Assignee: ML Netherlands C.V.

A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured in different orientations and distances from the object and combined into a composite image representing a three-dimensional image of the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three-dimensional depth map. Coordinates of the points in the depth map may be estimated with a level of certainty. The level of certainty may be used to determine which points are included in the composite image. The selected points may be smoothed and a mesh model may be formed by creating a convex hull of the selected points. The mesh model and associated texture information may be used to render a three-dimensional representation of the object on a two-dimensional display. Additional techniques include processing and formatting of the three-dimensional representation data to be printed by a three-dimensional printer so a three-dimensional model of the object may be formed. 1.-25. (canceled) 26.
A portable electronic device, comprising: a camera configured to acquire a plurality of image frames; one or more inertial sensors configured to acquire a plurality of inertial data, each of the plurality of inertial data being associated with an image frame of the plurality of image frames; and at least one processor configured to, for a first image frame of the plurality of image frames and a first inertial data of the plurality of inertial data: obtain a global depth map from at least one computer-readable medium; determine a first local depth map based on at least the first image frame and the first inertial data; conditionally merge the first local depth map with the global depth map to form a combined depth map; and calculate a surface based on the combined depth map, wherein the at least one processor is further configured to store the surface as a three-dimensional file. ...
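The "conditionally merge" step of claim 26, together with the certainty-based point selection in the abstract, suggests a per-pixel rule like the following; the specific gating rule and the confidence floor are assumptions, not the claimed condition.

```python
import numpy as np

def merge_depth(global_depth, global_conf, local_depth, local_conf, min_conf=0.5):
    """Conditionally merge a local depth map into the global depth map.

    Hypothetical rule: a local estimate replaces the global value only
    where its certainty clears a floor and beats the stored certainty.
    All arrays are HxW; confidences lie in [0, 1].
    """
    take = (local_conf >= min_conf) & (local_conf > global_conf)
    merged = np.where(take, local_depth, global_depth)
    merged_conf = np.where(take, local_conf, global_conf)
    return merged, merged_conf
```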

21-06-2018 publication date

Articulated arm coordinate measurement machine that uses a 2D camera to determine 3D coordinates of smoothly continuous edge features

Number: US20180172428A1
Assignee: Faro Technologies Inc

A measurement device having a camera captures images of an object at three or more different poses. A processor determines 3D coordinates of an edge point of the object based at least in part on the captured 2D images and pose data provided by the measurement device.

28-05-2020 publication date

System and method of determining a virtual camera path

Number: US20200168252A1
Assignee: Canon Inc

A computer-implemented system and method of determining a virtual camera path. The method comprises determining an action path in video data of a scene, wherein the action path includes at least two points, each of the two points defining a three-dimensional position and a time in the video data; and selecting a template for a virtual camera path, the template camera path including information defining a template camera path with respect to an associated template focus path. The method further comprises aligning the template focus path with the determined action path in the scene and transforming the template camera path based on the alignment to determine the virtual camera path.
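Aligning the template focus path with the determined action path amounts to estimating a similarity transform between corresponding 3D points; one standard way to do that (not necessarily the patent's) is the Kabsch/Umeyama procedure, and the resulting transform is then applied to the template camera path.

```python
import numpy as np

def align_template(template_focus, action_points):
    """Least-squares similarity transform (scale s, rotation R,
    translation t) mapping template focus-path points onto action-path
    points, so that s * R @ p + t matches q. The same (s, R, t) is then
    applied to the template camera path to get the virtual camera path.
    Inputs are Nx3 arrays of corresponding points, N >= 3.
    """
    P = np.asarray(template_focus, dtype=float)
    Q = np.asarray(action_points, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    P0, Q0 = P - cp, Q - cq
    H = P0.T @ Q0
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T                               # best-fit rotation (Kabsch)
    s = np.trace(np.diag(S) @ D) / (P0 ** 2).sum()   # best-fit scale (Umeyama)
    t = cq - s * R @ cp
    return s, R, t
```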

13-06-2019 publication date

SYSTEM AND METHOD FOR DEPTH ESTIMATION USING A MOVABLE IMAGE SENSOR AND ILLUMINATION SOURCE

Number: US20190178628A1
Author: LANSEL Steven Paul
Assignee:

Depth estimation may be performed by a movable illumination unit, a movable image sensing unit having a fixed position relative to the illumination unit, a memory, and one or more processors coupled to the memory. The processors read instructions from the memory to perform operations including receiving a reference image and a non-reference image from the image sensing unit and estimating a depth of a point of interest that appears in the reference and non-reference images. The reference image is captured when the image sensing unit and the illumination unit are located at a first position. The non-reference image is captured when the image sensing unit and the illumination unit are located at a second position. The first and second positions are separated by at least a translation along an optical axis of the image sensing unit. Estimating the depth of the point is based on the translation. 1. A system, comprising: a movable illumination unit; a movable image sensing unit having a fixed position relative to the movable illumination unit; a memory; and one or more processors coupled to the memory and configured to read instructions from the memory to cause the system to perform operations comprising: receiving a reference image from the movable image sensing unit, the reference image being captured when the movable image sensing unit and the movable illumination unit are located at a first position; receiving a non-reference image from the movable image sensing unit, the non-reference image being captured when the movable image sensing unit and the movable illumination unit are located at a second position, the second position being separated from the first position by at least a translation along an optical axis of the movable image sensing unit; and estimating a depth of a point of interest that appears in the reference and non-reference images based on the translation along the optical axis of the movable image sensing unit. 2.
The system of claim 1, wherein the ...
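For a pinhole camera, the axial translation between the two capture positions fixes the depth of a point from how much its local image scale changes between the reference and non-reference images; this similar-triangles sketch is a simplification of the claimed estimation, with names of my choosing.

```python
def depth_from_axial_translation(scale_ratio, translation):
    """Depth of a point from its apparent scale change after the camera
    advances toward it along the optical axis.

    Pinhole model: a feature of physical size H at depth z images with
    height f*H/z; after advancing by `translation` it images with
    height f*H/(z - translation). Hence
        scale_ratio = z / (z - translation)
        z = scale_ratio * translation / (scale_ratio - 1).
    """
    if scale_ratio <= 1.0:
        raise ValueError("feature must appear larger after moving forward")
    return scale_ratio * translation / (scale_ratio - 1.0)

# A feature that grows by 25% after a 1 m advance sits 5 m away.
```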

25-09-2014 publication date

3D image capture method with 3D preview of preview images generated by monocular camera and related electronic device thereof

Number: US20140285637A1
Assignee: MediaTek Inc

A three-dimensional (3D) image capture method, employed in an electronic device with a monocular camera and a 3D display, includes at least the following steps: while the electronic device is moving, deriving a 3D preview image from a first preview image and a second preview image generated by the monocular camera, and providing 3D preview on the 3D display according to the 3D preview image, wherein at least one of the first preview image and the second preview image is generated while the electronic device is moving; and when a capture event is triggered, outputting the 3D preview image as a 3D captured image.

20-06-2019 publication date

IMAGE SENSOR

Number: US20190191144A1
Assignee:

An image sensor includes a substrate, a first thin lens configured to concentrate light of a first wavelength, and including a plurality of first scatterers disposed on the substrate. The plurality of first scatterers includes a material having a refractive index greater than a refractive index of the substrate. The image sensor further includes a plurality of light-sensing cells configured to sense the light concentrated by the first thin lens. 1. An image sensor comprising: a substrate; a first thin lens configured to concentrate light of a first wavelength, and comprising a plurality of first scatterers disposed on the substrate, wherein the plurality of first scatterers comprises a material having a refractive index greater than a refractive index of the substrate; and a plurality of light-sensing cells configured to sense the light concentrated by the first thin lens. 2. The image sensor of claim 1, further comprising a housing configured to fix the first thin lens and the plurality of light-sensing cells, while maintaining a distance between the first thin lens and the plurality of light-sensing cells. 3. The image sensor of claim 1, further comprising a low-refractive index material layer covering the plurality of first scatterers, and comprising a material having a refractive index less than the refractive index of the material of the plurality of first scatterers, wherein the first thin lens further comprises a plurality of second scatterers disposed on the low-refractive index material layer and comprising a material having a refractive index greater than the refractive index of the material of the low-refractive index material layer. 4. The image sensor of claim 3, wherein each of the plurality of first scatterers and the plurality of second scatterers has a pillar shape. 5.
The image sensor of claim 3, wherein each of the plurality of first scatterers and the plurality of second scatterers has a shape dimension less than the first ...

11-06-2020 publication date

Method and System of Discriminative Recovery of Three-Dimensional Digital Data of a Target of Interest in a Cluttered or Controlled Environment

Number: US20200186773A1
Author: Yi Steven
Assignee:

A method of discriminative recovery of three-dimensional digital data of a target of interest in a cluttered or controlled environment. The method uses the traditional structure from motion (SFM) 3D reconstruction method on a video of a target of interest to track and extract 3D sparse feature points and relative orientations of the video frames. Desired 3D points are filtered from the 3D sparse feature points in accordance with a user input and a digital cutting tool. Segmented images are generated by separating the target of interest from the background scene in accordance with the desired 3D points. A dense 3D reconstruction of the target of interest is generated by inputting the segmented images and the relative orientations of the video frames. 1. A method of discriminative recovery of three-dimensional digital data of a target of interest in a cluttered or controlled environment, the method comprises the steps of: (A) providing a personal computing (PC) device, and at least one remote server, wherein the remote server manages a three-dimensional (3D) reconstruction process; (B) providing a stereo 3D reconstruction for a target of interest within a background scene, wherein the stereo 3D reconstruction is compiled from a series of video frames, and wherein each video frame includes a relative orientation; (C) tracking a plurality of 3D sparse feature points of the stereo 3D reconstruction with the remote server; (D) filtering a plurality of desired 3D points from the plurality of 3D sparse feature points with the remote server; (E) generating a plurality of segmented images with the remote server by separating the target of interest from the background scene for each video frame in relation to the desired 3D points, wherein each segmented image is associated to a corresponding frame from the series of video frames; and (F) executing the 3D reconstruction process with the remote server in order to generate a dense 3D reconstruction of the target of interest by inputting
...
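Step (D), filtering desired 3D points with a user input and a digital cutting tool, reduces to masking the sparse cloud against a user-drawn region; an axis-aligned box is the simplest stand-in for the cutting tool and is an assumption of this sketch.

```python
import numpy as np

def filter_points(points, box_min, box_max):
    """Keep only the sparse 3D feature points inside the user-specified
    axis-aligned box around the target of interest (a stand-in for the
    patent's digital cutting tool)."""
    P = np.asarray(points, dtype=float)
    lo = np.asarray(box_min, dtype=float)
    hi = np.asarray(box_max, dtype=float)
    mask = np.all((P >= lo) & (P <= hi), axis=1)
    return P[mask], mask
```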

18-06-2020 publication date

Multi-View Aerial Imaging

Number: US20200191568A1
Author: Lapstun Paul
Assignee:

A method for capturing a multi-view set of images of an area of interest, the multi-view set of images comprising, for each of a plurality of points within the area of interest, at least one nadir image and at least four oblique images from four substantially different viewing directions, the method comprising moving a dual-scan scanning camera along a survey path above the area of interest, and capturing, within selected intervals along the survey path and using the dual-scan scanning camera, subsets of the multi-view set of images of the area of interest along pairs of opposed non-linear scan paths. 1. A method for capturing a multi-view set of images of an area of interest, the multi-view set of images comprising, for each of a plurality of points within the area of interest, at least one nadir image and at least four oblique images from four substantially different viewing directions, the method comprising moving a dual-scan scanning camera along a survey path above the area of interest, and capturing, within selected intervals along the survey path and using the dual-scan scanning camera, subsets of the multi-view set of images of the area of interest along pairs of opposed non-linear scan paths. 2. The method of claim 1, wherein the dual-scan scanning camera comprises two scanning cameras facing in substantially opposite directions, the method comprising capturing, within each selected interval along the survey path and using each scanning camera, a respective subset of the multi-view set of images of the area of interest along a respective non-linear scan path, each image in the subset having a unique viewing angle and viewing direction pair. 3.
The method of claim 2, the method comprising, for each image within the subset, rotating a scanning mirror in an optical path of the corresponding scanning camera about a spin axis according to a spin angle, the spin axis tilted relative to a camera ...

20-07-2017 publication date

Enhancing the resolution of three dimensional video images formed using a light field microscope

Number: US20170205615A1
Assignee: Universitaet Wien

Methods and systems are provided for enhancing the imaging resolution of three dimensional imaging using light field microscopy. A first approach enhances the imaging resolution by modeling the scattering of light that occurs in an imaging target, and using the model to account for the effects of light scattering when de-convolving the light field information to retrieve the 3-dimensional (3D) volumetric information of the imaging target. A second approach enhances the imaging resolution by using a second imaging modality such as two-photon, multi-photon, or confocal excitation microscopy to determine the locations of individual neurons, and using the known neuron locations to enhance the extraction of time series signals from the light field microscopy data.

18-06-2020 publication date

IMAGING DEVICE AND OPERATING METHOD THEREOF

Number: US20200195838A1
Assignee:

An imaging device including a pixel matrix and a processor is provided. The pixel matrix includes a plurality of phase detection pixels and a plurality of regular pixels. The processor performs autofocusing according to pixel data of the phase detection pixels, and determines an operating resolution of the regular pixels according to autofocused pixel data of the phase detection pixels, wherein the phase detection pixels are always-on pixels and the regular pixels are selectively turned on after the autofocusing is accomplished. 1. An imaging device, comprising: a condensing lens; an image sensor configured to detect light passing through the condensing lens and comprising a pixel matrix, wherein the pixel matrix comprises a plurality of phase detection pixel pairs and a plurality of regular pixels; and a processor configured to: turn on the phase detection pixel pairs for autofocusing and output autofocused pixel data after completing the autofocusing; divide the autofocused pixel data into a first subframe and a second subframe; calculate image features of at least one of the first subframe and the second subframe, wherein the image features comprise module widths of a finder pattern, and the finder pattern has a predetermined ratio, a Haar-like feature, or a Gabor feature; and determine an operating resolution of the regular pixels according to the image features calculated from at least one of the first subframe and the second subframe divided from the autofocused pixel data. 2. The imaging device as claimed in claim 1, wherein each of the phase detection pixel pairs comprises: a first pixel and a second pixel; a cover layer covering upon a first region of the first pixel and upon a second region of the second pixel, wherein the first region and the second region are mirror symmetrical to each other; and a microlens aligned with at least one of the first pixel and the second pixel. 3.
The imaging device as claimed in claim 2, wherein the first region and ...

20-07-2017 publication date

Directed image capture

Number: US20170208245A1
Assignee: Hover Inc

A process is provided for guiding a capture device (e.g., smartphone, tablet, drone, etc.) to capture a series of images of a building. Images are captured as the camera device moves around the building—taking a plurality of images (e.g., video) from multiple angles and distances. Quality of the image may be determined to prevent low quality images from being captured or to provide instructions on how to improve the quality of the image capture. The series of captured images are uploaded to an image processing system to generate a 3D building model that is returned to the user. The returned 3D building model may incorporate scaled measurements of building architectural elements and may include a dataset of measurements for one or more architectural elements such as siding (e.g., aluminum, vinyl, wood, brick and/or paint), windows, doors or roofing.

04-07-2019 publication date

ROBOT-BASED 3D PICTURE SHOOTING METHOD AND SYSTEM, AND ROBOT USING THE SAME

Number: US20190208180A1
Author: GU Xiangnan, Xiong Youjun
Assignee:

The present disclosure provides a robot-based 3D picture shooting method and system and a robot using the same. The method includes: obtaining a distance between a photographed object and the photographing device of the robot based on a received shooting instruction; calculating an inter-axis distance based on the distance; obtaining the first picture after moving the robot for half of the inter-axis distance along the movement direction; obtaining the second picture after moving the robot for the entire inter-axis distance from a current position along an opposite direction of the movement direction; and synthesizing the first picture and the second picture to obtain a 3D picture of the photographed object. In the process, the robot moves the photographing device according to the calculated inter-axis distance, and obtains two pictures of the left and right of the photographed object, so that it is not necessary to use a binocular camera. 1. A computer-implemented robot-based 3D picture shooting method for a robot with a photographing device, comprising executing on a processor the steps of: obtaining a distance between a photographed object and the photographing device of the robot based on a received shooting instruction, in response to a movement direction of the robot being perpendicular to a shooting direction of the photographing device; calculating an inter-axis distance based on the distance between the photographed object and the photographing device, wherein the inter-axis distance comprises a distance between a position the photographing device obtaining a first picture of the photographed object and another position the photographing device obtaining a second picture of the photographed object; obtaining the first picture after moving the robot for half of the inter-axis distance along the movement direction; obtaining the second picture after moving the robot for the entire inter-axis distance from a current position along an opposite direction of the
...
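The patent does not state the formula behind "calculating an inter-axis distance based on the distance"; a common stereo heuristic is the 1/30 rule (baseline roughly equals object distance divided by 30), used below purely as a placeholder, followed by the claimed move-half-then-full sequence.

```python
def shooting_plan(object_distance, ratio=1.0 / 30.0):
    """Inter-axis (stereo baseline) distance and the two robot moves.

    `ratio` encodes the 1/30 stereo-base heuristic, NOT the claimed
    formula. Starting from the centre, the robot moves +b/2 along the
    movement direction (first picture), then -b against it (second
    picture), ending b/2 on the other side of the start.
    """
    b = object_distance * ratio
    first_pos = b / 2.0            # after the first move
    second_pos = first_pos - b     # after the second, opposite move
    return b, first_pos, second_pos
```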

20-08-2015 publication date

Method and apparatus for converting 2D images to 3D images

Number: US20150237325A1

A method of converting 2D images to 3D images and system thereof is provided. According to one embodiment, the method comprises receiving a plurality of 2D images from an imaging device; obtaining motion parameters from a sensor associated with the imaging device; selecting at least two 2D images from the plurality of 2D images based on the motion parameters; determining a depth map based on the selected 2D images and the motion parameters corresponding to the selected 2D images; and generating a 3D image based on the depth map and one of the plurality of 2D images.
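Once the depth map is determined, the 3D image can be produced by warping one of the 2D images into a second eye view (depth-image-based rendering); this toy warp with disparity d = f·b/z is a generic sketch, not the patent's generation step, and later source pixels simply overwrite earlier ones where shifts collide.

```python
import numpy as np

def synthesize_right_view(image, depth, focal_px, baseline):
    """Naive right-eye view from one image plus its depth map.

    Each pixel shifts left by disparity d = focal_px * baseline / z;
    holes keep the original colour.
    """
    h, w = depth.shape
    right = image.copy()
    disparity = (focal_px * baseline / depth).round().astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]
            if 0 <= nx < w:
                right[y, nx] = image[y, x]
    return right
```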

20-08-2015 publication date

Method for generating three-dimensional images and three-dimensional imaging device

Number: US20150237328A1
Assignee: Dayu Optoelectronics Co Ltd

A method for generating three-dimensional images is provided. The method is used for a three-dimensional imaging device including a touch panel and a single lens. The method includes steps of: the touch panel showing an image including a predetermined path; using the single lens to shoot at least two different plane images when detecting a contact point on the touch panel moving along the predetermined path; and outputting the at least two different plane images shot by the single lens to make the touch panel show a three-dimensional image.

18-08-2016 publication date

Light-field camera

Number: US20160241840A1
Author: Sunghyun NAM, Yunhee Kim
Assignee: SAMSUNG ELECTRONICS CO LTD

A light-field camera includes a main lens configured to form an image of an object, a lens configured to form, on a curved surface, additional images based on the image of the object, and an image sensor configured to function as a curved image sensor and thereby sense the additional images, at least one of the lens and the image sensor including a flat element.

26-08-2021 publication date

METHOD AND EQUIPMENT FOR CONSTRUCTING THREE-DIMENSIONAL FACE MODEL

Number: US20210264683A1
Author: Han Chia-Hui
Assignee:

A three-dimensional face model constructing method includes: obtaining front face information to establish a three-dimensional front face model and generate a front planar image corresponding to the three-dimensional front face model, wherein the front planar image includes first feature points; obtaining full face information to establish a three-dimensional full face model and generate a full face planar image corresponding to the three-dimensional full face model, wherein the three-dimensional full face model includes a specific portion and the full face planar image includes second feature points; superimposing a first block of the front planar image onto a second block of the full face planar image according to the correspondence between the first feature points and the second feature points, to obtain a complete full face planar image; and generating a three-dimensional face model according to the complete full face planar image. 1. A three-dimensional face model constructing method, comprising: obtaining front face information of a user in a first time interval to establish a three-dimensional front face model and generate a front planar image corresponding to the three-dimensional front face model, wherein the front planar image includes a plurality of first feature points; obtaining full face information of the user in a second time interval to establish a three-dimensional full face model and generate a full face planar image corresponding to the three-dimensional full face model, wherein the three-dimensional full face model includes a specific portion and the full face planar image includes a plurality of second feature points; superimposing a first block of the front planar image corresponding to the specific portion of the three-dimensional full face model onto a second block of the full face planar image corresponding to the specific portion of the three-dimensional full face model according to the correspondence between the first feature points and the second
...

25-08-2016 publication date

Apparatus and method for generating three-dimensional (3D) shape of object under water

Number: US20160249036A1

Provided is an apparatus and method for generating a three-dimensional (3D) shape of an object immersed in a liquid, in which the method may include receiving an image captured by photographing a section contour of the object immersed in a matching solution, and generating a 3D shape of the object using the image, wherein the section contour may be formed according to a line laser emitted toward a surface of the object.

23-07-2020 publication date

Damage detection from multi-view visual data

Number: US20200234488A1
Assignee: Fyusion Inc

A plurality of images may be analyzed to determine an object model. The object model may have a plurality of components, and each of the images may correspond with one or more of the components. Component condition information may be determined for one or more of the components based on the images. The component condition information may indicate damage incurred by the object portion corresponding with the component.

23-07-2020 publication date

Damage detection from multi-view visual data

Number: US20200236296A1
Assignee: Fyusion Inc

One or more images of an object, each from a respective viewpoint, may be captured at a camera at a mobile computing device. The images may be compared to reference data to identify a difference between the images and the reference data. Image capture guidance may be provided on a display screen for capturing another one or more images of the object that includes the identified difference.

30-08-2018 publication date

IMAGING SYSTEM FOR OBJECT RECOGNITION AND ASSESSMENT

Number: US20180247417A1
Assignee:

A method and system for using one or more sensors configured to capture two-dimensional and/or three dimensional image data of one or more objects. In particular, the method and system combine one or more digital sensors with visible and near infrared illumination to capture visible and non-visible range spectral image data for one or more objects. The captured spectral image data can be used to separate and identify the one or more objects. Additionally, the three-dimensional image data can be used to determine a volume for each of the one or more objects. The identification and volumetric data for one or more objects can be used individually or in combination to obtain characteristics about the objects. The method and system provide the user with the ability to capture images of one or more objects and obtain related characteristics or information about each of the one or more objects. 1. A method for automated detection and processing of a plate of food for nutritional values , the method comprising:detecting, by at least one sensor, edges of the plate of food based on a depth of the plate of food in relation to other objects in a field of view;capturing, by the at least one sensor, a three-dimensional model of the plate of food;capturing, by the at least one sensor, image data for the plate of food, the image data comprising a visible light image and at least one near-infrared (NIR) image of the plate of food;transforming, by a processor, the image data into a composite image, the composite image mimicking a single image taken by a single sensor;identifying, by a processor, a food item that corresponds to the composite image;transforming, by a processor, the three-dimensional model of the identified food item into a volume for the identified food item; andcalculating, by a processor, dietary information of the identified food item based on the volume of the food item.2. 
The method of claim 1 , further comprising:determining an initial volume of the identified ...
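Claim 1's final step, transforming a measured volume into dietary information, reduces to a density and energy-density lookup. The sketch below uses a hypothetical `FOOD_DB` with illustrative, made-up nutrient values; the patent does not disclose its data source:

```python
# Hypothetical per-food lookup: density (g/cm^3) and energy (kcal per gram).
# The values here are illustrative only.
FOOD_DB = {
    "rice":  {"density": 0.85, "kcal_per_g": 1.3},
    "apple": {"density": 0.60, "kcal_per_g": 0.52},
}

def dietary_info(food: str, volume_cm3: float) -> dict:
    """Mass = density * volume; calories = mass * energy density."""
    entry = FOOD_DB[food]
    mass_g = entry["density"] * volume_cm3
    return {"mass_g": mass_g, "kcal": mass_g * entry["kcal_per_g"]}
```

For 100 cm³ of rice this gives 85 g and about 110 kcal under the assumed values.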

08-09-2016 publication date

Method and system for 3d capture based on structure from motion with pose detection tool

Number: US20160260250A1
Assignee: Individual

A method and system for 3D capture based on SFM with simplified pose detection is disclosed. This invention provides a straightforward method to directly track the camera's motion (pose detection), thereby removing a substantial portion of the computing load needed to build the 3D model from a sequence of images.

06-09-2018 publication date

SYSTEM AND METHOD FOR GENERATING COMBINED EMBEDDED MULTI-VIEW INTERACTIVE DIGITAL MEDIA REPRESENTATIONS

Number: US20180255290A1
Assignee: Fyusion, Inc.

Various embodiments describe systems and processes for capturing and generating multi-view interactive digital media representations (MIDMRs). In one aspect, a method for automatically generating a MIDMR comprises obtaining a first MIDMR and a second MIDMR. The first MIDMR includes a convex or concave motion capture using a recording device and is a general object MIDMR. The second MIDMR is a specific feature MIDMR. The first and second MIDMRs may be obtained using different capture motions. A third MIDMR is generated from the first and second MIDMRs, and is a combined embedded MIDMR. The combined embedded MIDMR may comprise the second MIDMR being embedded in the first MIDMR, forming an embedded second MIDMR. The third MIDMR may include a general view in which the first MIDMR is displayed for interactive viewing by a user on a user device. The embedded second MIDMR may not be viewable in the general view. 1. A method for automatic generation of a multi-view interactive digital representation (MIDMR) recording , the method comprising:obtaining a first MIDMR, wherein the first MIDMR includes a convex or concave motion capture using a recording device, wherein the first MIDMR is a general object MIDMR;obtaining a second MIDMR, wherein the second MIDMR is a specific feature MIDMR; andgenerating a third MIDMR from the first MIDMR and the second MIDMR, wherein the first and second MIDMRs are obtained using different capture motions, wherein the third MIDMR is a combined embedded MIDMR.2. The method of claim 1 , wherein the combined embedded MIDMR comprises the second MIDMR being embedded in the first MIDMR claim 1 , thereby forming an embedded second MIDMR.3. The method of claim 2 , wherein the third MIDMR includes a general view in which the first MIDMR is displayed for interactive viewing by a user on a user device.4. The method of claim 3 , wherein the embedded second MIDMR is not available for viewing in the general view.5. 
The method of claim 4 , wherein the general ...

14-10-2021 publication date

METHODS AND SYSTEMS FOR CAMERA CALIBRATION

Number: US20210321078A1
Assignee: ZHEJIANG DAHUA TECHNOLOGY CO., LTD.

An image capture method may include obtaining two or more sets of images. The two or more sets of images may include a first image captured by a first image capture device and a second image captured by a second image capture device. The method may also include determining, for a set of images, two or more pairs of points. Each of the two or more pairs of points may include a first point in the first image and a second point in the second image, and the first point and the second point may correspond to a same object. The method may also include determining a first rotation matrix based on the pairs of points in the two or more sets of images. The first rotation matrix may be associated with a relationship between positions of the first image capture device and the second image capture device. 1. An image capture system , comprising:at least one storage device including a set of instructions; obtain two or more sets of images, wherein the two or more sets of images includes a first image captured by a first image capture device and a second image captured by a second image capture device;', 'for a set of images, determine two or more pairs of points, wherein each of the two or more pairs of points includes a first point in the first image and a second point in the second image, and the first point and the second point correspond to a same object; and', 'determine a first rotation matrix based on the pairs of points in the two or more sets of images, wherein the first rotation matrix is associated with a relationship between positions of the first image capture device and the second image capture device., 'at least one processor in communication with the at least one storage device, wherein when executing the set of instructions, the at least one processor is directed to cause the system to2. The system of claim 1 , wherein the first image is captured from a first field of view of the first image capture device claim 1 , the second image is captured from a second ...
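One standard way to determine a rotation matrix from corresponding point pairs, as claim 1 requires, is the Kabsch/Procrustes solution via SVD. Whether this patent uses exactly this method is not stated, so the sketch below is illustrative:

```python
import numpy as np

def rotation_from_point_pairs(p1: np.ndarray, p2: np.ndarray) -> np.ndarray:
    """Kabsch/Procrustes: least-squares rotation R with R @ p1_i ~= p2_i.
    p1 and p2 are (N, 3) arrays of corresponding points (e.g. bearing
    vectors of the same object seen by two cameras)."""
    H = p1.T @ p2                        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])           # guard against reflections
    return Vt.T @ D @ U.T
```

With at least three non-collinear correspondences and no noise, the true rotation is recovered exactly.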

11-12-2014 publication date

Depth measurement apparatus, imaging apparatus, and method of controlling depth measurement apparatus

Number: US20140362191A1
Author: Akinari Takagi
Assignee: Canon Inc

A depth measurement apparatus including ranging pixels each having a plurality of photoelectric conversion units for receiving light fluxes that have respectively passed through first and second pupil regions, a reading unit that is shared by the plurality of photoelectric conversion units, and a control unit for controlling the ranging operation, wherein a signal charge accumulated in one of the photoelectric conversion units is output as a first signal and a second signal obtained by adding a signal accumulated in the other photoelectric conversion unit to the first signal is output, the signal charge accumulated in the other photoelectric conversion unit is acquired based on a difference between the first and second signals, and the signal charge of the photoelectric conversion unit receiving flux with a lower transmittance is read first.

11-12-2014 publication date

Method for measuring environment depth using image extraction device rotation and image extraction device thereof

Number: US20140362192A1
Assignee: NATIONAL CHUNG CHENG UNIVERSITY

A measurement method for environment depth and the image extraction device thereof is revealed. First, rotate an image extraction unit and extract a plurality of images using an image extraction device according to different viewing angles of a target object. Then, use disparity information of the plurality of images and an image parameter of the image extraction unit to give a plurality of pieces of depth-of-field information, which are further used for giving environment depth information. The image extraction device has an image extraction unit and a rotating member. The rotating member is connected with the base; the rotating member is connected to the image extraction unit, which is located on one side of the rotating member. The plurality of images with different viewing angles are extracted by the image extraction unit as the rotating member rotates about a rotating center to different image extracting locations.
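When the camera is swung about a rotating center as described, the effective stereo baseline between two extraction locations is the chord of the rotation circle. A minimal sketch of this assumed geometric model (not a formula disclosed by the patent):

```python
import math

def effective_baseline(radius_m: float, angle_deg: float) -> float:
    """Chord length between two camera positions on a circle of the given
    radius, separated by angle_deg: B = 2 * r * sin(theta / 2)."""
    theta = math.radians(angle_deg)
    return 2.0 * radius_m * math.sin(theta / 2.0)
```

For example, a 0.5 m rotation arm swung through 60 degrees gives a 0.5 m baseline, which can then feed a disparity-to-depth computation.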

04-11-2021 publication date

SYSTEM AND METHOD FOR GENERATING COMBINED EMBEDDED MULTI-VIEW INTERACTIVE DIGITAL MEDIA REPRESENTATIONS

Number: US20210344891A1
Assignee: Fyusion, Inc.

Various embodiments describe systems and processes for capturing and generating multi-view interactive digital media representations (MIDMRs). In one aspect, a method for automatically generating a MIDMR comprises obtaining a first MIDMR and a second MIDMR. The first MIDMR includes a convex or concave motion capture using a recording device and is a general object MIDMR. The second MIDMR is a specific feature MIDMR. The first and second MIDMRs may be obtained using different capture motions. A third MIDMR is generated from the first and second MIDMRs, and is a combined embedded MIDMR. The combined embedded MIDMR may comprise the second MIDMR being embedded in the first MIDMR, forming an embedded second MIDMR. The third MIDMR may include a general view in which the first MIDMR is displayed for interactive viewing by a user on a user device. The embedded second MIDMR may not be viewable in the general view. 1. A method for automatic generation of a multi-view interactive digital media representation (MIDMR) recording , the method comprising:obtaining a first MIDMR;obtaining a second MIDMR; andgenerating a third MIDMR from the first MIDMR and the second MIDMR, wherein the first and second MIDMRs are obtained using different capture motions, wherein the third MIDMR is a combined embedded MIDMR, wherein the third MIDMR provides a three dimensional view of a first object MIDMR as well as an embedded three dimensional view of a second object MIDMR,wherein the first and second MIDMRS are generated by stitching together two dimensional images, and wherein the third MIDMR is generated by combining the first and second MIDMR, and wherein the first, second, and third MIDMRs each provide a three-dimensional view of content without rendering and/or storing an actual three-dimensional model.2. The method of claim 1 , wherein the combined embedded MIDMR comprises the second MIDMR being embedded in the first MIDMR claim 1 , thereby forming an embedded second MIDMR.3. 
The method of ...

11-11-2021 publication date

ACTUATOR ASSEMBLIES AND METHODS OF CONTROLLING THE SAME

Number: US20210348714A1
Assignee:

An actuation assembly comprising: a support structure; a movable element movable relative to the support structure, the movable element having a principal axis; and an actuator arrangement for driving movement of the movable element with respect to the support structure, wherein said movement includes rotational movement of the movable element about an axis which is perpendicular to said principal axis and does not pass through the centre of the movable element, and wherein said movement also includes translational movement of the movable element in a direction perpendicular to the principal axis. The actuation assembly may be used to perform optical image stabilisation or to improve the performance of a 3D sensing system. 1. An actuation assembly comprising:a support structure;a movable element movable relative to the support structure, the movable element having a principal axis; andan actuator arrangement for driving movement of the movable element with respect to the support structure, wherein said movement includes rotational movement of the movable element about an axis which is perpendicular to said principal axis and does not pass through the centre of the movable element, and wherein said movement also includes translational movement of the movable element in a direction perpendicular to the principal axis.2. The actuation assembly according to claim further comprising a control circuit configured to:control the actuator arrangement to drive movement of the movable element relative to the support structure;translate the movable element in a direction perpendicular to the principal axis; androtate the movable element about the axis perpendicular to the principal axis that does not pass through the center of the movable element.3. (canceled)4. The actuation assembly as claimed in claim 2 , wherein the control circuit is arranged to rotate the movable element after the translation.5. 
The actuation assembly according to claim 1 , further comprising a suspension ...

20-09-2018 publication date

DEVICE AND METHOD FOR OPTICAL ACQUISITION OF THREE-DIMENSIONAL SURFACE GEOMETRIES

Number: US20180270472A1
Author: JESENKO Juergen
Assignee:

In a device and a method for optical acquisition of three-dimensional surface geometries with a handpiece, with the handpiece having one or more means for output of status reports, at least one means for output of status reports is a means for generating oscillations that outputs haptically perceptible status reports. 1. Device for optical acquisition of three-dimensional surface geometries with a handpiece, the handpiece having one or more means for output of status reports, wherein at least one means for output of status reports is a means for generating oscillations. 2. Device according to claim 1, wherein in the handpiece, two or more means for generating oscillations are arranged spatially offset, in particular diagonally to one another and/or opposite one another. 3. Method for optical acquisition of three-dimensional surface geometries with a handpiece, the handpiece having one or more means for output of status reports, wherein at least one means outputs status reports as haptic signals by oscillations. 4. Method according to claim 3, wherein the haptic signals are output when the quality of recording exceeds or falls below a given threshold value. 5. Method according to claim 3, wherein different spatially offset means for output of haptic signals oscillate with different intensity and/or staggered in time in order to output direction information. 6. Method according to claim 3, wherein acquisition takes place in different operation states and wherein, when the operation state changes, the output haptic signal changes. 7. Method according to claim 3, wherein acquisition takes place in at least two defined physical regions in which the handpiece can be located, and wherein the output haptic signal changes when the region in which the handpiece is located changes. 8.
Method according to claim 3 , wherein the handpiece executes a movement during ...

13-08-2020 publication date

DIRECTED IMAGE CAPTURE

Number: US20200260000A1
Assignee: HOVER, INC.

Systems and methods are disclosed for directed image capture of a subject of interest, such as a home. Directed image capture can produce higher-quality images, such as images located more centrally within a display and/or viewfinder of an image capture device; higher-quality images have greater value for subsequent uses of captured images, such as information extraction or model reconstruction. Graphical guide(s) facilitate content placement for certain positions, and quality assessments for the content of interest can be calculated, such as the pixel distance of the content of interest to a centroid of the display or viewfinder, or the effect of obscuring objects. Quality assessments can further include instructions for improving the quality of the image capture for the content of interest. 1. A method of directed capture of building imagery , the method comprising:displaying, on an image capture device display, a building of interest subject from a first camera position;overlaying a graphical guide associated with the building of interest subject at the first camera position;receiving a first quality assessment of the building of interest subject in association with the graphical guide; andcapturing a first image of the building of interest subject based on the quality assessment.2. The method of claim 1 , further comprising:displaying, on an image capture device display, a building of interest subject from a second camera position;overlaying a second graphical guide associated with the building of interest subject at the second camera position;receiving a second quality assessment of the building of interest subject in association with the second graphical guide; andcapturing a second image of the building of interest subject based on the second quality assessment.3. The method of further comprising creating a multidimensional building model from at least the first and second captured images.4. The method of claim 1 , wherein receiving the first quality assessment ...
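A minimal version of the centroid-distance quality assessment mentioned in this abstract might look like the sketch below; the linear falloff and the bounding-box input format are assumptions, not the patent's disclosed scoring:

```python
def centering_quality(bbox, display_w: float, display_h: float) -> float:
    """Score in [0, 1]: 1.0 when the subject's bounding-box centroid sits at
    the display centre, falling off linearly with pixel distance.
    bbox = (x_min, y_min, x_max, y_max); all names are illustrative."""
    cx = (bbox[0] + bbox[2]) / 2.0
    cy = (bbox[1] + bbox[3]) / 2.0
    dx, dy = cx - display_w / 2.0, cy - display_h / 2.0
    dist = (dx * dx + dy * dy) ** 0.5
    max_dist = ((display_w / 2.0) ** 2 + (display_h / 2.0) ** 2) ** 0.5
    return max(0.0, 1.0 - dist / max_dist)
```

The capture loop could then prompt the user (via the graphical guide) until the score clears a threshold before triggering the shot.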

27-08-2020 publication date

IMAGING METHOD AND APPARATUS USING CIRCULARLY POLARIZED LIGHT

Number: US20200275078A1
Author: PAU Stanley K.H.
Assignee:

A three-dimensional imaging system includes at least one light source, a circular or elliptical polarization beamsplitter, a detector arrangement and an image processor. The light source is configured to provide light in a first circular or elliptical polarization state onto an object to be imaged. The circular or elliptical polarization beamsplitter is arranged to spatially separate the light reflected from an object into a first reflected portion in the first polarization state and a second reflected portion in the second polarization state. The first and second circular or elliptical polarization states are orthogonal to one another. The detector arrangement detects at least the first reflected portion of the light and the image processor is configured to generate image information from the detected first reflected portion. 1. A method of obtaining a three-dimensional image of an object , comprising:directing light in a first circular or elliptical polarization state onto an object to be imaged;receiving a reflected portion of the light from the object;spatially separating the reflected portion of the light into a first reflected portion in the first polarization state and a second reflected portion in a second polarization state that is orthogonal to the first circular or elliptical polarization state;detecting at least the first reflected portion of the light; andgenerating image information from the detected first reflected portion.2. The method of claim 1 , wherein detecting at least the first reflected portion of the light includes detecting the first and second reflected portions of the light and generating image information includes generating image information from the detected first and second reflected portions of the light.3. The method of claim 1 , wherein spatially separating the reflected portion of the light includes spatially separating the reflected portion of the light using a circular or elliptical polarization beamsplitter.4. The method of ...

04-10-2018 publication date

ASYMMETRIC ANGULAR RESPONSE PIXELS FOR SINGLE SENSOR STEREO

Number: US20180288398A1

Depth sensing imaging pixels include pairs of left and right pixels forming an asymmetrical angular response to incident light. A single microlens is positioned above each pair of left and right pixels. Each microlens spans across each of the pairs of pixels in a horizontal direction. Each microlens has a length that is substantially twice the length of either the left or right pixel in the horizontal direction; and each microlens has a width that is substantially the same as a width of either the left or right pixel in a vertical direction. The horizontal and vertical directions are horizontal and vertical directions of a planar image array. A light pipe in each pixel is used to improve light concentration and reduce cross talk. 1. An image sensor comprising:a pixel pair that includes a first pixel and a second pixel, wherein the first pixel and the second pixel have asymmetrical angular responses to incident light and wherein the first and second pixels are covered by color filter element material of a single color;a microlens that spans the pixel pair, wherein the color filter element material is interposed between the microlens and the first and second pixels; and obtain an image depth signal by using subtraction to determine a difference between an output signal from the first pixel of the pixel pair and an output signal from the second pixel of the pixel pair; and', 'determine a distance to an imaged object based on the image depth signal., 'image processing circuitry configured to2. The image sensor defined in claim 1 , further comprising:additional pixels having symmetrical angular responses to the incident light.3. The image sensor defined in claim 1 , wherein the first and second pixels are positioned in the same row of pixels.4. The image sensor defined in claim 1 , wherein the microlens has a width and a length that is longer than the width.5. 
The image sensor defined in claim 1 , wherein the microlens has a width and a length that is substantially twice ...
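The subtraction-based image depth signal of claim 1 can be sketched in a few lines. Treating the signal as a signed difference of left/right pixel outputs follows the claim's own wording; the array framing and names are illustrative:

```python
import numpy as np

def image_depth_signal(left_px: np.ndarray, right_px: np.ndarray) -> np.ndarray:
    """Per-pair depth signal as the signed difference between the output of
    the left pixel and the output of the right pixel of each pair; for an
    in-focus object the two outputs match and the signal is near zero."""
    return left_px.astype(float) - right_px.astype(float)
```

The processing circuitry would then map the magnitude and sign of this signal to a distance estimate for the imaged object.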

05-11-2015 publication date

Imaging systems with phase detection pixels

Number: US20150319420A1
Assignee: Semiconductor Components Industries LLC

An image sensor may include phase detection pixels that receive and convert incident light into pixel signals. Processing circuitry may use pixel signals from the phase detection pixels to determine an amount by which image sensor optics should be adjusted during automatic focusing operations. Phase detection pixels may include photodiodes with asymmetric angular responses. For example, the center of a photodiode in a phase detection pixel may be offset from the optical center of the microlens that covers that photodiode. A group of two, three, four, or more than four phase detection pixels may be clustered together and covered by a single microlens. Groups of these clusters may be arranged consecutively in a line. Phase data may be gathered using all of the phase detection pixels in the array, and image processing circuitry may determine which phase data to use after the data has been gathered.

25-10-2018 publication date

Image-recording device, Balloon for operation with an image-recording device, method for operating an image-recording device, and control program for an image-recording device

Number: US20180303330A1
Assignee:

The invention relates to an image-recording device, comprising a scanning-head guide (18) for moving the scanning head (20) across a scanning region (42) of a cavity, which scanning region extends around the scanning head. A planar or film-like, elastically extensible material extends between the scanning head and the scanning region, which material can be pressed against the scanning region by the application of overpressure in the manner of a balloon (12). The scanning-head guide extends through the balloon connection (16) of the balloon to the scanning head. In particular, a control device (32) determines the shape of the cavity (40) against which the material rests from the deformation of the material while the overpressure is present. 1. An image-recording device comprising a scanning head (20), a scanning head guide (18) for moving the scanning head across a scanning region of a cavity, which scanning region extends around the scanning head, a balloon (12), wherein the balloon comprises a planar or film-like, elastically extensible material that extends between the scanning head (20) and the scanning region (42), which elastically extensible material can be pressed against the scanning region (42) in a balloon (12)-type manner using overpressure, and wherein the scanning head guide (18) passes through a balloon connection (16) of the balloon (12) to the scanning head (20), and wherein a control device (32) determines the shape of the cavity (40) against which the elastically extensible material rests from the deformation of the elastically extensible material in the presence of overpressure of the elastically extensible material. 2.
The image-recording device according to claim 1, wherein the elastically extensible material is provided with a reference pattern (26) that, in relation to the balloon (12), is applied on the interior or exterior side of ...

25-10-2018 publication date

Turn Table for Photographing and Image Photographing System Using Same

Number: US20180309976A1
Assignee: ORANGEMONKIE KOREA, INC.

A technology is provided in which a user generates a 3-D image of a photographing target using only a simple device, without renting a studio or using a professional product, and easily controls the generation of the 3-D image. The photographing turn table is installed in one area of a studio device having one area in which a photographing target is located and having an open one surface to allow image photographing through the open one surface. The photographing turn table includes a lower body, which is fixedly located in the one area and provided at a part of one surface thereof located in an opposite direction to a direction of a photographing device to emit light from the outer surface toward an inner wall which is included in the one area to form a background of the studio device, an upper body coupled to a top surface of the lower body rotatably relatively to the lower body, and a rotation module which includes a rotation unit to rotate the upper body relatively to the lower body, and a rotation control device including a communication function to receive a control command from an external device and to control driving of the rotation unit according to the control command. 1.
A photographing turn table , which is installed in one area of a studio device which allows image-photographing for a photographing target located in the one area , the photographing turn table comprising:a lower body fixedly located at the one area;a light emitting unit installed at a part of one surface of the lower body located in an opposite direction to a direction of a photographing device to emit light from the outer surface toward an inner wall which is included in the one area to form a background of the studio device;an upper body coupled to a top surface of the lower body rotatably relatively to the lower body; anda rotation module which include a rotation unit to rotate the upper body relatively to the lower body, and a rotation control device including a communication ...

03-10-2019 publication date

SYSTEM AND METHOD OF AUTOMATIC ROOM SEGMENTATION FOR TWO-DIMENSIONAL FLOORPLAN ANNOTATION

Number: US20190304150A1
Assignee:

A system that includes a coordinate measurement scanner having a first image sensor, one or more processors coupled to the scanner for generating a 2D image of the environment, a portable computing device having a second image sensor coupled to the one or more processors, and a mapping system. The one or more processors correlate a location captured by a first image from the portable computing device with the location in the 2D image of the environment in response to the first image being acquired by the second image sensor. The system further includes a mapping system configured to: generate a 2D map based on the 2D image of the environment, apply image recognition to the first image to identify and label an object in the first image, and update the 2D map based at least in part on the label of the object in the first image. 1. A system of generating a two-dimensional (2D) map of an environment , the system comprising:a coordinate measurement scanner comprising a light source, a first image sensor and a controller, the light source emits a beam of light to illuminate object points in the environment, the first image sensor is arranged to receive light reflected from the object points, the controller being operable to determine a distance value to at least one of the object points;one or more processors operably coupled to the scanner, the one or more processors being responsive to executable instructions for generating a 2D image of the environment in response to an activation signal from an operator and based at least in part on the distance value;a portable computing device having a second image sensor, the portable computing device being coupled for communication to the one or more processors, wherein the one or more processors are responsive to correlate a location captured by a first image from the portable computing device with the location in the 2D image of the environment in response to the first image being acquired by the second image sensor; and ...

10-10-2019 publication date

ILLUMINATION DEVICE AND DISPLAY UNIT

Number: US20190310537A1
Assignee:

An illumination device includes a light source section, an optical element, and a driver. The light source section includes a laser light source. The optical element includes a periodic structure, and is disposed in an optical path of light emitted from the light source section. The driver vibrates the optical element to cause a vibration direction to be inclined to a periodic direction of the periodic structure of the optical element. 1. An illumination device comprising:a light source section including a laser light source;an optical element including a periodic structure, and disposed in an optical path of light emitted from the light source section; anda driver that vibrates the optical element to cause a vibration direction to be inclined to a periodic direction of the periodic structure of the optical element.2. The illumination device according to claim 1 , whereinthe optical element includes the periodic structure in a first periodic direction and a second periodic direction that are different from each other, andan angle formed by the first periodic direction and the vibration direction and an angle formed by the second periodic direction and the vibration direction are asymmetrical about the vibration direction.3. The illumination device according to claim 2 , whereinthe optical element includes a light incident surface and a light output surface,the optical element includes the periodic structure in the first periodic direction on the light incident surface, andthe optical element includes the periodic structure in the second periodic direction on the light output surface.4. The illumination device according to claim 2 , whereinthe optical element includes a light incident surface and a light output surface, andthe optical element includes the periodic structure in the first periodic direction and the second periodic direction on one of the light incident surface and the light output surface.5. The illumination device according to claim 2 , whereinthe ...

15-11-2018 publication date

DIRECTED IMAGE CAPTURE

Number: US20180332217A1
Assignee: Hover Inc.

A process is provided for graphically guiding a user of a capture device (e.g., smartphone) to more accurately capture a series of images of a building. Images are captured as the picture taker moves around the building, taking a plurality (e.g., 4-16) of images from multiple angles and distances. Before capturing an image, a quality of the image may be determined to prevent low quality images from being captured or to provide instructions on how to improve the quality of the image capture. The series of captured images are uploaded to an image processing system to generate a 3D building model that is returned to the user. The returned 3D building model may incorporate scaled measurements of building architectural elements and may include a dataset of measurements for one or more architectural elements such as siding (e.g., aluminum, vinyl, wood, brick and/or paint), windows, doors or roofing. 1. A method of directed capture of building imagery, the method comprises: retrieving, from an image capture device, an image, the image including at least a portion of a subject building; determining if the image is usable in creating a multi-dimensional building model by: calculating a percentage of façade pixels present within the image, wherein the façade includes any side of the subject building; determining a distance of the façade pixels from a centroid of the image; determining obfuscated façade areas; and outputting the image to a quality classifier, wherein the quality classifier calculates a ranked quality of the image based on one or more of: the percentage of façade pixels present within the image, the distance of the façade pixels from a centroid of the image, or the obfuscated façade areas; and feeding an indication of the ranked quality to the image capture device along with instructions based on the indication to direct one or more additional image captures. 2.
The method of claim 1, wherein the image is displayed on a viewfinder of the image capture device. 3. ...
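The quality scoring this claim describes — façade pixel coverage, distance of façade pixels from the image centroid, and obstruction — can be sketched as below. The function name, weights, and thresholds are illustrative assumptions, not Hover's actual classifier.

```python
import numpy as np

def rank_capture_quality(facade_mask, obstruction_frac,
                         min_facade=0.2, max_centroid_dist=0.6):
    """Toy quality score for a building capture.

    facade_mask: 2D boolean array, True where a pixel shows the facade.
    obstruction_frac: fraction of the facade judged obscured (0..1).
    Returns (score in 0..1, usable flag).
    """
    h, w = facade_mask.shape
    coverage = facade_mask.mean()          # percentage of facade pixels
    ys, xs = np.nonzero(facade_mask)
    if len(xs) == 0:
        return 0.0, False
    # Mean distance of facade pixels from the image centroid, normalised
    # by the half-diagonal so it falls in 0..1.
    cy, cx = (h - 1) / 2, (w - 1) / 2
    dist = np.hypot(ys - cy, xs - cx).mean() / np.hypot(cy, cx)
    score = coverage * (1.0 - dist) * (1.0 - obstruction_frac)
    usable = (coverage >= min_facade and dist <= max_centroid_dist
              and obstruction_frac < 0.5)
    return round(float(score), 3), usable

# A centred 50x50 facade blob in a 100x100 frame, lightly obstructed:
mask = np.zeros((100, 100), dtype=bool)
mask[25:75, 25:75] = True
print(rank_capture_quality(mask, obstruction_frac=0.1))
```

A real classifier would be trained on labeled captures; the point here is only how the three claim features combine into a single ranked quality.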

29-10-2020 publication date

System and method for real-time camera tracking to form a composite image

Number: US20200344425A1
Author: Matthew Walker
Assignee: Individual

A system and method for tracking the movement of a recording device to form a composite image is provided. The system has a user device with a sensor array capturing motion data and velocity vector data of the recording device when the recording device is in motion, an attachment member for coupling the user device to the motion capturing device, and a server with program modules. The program modules described are a calibration module for calibrating a position of the user device relative to a position of a lens of the recording device; a recorder module for receiving the motion data and velocity vector data from the sensor array; and a conversion module for combining the position of the user device relative to the lens of the recording device with the motion data and velocity vector data and transforming the data into a file that is usable by a compositing suite, a three-dimensional application, or both.
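A minimal sketch of the conversion step the abstract describes: applying the calibrated device-to-lens offset to each recorded motion sample before exporting keyframe rows. The function name, sample format, and the assumption of a fixed translational offset (no rotation) are all illustrative.

```python
def convert_track(samples, lens_offset):
    """samples: list of (t, (x, y, z)) device positions from the sensor array.
    lens_offset: (dx, dy, dz) from calibration, device -> lens.
    Returns keyframe rows (t, lens_x, lens_y, lens_z) that a compositing
    or 3-D package could ingest after serialisation."""
    ox, oy, oz = lens_offset
    return [(t, x + ox, y + oy, z + oz) for t, (x, y, z) in samples]

# Two samples 40 ms apart, lens 5 cm above and 2 cm behind the device:
track = convert_track([(0.0, (0, 0, 0)), (0.04, (0.1, 0, 0))],
                      lens_offset=(0.0, 0.05, -0.02))
print(track)
```

A production exporter would also carry orientation and write an actual file format (e.g. keyframed camera data); this shows only the offset-plus-samples combination step.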

14-11-2019 publication date

METHOD FOR PROVIDING INTERFACE FOR ACQUIRING IMAGE OF SUBJECT, AND ELECTRONIC DEVICE

Number: US20190349562A1
Assignee:

An electronic device includes a sensor that detects movement of the electronic apparatus, a camera photographing an external object to the apparatus, a display outputting an image corresponding to the external object to the apparatus, and a processor being electrically connected to the display. The processor obtains a first image of a part of the external object to the apparatus through the camera, wherein the obtaining of the first image includes identifying a first position of the electronic apparatus with respect to the external object to the apparatus using the sensor, determines a movement path of the electronic apparatus from the first position to a second position capable of obtaining a second image to generate a stereoscopic image of the external object to the apparatus with the first image, and outputs a virtual path corresponding to the movement path through the display. 1. An electronic apparatus comprising:a sensor configured to detect movement of the electronic apparatus;a camera configured to photograph an external object to the apparatus;a display configured to output an image corresponding to the external object to the apparatus; anda processor configured to be electrically connected to the display,wherein the processor is configured to:obtain a first image of a part of the external object to the apparatus through the camera, wherein the obtaining of the first image includes identifying a first position of the electronic apparatus with respect to the external object to the apparatus using the sensor;determine a movement path of the electronic apparatus from the first position to a second position capable of obtaining a second image to generate a stereoscopic image of the external object to the apparatus with the first image; andoutput a virtual path corresponding to the movement path through the display.2. The electronic apparatus of claim 1 , wherein the processor is configured to:determine the virtual path based on the movement path and the ...
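The movement path the abstract determines runs from the first capture position to a second position suitable for a stereoscopic pair. A toy version of that geometry — shifting the camera by a chosen stereo baseline perpendicular to the viewing direction so the subject distance is preserved — assuming a 2-D ground plane and hypothetical names:

```python
import math

def second_capture_position(cam_pos, subject_pos, baseline):
    """Given the first camera position and the subject position (both in
    a ground plane, metres), return a second capture position offset by
    `baseline` perpendicular to the viewing direction. Illustrative only;
    the patent's path computation is not disclosed at this level."""
    vx, vy = subject_pos[0] - cam_pos[0], subject_pos[1] - cam_pos[1]
    d = math.hypot(vx, vy)
    # Unit vector perpendicular to the viewing direction.
    px, py = -vy / d, vx / d
    return (cam_pos[0] + baseline * px, cam_pos[1] + baseline * py)

# Camera at origin, subject 2 m ahead, 65 mm (eye-like) baseline:
print(second_capture_position((0.0, 0.0), (0.0, 2.0), baseline=0.065))
# → (-0.065, 0.0)
```

The virtual path shown on the display would then interpolate between the first position and this computed second position.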

20-12-2018 publication date

A METHOD FOR CREATING A STEREOSCOPIC IMAGE SEQUENCE

Number: US20180367783A1
Assignee: CREATIVE TECHNOLOGY LTD

There is provided a method for creating a stereoscopic image sequence. The method can include capturing a sequence of static images and forming a plurality of image pairs. Each image pair can include a first image and a second image selected from the sequence of static images. Selection of the first image can be done in a manner so that the image pairs are formed in a spatially coherent manner. The stereoscopic image sequence can be created based on the image pairs. Creating the stereoscopic image sequence can, for example, relate to producing a stereoscopic video. 1. A method for creating a stereoscopic image sequence, the method comprising: capturing a sequence of static images; and forming a plurality of image pairs, each image pair comprising a first image and a second image selected from the sequence of static images, selection of the first image being done in a manner so that the image pairs are formed in a spatially coherent manner, wherein each image pair is associable with a stereo-base that is based on separation between the first and second images, the stereo-base being variable, and wherein the stereoscopic image sequence is created based on the image pairs. 2. (canceled) 3. The method of claim 1, wherein the stereo-base is variable based on selection of the first and second images of an image pair. 4. The method of claim 3, wherein selection of the first and second images of an image pair is based on at least one of manual based selection and automatic based selection. 5. The method of claim 4, wherein manual based selection is by manner of presenting the sequence of static images on a display screen for user selection of the first and second images of an image pair. 6. The method of claim 4, wherein automatic based selection is based on at least one of variance in focal length associated with at least one static image, salient object detection and characteristics associated with the static images. 7.
The method of claim 1 , wherein the stereoscopic ...
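The variable stereo-base pairing the claims describe can be sketched as one possible automatic selection policy: pair each frame with a later frame whose horizontal separation best matches a target baseline, keeping partner indices monotone so pairs stay spatially coherent. The policy and names are my assumptions; the patent leaves selection open (manual or automatic).

```python
def form_stereo_pairs(xs, target_base):
    """xs: horizontal camera positions, one per captured frame, in capture
    order. For each frame i, pick as its partner the later frame whose
    separation is closest to target_base; partner indices never move
    backwards, which keeps the pairing spatially coherent."""
    pairs, j = [], 0
    for i in range(len(xs)):
        j = max(j, i + 1)
        best = None
        for k in range(j, len(xs)):
            err = abs((xs[k] - xs[i]) - target_base)
            if best is None or err < best[0]:
                best = (err, k)
        if best is None:
            break
        j = best[1]
        pairs.append((i, j))
    return pairs

# Five frames from a sideways pan, aiming for a ~6 cm stereo-base:
print(form_stereo_pairs([0.0, 0.02, 0.05, 0.08, 0.12], target_base=0.06))
# → [(0, 2), (1, 3), (2, 4), (3, 4)]
```

Because the achieved separation varies per pair, the effective stereo-base is variable exactly as the claims allow.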

16-05-2017 publication date

Computer vision algorithm for capturing and refocusing imagery

Number: US9654761B1
Assignee: Google LLC

Systems and methods for the generation of depth data for a scene using images captured by a camera-enabled mobile device are provided. According to a particular implementation of the present disclosure, a reference image can be captured of a scene with an image capture device, such as an image capture device integrated with a camera-enabled mobile device. A short video or sequence of images can then be captured from multiple different poses relative to the reference scene. The captured image and video can then be processed using computer vision techniques to produce an image with associated depth data, such as an RGBZ image.

10-11-2022 publication date

Image Sensor and Image Apparatus

Number: US20220360759A1
Assignee:

An image capturing element according to the present disclosure includes a pixel array formed by a plurality of pixels arranged in an array on a substrate, each of the plurality of pixels including a photoelectric conversion element, a transparent layer formed on the pixel array, and a spectroscopic element array formed by a plurality of spectroscopic elements arranged in an array, and each of the plurality of spectroscopic elements is at a position corresponding to one of the plurality of pixels inside or on the transparent layer. Each of the plurality of spectroscopic elements includes a plurality of microstructures formed from a material having a refractive index higher than a refractive index of the transparent layer. The plurality of microstructures have a microstructure pattern. Each of the plurality of spectroscopic elements separates incident light into deflected light beams having different propagation directions according to the wavelength and emits the deflected light beams. 1. An image capturing element, comprising: a pixel array formed by a plurality of pixels arranged in an array on a substrate, each of the plurality of pixels including a photoelectric conversion element; a transparent layer formed on the pixel array; and a spectroscopic element array formed by a plurality of spectroscopic elements arranged in an array, each of the plurality of spectroscopic elements being at a position corresponding to one of the plurality of pixels inside or on the transparent layer, wherein each of the plurality of spectroscopic elements includes a plurality of microstructures formed from a material having a refractive index higher than a refractive index of the transparent layer, the plurality of microstructures have a microstructure pattern, and each of the plurality of spectroscopic elements separates incident light into deflected light beams having different propagation directions according to a wavelength, and emits the deflected light beams. 2.
The ...
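The wavelength-dependent deflection such a periodic structure produces follows the standard grating equation sin θ = mλ/Λ for period Λ and diffraction order m. A worked example with illustrative numbers (not taken from the disclosure) shows how blue, green, and red light leave in different directions:

```python
import math

def first_order_angle_deg(wavelength_nm, period_nm):
    """First-order (m = 1) diffraction angle in degrees at normal
    incidence, from sin(theta) = m * lambda / period. Returns None when
    the order is evanescent (no propagating beam)."""
    s = wavelength_nm / period_nm
    if abs(s) > 1:
        return None
    return math.degrees(math.asin(s))

# Blue, green, and red light on a 1200 nm period structure:
for lam in (450, 550, 650):
    print(lam, "nm ->", round(first_order_angle_deg(lam, 1200), 1), "deg")
```

Longer wavelengths deflect more strongly, which is the separation effect the abstract's spectroscopic elements exploit to route colors to different pixels.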

18-02-2015 publication date

Imaging device

Number: JP5672989B2
Author: 真一郎 田尻
Assignee: Sony Corp

21-02-1992 publication date

METHOD OF TEMPORALLY CREATING COUPLES OF STEREOSCOPIC IMAGES, AND DEVICE USING SUCH A METHOD.

Number: FR2654291B1
Author: Pochet Roger
Assignee: Pochet Roger

09-03-2021 publication date

Three-dimensional scanner device

Number: KR102224166B1
Authors: 장민호, 장지웅
Assignee: Medit Co., Ltd. (주식회사 메디트)

The present invention relates to a three-dimensional scanner device, and in particular comprises: a first frame provided to be swing-rotatable about a first rotation axis in a scan space and equipped with a scan module; a second frame that swing-rotates in conjunction with the first frame, rotating along a swing trajectory formed by a second rotation axis spaced apart parallel to the first rotation axis at an equal separation distance, and on which an object to be measured is seated; a connection frame whose one end is connected to the first rotation axis side so as to interlock with the first frame and whose other end is connected to the second rotation axis side so as to be rotatable relative to the second frame; and an interlocking part provided to connect the first frame and the second frame via the connection frame and to keep the second frame horizontal with respect to the swing rotation of the first frame. By preventing play and dropping of the measurement object during the scanning process, the device enables more precise scanning.
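Read as a parallelogram-style linkage — one way to realize the equal-radius parallel second axis plus the horizontal-keeping interlock described above (my assumption, not the patent's stated mechanism) — the second frame's pose is easy to compute: the connection frame carries the second axis along a circular arc while the interlock cancels the rotation, so the tray orientation stays constant.

```python
import math

def second_frame_pose(theta_deg, arm_len=0.3, offset=(0.15, 0.0)):
    """Pose of the second (object) frame when the first frame swings by
    theta_deg. The connection frame of length arm_len pivots about the
    first axis at `offset`; the interlock keeps the tray horizontal.
    Returns ((x, y) of the second axis, tray orientation in degrees).
    All dimensions are illustrative."""
    th = math.radians(theta_deg)
    x = offset[0] + arm_len * math.cos(th)
    y = offset[1] + arm_len * math.sin(th)
    return (round(x, 4), round(y, 4)), 0.0  # orientation never changes

print(second_frame_pose(0))    # → ((0.45, 0.0), 0.0)
print(second_frame_pose(90))   # → ((0.15, 0.3), 0.0)
```

The constant 0.0 orientation at every swing angle is exactly the property that keeps the measurement object from shifting or falling during the scan.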

24-08-2004 publication date

Method and apparatus for tracking a medical instrument based on image registration

Number: US6782287B2

An apparatus, method and system for tracking a medical instrument, as it is moved in an operating space to a patient target site in the space, by constructing a composite, 3-D rendition of at least a part of the operating space based on an algorithm that registers pre-operative 3-D diagnostic scans of the operating space with real-time, stereo x-ray or radiograph images of the operating space. The invention has particular utility in tracking a flexible medical instrument and/or a medical instrument that moves inside the patient's body and is not visible to the surgeon.
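The registration step such a system builds on — aligning pre-operative 3-D scan points with points observed intra-operatively — is classically solved by least-squares rigid alignment (the Kabsch algorithm). A generic sketch of that building block, not the patent's specific registration algorithm:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid alignment (Kabsch) of matched 3-D point sets.
    src, dst: (N, 3) arrays of corresponding points.
    Returns (R, t) with dst ~= src @ R.T + t."""
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t

# Recover a known 90-degree rotation about z plus a translation:
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1.]])
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.]])
dst = src @ Rz.T + np.array([10., 0, 0])
R, t = rigid_register(src, dst)
print(np.allclose(R, Rz), np.allclose(t, [10, 0, 0]))  # True True
```

In the patent's setting the correspondences would come from matching the diagnostic scan to the real-time stereo radiographs rather than being given directly.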

07-07-2011 publication date

Image processing apparatus, image capturing apparatus, image processing method, and program

Number: JP2011135246A
Assignee: Sony Corp

[Problem] To provide an apparatus and method for generating a left-eye composite image and a right-eye composite image for three-dimensional image display by joining strip regions cut out from a plurality of images. [Solution] Strip regions cut out from a plurality of images are joined to generate a left-eye composite image and a right-eye composite image for three-dimensional image display. An image compositing unit obtains from memory, or calculates, the allowable range of the setting positions of the left-eye and right-eye image strips that makes it possible to generate left-eye and right-eye composite images with different viewpoints, and the left-eye and right-eye strips are set within this allowable range. Specifically, the strip setting positions are determined so that no overlap occurs between the setting regions of the left-eye image strip and the right-eye image strip and both fall within the storage range of the image memory. This strip setting processing reliably produces left-eye and right-eye composite images applicable to three-dimensional image display processing. [Selected figure] FIG. 11

22-01-2014 publication date

Depth map generation techniques for conversion of 2d video data to 3d video data

Number: KR101354387B1
Assignee: Qualcomm Incorporated

This disclosure describes techniques for generating depth maps for video units, such as video frames or slices of video frames. These techniques may be performed by a video encoder to convert two-dimensional (2D) video into three-dimensional (3D) video. These techniques may alternatively be performed by a video decoder to convert the received 2D video into 3D video. These techniques may use a combination of motion and color considerations in the depth map generation process.
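One way the motion and color cues mentioned here could be blended into a per-pixel depth map — the weights and normalization are my illustration, not the actual scheme of the disclosure:

```python
import numpy as np

def depth_map(motion_mag, color_cue, w_motion=0.6):
    """Blend two depth cues into one map (illustrative weighting).
    motion_mag: per-pixel motion magnitude (larger often means nearer).
    color_cue:  per-pixel depth prior derived from color, already 0..1.
    Returns a depth map in 0..1 with 1 = nearest."""
    m = motion_mag.astype(float)
    rng = m.max() - m.min()
    m = (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
    return w_motion * m + (1 - w_motion) * color_cue

motion = np.array([[0., 4.], [8., 8.]])
color = np.array([[0.5, 0.5], [0.0, 1.0]])
print(depth_map(motion, color))
```

An encoder or decoder would compute `motion_mag` from estimated motion vectors and `color_cue` from a chroma-based heuristic, then smooth the blended map before view synthesis.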

17-08-2011 publication date

Image processing apparatus, imaging apparatus, image processing method, and program

Number: CN102158719A
Assignee: Sony Corp

An image processing apparatus includes an image evaluation unit that evaluates the suitability of a composite image as a three-dimensional image. The image evaluation unit performs processing that evaluates the suitability of the composite image as a three-dimensional image by analyzing block-corresponding difference vectors, calculated by subtracting a global motion vector, which indicates the movement of the entire image, from the block motion vectors, which are the motion vectors of the block units of the composite image; compares a predetermined threshold with either the block area of the blocks having block-corresponding difference vectors or a movement-amount sum value; and, when the block area is equal to or greater than a predetermined area threshold or when the movement-amount sum value is equal to or greater than a predetermined movement-amount threshold, performs processing that determines that the composite image is unsuitable as a three-dimensional image.
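The evaluation this abstract describes — subtracting the global motion vector from each block motion vector, then thresholding the offending block area or the summed motion — can be sketched as follows (all threshold values are illustrative, not from the disclosure):

```python
import numpy as np

def suitable_as_3d(block_mvs, global_mv, block_area=16 * 16,
                   area_thresh=2000, move_thresh=50.0):
    """block_mvs: list of (dx, dy) motion vectors, one per block.
    global_mv:  (dx, dy) motion of the whole image (e.g. camera pan).
    Flags the composite as unsuitable when too much block area, or too
    much summed motion, deviates from the global motion."""
    diffs = np.asarray(block_mvs, float) - np.asarray(global_mv, float)
    mags = np.hypot(diffs[:, 0], diffs[:, 1])
    moving = mags > 1.0           # blocks not following the global motion
    area = moving.sum() * block_area
    move_sum = mags[moving].sum()
    return not (area >= area_thresh or move_sum >= move_thresh)

# Mostly uniform pan of (5, 0): suitable.
print(suitable_as_3d([(5, 0)] * 20 + [(9, 3)], (5, 0)))        # True
# A third of the blocks move independently: unsuitable.
print(suitable_as_3d([(5, 0)] * 14 + [(12, 5)] * 7, (5, 0)))   # False
```

Independently moving regions break the strip-composition assumption behind such panorama-style 3-D images, which is why large difference vectors disqualify a frame.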

07-05-2019 publication date

Articulated arm coordinate measurement machine that uses a 2D camera to determine 3D coordinates of smoothly continuous edge features

Number: US10281259B2
Assignee: Faro Technologies Inc

A measurement device having a camera captures images of an object at three or more different poses. A processor determines 3D coordinates of an edge point of the object based at least in part on the captured 2D images and pose data provided by the measurement device.
