Total found: 8585. Displayed: 200.
27-05-2016 publication date

IMAGE PROCESSING METHOD FOR DETERMINING THE LOCALIZATION DEPTH OF A REFRACTIVE LASER FOCUS

Number: RU2585425C2
Assignee: WAVELIGHT GMBH (DE)

The group of inventions relates to medical technology, specifically to a laser apparatus, system, and method for determining the localization depth of the focal point of a laser beam. An interface device containing an applanation element with front and back surfaces can be attached to the laser apparatus. A laser beam with a predefined profile is focused through the applanation element to a focal point, and a composite image is detected, formed by a spurious reflection from the front surface of the applanation element and a normal reflection from its back surface. The spurious reflection is then filtered out of the composite image. Using the retained normal reflection, the localization depth of the focal point of the laser beam can be determined. 3 independent and 10 dependent claims, 12 figures.

11-01-2019 publication date

TRACKING SYSTEM AND METHOD FOR USE IN SURVEILLING AMUSEMENT PARK EQUIPMENT

Number: RU2676874C2

The invention relates to a tracking system with a dynamic signal-to-noise ratio. The technical result is improved reliability of the tracking system in outdoor environments and in the presence of other sources of electromagnetic radiation. This is achieved by a tracking system that includes an emitter configured to emit electromagnetic radiation into an area, a detector configured to detect electromagnetic radiation reflected back from vehicles in the area, and a control module configured to evaluate signals from the detector in order to monitor the amusement park equipment and determine whether the equipment has worn or shifted. 3 independent and 17 dependent claims, 20 figures.

23-04-2018 publication date

NON-DESTRUCTIVE TESTING INSTALLATION

Number: RU2651613C1

The invention relates to product quality control and concerns a non-destructive testing installation. The installation is intended for non-destructive testing of gas turbine engine parts and can inspect the joint between a base material formed from a fiber-reinforced material and a metallic joined material. The installation includes a drive mechanism that moves the gas turbine engine parts, a laser beam source, an infrared-to-visible image converter, and a control and arithmetic processing device. The control and arithmetic processing device is configured to store shape data of a gas turbine engine part, to control the drive mechanism so that the laser beam is emitted onto the joint, and to obtain a result showing the state of the joint based on imaging data acquired by the image converter ...

28-12-2020 publication date

ESTIMATION OF ELECTROMECHANICAL PARAMETERS USING DIGITAL IMAGES AND MODEL-BASED FILTERING METHODS

Number: RU2739711C1

The invention relates to computing technology. The technical result is improved accuracy of the estimated parameters in the estimation algorithm. The method includes providing a photorealistic virtual object of a physical object, performing a measurement step, and performing an estimation step, where the estimation step includes applying external excitations to the photorealistic virtual object to produce photorealistic virtual field measurement results, comparing those virtual field measurements with the physical field measurements, and thereby characterizing the physical object. The method is distinguished in that the photorealistic virtual object reproduces the pixel intensities of an image of the physical object, and at least one two-dimensional image provides those pixel intensities, the comparison including comparing the pixel intensities of the photorealistic ...

21-02-2018 publication date

DEVICE AND METHOD FOR RECOGNIZING FOLDS IN BANKNOTES

Number: RU2645591C1

The invention relates to a device and method for identifying folds in banknotes. The technical result is improved reliability of banknote recognition. The method and device use a laser source, a rectangular grating, a matrix photoelectric sensor, an imaging lens group, and a signal processing module. The laser source emits laser light; the rectangular grating is positioned above the laser source and modulates the laser light on the banknote surface into stripes that alternate from bright to dark according to defined rules; the matrix photoelectric sensor acquires the images. Detecting whether a banknote actually has a fold is thereby converted into detecting whether the characteristic stripe images of the projected grating are distorted, which effectively avoids the influence of ...

21-10-2010 publication date

Measuring device

Number: DE102010000075A1
Assignee:

A device for three-dimensional measurement comprises an irradiation unit that projects a light pattern with a stripe-shaped intensity distribution onto a measurement object; an imaging unit that images the light reflected from the object illuminated with the light pattern to obtain image data; an image processing unit that performs height measurement at various coordinate positions of the object based on the image data captured by the imaging unit; and a correction computation unit that corrects, on the basis of at least height information of the imaging unit and irradiation-angle information of the light pattern projected onto the object, the distortion in the height data measured by the image processing unit and in the coordinate data of a measurement point on the object that arises from the angle of view of the imaging unit's lens.

26-02-1998 publication date

Procedure for determining contour of 3-dimensional surface

Number: DE0019720160A1
Assignee:

A procedure for determining the contour of the outer surface of a three-dimensional object is based on the projection of a series of strip patterns indexed by Gray code; for each pixel element of the resulting images a succession of Gray-code intensities is obtained. A complementary series of intensities is established by projecting a further sequence of strip patterns incorporating a phase shift, and the two series together enable the absolute phase value to be determined by signal processing and a suitable computer algorithm.

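The Gray-code decoding step in the entry above can be sketched as follows. The function name and the bit-plane layout (one binarized image per projected pattern, most significant first) are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def gray_to_binary(gray_bits):
    """Convert per-pixel Gray-code bit planes to integer stripe indices.

    gray_bits: array of shape (n_patterns, H, W) with values 0/1,
    most-significant pattern first (hypothetical layout).
    """
    binary = gray_bits[0].copy()          # b_0 = g_0
    indices = binary.astype(np.int64)
    for bits in gray_bits[1:]:
        binary = np.bitwise_xor(binary, bits)  # b_i = b_{i-1} XOR g_i
        indices = (indices << 1) | binary      # append next binary bit
    return indices
```

Each decoded index identifies the projected stripe a pixel falls in; the phase-shifted sequence would then refine the position within that stripe.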
08-05-2014 publication date

Calibration device and method for calibrating a dental camera

Number: DE102012220048A1
Assignee:

The invention relates to a method and a calibration device for calibrating a dental camera (1) based on a stripe projection method for three-dimensional optical measurement of a dental object (10), comprising a projection grating (2) for generating a projection pattern (3) of multiple stripes (5) and an optical system (4) that projects the generated projection pattern (3) onto the object (10) to be measured. In a first step, a reference surface (74) with known dimensions is measured with the dental camera (1) using the stripe projection method. Actual coordinates (33, 36) of multiple measurement points (11) on the reference surface (74) are determined and compared with target coordinates (34) of the measurement points (11) on the reference surface (74). In the next step, starting from the deviations (35, 37) between the actual coordinates (33, ...

29-10-2015 publication date

Portable 3D scanner and method for generating a 3D scan result corresponding to an object

Number: DE102015207638A1
Assignee:

A portable 3D scanner contains at least two image sensor units and a depth-map generation unit. When the portable 3D scanner is moved around an object, a first and a second image sensor unit of the at least two image sensor units respectively capture a plurality of first images and a plurality of second images of the object. While the first image sensor unit captures each first image, a corresponding distance exists between the portable 3D scanner and the object. The depth-map generation unit generates a corresponding depth map from each first image and its corresponding second image. A plurality of depth maps generated by the depth-map generation unit, the plurality of first images, and the plurality of second images are used to produce a color 3D scan result corresponding to the object ...

11-11-2021 publication date

Rendering augmented reality with occlusion

Number: DE102021204765A1
Assignee:

AR elements are occluded in video frames. A depth map is determined for a frame of a video received from a video capture device. An AR graphical element to overlay the frame is received, along with an element distance for the AR graphical element relative to a position of the user of the video capture device (e.g., the geographic position of the device). Based on the depth map for the frame, a pixel distance is determined for each pixel in the frame. The pixel distances are compared with the element distance, and in response to the pixel distance for a given pixel being smaller than the element distance, that pixel of the frame is displayed instead of the corresponding pixel of the AR graphical element.

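The per-pixel occlusion test described above (a scene pixel wins whenever it is nearer than the AR element) can be sketched as a minimal NumPy routine; all names, the array layout, and the single scalar element distance are hypothetical simplifications:

```python
import numpy as np

def composite_with_occlusion(frame, depth_map, ar_pixels, ar_mask, element_distance):
    """Overlay an AR element but keep scene pixels that are closer than the element.

    frame: (H, W, 3) camera image; depth_map: (H, W) per-pixel distance;
    ar_pixels: (H, W, 3) rendered AR element; ar_mask: (H, W) bool where the
    element covers the frame; element_distance: scalar distance of the element.
    """
    out = frame.copy()
    # Show the AR pixel only where the element lies AND the scene is not nearer.
    show_ar = ar_mask & (depth_map >= element_distance)
    out[show_ar] = ar_pixels[show_ar]
    return out
```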
08-07-2021 publication date

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM

Number: DE112019004374T5
Assignee: Sony Corporation

... [Problem] To provide an information processing device, an information processing method, and a program capable of acquiring highly accurate depth information. [Solution] The information processing device has an interpolation image generation unit, a difference image generation unit, and a depth calculation unit. From a first normal image, a pattern image obtained by irradiation with infrared pattern light, and a second normal image, the interpolation image generation unit generates an interpolation image corresponding to the time at which the pattern image was captured. The difference image generation unit generates a difference image between the interpolation image and the pattern image. The depth calculation unit calculates depth information using the difference image.

31-10-2018 publication date

Image generation device

Number: DE112013005574B4

An image generation device comprising: a projection unit (22) for projecting pattern light of a predetermined wavelength into an imaged space; an imaging unit (11) for imaging the imaged space; a separation unit (16) for separating a projection-image component from the imaging signal obtained by the imaging unit (11), by taking the difference between the imaging signal obtained when the pattern light is projected and the imaging signal obtained when it is not; and a distance information generation unit (18) for generating distance information based on the projection-image component separated by the separation unit (16); wherein the distance information generation unit (18) determines projection angles of bright spots in the imaged projection pattern from the arrangement of the bright spots in the projection image, represented by ...

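The difference-based separation of the projection-image component described above can be illustrated with a short sketch; the function name and the clip-at-zero choice are assumptions, not the patent's implementation:

```python
import numpy as np

def separate_projection_component(img_with_pattern, img_without_pattern):
    """Isolate the projected-pattern component by subtracting the ambient image.

    Both inputs are arrays of the same shape; clipping at zero keeps only
    light attributable to the projector (a simplified model that ignores
    motion and sensor noise between the two exposures).
    """
    diff = img_with_pattern.astype(np.float64) - img_without_pattern.astype(np.float64)
    return np.clip(diff, 0.0, None)
```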
25-01-1996 publication date

Three-dimensional measuring device using twin CCD cameras and sample beam projector

Number: DE0019525561A1
Assignee:

The 3D measurement system forms several observation planes, each associating the centre of a first camera with an intersecting line, together with a number of straight view lines associating the centre of a second camera with points on these intersecting lines. By directing a sample light beam at an object plane within the stereo image capture area, the coordinates of a number of intersection points are calculated, each from an observation plane and the view lines referred to each intersecting line, and a plane equation is formed from the coordinate values of the points of intersection.

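The final step in the entry above, forming a plane equation from computed intersection points, can be sketched for the minimal case of three non-collinear points; the function name is illustrative:

```python
import numpy as np

def plane_from_points(p1, p2, p3):
    """Fit the plane a*x + b*y + c*z = d through three non-collinear 3-D points.

    Returns the (unnormalized) normal vector (a, b, c) and the offset d.
    """
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)  # perpendicular to both in-plane edges
    d = normal.dot(p1)                   # any point on the plane fixes d
    return normal, d
```

With more than three intersection points, a least-squares fit over all of them would be the natural generalization.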
06-09-2018 publication date

Method for trajectory-based feature assignment in an image captured by an image capture unit of a vehicle

Number: DE102017117211B3

A method is provided for assigning characteristic features, in an image of a characteristic pattern captured by an image capture unit of a vehicle, to the headlight units that generate the structures of the characteristic pattern. After the characteristic features are detected in the captured image, a trajectory is computed for at least one structure, the trajectory describing the translation of the structure in the image plane of the image capture unit as a function of the distance between the vehicle and the projection surface of the characteristic pattern. Finally, using a computed detection mask along the trajectory in the captured image, the detected characteristic features lying within the detection mask are determined, so that a trajectory-based assignment of the characteristic features to the headlight units can be made.

24-09-2020 publication date

Method for determining a positional relationship between a camera and a headlight of a vehicle

Number: DE102017124955B4

A method for determining a positional relationship between a camera and a headlight of a vehicle, comprising the following steps: i) positioning the vehicle at a distance in front of a surface bearing a calibration pattern; ii) determining the distance between the calibration pattern and the vehicle; iii) projecting a pattern onto the surface using a headlight of the vehicle; iv) detecting characteristic features in the projected pattern; v) performing steps i) to iv) at at least one further distance between the vehicle and the surface, the distance being varied a predetermined number of times and the characteristic features being assigned to the respective distances; vi) interpolating the positions of detected characteristic features that correspond to one another at the various distances using a linear function; vii) determining an intersection point of the resulting linear functions; viii) determining the ...

10-06-1998 publication date

Eye position detecting apparatus and method therefor

Number: GB0009807620D0
Author:
Assignee:

10-08-2005 publication date

Apparatus and methods for three dimensional scanning

Number: GB0002410794A
Assignee:

A structured light technique in which multiple uniform uncoded stripes are projected onto an object and the radiation received by a camera produces a pixellated bitmap image. Processing means is configured to process the image into an array of peaks, search the peaks for discontinuities, and create an occlusion map.

03-12-2008 publication date

Apparatus and method for body characterisation

Number: GB0002449648A
Assignee:

A body characterising system comprises a computing device 10, an optical imaging device 756 to capture one or more images, and a scale estimation means. The computing device comprises image processing means to identify the image of the user within the background of the one or more captured images, and is operable to estimate the scale of the image of the user from the scale estimation means and to generate data descriptive of the user's body dependent upon the identified image of the user and its estimated scale. The body data may be used with a networked server for selecting goods such as clothes and furniture suitable for the body data.

01-11-2017 publication date

Vehicle lane placement

Number: GB0002549836A
Assignee:

A method and computer system for determining and controlling vehicle placement relative to roadway features such as lane markers, curbs, or the roadway edge. The method comprises sending video 52 and light detection and ranging (LIDAR) 50 data from sources affixed to a vehicle to a recurrent neural network (RNN) 54, which includes feedback between layers (see figure 3), and to a dynamic convolutional neural network (DCNN) 56 to identify a roadway feature. A softmax decision network 58 aggregates the outputs of the RNN and DCNN to determine the position of the vehicle on the roadway, and this information is used to control the steering, braking, and acceleration of the vehicle. The computer may extract a video frame image and a LIDAR frame image from the data (110, fig. 5; 210, fig. 6) and convert these into machine-readable images (115, fig. 5) for input into the RNN and DCNN. The method may also include comparing the vehicle position to a set of training data images to establish an error rate ...

02-12-2015 publication date

3D scene rendering

Number: GB0201518613D0
Author:
Assignee:

24-12-2002 publication date

System and method for confirming electrical connection defects

Number: GB0000227117D0
Author:
Assignee:

03-07-2019 publication date

Image processing

Number: GB0201907202D0
Author:
Assignee:

11-03-2020 publication date

Method and device for three dimensional measurement using feature quantity

Number: GB0002577013A
Assignee:

A relationship between spatial coordinates and a plurality of feature quantities, obtained from patterns projected by a plurality of projection units and/or from a change of the patterns, is determined in advance. Using this relationship between the feature quantities and the spatial coordinates, the spatial coordinates of the surface of a subject to be measured are obtained from the feature quantities derived from the patterns projected onto the surface by the projection units, or from a change of those patterns.

15-12-2008 publication date

METHOD AND DEVICE FOR OPTICAL SCANNING OF A VEHICLE WHEEL

Number: AT0000417254T
Assignee:

15-01-2008 publication date

PORTABLE COORDINATE MEASURING MACHINE WITH ARTICULATED ARM

Number: AT0000382845T
Assignee:

15-02-2011 publication date

METHOD FOR CAPTURING THREE-DIMENSIONAL IMAGES

Number: AT0000508563A4
Author:
Assignee:

15-08-1993 publication date

OPTICAL DEVICE FOR MEASURING SURFACE OUTLINES

Number: AT0000093052T
Assignee:

19-11-2020 publication date

Control system for laser projector and mobile terminal

Number: AU2019277762A1
Assignee: Shelston IP Pty Ltd.

A mobile terminal (100) and a control system (30) for a laser projector (10). The control system (30) comprises a first driving circuit (31), a second driving circuit (32), a monitoring timer (34), and a microprocessor (35) and an application processor (33) which are connected to the monitoring timer (34). The first driving circuit (31) is connected to the laser projector (10) and used for outputting an electric signal to drive the laser projector (10) to project laser light. The second driving circuit (32) is connected to the first driving circuit (31) and used for supplying power to the first driving circuit (31). The monitoring timer (34) is connected to the second driving circuit (32). The microprocessor (35) is used for transmitting a first predetermined signal to the monitoring timer (34). The application processor (33) is used for transmitting a second predetermined signal to the monitoring timer (34). When the monitoring timer (34) fails to read the first predetermined signal or the second ...

24-12-2020 publication date

IOTM Class Monitoring System: IoT and AI based Monitoring System for Faculty and their Course Materials Availability

Number: AU2020102908A4
Assignee: Kavita Sharma

Our invention, the "IOTM Class Monitoring System", receives, observes, or captures an image, such as a depth image of a scene, with a device; a grid of voxels may then be generated from the depth image so that the depth image is downsampled. Voxels in the grid may be removed to isolate one or more voxels associated with a foreground object, such as a teacher, student, director, or other human target; the location or position of one or more extremities of the isolated human target may be determined, and a model may be adjusted based on those locations or positions. The invention also covers online education: the system includes an educator provider system and at least one student system connected via a network means, such as the Internet, for bidirectional communication between them. The educator provider system is capable of transmitting, and the at least one student system is ...

24-09-2020 publication date

A Multi-sensor Data Fusion Based Vehicle Cruise System and Method

Number: AU2020101561A4
Assignee: Alder IP Pty Ltd

The invention discloses a multi-sensor data fusion based vehicle cruise system and method, relating to the field of vehicle safety. The system and method sense the environment with multiple sensors and re-characterize the information from the different sensors to achieve high-level nonlinear data fusion; the fused data are used to build a local 3D map that assists decision-making, giving the system depth perception of the driving environment, good self-adaptation, and sound decision-making. In addition, the invention can identify the types of obstacles, provide more effective decision feedback for obstacles with a higher degree of danger, and improve the real-time response and flexibility of the vehicle cruise system. The multi-sensor data fusion based vehicle cruise system and method provide a strong comprehensive capability of situation awareness to perceive obstacles around the vehicle, strong flexibility in reacting to obstacles, diversified ...

19-11-2020 publication date

Synchronized spinning lidar and rolling shutter camera system

Number: AU2018341833B2
Assignee: RnB IP Pty Ltd

One example system comprises a LIDAR sensor that rotates about an axis to scan an environment of the LIDAR sensor. The system also comprises one or more cameras that detect external light originating from one or more external light sources. The one or more cameras together provide a plurality of rows of sensing elements, and the rows of sensing elements are aligned with the axis of rotation of the LIDAR sensor. The system also comprises a controller that operates the one or more cameras to obtain a sequence of image pixel rows. A first image pixel row in the sequence is indicative of external light detected by a first row of sensing elements during a first exposure time period. A second image pixel row in the sequence is indicative of external light detected by a second row of sensing elements during a second exposure time period.

03-06-2004 publication date

THREE-DIMENSIONAL SHAPE MEASURING METHOD AND ITS DEVICE

Number: AU2003280776A1
Assignee:

24-07-2003 publication date

STEREOSCOPIC THREE-DIMENSIONAL METROLOGY SYSTEM AND METHOD

Number: AU2002357335A1
Assignee:

05-03-2020 publication date

3D point clouds

Number: AU2015250748B2
Assignee: Madderns Pty Ltd

Provided is a method for generating a 3D point cloud and colour visualisation of an underwater scene, the point cloud comprising a set of (x, y, z) coordinates relating to points in the scene. The method operates in a system comprising at least one camera module, at least one structured light source, and a processing module, each of the at least one camera module being directed at the scene and having substantially the same overlapped field of view.

10-03-2016 publication date

Systems and methods for fibre placement inspection during fabrication of fibre reinforced composite components

Number: AU2015203435A1
Assignee:

The disclosed systems and methods relate to inspecting uncured fibre-reinforced composite components by non-contact 3D measurement of the component using 3D digital image correlation with patterned illumination. Systems comprise a light projector configured to project a light pattern (102) onto a form (30), a digital camera configured to image the light pattern (102), and may comprise and/or be associated with an AFP machine that is configured to lay uncured composite on the form (30). Methods comprise projecting a light pattern (102) onto a form (30), acquiring a baseline 3D profile (104) of the form (30) by imaging the light pattern (102) on the form (30), laying an uncured fibre piece (106) onto the form (30), projecting the light pattern (102) onto the uncured fibre piece (106), and acquiring a test 3D profile (110) of the fibre piece by imaging the light pattern (102) on the uncured fibre ...

27-08-2015 publication date

Image processing method for determining focus depth of a refractive laser

Number: AU2011384704B2
Author: WARM, BERNDT
Assignee:

The present invention relates to a laser apparatus, system, and method for determining the depth of a focus point of a laser beam. An interface device is coupleable to the laser apparatus and has an applanation element comprising a front surface and a back surface. A laser beam having a predefined shape is focussed through the applanation element at a focus point. A superimposed image of a spurious reflection, which is reflected from the front surface of the applanation element, with a standard reflection, which is reflected from the back surface of the applanation element, is detected. The spurious reflection is then filtered out of the superimposed image. Based on the remaining standard reflection, the depth of the focus point of the laser beam can be determined.

24-09-2015 publication date

Full-field three dimensional surface measurement

Number: AU2014251322A1
Assignee:

Embodiments of the present invention may be used to perform measurement of surfaces, such as external and internal surfaces of the human body, in full-field and in 3-D. An electromagnetic radiation source may be configured to project the electromagnetic radiation in a pattern corresponding to a spatial signal modulation algorithm, and at a frequency suitable for transmission through the media in which the radiation is projected. An image sensor may be configured to capture image data representing the projected pattern. An image-processing module may be configured to receive the captured image data from the image sensor and to calculate a full-field, 3-D representation of the surface using the captured image data and the spatial signal modulation algorithm. A display device may be configured to display the full-field, 3-D representation of the surface.

17-08-2017 publication date

Apparatus and method for three dimensional surface measurement

Number: AU2013379669B2
Assignee: Shelston IP Pty Ltd.

A system and method for three-dimensional measurement of surfaces. In one embodiment, a measurement system includes a laser projector, a first camera, and a processor. The laser projector is configured to emit a laser projection onto a surface for laser triangulation. The first camera is configured to provide images of the surface and is disposed at an oblique angle with respect to the laser projector. The processor is configured to apply photogrammetric processing to the images, to compute calibrations for laser triangulation based on a result of the photogrammetric processing, and to compute, based on the calibrations, coordinates of points of the surface illuminated by the laser projection via laser triangulation.

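The laser-triangulation geometry underlying the entry above can be illustrated with the textbook two-angle formulation; this is a generic sketch under stated assumptions, not the patent's calibrated procedure:

```python
import math

def triangulate_depth(baseline, laser_angle, camera_angle):
    """Classic laser-triangulation range from a known baseline and two angles.

    baseline: distance between laser and camera centres;
    laser_angle, camera_angle: angles (radians) between the baseline and,
    respectively, the laser ray and the camera's ray to the observed spot.
    Returns the perpendicular depth of the spot from the baseline.
    """
    # The triangle's angle sum fixes the angle at the measured point.
    point_angle = math.pi - laser_angle - camera_angle
    # Law of sines: range from the camera to the point.
    cam_range = baseline * math.sin(laser_angle) / math.sin(point_angle)
    # Project that range perpendicular to the baseline.
    return cam_range * math.sin(camera_angle)
```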
23-01-2014 publication date

3D scanner using structured lighting

Number: AU2012260548A1
Assignee:

A series of structured lighting patterns are projected on an object. Each successive structured lighting pattern has a first and a second subset of intensity features, such as edges between light and dark areas. The intensity features of the first subset coincide spatially with intensity features from either the first or second subset of the preceding structured lighting pattern in the series. Image positions are detected where the intensity features of the first and second subsets of the structured lighting patterns are visible in the images. Image positions where the intensity features of the first subset are visible are associated with those intensity features, based on the associated intensity features of the closest detected image positions in the image obtained with the preceding structured lighting pattern in the series. Image positions where the intensity features of the second subset are visible, between pairs of the image positions associated ...

17-11-2016 publication date

Depth estimation using multi-view stereo and a calibrated projector

Number: AU2015284556A1
Assignee: Davies Collison Cave Pty Ltd

The subject disclosure is directed towards using a known projection pattern to make stereo (or other camera-based) depth detection more robust. Dots are detected in captured images and compared to the known projection pattern at different depths, to determine a matching confidence score at each depth. The confidence scores may be used as a basis for determining a depth at each dot location, which may be at sub-pixel resolution. The confidence scores also may be used as a basis for weights or the like for interpolating pixel depths to find depth values for pixels in between the pixels that correspond to the dot locations.

29-03-1994 дата публикации

Hidden change distribution grating and use in 3d moire measurement sensors and cmm applications

Номер: AU0004843793A
Автор: FITTS JOHN M
Принадлежит:

Подробнее
25-04-2017 дата публикации

METHOD AND DEVICE FOR THREE-DIMENSIONAL SURFACE DETECTION WITH A DYNAMIC REFERENCE FRAME

Номер: CA0002762038C
Принадлежит: HAEUSLER, GERD, HAEUSLER GERD

The surface shape of a three-dimensional object is acquired with an optical sensor. The sensor, which has a projection device and a camera, is configured to generate three-dimensional data from a single exposure, and the sensor is moved relative to the three-dimensional object, or vice versa. A pattern is projected onto the three- dimensional object and a sequence of overlapping images of the projected pattern is recorded with the camera. A sequence of 3D data sets is determined from the recorded images and a registration is effected between subsequently obtained 3D data sets. This enables the sensor to be moved freely about the object, or vice versa, without tracking their relative position, and to determine a surface shape of the three-dimensional object on the fly.

Подробнее
21-05-2021 дата публикации

SYSTEMS AND METHODS FOR MEASURING BODY SIZE

Номер: CA3096836A1
Принадлежит:

Подробнее
21-08-2021 дата публикации

SYSTEMS AND METHODS FOR DETERMINING SPACE AVAILABILITY IN AN AIRCRAFT

Номер: CA3102512A1
Принадлежит:

An example system for determining space availability in an aircraft includes a plurality of laser sensors configured to be positioned in a baggage container at a first wall and a second wall, and the first wall and the second wall face each other. The plurality of laser sensors emit signals within the baggage container and detect reflected responses to generate outputs. The system also includes one or more processors in communication with the plurality of laser sensors for executing instructions stored in non-transitory computer readable media to perform functions including receiving the outputs from the plurality of laser sensors, mapping contents of the baggage container based on the outputs from the plurality of laser sensors, and based on said mapping, outputting data indicative of occupied space in the baggage container.

Подробнее
07-05-2020 дата публикации

TIME-OF-FLIGHT SENSOR WITH STRUCTURED LIGHT ILLUMINATOR

Номер: CA3117773A1
Принадлежит:

The present disclosure relates to systems and methods that provide information about a scene based on a time-of-flight (ToF) sensor and a structured light pattern. In an example embodiment, a sensor system could include at least one ToF sensor configured to receive light from a scene. The sensor system could also include at least one light source configured to emit a structured light pattern and a controller that carries out operations. The operations include causing the at least one light source to illuminate at least a portion of the scene with the structured light pattern and causing the at least one ToF sensor to provide information indicative of a depth map of the scene based on the structured light pattern.
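The depth side of a ToF sensor reduces to converting a measured round-trip time into distance. A minimal sketch, with an illustrative timing value:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance_m(round_trip_s):
    """Distance from a time-of-flight round trip: the light travels
    out and back, so one-way distance is half the path length."""
    return C * round_trip_s / 2.0
```

A 2 ns round trip corresponds to roughly 0.30 m, which sets the timing resolution such a sensor needs at close range.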

Подробнее
28-09-2017 дата публикации

THREE-DIMENSIONAL MEASUREMENT SENSOR BASED ON LINE STRUCTURED LIGHT

Номер: CA0003021730A1
Принадлежит:

A three-dimensional measurement sensor based on line structured light, comprising a sensing head and a controller. The sensing head is used for collecting section data and attitude information of its own, and matching the section data with the self attitude information. The sensing head comprises a three-dimensional camera, an attitude sensor, a laser and a control sub-board, wherein the three-dimensional camera is installed at a certain angle relative to the laser, and acquires elevation and grey information about an object surface corresponding to laser rays using a triangulation principle. The attitude sensor, the three-dimensional camera and the laser are installed on the same rigid plane, and the attitude sensor reflects measurement attitudes of the three-dimensional camera and the laser in real time. The controller is used for measuring and controlling the sensing head, performing data processing transmission and supporting external control. The sensor realizes synchronous measurement ...

Подробнее
03-04-2020 дата публикации

IMAGE ACQUISITION PROCESS

Номер: CA0003057338A1
Принадлежит:

An image acquisition method comprising moving an acquisition device (50) comprising at least one camera (5,65) to a plurality of acquisition locations (61,62), and acquiring, at each acquisition location, at least one image of a scene by means of the camera, each acquisition location being chosen such that: - the scenes (63, 64) seen by the camera at two consecutive acquisition locations and the corresponding images (70,75) overlap at least partially, and - the pixel surface density attributed to at least one element of the corresponding scene that is represented in the corresponding image by a high-resolution portion (85a-c) is greater than 50%, preferably greater than 80%, of a target surface density, the pixel surface density being defined as the ratio of the area of the element projected into a plane perpendicular to the optical axis of the camera to the number of pixels of the high-resolution ...

Подробнее
09-10-2014 дата публикации

SYSTEM AND METHOD FOR CONTROLLING AN EQUIPMENT RELATED TO IMAGE CAPTURE

Номер: CA0002908719A1
Принадлежит: ROBIC

A method and system for controlling a setting of an equipment related to image capture comprises capturing position data and orientation data of a sensing device; determining position information of a region of interest (i.e. a node) to be treated by the equipment, relative to the position and orientation data of the sensing device; and outputting a control signal directed to the equipment, in order to control in real-time the setting of the equipment based on said position information of the region of interest.

Подробнее
09-12-2010 дата публикации

METHOD AND DEVICE FOR THREE-DIMENSIONAL SURFACE DETECTION WITH A DYNAMIC REFERENCE FRAME

Номер: CA0002957077A1
Принадлежит:

The surface shape of a three-dimensional object is acquired with an optical sensor. The sensor, which has a projection device and a camera, is configured to generate three-dimensional data from a single exposure, and the sensor is moved relative to the three-dimensional object, or vice versa. A pattern is projected onto the three-dimensional object and a sequence of overlapping images of the projected pattern is recorded with the camera. A sequence of 3D data sets is determined from the recorded images and a registration is effected between subsequently obtained 3D data sets. This enables the sensor to be moved freely about the object, or vice versa, without tracking their relative position, and to determine a surface shape of the three-dimensional object on the fly.

Подробнее
18-10-2017 дата публикации

METHOD AND SYSTEM FOR AIRCRAFT TAXI STRIKE ALERTING

Номер: CA0002956752A1
Принадлежит:

Apparatus and associated methods relate to ranging an object nearby an aircraft by triangulation of spatially-patterned light projected upon and reflected from the object. The spatially-patterned light can have a wavelength corresponding to infrared light and/or to an atmospheric absorption band. In some embodiments, images of the object are captured both with and without illumination by the spatially-patterned light. A difference between these two images can be used to isolate the spatially-patterned light. The two images can also be used to identify pixel boundaries of the object and to calculate ranges of portions of the object corresponding to the pixels imaging those portions. For pixels imaging reflections of the spatially-patterned light, triangulation can be used to calculate range. For pixels not imaging reflections of the spatially-patterned light, ranges can be calculated from one or more of the triangulated ranges corresponding to nearby pixels.
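The two-image isolation step (illuminated frame minus ambient frame) can be sketched as follows; the threshold value and the image dtype are assumptions:

```python
import numpy as np

def isolate_pattern(lit, unlit, threshold=10):
    """Boolean mask of pixels that brightened under the projected
    pattern: subtract the ambient frame, keep strong positive change."""
    diff = lit.astype(np.int32) - unlit.astype(np.int32)
    return diff > threshold

# Tiny example: only the top-right pixel sees the projected light.
lit = np.array([[50, 200], [52, 48]], dtype=np.uint8)
unlit = np.array([[48, 40], [50, 47]], dtype=np.uint8)
mask = isolate_pattern(lit, unlit)
```

Casting to a signed type before subtracting avoids the wrap-around that unsigned 8-bit subtraction would produce wherever the ambient frame is brighter.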

Подробнее
24-09-2019 дата публикации

SHAPE MEASUREMENT APPARATUS AND SHAPE MEASUREMENT METHOD

Номер: CA0002982101C

To provide a shape measurement apparatus that, in measuring the unevenness shape of a measurement object by a light-section method, enables the shape of the measurement object to be measured precisely even when the distance between the measurement object and an image capturing apparatus fluctuates. Provided is a shape measurement apparatus including: a linear light position detection unit that detects a linear light position from an image, captured by an image capturing apparatus, of linear light applied to the measurement object by a linear light irradiation apparatus; and a distance computation unit that computes the distance from the image capturing apparatus to the measurement object on the basis of the difference between a reference linear light position, detected by the linear light position detection unit when the measurement object is positioned at a predetermined reference distance from the image capturing apparatus, and ...
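A linearized toy version of the distance computation this abstract describes: the laser line's image position shifts in proportion to the object's offset from the reference distance. The gain `mm_per_px` stands in for an assumed calibration constant; the real apparatus derives it from its geometry:

```python
def distance_mm(x_obs_px, x_ref_px, z_ref_mm, mm_per_px):
    """Distance to the object from the shift of the light-section line
    relative to its position at the known reference distance."""
    return z_ref_mm + (x_obs_px - x_ref_px) * mm_per_px
```

For example, a 10-pixel shift at 2.5 mm per pixel moves the estimate from the 500 mm reference to 525 mm.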

Подробнее
21-03-2013 дата публикации

IMPROVED LASER RANGEFINDER SENSOR

Номер: CA0002848701A1
Принадлежит:

The specification discloses a pulsed time-of-flight laser range finding system used to obtain vehicle classification information. The sensor determines a distance range to portions of a vehicle traveling within a sensing zone of the sensor. A scanning mechanism made of a four facet cube, having reflective surfaces, is used to collimate and direct the laser toward traveling vehicles. A processing system processes the respective distance range data and angle range data for determining the three-dimensional shape of the vehicle.

Подробнее
11-02-2011 дата публикации

SYSTEM FOR ADAPTIVE THREE-DIMENSIONAL SCANNING OF SURFACE CHARACTERISTICS

Номер: CA0002731680A1
Принадлежит:

There are provided systems and methods for obtaining a three-dimensional surface geometric characteristic and/or texture characteristic of an object. A pattern is projected on a surface of said object. A basic 2D image of said object is acquired; a characteristic 2D image of said object is acquired; 2D surface points are extracted from said basic 2D image, from a reflection of said projected pattern on said object; a set of 3D surface points is calculated in a sensor coordinate system using said 2D surface points; and a set of 2D surface geometric/texture characteristics is extracted.

Подробнее
29-12-2011 дата публикации

OPTICAL MEASUREMENT METHOD AND MEASUREMENT SYSTEM FOR DETERMINING 3D COORDINATES ON A MEASUREMENT OBJECT SURFACE

Номер: CA0002801595A1
Принадлежит:

The invention relates to an optical measurement method for determining 3D coordinates for a multiplicity of measurement points of a measurement object surface (1s). For this purpose, the measurement object surface (1s) is illuminated with a pattern sequence of various patterns (2a, 2b) using a projector (3), an image sequence of the measurement object surface (1s) that is illuminated with the pattern sequence is recorded with a camera system (4), and the 3D coordinates for the measurement points are determined by evaluation of the image sequence. According to the invention, while the image sequence is being recorded, at least during the illumination times of individual images of the image sequence, translational and/or rotational accelerations of the projector (3), the camera system (4) and/or the measurement object (1) are measured at least at a measurement rate such that, during the illumination times of the respectively individual images of the image sequence, in each case a plurality ...

Подробнее
23-01-2014 дата публикации

3-D SCANNING AND POSITIONING INTERFACE

Номер: CA0002875754A1
Принадлежит:

A system and a method for providing an indication about positioning unreliability are described. The system comprises a scanner for scanning a surface geometry of an object and accumulating 3D points for each frame using shape-based positioning; a pose estimator for estimating an estimated pose for the scanner using the 3D points; an unreliable pose detector for determining if the estimated pose has an under constrained positioning and an indication generator for generating an indication that the unreliable pose estimation is detected. In one embodiment, a degree of freedom identifier identifies a problematic degree of freedom in the estimated pose. In one embodiment, a feature point detector detects a reobservable feature point and the pose estimator uses the feature point with the 3D points to estimate the estimated pose and the unreliable pose detector uses the feature point to identify the estimated pose as an unreliable pose estimation.

Подробнее
12-09-2017 дата публикации

NON CONTACT WHEEL ALIGNMENT SENSOR AND METHOD

Номер: CA0002887242C

A sensor and method of determining the orientation of an object, such as the alignment characteristics of a tire and wheel assembly mounted on a vehicle, includes projecting a plurality of light planes from a first light projector onto a tire and wheel assembly to form a plurality of generally parallel illumination lines on a tire of the tire and wheel assembly, imaging with a photo electric device the illumination lines reflected from the tire at an angle relative to a projecting angle of the first light projector, and determining a plane defined by spatial coordinates from a selected point located on each illumination line imaged by the photo electric device, with the plane representing the orientation of the tire and wheel assembly.
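The final step, fitting a plane to one selected point per illumination line, can be sketched as a least-squares fit via SVD. This is a standard method chosen for illustration; the patent does not specify the solver:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3-D points: returns (unit normal,
    centroid). The normal is the direction of least variance, i.e.
    the last right singular vector of the centered point cloud."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return vt[-1], centroid

# Points sampled from the plane z = 0: the fitted normal is +/- (0, 0, 1).
normal, centroid = fit_plane([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
```

The angle between this normal and the vehicle's reference axes would then give alignment quantities such as toe and camber.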

Подробнее
02-08-2019 дата публикации

Optical element driving mechanism

Номер: CN0110082878A
Автор:
Принадлежит:

Подробнее
16-02-2018 дата публикации

Range image acquisition apparatus and range image acquisition method

Номер: CN0107709925A
Принадлежит:

Подробнее
26-04-2019 дата публикации

SYSTEMS AND METHODS FOR IMPROVED DEPTH SENSING

Номер: CN0109691092A
Принадлежит:

Подробнее
03-04-2020 дата публикации

Multi-image projector and electronic device with multi-image projector

Номер: CN0110955056A
Автор:
Принадлежит:

Подробнее
10-08-2018 дата публикации

Calibration method and measuring tool

Номер: CN0105934648B
Автор:
Принадлежит:

Подробнее
04-08-1995 дата публикации

Method for taking three-dimensional images of loads.

Номер: FR0002702055B1
Принадлежит:

Подробнее
26-03-2020 дата публикации

Deformation processing support system and deformation processing supporting method

Номер: KR0102093674B1
Автор:
Принадлежит:

Подробнее
20-03-2008 дата публикации

METHOD AND SYSTEM FOR MEASURING THE RELIEF OF AN OBJECT

Номер: KR0100815503B1
Автор:
Принадлежит:

Подробнее
16-10-2019 дата публикации

Номер: KR0102033143B1
Автор:
Принадлежит:

Подробнее
08-12-2020 дата публикации

SIGNAL PROCESSING DEVICE AND SIGNAL PROCESSING METHOD, PROGRAM, AND MOVING BODY

Номер: KR1020200136905A
Автор:
Принадлежит:

Подробнее
02-05-2018 дата публикации

Code domain power control for structured light

Номер: KR1020180044415A
Принадлежит:

Systems and methods for controlling structured light laser systems are disclosed. One aspect is a structured light system. The system includes a memory device configured to store a depth map. The system further includes an image projection device including a laser system configured to project codewords. The system further includes a receiver device including a sensor, the receiver device being configured to sense the projected codewords reflected from an object. The system further includes a processing circuit configured to retrieve a portion of the depth map and to calculate expected codewords from the depth map. The system further includes a feedback system configured to control an output power of the laser system based on the sensed codewords and the expected codewords.

Подробнее
14-02-2017 дата публикации

Imaging module and imaging apparatus

Номер: KR1020170016943A
Автор: Saito Tetsuro
Принадлежит:

An imaging module includes a spatial light modulator that spatially modulates an incident light flux and emits the modulated light flux, an imaging element that acquires, as image information, the light flux modulated by the spatial light modulator, and an adjuster that adjusts the distance between an imaging surface of the imaging element and the spatial light modulator, wherein the adjuster operates to reduce the distance between the imaging surface and the spatial light modulator in response to a temperature rise.

Подробнее
21-03-2019 дата публикации

Номер: KR1020190030228A
Автор:
Принадлежит:

Подробнее
09-12-2020 дата публикации

IMAGE PROCESSING METHOD, COMPUTER-READABLE STORAGE MEDIUM, AND ELECTRONIC APPARATUS

Номер: KR1020200138298A
Автор:
Принадлежит:

Подробнее
20-09-2017 дата публикации

SCANNING DEVICE AND OPERATING METHOD THEREOF

Номер: KR1020170105701A
Принадлежит:

A scanning device according to the present invention includes a transmitting module for dividing a subject into a plurality of image areas and transmitting a laser beam to the image areas in an interlaced manner, a receiving module for receiving the laser beam reflected from the subject, and a signal processing module for scanning the shape of the subject by a time-of-flight (TOF) technique based on the reflected laser beam, wherein the signal processing module overlaps sub-frames generated by the interlaced scanning to generate a single image of the shape of the subject. Accordingly, the present invention can produce a scanned 3D image with high precision and high speed without additional hardware.

Подробнее
20-10-2017 дата публикации

SCANNING DEVICE AND OPERATING METHOD THEREOF

Номер: KR1020170116635A
Принадлежит:

A scanning device according to the present invention includes a transmission module which divides a scan object into a plurality of scan regions and alternately irradiates the sequentially arranged scan regions with a first laser beam and a second laser beam, a reception module which receives the laser beam reflected from each of the scan regions, and an image generating module which generates a divided image for each of the scan regions and generates the entire image of the scan object based on the divided images. The transmission module generates the first laser beam by triggering on a rising edge of the laser beam and generates the second laser beam by triggering on a falling edge of the laser beam. Accordingly, the present invention can scan the object more clearly by removing an interference effect.

Подробнее
01-06-2020 дата публикации

Infrared pre-flash for camera

Номер: TW0202021339A
Принадлежит:

The image capture system of the present disclosure comprises an illuminator comprising at least one infrared light LED or laser and one visible light LED, an image sensor that is sensitive to infrared light and visible light, a memory configured to store instructions, and a processor configured to execute instructions to cause the image capture system to emit infrared light as a pre-flash, receive image data comprising at least one infrared image, and determine infrared exposure settings based on the infrared image data. Although the image capture system of the present disclosure is described as having an illuminator configured to emit infrared light, the illuminator may be configured to emit other invisible light. For example, in an alternate embodiment, the illuminator is configured to emit UV light during pre-flash. In such an embodiment, the image sensor is configured to detect UV light.

Подробнее
25-06-2019 дата публикации

Depth-sensing device and depth-sensing method

Номер: US0010334232B2
Принадлежит: HIMAX TECHNOLOGIES LIMITED, HIMAX TECH LTD

A depth-sensing device and its method are provided. The depth-sensing device includes a projection device, an image capture device, and an image processing device. The projection device projects a first projection pattern to a field at a first time and projects a second projection pattern to the same field at a second time. The density of the first projection pattern is lower than the density of the second projection pattern. The image capture device captures the first projection pattern projected to the field at the first time to obtain a first image and captures the second projection pattern projected to the field at the second time to obtain a second image. The image processing device processes the first and second images to obtain two depth maps and at least merges the depth maps to generate a final depth map of the field.
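The merging step can be sketched with a simple precedence rule. Treating 0 as "no measurement" and preferring the sparse-pattern map where it is valid is an assumed merge policy; the abstract leaves the exact combination open:

```python
import numpy as np

def merge_depth_maps(sparse_depth, dense_depth):
    """Combine two depth maps of the same field: keep the sparse-pattern
    value where present (0 = no depth), else the dense-pattern value."""
    return np.where(sparse_depth > 0, sparse_depth, dense_depth)

# Tiny example: holes in the sparse map are filled from the dense map.
sparse = np.array([[1.2, 0.0], [0.0, 3.0]])
dense = np.array([[1.5, 2.0], [2.5, 2.8]])
merged = merge_depth_maps(sparse, dense)
```

The same `np.where` pattern extends naturally to confidence-weighted blending if each map carries a per-pixel quality score.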

Подробнее
19-06-2003 дата публикации

Method for the extraction of image features caused by structure light using template information

Номер: US2003113020A1
Автор:
Принадлежит:

A method for use with a non-contact range finding and measurement system for generating a template guide representative of the surface of an observed object, and for utilizing the template guide to improve laser stripe signal-to-noise ratios and to compensate for corrupted regions in images of the observed object, thereby improving measurement accuracy.

Подробнее
29-06-2021 дата публикации

Method and apparatus for performing image processing, and computer readable storage medium

Номер: US0011050918B2

A method and an apparatus for processing data, and a computer readable storage medium. The method includes: turning on at least one of a floodlight or a laser light, and operating a laser camera to collect a target image in response to a first processing unit receiving an image collection instruction sent by a second processing unit; and performing processing on the target image via the first processing unit, and sending the target image processed to the second processing unit.

Подробнее
22-06-2021 дата публикации

Depth sensing robotic hand-eye camera using structured light

Номер: US0011040452B2
Принадлежит: ABB Schweiz AG, ABB SCHWEIZ AG

The disclosed system includes a robot configured to perform a task on a workpiece. A camera having a field of view is operably connected to the robot. A light system is configured to project structured light onto a region of interest having a smaller area within the field of view. A control system is operably coupled to the robot and the camera is configured to determine a depth of the workpiece relative to a position of the robot using the structured light projected onto the workpiece within the region of interest.

Подробнее
18-01-2022 дата публикации

Three-dimensional scanning system

Номер: US0011226198B2
Принадлежит: BENANO INC.

A three-dimensional scanning system includes a projection light source, an image capturing apparatus, and a signal processing apparatus. The projection light source is configured to project a two-dimensional light to a target, where the two-dimensional light has a spatial frequency. The image capturing apparatus captures an image of the target illuminated with the two-dimensional light. The signal processing apparatus is coupled to the projection light source and the image capturing apparatus, to analyze a definition of the image of the two-dimensional light, where if the definition of the image of the two-dimensional light is lower than a requirement standard, the spatial frequency of the two-dimensional light is reduced.

Подробнее
17-02-2022 дата публикации

METHOD, APPARATUS FOR SUPERIMPOSING LASER POINT CLOUDS AND HIGH-PRECISION MAP AND ELECTRONIC DEVICE

Номер: US20220050210A1
Принадлежит:

The present application discloses a method, an apparatus for superimposing laser point clouds and a high-precision map and an electronic device, comprising: when superimposing the laser point clouds and the high-precision map for visualization, first performing dilution processing on non-road laser point clouds in the laser point clouds to be fused to obtain target non-road laser point clouds, which effectively reduces the amount of data of the laser point clouds to be fused; and performing segmentation processing on the map to be fused to obtain multi-level map data corresponding to the map to be fused, which effectively reduces the amount of data of the map to be fused. In this way, when superimposing the laser point clouds and the high-precision map for visualization, both non-road information may be visualized through the laser point clouds, and road information may be visualized through the high-precision map.

Подробнее
10-03-2022 дата публикации

VISUAL, DEPTH AND MICRO-VIBRATION DATA EXTRACTION USING A UNIFIED IMAGING DEVICE

Номер: US20220076436A1
Принадлежит:

A unified imaging device used for detecting and classifying objects in a scene including motion and micro-vibrations by receiving a plurality of images of the scene captured by an imaging sensor of the unified imaging device comprising a light source adapted to project on the scene a predefined structured light pattern constructed of a plurality of diffused light elements, classifying object(s) present in the scene by visually analyzing the image(s), extracting depth data of the object(s) by analyzing position of diffused light element(s) reflected from the object(s), identifying micro-vibration(s) of the object(s) by analyzing a change in a speckle pattern of the reflected diffused light element(s) in at least some consecutive images and outputting the classification, the depth data and data of the one or more micro-vibrations which are derived from the analyses of images captured by the imaging sensor and are hence inherently registered in a common coordinate system.

Подробнее
22-03-2022 дата публикации

Systems and methods for multi-target tracking and autofocusing based on deep machine learning and laser radar

Номер: US0011283986B2
Автор: Leilei Miao, Di Wu, Zisheng Cao
Принадлежит: SZ DJI TECHNOLOGY CO., LTD.

Systems and methods for recognizing, tracking, and focusing a moving target are disclosed. In accordance with the disclosed embodiments, the systems and methods may recognize the moving target traveling relative to an imaging device; track the moving target; and determine a distance to the moving target from the imaging device.

Подробнее
03-10-2000 дата публикации

Scanning arrangement and method

Номер: US0006128086A
Автор:
Принадлежит:

A scanning arrangement comprising means (40) for projecting a two-dimensional optical pattern on a surface (35) of a scanned object and a two-dimensional photodetector (34') is mounted within a hand-held scanning device and outputs scan files of 3D coordinate data of overlapping surface portions of the object. The surface portions defined by these scan files are registered by appropriate rotations and translations in a computer, which are determined either from the outputs of a gyroscope (51) and an accelerometer (50) or by mathematical processing of the surface portions e.g. involving detection and location of common features.

Подробнее
05-09-2017 дата публикации

Information processing apparatus, control method thereof and storage medium

Номер: US0009752870B2

An information processing apparatus for generating a projection pattern used in three-dimensional measurement of a target object, comprising: determination means for determining a projection code string for generating the projection pattern based on distance information of the target object which is obtained in advance; and generation means for generating a pattern image of the projection pattern based on a projection code string determined by the determination means.

Подробнее
19-05-2016 дата публикации

OBJECT DETECTION APPARATUS AND METHOD THEREFOR, AND IMAGE RECOGNITION APPARATUS AND METHOD THEREFOR

Номер: US20160140399A1
Принадлежит:

An object detection apparatus includes an extraction unit configured to extract a plurality of partial areas from an acquired image, a distance acquisition unit configured to acquire a distance from a viewpoint for each pixel in the extracted partial area, an identification unit configured to identify whether the partial area includes a predetermined object, a determination unit configured to determine, among the partial areas identified to include the predetermined object by the identification unit, whether to integrate identification results of a plurality of partial areas that overlap each other based on the distances of the pixels in the overlapping partial area, and an integration unit configured to integrate the identification results of the plurality of partial areas determined to be integrated to detect a detection target object from the integrated identification result of the plurality of partial areas.

Подробнее
26-09-2019 дата публикации

REPLICATED DOT MAPS FOR SIMPLIFIED DEPTH COMPUTATION USING MACHINE LEARNING

Номер: US20190295269A1
Принадлежит:

Disclosed embodiments include methods and systems for utilizing a structured projection pattern to perform depth detection. In some instances, the structured projection pattern forms a dot pattern, which is projected by an infrared (IR) illuminator, wherein the dot pattern includes a replicated sub-pattern having a predefined height and width. The sub-pattern is replicated in at least one direction such that the dot pattern comprises a plurality of replicated sub-patterns that are adjacently positioned.

Подробнее
26-11-2019 дата публикации

Distance sensor projecting parallel patterns

Номер: US0010488192B2
Принадлежит: MAGIK EYE INC., MAGIK EYE INC, Magik Eye Inc.

In one embodiment, a method for calculating a distance to an object includes projecting a plurality of beams simultaneously from a light source, wherein the plurality of beams causes a plurality of lines of dots to be projected onto the object, and wherein the plurality of lines of dots are orientated parallel to each other, capturing an image of a field of view, wherein the object is visible in the image and the plurality of lines of dots is also visible in the image, and calculating the distance to the object using information in the image.

Подробнее
10-05-2012 дата публикации

Rotate and Hold and Scan (RAHAS) Structured Light Illumination Pattern Encoding and Decoding

Номер: US20120113229A1

A unique computer-implemented process, system, and computer-readable storage medium having stored thereon executable program code and instructions for 3-dimensional (3-D) image acquisition of a contoured surface-of-interest under observation by at least one camera and employing a preselected SLI pattern. The system includes a 3-D video sequence capture unit having (a) one or more image-capture devices for acquiring video image data as well as color texture data of a 3-D surface-of-interest, and (b) a projector device for illuminating the surface-of-interest with a preselected SLI pattern starting in an initial Epipolar Alignment and ending in alignment with an Orthogonal (i.e., phase) direction, and then shifting the preselected SLI pattern ('translation'). A follow-up 'post-processing' stage of the technique includes analysis and processing of the 3-D video sequence captured, including the steps of: identification of Epipolar Alignment from the 3-D video sequence; tracking 'snakes'/stripes from the initial Epipolar Alignment through alignment with the Orthogonal direction; tracking 'snakes'/stripes through pattern shifting/translation; correcting for relative motion (object motion); determining phase and interpolating adjacent frames to achieve uniform phase shift; employing conventional PMP phase processing to obtain wrapped phase; unwrapping phase at each pixel using snake identity; and using conventional techniques to map phase to world coordinates.

Подробнее
31-05-2012 дата публикации

Camera chip, camera and method for image recording

Номер: US20120133741A1
Автор: Christoph Wagner

The invention relates to a camera chip (C) for image acquisition. It is characterized in that pixel groups (P1, P2, ...) may be exposed at different times with the aid of shutter signals (S1, S2, ...).

Подробнее
28-02-2013 дата публикации

Structured-Light Based Measuring Method and System

Номер: US20130050476A1
Принадлежит: Shenzhen Taishan Online Tech Co Ltd

A structured-light measuring method includes: a matching process, in which the number and the low-precision depth of a laser point are obtained from the imaging position of the laser point on a first camera (21) according to a first corresponding relationship in a calibration database, and the imaging position of the laser point on a second camera (22) is searched according to the number and the low-precision depth of the laser point so as to acquire candidate matching points, the matching process then being completed according to the imaging position on the first camera (21) and the candidate matching points of that imaging position on the second camera (22), so that a matching result is achieved; and a computing process, in which the imaging position on the second camera (22) matching the imaging position on the first camera (21) is obtained according to the matching result, and the precise position of the laser point is then determined by a second corresponding relationship in the calibration database. A structured-light measuring system utilizes the above measuring method.

Подробнее
16-05-2013 дата публикации

Optical pattern projection

Номер: US20130120841A1
Принадлежит: PRIMESENSE LTD

Optical apparatus includes first and second diffractive optical elements (DOEs) arranged in series to diffract an input beam of radiation. The first DOE is configured to apply to the input beam a pattern with a specified divergence angle, while the second DOE is configured to split the input beam into a matrix of output beams with a specified fan-out angle. The divergence and fan-out angles are chosen so as to project the radiation onto a region in space in multiple adjacent instances of the pattern.

Подробнее
30-05-2013 дата публикации

Pattern generation using diffractive optical elements

Номер: US20130136305A1
Принадлежит: PRIMESENSE LTD

Apparatus (20) for 3D mapping of an object (28) includes an illumination assembly (30), including a coherent light source (32) and a diffuser (33), which are arranged to project a primary speckle pattern on the object. A single image capture assembly (38) is arranged to capture images of the primary speckle pattern on the object from a single, fixed location and angle relative to the illumination assembly. A processor (24) is coupled to process the images of the primary speckle pattern captured at the single, fixed angle so as to derive a 3D map of the object.

Подробнее
20-06-2013 дата публикации

Device for measuring three dimensional shape

Номер: US20130155191A1
Автор: Hiroyuki Ishigaki
Принадлежит: CKD Corp

A device for measuring three dimensional shape includes a first irradiation unit, a first grating control unit, a second irradiation unit, a second grating control unit, an imaging unit, and an image processing unit. After performance of a first imaging operation as imaging processing of a single operation among a multiplicity of imaging operations performed by irradiation of said first light pattern of multiply varied phases, a second imaging operation is performed as imaging processing of a single operation among a multiplicity of imaging operations performed by irradiation of said second light pattern of multiply varied phases. After completion of the first imaging operation and the second imaging operation, shifting or switching operation of the first grating and the second grating is performed simultaneously.

Подробнее
20-06-2013 дата публикации

Three-dimensional measurement apparatus, method for three-dimensional measurement, and computer program

Номер: US20130155417A1
Автор: Hiroyuki Ohsawa
Принадлежит: Canon Inc

On the basis of captured images at the time of projecting multiple-frequency slit-shaped light patterns having no overlapping edge positions onto an object, the edge portions of the slit-shaped light patterns are identified. When the edge portions overlap in the captured images of two or more slit-shaped light patterns, the reliability of the computed distance values of the positions corresponding to the edges is lowered.

Подробнее
20-06-2013 дата публикации

Voting-Based Pose Estimation for 3D Sensors

Номер: US20130156262A1

A pose of an object is estimated by first defining a set of pair features as pairs of geometric primitives, wherein the geometric primitives include oriented surface points, oriented boundary points, and boundary line segments. Model pair features are determined based on the set of pair features for a model of the object. Scene pair features are determined based on the set of pair features from data acquired by a 3D sensor, and then the model pair features are matched with the scene pair features to estimate the pose of the object.
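A common form of the pair feature for two oriented surface points (this particular descriptor is a conventional point-pair feature offered here for illustration, not necessarily the exact one claimed) combines the distance between the points with three angles relating their normals and the connecting segment:

```python
import math

def _angle(u, v):
    """Angle between two 3-D vectors, clamped for numerical safety."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def pair_feature(p1, n1, p2, n2):
    """Pair feature of two oriented points: (|d|, ang(n1,d), ang(n2,d), ang(n1,n2))
    where d is the segment from p1 to p2."""
    d = tuple(b - a for a, b in zip(p1, p2))
    return (math.sqrt(sum(c * c for c in d)),
            _angle(n1, d), _angle(n2, d), _angle(n1, n2))

# Demo: two points on a common plane, one unit apart, with parallel normals.
f = pair_feature((0, 0, 0), (0, 0, 1), (1, 0, 0), (0, 0, 1))
```

Because the feature is invariant to rigid motion, identical model and scene pair features can vote for a candidate pose.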

Подробнее
19-09-2013 дата публикации

Integrated processor for 3d mapping

Номер: US20130241824A1
Принадлежит: PRIMESENSE LTD

A device for processing data includes a first input port for receiving color image data from a first image sensor and a second input port for receiving depth-related image data from a second image sensor. Processing circuitry generates a depth map using the depth-related image data. At least one output port conveys the depth map and the color image data to a host computer.

Подробнее
24-10-2013 дата публикации

Distance-Varying Illumination and Imaging Techniques for Depth Mapping

Номер: US20130279753A1
Принадлежит: PRIMESENSE LTD

A method for mapping includes projecting a pattern onto an object (28) via an astigmatic optical element (38) having different, respective focal lengths in different meridional planes (54, 56) of the element. An image of the pattern on the object is captured and processed so as to derive a three-dimensional (3D) map of the object responsively to the different focal lengths.

Подробнее
07-11-2013 дата публикации

Pattern projection using micro-lenses

Номер: US20130294089A1
Принадлежит: PRIMESENSE LTD

An illumination assembly includes a light source, which is configured to emit optical radiation, and a transparency containing a plurality of micro-lenses, which are arranged in a non-uniform pattern and are configured to focus the optical radiation to form, at a focal plane, respective focal spots in the non-uniform pattern. Optics are configured to project the non-uniform pattern of the focal spots from the focal plane onto an object.

Подробнее
07-11-2013 дата публикации

Zone driving

Номер: US20130297140A1
Принадлежит: Google LLC

A roadgraph may include a graph network of information such as roads, lanes, intersections, and the connections between these features. The roadgraph may also include one or more zones associated with particular rules. The zones may include locations where driving is typically challenging, such as merges, construction zones, or other obstacles. In one example, the rules may require an autonomous vehicle to alert a driver that the vehicle is approaching a zone. The vehicle may thus require a driver to take control of steering, acceleration, deceleration, etc. In another example, the zones may be designated by a driver and broadcast to other nearby vehicles, for example using a radio link or other network, such that other vehicles may be able to observe the same rule at the same location, or at least notify their drivers that another driver felt the location was unsafe for autonomous driving.

Подробнее
28-11-2013 дата публикации

Reception of Affine-Invariant Spatial Mask for Active Depth Sensing

Номер: US20130315354A1
Принадлежит: Qualcomm Inc

A method operational on a receiver device for decoding a codeword is provided. At least a portion of a composite code mask is obtained, via a receiver sensor, and projected on the surface of a target object. The composite code mask may be defined by a code layer and a carrier layer. A code layer of uniquely identifiable spatially-coded codewords may be defined by a plurality of symbols. A carrier layer may be independently ascertainable and distinct from the code layer and may include a plurality of reference objects that are robust to distortion upon projection. At least one of the code layer and carrier layer may have been pre-shaped by a synthetic point spread function prior to projection. The code layer may be adjusted, at a processing circuit, for distortion based on the reference objects within the portion of the composite code mask.

Подробнее
03-01-2019 дата публикации

SYSTEM AND METHOD FOR MEDICAL IMAGING

Номер: US20190000564A1
Принадлежит:

The present disclosure provides a system and method for generating medical images. The method utilizes a novel algorithm to co-register Cone-Beam Computed Tomography (CBCT) volumes and additional imaging modalities, such as optical or RGB-D images.
1. A medical imaging apparatus comprising: a) a Cone-Beam Computed Tomography (CBCT) imaging modality having an X-ray source and an X-ray detector configured to generate a series of image data for generation of a series of volumetric images, each image covering an anatomic area; b) an auxiliary imaging modality configured to generate a series of auxiliary images; and c) a processor having instructions to generate a global volumetric image based on the volumetric images and the auxiliary images.
2. The apparatus of claim 1, wherein the processor is configured to perform an image registration process comprising co-registering the volumetric images and the auxiliary images, wherein co-registration includes stitching of non-overlapping volumetric images to generate the global volumetric image.
3. The apparatus of claim 1, wherein the auxiliary imaging modality is an optical imaging modality configured to generate optical images.
4. The apparatus of claim 1, wherein the auxiliary imaging modality is a depth imaging modality.
5. The apparatus of claim 1, wherein the depth imaging modality is an RGB-D camera.
6. The apparatus of claim 1, wherein the imaging modalities are housed in a C-arm device.
7. The apparatus of claim 6, wherein the C-arm is configured to automatically navigate.
8. A method for generating an image, the method comprising: a) generating a series of image data using a Cone-Beam Computed Tomography (CBCT) imaging modality for generation of a series of volumetric images, each image covering an anatomic area; b) generating a series of auxiliary images using an auxiliary imaging modality; and c) generating a global volumetric image based on the volumetric images and the auxiliary images, thereby ...

Подробнее
01-01-2015 дата публикации

Method and apparatus for extracting pattern from image

Номер: US20150003736A1

An apparatus includes a noise removal block for removing image noise mixed with an input image, a Modified Census Transform (MCT) block for transforming the input image from which the image noise has been removed into a current MCT coefficient, a candidate pattern extraction block for calculating a Sum of Hamming Distance (SHD) value between a current pattern window of the transformed current MCT coefficient and a reference pattern window of a previously registered template image by performing SHD calculation on the two pattern windows and extracting a candidate pattern, and a pattern detection block for scaling down the input image in a predetermined ratio when all pixels of the current scale are processed and detecting a candidate pattern having a minimum SHD value, of stored candidate patterns, as a representative pattern.
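The MCT and SHD stages described above can be sketched for a single window. This is a generic illustration of the modified census transform (comparison against the window mean) and Hamming distance, with illustrative names, not the apparatus's exact blocks.

```python
def mct(window):
    """Modified census transform of a flattened pixel neighbourhood:
    each pixel is compared against the window mean, giving one bit."""
    mean = sum(window) / len(window)
    bits = 0
    for v in window:
        bits = (bits << 1) | (1 if v > mean else 0)
    return bits

def hamming(a, b):
    """Hamming distance between two MCT descriptors."""
    return bin(a ^ b).count("1")

# Demo: a 3x3 window with a bright centre vs. a flat window.
bright_centre = mct([10, 10, 10, 10, 200, 10, 10, 10, 10])
flat = mct([10] * 9)
distance = hamming(bright_centre, flat)
```

The apparatus's SHD is this per-pixel Hamming distance summed over a whole pattern window, and the candidate with the minimum sum becomes the representative pattern.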

Подробнее
02-01-2020 дата публикации

Three-coordinate mapper and mapping method

Номер: US20200003546A1
Автор: Tianyi Xing, Yu Xing
Принадлежит: Siliconix Inc

A three-coordinate mapper, comprising: a U-shaped chassis (11) formed by successively connecting a front cross-frame, a connecting frame and a rear cross-frame; a square front panel (12); a servo motor (13); a lead screw (14); four connecting rods (17), one end of each hinged on a periphery of the nut (15) and the other end hinged to one end of a support rod (18); a driven laser pointer (20) and a left camera (21), a right camera (22), an upper camera (23), and a lower camera (24); an intermediate camera (25) and a driving laser pointer (26); and a plurality of auxiliary laser pointers (27). The three-coordinate mapper and the mapping method have high measurement precision and fast measurement speed.

Подробнее
02-01-2020 дата публикации

DISTANCE SENSOR INCLUDING ADJUSTABLE FOCUS IMAGING SENSOR

Номер: US20200003556A1
Автор: Kimura Akiteru
Принадлежит:

In one embodiment, a method for calculating a distance to an object includes simultaneously activating a first projection point and a second projection point of a distance sensor to collectively project a reference pattern into a field of view, activating a third projection point of the distance sensor to project a measurement pattern into the field of view, capturing an image of the field of view, wherein the object, the reference pattern, and the measurement pattern are visible in the image, calculating a distance from the distance sensor to the object based on an appearance of the measurement pattern in the image, detecting a movement of a lens of the distance sensor based on an appearance of the reference pattern in the image, and adjusting the distance as calculated based on the movement as detected.
1.-21. (canceled)
22. A method for calculating a distance to an object, the method comprising: projecting a reference pattern for detecting a movement of a lens to image the object into a field of view, using a projection system of a distance sensor; projecting a measurement pattern into the field of view using the projection system of the distance sensor; capturing an image of the field of view, wherein the object, the reference pattern, and the measurement pattern are visible in the image; and calculating a distance from the distance sensor to the object, based on an appearance of the measurement pattern in the image and on an appearance of the reference pattern in the image.
23. The method of claim 22, wherein the projecting the reference pattern and the projecting the measurement pattern are performed simultaneously.
24. The method of claim 22, wherein the reference pattern comprises a first line, a second line, and a third line that are positioned parallel to each other.
25. The method of claim 24, wherein each of the first line, the second line, and the third line is formed from a respective series of dots.
26. The method of ...

Подробнее
02-01-2020 дата публикации

TRACKING AND RANGING SYSTEM AND METHOD THEREOF

Номер: US20200003624A1
Принадлежит:

A tracking and ranging system includes a thermal sensor device, a controller, a ranging device and a transmission device. The thermal sensor device is configured to capture a thermal image. The controller analyzes the thermal image to identify the main heat source from among the heat sources displayed in the thermal image, and obtains an offset distance between the center points of the main heat source and the thermal image. The ranging device is coupled to the controller. The transmission device loads the ranging device and is coupled to the controller. The controller controls the motion of the transmission device in accordance with the offset distance to correct the offset angle between the ranging device and the object corresponding to the main heat source. After correcting the offset angle, the ranging device detects a first distance to the object by transmitting energy and receiving reflected energy.
1. A tracking and ranging system, comprising: a thermal sensor device, capturing a thermal image; a controller, analyzing the thermal image to identify a main heat source from heat sources displayed in the thermal image, and obtaining an offset distance between a center point of the main heat source and the thermal image; a ranging device, coupled to the controller; and a transmission device, loading the ranging device and coupled to the controller; wherein the controller controls motion of the transmission device in accordance with the offset distance to correct an offset angle between the ranging device and an object corresponding to the main heat source, and after correcting the offset angle, the ranging device detects a first distance from the tracking and ranging system to the object by transmitting energy and receiving reflected energy.
2. The tracking and ranging system as claimed in claim 1, wherein the ranging device detects the first distance using a supersonic transmitter transmitting a supersonic signal and a supersonic receiver receiving a reflected supersonic ...
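The conversion from a pixel offset in the thermal image to a pointing correction can be sketched under a simple pinhole-camera assumption. The function name and sensor parameters below are illustrative, not values from the patent.

```python
import math

def offset_angle_deg(offset_px, pixel_pitch_mm, focal_mm):
    """Angle the transmission device must rotate so the ranging device
    points at the main heat source, from the pixel offset of the heat
    source's centre relative to the image centre (pinhole model)."""
    return math.degrees(math.atan(offset_px * pixel_pitch_mm / focal_mm))

# Demo with made-up thermal-sensor parameters.
angle = offset_angle_deg(offset_px=100, pixel_pitch_mm=0.017, focal_mm=1.7)
```

Once the transmission device has rotated by this angle, the narrow-beam ranging device (e.g. ultrasonic) can measure the first distance along its boresight.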

Подробнее
04-01-2018 дата публикации

ENDOSCOPE WITH DISTANCE MEASUREMENT FUNCTION AND DISTANCE MEASUREMENT METHOD USED IN SAME

Номер: US20180003943A1
Автор: CHAN Chih-Chun
Принадлежит:

An endoscopic distance measurement method, which causes a single wavelength light source in an observation unit at a front end of a flexible tube of an endoscope to emit a predetermined wavelength light to an object to be measured via a diffraction grating so as to form a zero-order bright spot, a positive first-order bright spot and a negative first-order bright spot on the surface of the object through optical diffraction, and then to capture an image from the object, calculate a distance magnification using a first arithmetic logic, calculate the actual distance between two adjacent bright spots of the predetermined wavelength light being projected on the object using a second arithmetic logic, and calculate the distance between the diffraction grating and the zero-order bright spot using a third arithmetic logic.
1. An endoscope, comprising: a main unit, an observation unit and a flexible tube coupled between said main unit and said observation unit, said observation unit comprising a base tube, a single wavelength light source, a diffraction grating, an image acquisition unit and a shading baffle, said single wavelength light source, said diffraction grating, said image acquisition unit and said shading baffle being respectively mounted within said base tube, wherein: said base tube defines an opening in a front side thereof; said single wavelength light source is mounted in said base tube and adapted for emitting a single wavelength light of a predetermined wavelength forwardly through said opening; said diffraction grating comprises a plurality of slots, said diffraction grating being mounted in said base tube between said single wavelength light source and said opening and adapted for diffracting said single wavelength light and causing the diffracted single wavelength light to be projected through said opening onto an object to show a zero-order bright spot, a positive first-order bright spot at one lateral side ...
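The geometry behind this measurement follows from the grating equation. The sketch below is a simplified illustration (flat surface normal to the optical axis; names and values are illustrative, not the patent's arithmetic logics): the first-order diffraction angle fixes the ratio between the spot spacing and the grating-to-object distance.

```python
import math

def first_order_angle(wavelength_nm, period_nm):
    """Grating equation d*sin(theta) = m*lambda, solved for the
    first-order (m = 1) diffraction angle in radians."""
    return math.asin(wavelength_nm / period_nm)

def grating_distance(spot_spacing_mm, wavelength_nm, period_nm):
    """Distance z from the grating to the zero-order bright spot, from the
    spacing s between adjacent bright spots on a surface normal to the
    optical axis: s = z * tan(theta)."""
    return spot_spacing_mm / math.tan(first_order_angle(wavelength_nm, period_nm))

# Demo: a 650 nm laser on a 1300 nm-period grating diffracts at 30 degrees;
# the spacing the spots would show 10 mm away maps back to z = 10 mm.
theta = first_order_angle(650.0, 1300.0)
z = grating_distance(10.0 * math.tan(theta), 650.0, 1300.0)
```

In practice the spot spacing in millimetres is itself recovered from the image via the distance magnification, which is what the first and second arithmetic logics provide.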

Подробнее
02-01-2020 дата публикации

LED PATTERN PROJECTOR FOR 3D CAMERA PLATFORMS

Номер: US20200004126A1
Принадлежит: Intel Corporation

A light pattern projector with a pattern mask to spatially modulate an intensity of a wideband illumination source, such as an LED, and a projector lens to reimage the spatially modulated emission onto regions of a scene that is to be captured with an image sensor. The projector lens may comprise a microlens array (MLA) including a first lenslet to reimage the spatially modulated emission onto a first portion of a scene, and a second lenslet to reimage the spatially modulated emission onto a second portion of the scene. The MLA may have a fly's eye architecture with convex curvature over a diameter of the projector lens in addition to the lenslet curvature. The pattern mask may be an amplitude mask comprising a mask pattern of high and low amplitude transmittance regions. In the alternative, the pattern mask may be a phase mask, such as a refractive or diffractive mask.
1. A camera platform, comprising: an image sensor array to collect light from a scene within a field of view of the image sensor array; and a light projector to cast a light pattern upon the scene, wherein the light projector further comprises: a light source; a mask to spatially modulate an intensity of emissions from the light source; and a microlens array (MLA) to project a same spatially modulated emission onto multiple portions of the scene.
2. The camera platform of claim 1, wherein the MLA comprises a first lenslet to reimage the spatially modulated emission onto a first portion of the scene, and the second lenslet is to reimage the spatially modulated emission onto a second portion of the scene.
3. The camera platform of claim 2, wherein: the light source comprises a first light emitter; a first sensor pixel line is to integrate photocharge for a first portion of the scene over a first time increment; a second sensor pixel line is to integrate photocharge for a second portion of the scene over a second time increment; and the light projector is to illuminate the first portion and the ...

Подробнее
02-01-2020 дата публикации

Photographing system for viewing 3d images with naked eyes and using method thereof

Номер: US20200004131A1
Автор: Tianyi Xing, Yu Xing
Принадлежит: Siliconix Inc

An image photographing system for viewing 3D images with naked eyes and a using method thereof, including an L-shaped frame (1), wherein a top end of a vertical portion of the L-shaped frame is successively provided, at equal intervals, with: an intermediate photographing mechanism, a left photographing mechanism and a right photographing mechanism, a left driving mechanism and a right driving mechanism, a guide post, a vertical driving servo motor, and a driving lead screw; a support plate drives the left driving mechanism and the right driving mechanism to move up and down by moving up and down along the guide post under the drive of the driving lead screw, so as to drive the left photographing mechanism or the right photographing mechanism to swing up and down. The present invention makes it possible to see realistic and natural 3D images with the naked eye.

Подробнее
03-01-2019 дата публикации

High resolution 3d point clouds generation based on cnn and crf models

Номер: US20190004535A1
Принадлежит: Baidu USA LLC

In one embodiment, a method or system generates a high resolution 3-D point cloud to operate an autonomous driving vehicle (ADV) from a low resolution 3-D point cloud and camera-captured image(s). The system receives a first image captured by a camera for a driving environment. The system receives a second image representing a first depth map of a first point cloud corresponding to the driving environment. The system determines a second depth map by applying a convolutional neural network model to the first image. The system generates a third depth map by applying a conditional random fields model to the first image, the second image and the second depth map, the third depth map having a higher resolution than the first depth map such that the third depth map represents a second point cloud perceiving the driving environment surrounding the ADV.

Подробнее
07-01-2021 дата публикации

SYSTEMS AND METHODS FOR SEMI-SUPERVISED DEPTH ESTIMATION ACCORDING TO AN ARBITRARY CAMERA

Номер: US20210004974A1
Принадлежит:

Systems, methods, and other embodiments described herein relate to semi-supervised training of a depth model using a neural camera model that is independent of a camera type. In one embodiment, a method includes acquiring training data including at least a pair of training images and depth data associated with the training images. The method includes training the depth model using the training data to generate a self-supervised loss from the pair of training images and a supervised loss from the depth data. Training the depth model includes learning the camera type by generating, using a ray surface model, a ray surface that approximates an image character of the training images as produced by a camera having the camera type. The method includes providing the depth model to infer depths from monocular images in a device.
1. A depth system for semi-supervised training of a depth model using a neural camera model that is independent of a camera type, comprising: one or more processors; and a memory communicably coupled to the one or more processors and storing: a network module including instructions that, when executed by the one or more processors, cause the one or more processors to acquire training data including at least a pair of training images derived from a monocular video and depth data associated with at least one of the training images; and a training module including instructions that, when executed by the one or more processors, cause the one or more processors to train the depth model using the training data to generate a self-supervised loss from the pair of training images and a supervised loss from the depth data, wherein the training module includes instructions to train the depth model including instructions to learn the camera type by generating, using a ray surface model, a ray surface that approximates an image character of the training images as produced by a camera having the camera type, and wherein the training module includes instructions to provide ...

Подробнее
07-01-2021 дата публикации

REAL TIME CALIBRATION FOR TIME-OF-FLIGHT DEPTH MEASUREMENT

Номер: US20210004975A1
Принадлежит: Magic Leap, Inc.

A method for determining a distance to a target object includes transmitting light pulses to illuminate the target object and sensing, in a first region of a light-sensitive pixel array, light provided from an optical feedback device that receives a portion of the transmitted light pulses. The feedback optical device includes a preset reference depth. The method includes calibrating time-of-flight (TOF) depth measurement reference information based on the sensed light in the first region of the pixel array. The method further includes sensing, in a second region of the light-sensitive pixel array, light reflected from the target object from the transmitted light pulses. The distance of the target object is determined based on the sensed reflected light and the calibrated TOF measurement reference information.
1. (canceled)
2. A time-of-flight (TOF) imaging system, comprising: an illuminator to transmit light pulses to illuminate a target object for determining a distance to the target object; an image sensor having a light-sensitive pixel array to receive optical signals from the light pulses, the pixel array including an active region and a feedback region; and an optical feedback device for directing a portion of the light from the illuminator to the feedback region of the pixel array, the optical feedback device including a preset reference depth; wherein the imaging system is configured to: in a calibration period, transmit a group of calibration light pulses to illuminate the target object; sense, in the feedback region of the pixel array, light from the optical feedback device, using a sequence of calibration shutter windows characterized by delay times representing a range of depth; and calibrate TOF depth measurement reference information based on the sensed light in the feedback region of the pixel array; transmit a first measurement light pulse to illuminate the target object; sense, in the active region of the light-sensitive pixel array, light ...
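The role of the preset reference depth can be sketched with elementary TOF arithmetic. This is a simplified illustration, not the patented shutter-window scheme: the feedback path's known depth lets the system solve for its internal timing offset, which is then subtracted from every measurement.

```python
C_MM_PER_NS = 299.792458  # speed of light in mm per nanosecond

def calibrate_offset_ns(feedback_round_trip_ns, reference_depth_mm):
    """System timing offset t0, chosen so the feedback path (with its
    preset reference depth) reads back exactly that depth."""
    return feedback_round_trip_ns - 2.0 * reference_depth_mm / C_MM_PER_NS

def tof_depth_mm(round_trip_ns, offset_ns):
    """Calibrated TOF depth: d = c * (t - t0) / 2."""
    return C_MM_PER_NS * (round_trip_ns - offset_ns) / 2.0

# Demo: a 3 ns internal delay is absorbed by calibrating against a
# 150 mm reference depth, so a 1000 mm target then reads correctly.
system_delay_ns = 3.0
offset = calibrate_offset_ns(2.0 * 150.0 / C_MM_PER_NS + system_delay_ns, 150.0)
depth = tof_depth_mm(2.0 * 1000.0 / C_MM_PER_NS + system_delay_ns, offset)
```

Because the offset is re-measured in each calibration period, drift in the illuminator or sensor timing does not bias the reported depth.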

Подробнее
07-01-2021 дата публикации

SYSTEMS AND METHODS FOR SEMI-SUPERVISED TRAINING USING REPROJECTED DISTANCE LOSS

Номер: US20210004976A1
Принадлежит:

Systems, methods, and other embodiments described herein relate to training a depth model for monocular depth estimation. In one embodiment, a method includes generating, as part of training the depth model according to a supervised training stage, a depth map from a first image of a pair of training images using the depth model. The pair of training images are separate frames depicting a scene from a monocular video. The method includes generating a transformation from the first image and a second image of the pair using a pose model. The method includes computing a supervised loss based, at least in part, on reprojecting the depth map and training depth data onto an image space of the second image according to at least the transformation. The method includes updating the depth model and the pose model according to at least the supervised loss.
1. A depth system for training a depth model for monocular depth estimation, comprising: one or more processors; a network module including instructions that when executed by the one or more processors cause the one or more processors to: generate, as part of training the depth model according to a supervised training stage, a depth map from a first image of a pair of training images using the depth model, wherein the pair of training images are separate frames depicting a scene from a monocular video, and wherein at least the first image includes corresponding depth data; and generate a transformation from the first image and a second image of the pair using a pose model, the transformation defining a relationship between the pair of training images; and a training module including instructions that when executed by the one or more processors cause the one or more processors to compute a supervised loss based, at least in part, on reprojecting the depth map and the depth data onto an image space of the second image according to at least the transformation, and update the depth model and the pose model according to at ...

Подробнее
07-01-2021 дата публикации

INFERRING LOCATIONS OF 3D OBJECTS IN A SPATIAL ENVIRONMENT

Номер: US20210005018A1
Принадлежит:

A method for inferring a location of an object includes extracting features from sensor data obtained from a number of sensors of an autonomous vehicle and encoding the features to a number of sensor space representations. The method also reshapes the number of sensor space representations to a feature space representation corresponding to a feature space of a spatial area. The method further identifies the object based on a mapping of the features to the feature space representation. The method still further projects a representation of the identified object to a location of the feature space and controls an action of the autonomous vehicle based on projecting the representation.
1. A method for inferring a location of an object, comprising: extracting features from sensor data obtained from a plurality of sensors of an autonomous vehicle; encoding the features to a plurality of sensor space representations, each sensor space representation corresponding to a sensor feature space of one of the plurality of sensors; reshaping the plurality of sensor space representations to a feature space representation corresponding to a feature space of a spatial area; identifying the object based on a mapping of the features to the feature space representation; projecting a representation of the identified object to a location of the feature space; and controlling an action of the autonomous vehicle based on projecting the representation.
2. The method of claim 1, in which the object is a three-dimensional (3D) object.
3. The method of claim 1, in which projecting the representation comprises projecting a three-dimensional (3D) representation of the identified object.
4. The method of claim 1, in which the feature space is based on a layout of cells of a three-dimensional (3D) grid of cells representing the spatial area.
5. The method of claim 4, further comprising determining the layout of the cells based on at least one of a current behavior of the autonomous vehicle, a ...

Подробнее
03-01-2019 дата публикации

METHOD AND SYSTEM FOR BIOMETRIC RECOGNITION

Номер: US20190005307A1
Автор: Hanna Keith J.
Принадлежит: Eyelock LLC

High quality, high contrast images of an iris and the face of a person are acquired in rapid succession in either sequence by a single sensor and one or more illuminators, preferably within less than one second of each other, by changing the data acquisition settings or illumination settings between each acquisition. 1. A system for acquiring iris and face biometric features , the system comprising:a depth measurement sensor configured to determine a distance of an image sensor from a subject;at least one light source; and 'acquire, at a first time instant, a first image comprising a biometric feature of a face of the subject, the face illuminated by light of a first wavelength from the at least one light source set to a first intensity magnitude suitable for acquiring the first image at the determined distance; and', 'the image sensor, the image sensor configured toacquire, at a second time instant, a second image comprising a biometric feature of an iris of the subject, the iris illuminated by light of a second wavelength from the at least one light source set to a second intensity magnitude suitable for acquiring the second image at the determined distance, wherein the second time instant is within a predetermined time period of the first time instant sufficient to link the first image and the second image to the same subject.2. The system of claim 1 , wherein the depth measurement sensor is configured to determine the distance of the image sensor from the subject by determining at least one of a diameter of the iris or a separation distance between two eyes of the subject.3. The system of claim 1 , further comprising at least one processor configured to perform biometric recognition using the first image comprising the biometric feature of the face claim 1 , and perform biometric recognition the second image comprising the biometric feature of the iris.4. 
The system of claim 1, further comprising at least one processor configured to determine an orientation of ...
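The acquisition scheme above (illumination intensity chosen from a measured subject distance, two captures linked by a time window) can be sketched as follows. The inverse-square scaling and all names and constants here are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch: choose an illumination intensity from the measured
# subject distance (inverse-square assumption) and check that the face and
# iris captures are close enough in time to be linked to one subject.
# Function names, constants, and the scaling law are assumptions.

def intensity_for_distance(distance_m, base_intensity=1.0, ref_distance_m=0.3):
    # Inverse-square falloff: doubling the distance needs 4x the intensity.
    return base_intensity * (distance_m / ref_distance_m) ** 2

def can_link(t_face_s, t_iris_s, max_gap_s=1.0):
    # The claims link the two images when they fall within a predetermined
    # period (preferably under one second) of each other.
    return abs(t_face_s - t_iris_s) <= max_gap_s

print(round(intensity_for_distance(0.6), 6))  # 4.0
print(can_link(0.10, 0.85))                   # True
```

A real controller would additionally clamp the intensity to eye-safety limits.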

More details
03-01-2019 publication date

THREE-DIMENSIONAL IMAGING USING FREQUENCY DOMAIN-BASED PROCESSING

Number: US20190005671A1
Assignee:

A brightness image of a scene is converted into a corresponding frequency domain image and it is determined whether a threshold condition is satisfied for each of one or more regions of interest in the frequency domain image, the threshold condition being that the number of frequencies in the region of interest is at least as high as a threshold value. The results of the determination can be used to facilitate selection of an appropriate block matching algorithm for deriving disparity or other distance data and/or to control adjustment of an illumination source that generates structured light for the scene. 1. An imaging system comprising: an illumination source operable to generate structured light with which to illuminate a scene; a depth camera sensitive to light generated by the illumination source and operable to detect optical signals reflected by one or more objects in the scene, the depth camera being further operable to convert the detected optical signals to corresponding electrical signals representing a brightness image of the scene; and one or more processor units operable collectively to receive the electrical signals from the depth camera and operable collectively to: transform the brightness image into a corresponding frequency domain image; determine whether a threshold condition is satisfied for each of one or more regions of interest in the frequency domain image, the threshold condition being that the number of frequencies in the region of interest is at least as high as a threshold value; and generate a control signal to adjust an optical power of the illumination source if it is determined that the threshold condition is satisfied for fewer than a predetermined minimum number of the one or more regions of interest. 2. The imaging system of claim 1, wherein the threshold condition is that the number of frequencies, which have an amplitude at least as high as a threshold amplitude, is at least as high as the threshold value. 3. The ...
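The frequency-domain test described above can be sketched with NumPy's FFT. The ROI layout, thresholds, and function names are illustrative; the patent does not prescribe a particular transform library:

```python
import numpy as np

# Sketch of the threshold test: transform the brightness image, count
# frequency bins in each region of interest whose amplitude reaches
# amp_thresh, and flag when too few regions pass, which per the claims
# would trigger raising the source's optical power. ROI layout and
# thresholds are illustrative.

def region_passes(freq_mag, roi, amp_thresh, count_thresh):
    y0, y1, x0, x1 = roi
    return np.count_nonzero(freq_mag[y0:y1, x0:x1] >= amp_thresh) >= count_thresh

def needs_more_power(brightness, rois, amp_thresh, count_thresh, min_regions):
    freq_mag = np.abs(np.fft.fft2(brightness))
    passing = sum(region_passes(freq_mag, r, amp_thresh, count_thresh) for r in rois)
    return passing < min_regions

flat = np.zeros((8, 8))  # a featureless scene has almost no frequency content
print(needs_more_power(flat, [(0, 4, 0, 4)], 1.0, 1, 1))  # True
```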

More details
20-01-2022 publication date

MEASUREMENT OF THICKNESS OF THERMAL BARRIER COATINGS USING 3D IMAGING AND SURFACE SUBTRACTION METHODS FOR OBJECTS WITH COMPLEX GEOMETRIES

Number: US20220018651A1
Assignee:

Embodiments described herein relate to a non-destructive measurement device and a non-destructive measurement method for determining coating thickness of a three-dimensional (3D) object. In one embodiment, at least one first 3D image of an uncoated surface of the object and at least one second 3D image of a coated surface of the object are collected and analyzed to determine the coating thickness of the object. 1. A method of determining a thickness of an object coating, comprising: positioning a surface of an object in a field of view of at least one image sensor system in a non-destructive measurement device, the object having one or more surfaces; reading a Quick Response (QR) code corresponding to at least one part coordinate or an alignment position of the surface in the non-destructive measurement device, the QR code disposed on the object; collecting a 3D image of the surface without chemically or physically changing the one or more surfaces of the object, corresponding to the at least one part coordinate or the aligned surface of the object at the alignment position; and analyzing the 3D image. 2. The method of claim 1, wherein the surface includes an uncoated portion and a coated portion, and the 3D image corresponds to a first surface profile of the uncoated portion and a second surface profile of the coated portion, the analyzing the second 3D image and the first 3D image comprises: removing outliers of the first surface profile and the second surface profile; filtering the first surface profile and the second surface profile; overlapping the first surface profile and the second surface profile; and subtracting the second surface profile from the first surface profile to obtain a thickness of a coating of the object. 3.
The method of claim 2, wherein the analyzing the first 3D image and the second 3D image further comprises: mirroring the first 3D image and the second 3D image; and selecting a first area of the first 3D image and ...
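The surface-subtraction steps of claim 2 (outlier removal, filtering, subtraction of aligned profiles) can be sketched as below. The median-based outlier rule stands in for whatever filtering the implementation uses, and the profiles are assumed to be pre-aligned height maps:

```python
import numpy as np

# Sketch: reject outliers with a median/MAD rule, then subtract the aligned
# uncoated height map from the coated one. The outlier rule stands in for
# the filtering steps named in the claims; profiles are assumed pre-aligned.

def clean(profile, z_thresh=3.0):
    med = np.median(profile)
    mad = np.median(np.abs(profile - med)) or 1e-9  # avoid divide-by-zero
    out = profile.copy()
    out[np.abs(profile - med) / mad > z_thresh] = med  # replace outliers
    return out

def coating_thickness(uncoated, coated):
    return float(np.mean(clean(coated) - clean(uncoated)))

uncoated = np.full(100, 5.00)  # bare-part heights, mm
coated = np.full(100, 5.25)    # coated-part heights, mm
coated[10] = 9.0               # spurious spike removed by the outlier step
print(round(coating_thickness(uncoated, coated), 2))  # 0.25
```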

More details
03-01-2019 publication date

THREE-DIMENSIONAL MOTION OBTAINING APPARATUS AND THREE-DIMENSIONAL MOTION OBTAINING METHOD

Number: US20190007594A1
Assignee:

A three-dimensional motion obtaining apparatus includes: a light source; a charge amount obtaining circuit that includes pixels and obtains, for each of the pixels, a first charge amount under a first exposure pattern and a second charge amount under a second exposure pattern having an exposure period that at least partially overlaps an exposure period of the first exposure pattern; and a processor that controls a light emission pattern for the light source, the first exposure pattern, and the second exposure pattern. The processor estimates a distance to a subject for each of the pixels on the basis of the light emission pattern and on the basis of the first charge amount and the second charge amount of each of the pixels obtained by the charge amount obtaining circuit, and estimates an optical flow for each of the pixels on the basis of the first exposure pattern, the second exposure pattern, and the first charge amount and the second charge amount obtained by the charge amount obtaining circuit. 1. A three-dimensional motion obtaining apparatus comprising: a light source; a charge amount obtaining circuit that includes pixels and obtains, for each of the pixels, a first charge amount under a first exposure pattern and a second charge amount under a second exposure pattern having an exposure period that at least partially overlaps an exposure period of the first exposure pattern; and a processor that controls a light emission pattern for the light source, the first exposure pattern, and the second exposure pattern, wherein the processor estimates a distance to a subject for each of the pixels of the charge amount obtaining circuit on the basis of the light emission pattern and on the basis of the first charge amount and the second charge amount of each of the pixels of the charge amount obtaining circuit obtained by the charge amount obtaining circuit, estimates an optical flow for each of the pixels of the charge amount obtaining circuit on the basis of the first ...
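The distance estimate from two overlapping exposure windows resembles the textbook two-tap pulsed indirect time-of-flight formula, sketched below for illustration only; the patent's actual estimator may differ:

```python
# Textbook two-tap pulsed indirect time-of-flight: with two exposure windows
# that partially overlap the return pulse, the split of charge between them
# encodes the round-trip delay. Offered only to illustrate the principle;
# the patent's estimator may differ.

C = 299_792_458.0  # speed of light, m/s

def itof_distance(q1, q2, pulse_width_s):
    # Fraction of charge in the delayed window gives the pulse delay.
    delay_s = pulse_width_s * q2 / (q1 + q2)
    return 0.5 * C * delay_s

# 100 ns pulse, charge split evenly -> 50 ns delay -> about 7.49 m.
print(round(itof_distance(1.0, 1.0, 100e-9), 2))  # 7.49
```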

More details
02-01-2020 publication date

Spatiotemporal Calibration of RGB-D and Displacement Sensors

Number: US20200007843A1
Author: Yanxiang Zhang
Assignee: Cogosprey Vision Inc

A method of calibrating an RGB-D device having an RGB-D camera and a displacement sensor includes: providing a calibration target, the calibration target including a plurality of visual markers shown in a surface of the calibration target, wherein the plurality of visual markers have known locations on the calibration target; positioning the calibration target within a field of view of the RGB-D device such that the RGB-D camera and displacement sensor capture data corresponding to the calibration target; moving the calibration target a known distance in relation to a location of the RGB-D device; receiving output data from the RGB-D device including data from the RGB-D camera and displacement sensor; aligning data from the RGB-D camera and data from the displacement sensor; calibrating a relative configuration of the RGB-D device based on aligned data output from the RGB-D device and known position information of the calibration target.

More details
02-01-2020 publication date

IMAGING SYSTEMS WITH DEPTH DETECTION

Number: US20200007853A1

An imaging system may include an image sensor, a lens, and layers with reflective properties, such as an infrared cut-off filter, between the lens and the image sensor. The lens may focus light from an object in a scene onto the image sensor. Some of the light directed onto the image sensor may form a first image on the image sensor. Other portions of the light directed onto the image sensor may reflect off of the image sensor and back towards the layers with reflective properties. These layers may reflect the light back onto the image sensor, forming a second image that is shifted relative to the first image. Depth mapping circuitry may compare the first and second images to determine the distance between the imaging system and the object in the scene. 1. An imaging system comprising: a lens; an image sensor comprising an array of pixels that generate image data; an at least partially reflective intermediate layer interposed between the lens and the image sensor; an infrared cut-off filter interposed between the intermediate layer and the lens, wherein light from an object passes through the lens, the infrared cut-off filter, and the intermediate layer to reach the image sensor, wherein a first portion of the light is captured by the array of pixels to generate a primary image, and wherein a second portion of the light reflects off of the image sensor towards the intermediate layer, reflects off of the intermediate layer back toward the image sensor, and is captured by the array of pixels to generate a secondary image that is different than the primary image; and depth mapping circuitry that detects the primary image and the secondary image and that compares the primary image to the secondary image to determine a distance between the imaging system and the object. 2.
The imaging system defined in claim 1, wherein the imaging system has a depth of field, wherein the object is outside of the depth of field of the imaging system, and wherein the primary ...
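The comparison of the primary image with its reflection-shifted secondary image amounts to estimating a pixel shift, for example by cross-correlation; the mapping from shift to distance is device-specific and omitted. A 1-D sketch with assumed names:

```python
import numpy as np

# 1-D sketch: the secondary (ghost) image is a shifted copy of the primary,
# so the shift can be read off the peak of their circular cross-correlation.
# The shift-to-distance calibration is device-specific and omitted.

def estimate_shift(primary, secondary):
    corr = np.fft.ifft(np.fft.fft(primary) * np.conj(np.fft.fft(secondary))).real
    shift = int(np.argmax(corr))
    n = len(primary)
    return shift if shift <= n // 2 else shift - n  # signed circular shift

row = np.zeros(64)
row[20:24] = 1.0          # a small bright feature in the primary image
ghost = np.roll(row, 5)   # secondary image displaced by 5 pixels
print(estimate_shift(ghost, row))  # 5
```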

More details
11-01-2018 publication date

LIGHT POINT IDENTIFICATION METHOD

Number: US20180008371A1
Author: Manus Johannes
Assignee: BRAINLAB AG

A data processing method performed by a computer for detecting reflections of light pulses, comprising the steps: acquiring a camera signal representing a series of camera images of a camera viewing field; detecting whether the camera signal includes one or more light mark portions within the camera viewing field possibly representing a light pulse reflection; relating the detected light mark portions in the series of camera images to a pre-defined emission pattern of the light pulses; and determining that a light mark portion is a reflected light pulse, if the light mark portion in the series of camera images matches to the pre-defined emission pattern of the light pulses. 1. A method for detecting reflections of light pulses, comprising the steps: acquiring, from a camera system, a series of camera images of a camera viewing field; detecting whether the series of camera images include one or more light mark portions within the camera viewing field, wherein the one or more light mark portions detected represent possible light pulse reflections; comparing the one or more light mark portions detected in the series of camera images to a pre-defined emission pattern of the light pulses; and determining that a light mark portion is a reflected light pulse, when the light mark portion detected in the series of camera images matches the pre-defined emission pattern of the light pulses. 2. The method of claim 1, wherein the series of camera images of the camera viewing field is at least one of a series of subsequent camera images taken with a pre-defined time interval between two camera images or a series of stereoscopic camera images, or includes 3D data. 3. The method of claim 1, wherein the pre-defined emission pattern of the light pulses includes a light pulse during every n-th camera image, wherein n is greater than or equal to 2. 4.
The method of claim 1, further comprising determining a match of the light mark portion in the series of camera images to ...
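The matching rule (claim 3's pulse during every n-th camera image) can be sketched as a set comparison between the frames in which a light mark was seen and the emitter's schedule; frame indices below are illustrative:

```python
# Set-based sketch of the matching rule: a light mark is accepted as a
# reflected pulse only when the frames in which it appears equal the
# emitter's schedule (a pulse during every n-th image). A light that is on
# in every frame fails. Frame counts are illustrative.

def matches_emission(frames_seen, n, total_frames):
    expected = {i for i in range(total_frames) if i % n == 0}
    return set(frames_seen) == expected

print(matches_emission([0, 2, 4, 6], 2, 8))    # True  (pulsed reflection)
print(matches_emission(list(range(8)), 2, 8))  # False (constant light)
```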

More details
27-01-2022 publication date

SURGERY ROBOT SYSTEM AND USE METHOD THEREFOR

Number: US20220022985A1
Assignee:

The present invention provides a surgical robot system which includes a workstation, a robotic arm, a scanning module, a guiding module and the like. Fast registration may be completed by means of the scanning module. The system improves the speed and accuracy of registration and shortens the period of the operation. 1. A surgical robot system, comprising: a workstation, comprising a housing, a computation and control center, a display apparatus and an input device; a robotic arm, comprising a plurality of arm segments which are connected by joints; a scanning module, configured to collect information for a target space; and a guiding module, configured to guide a surgical instrument to move in a desired trajectory, wherein information collected by the scanning module is processed by the workstation to acquire three-dimensional information of the target space. 2. The system according to claim 1, wherein the scanning module comprises an image acquiring apparatus. 3. The system according to claim 1, wherein the scanning module comprises a light emitting component and an image acquiring apparatus. 4. The system according to claim 1, wherein the scanning module comprises a projecting component and an image acquiring apparatus, and the projecting component may emit a specific coded image to the target space and the image acquiring apparatus collects the image, thereby acquiring an accurate three-dimensional structure of the target space by a corresponding decoding algorithm. 5. The system according to claim 4, wherein the projecting component may also project an image to the target space; and/or the projecting component and the image acquiring apparatus have a predetermined relative spatial position relationship; and/or the projecting component comprises a light source, a lens group, a digital micromirror device and a control module. 6.
The system according to claim 1, wherein a position of the scanning module in a coordinate system ...

More details
27-01-2022 publication date

Simultaneous Diagnosis And Shape Estimation From A Perceptual System Derived From Range Sensors

Number: US20220027653A1
Assignee:

Systems, methods, and devices for estimating a shape of an object based on sensor data and determining a presence of a fault or a failure in a perception system. A method of the disclosure includes receiving sensor data from a range sensor and calculating a current shape reconstruction of an object based on the sensor data. The method includes retrieving from memory a prior shape reconstruction of the object based on prior sensor data. The method includes calculating a quality score for the current shape reconstruction by balancing a function of resulted variances of the current shape reconstruction and a similarity between the current shape reconstruction and the prior shape reconstruction. 1. A method comprising: calculating a current shape reconstruction of an object based on sensor data; retrieving a prior shape reconstruction of the object; generating a shape point flag identifying non-faulty points in the prior shape reconstruction; generating a sensor point flag identifying non-faulty points in the sensor data; and fusing the shape point flag and the non-faulty points from the sensor point flag to compensate for uncertainties in the sensor data. 2. The method of claim 1, wherein: the sensor data comprises range sensor data received from one or more of a LIDAR (light detection and ranging) system or a radar system; and calculating the current shape reconstruction of the object based on the sensor data comprises calculating with an implicit surface function or an explicit surface function. 3. The method of claim 1, further comprising calculating a confidence in the current shape reconstruction of the object by applying a function of resulted variances to the current shape reconstruction, wherein the function of resulted variances defines a shape of the object as an implicit surface function or explicit surface function. 4.
The method of claim 3, further comprising calculating a similarity between the current shape reconstruction of the object and the prior ...
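The quality score described in the abstract balances reconstruction variance against similarity to the prior reconstruction. The cosine-style similarity and the weighting below are illustrative choices, not the patent's formula:

```python
import numpy as np

# Illustrative score: confidence from the mean of the resulted variances
# (lower variance -> higher confidence) blended with a cosine similarity
# between current and prior reconstructions. Weighting and similarity
# measure are assumptions, not the patent's formula.

def quality_score(current, prior, variances, alpha=0.5):
    sim = float(np.dot(current, prior) /
                (np.linalg.norm(current) * np.linalg.norm(prior)))
    confidence = 1.0 / (1.0 + float(np.mean(variances)))
    return alpha * confidence + (1.0 - alpha) * sim

# Identical reconstructions with zero variance score 1.0.
print(quality_score(np.ones(4), np.ones(4), np.zeros(4)))  # 1.0
```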

More details
27-01-2022 publication date

Depth Sensing Using Temporal Coding

Number: US20220028099A1
Assignee:

In one embodiment, a system includes at least one projector configured to project a plurality of projected patterns, where a projected lighting characteristic of each of the projected patterns varies over a time period in accordance with an associated predetermined temporal lighting-characteristic pattern, a camera configured to capture images of detected patterns during the time period, and one or more processors configured to: determine, for each detected pattern, a detected temporal lighting-characteristic pattern based on variations in a detected lighting characteristic of the detected pattern, identify a detected pattern that corresponds to one of the projected patterns by comparing at least one of the detected temporal lighting-characteristic patterns to at least one of the temporal lighting-characteristic patterns, and compute a depth associated with the detected patterns based on the one or more of the projected patterns, the detected pattern, and a relative position between the camera and the projector. 1.-20. (canceled) 21.
A method comprising, by a computing device: capturing, during a time period, images comprising a plurality of detected patterns corresponding to reflections of a plurality of distinct projected patterns projected by a projector, wherein an intensity of each of the plurality of distinct projected patterns varies over the time period in accordance with a predetermined temporal lighting-characteristic pattern associated with the projected pattern, and a rate at which the images are captured by a camera is synchronized with a rate at which the intensity varies; determining, based on the images, a detected temporal lighting-characteristic pattern for each of the detected patterns; detecting a correspondence between a first detected pattern of the detected patterns and a first projected pattern of the projected patterns by comparing the detected temporal lighting-characteristic pattern of the first detected pattern to the predetermined temporal ...
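The correspondence step of claim 21, matching a detected brightness trace to one of the projected temporal codes, can be sketched with normalized cross-correlation; the codes and matcher are illustrative:

```python
import numpy as np

# Match a detected brightness trace to the projected code it correlates
# with best (normalized cross-correlation). The two codes and the noisy
# observation below are illustrative.

def best_match(detected_trace, projected_codes):
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))
    trace = np.asarray(detected_trace, dtype=float)
    return int(np.argmax([ncc(trace, np.asarray(c, dtype=float))
                          for c in projected_codes]))

codes = [np.array([1, 0, 1, 0, 1, 0]), np.array([1, 1, 0, 0, 1, 1])]
noisy = np.array([0.9, 0.1, 1.1, 0.0, 0.8, 0.2])  # noisy view of codes[0]
print(best_match(noisy, codes))  # 0
```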

More details
11-01-2018 publication date

DISTANCE IMAGE ACQUISITION APPARATUS AND DISTANCE IMAGE ACQUISITION METHOD

Number: US20180010903A1
Assignee: FUJIFILM Corporation

The distance image acquisition apparatus includes a projection unit which projects a first pattern of structured light distributed in a two-dimensional manner with respect to a subject within a distance measurement region, a light modulation unit which spatially modulates the first pattern projected from the projection unit, an imaging unit which is provided in parallel with and apart from the projection unit by a baseline length, and captures an image including the first pattern reflected from the subject within the distance measurement region, a pattern extraction unit which extracts the first pattern spatially modulated by the light modulation unit from the image captured by the imaging unit, and a distance image acquisition unit which acquires a distance image indicating a distance of the subject within the distance measurement region based on the first pattern. 1. A distance image acquisition apparatus comprising: a projection unit which projects a first pattern of structured light distributed in a two-dimensional manner with respect to a subject within a distance measurement region; a light modulation unit which spatially modulates the first pattern projected from the projection unit; an imaging unit which is provided in parallel with and apart from the projection unit by a baseline length, and captures an image including the first pattern reflected from the subject within the distance measurement region; a pattern extraction unit which extracts the first pattern spatially modulated by the light modulation unit from the image captured by the imaging unit; a distance image acquisition unit which acquires a distance image indicating a distance of the subject within the distance measurement region based on the first pattern extracted by the pattern extraction unit, wherein the light modulation unit is a digital micromirror device which has a micromirror group, into which the first pattern projected from the projection unit enters, and
...
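The projector/camera pair separated by a baseline recovers distance by standard structured-light triangulation; with an assumed focal length in pixels, a feature displaced by `disparity` pixels lies at depth z = f * b / disparity:

```python
# Standard structured-light / stereo triangulation relation, given here only
# to illustrate the role of the baseline length in the apparatus above.
# Focal length, baseline and disparity values are assumptions.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # z = f * b / d; disparity shrinks as the subject moves away.
    if disparity_px <= 0:
        raise ValueError("feature at infinity or invalid match")
    return focal_px * baseline_m / disparity_px

# f = 800 px, baseline 0.10 m, disparity 20 px -> 4.0 m.
print(depth_from_disparity(800.0, 0.10, 20.0))  # 4.0
```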

More details
11-01-2018 publication date

Sequential Diffractive Pattern Projection

Number: US20180010907A1
Assignee: SIEMENS AG

The present disclosure relates to structured illumination. The teachings thereof may be embodied in devices for reconstruction of a three-dimensional surface of an object by means of a structured illumination for projection of measurement patterns onto the object. For example, a device may include: a projector unit for diffractive projection of a measurement pattern comprising a plurality of measurement points onto the surface; an acquisition unit for acquiring the measurement pattern from the surface; and a computer unit for reconstruction of the surface from a respective distortion of the measurement pattern. All possible positions of measurement elements are contained in the measurement pattern in repeating groups, in which a respective combination of measurement points represents a respective location in the measurement pattern.

More details
14-01-2021 publication date

APPARATUS AND METHOD FOR DETERMINING SPECTRAL INFORMATION

Number: US20210010930A1
Assignee:

Embodiments of the present invention provide an apparatus for determining spectral information of a three-dimensional object, comprising a cavity for location in relation to the object, an imaging light source located in relation to the cavity, wherein the imaging source is controllable to selectively emit light in a plurality of wavelength ranges, a structured light source for emitting structured illumination toward the object, wherein the structured light source comprises a plurality of illumination devices arranged around the cavity, one or more imaging devices for generating image data relating to at least a portion of the object, a control unit, wherein the control unit is arranged to control the structured light source to emit the structured illumination and to control the imaging light source to emit light in a selected one or more of the plurality of wavelength ranges, a data storage unit arranged to store image data corresponding to the structured illumination and each of the selected one or more of the plurality of wavelength ranges, and processing means arranged to determine depth information relating to at least a portion of the object in dependence on the image data corresponding to the structured illumination stored in the data storage means. 1.
An apparatus for determining spectral information of a three-dimensional object, comprising: a cavity for location in relation to the object; an imaging light source located in relation to the cavity, wherein the imaging source is controllable to selectively emit light in a plurality of wavelength ranges; a structured light source for emitting structured illumination toward the object, wherein the structured light source comprises a plurality of illumination devices arranged around the cavity; one or more imaging devices for generating image data relating to at least a portion of the object; a control unit, wherein the control unit is arranged to control the structured light source to emit the ...

More details
14-01-2021 publication date

SYSTEMS AND METHODS FOR MACHINE PERCEPTION

Number: US20210011154A1
Author: Smits Gerard Dirk
Assignee:

A system to determine a position of one or more objects includes a transmitter to emit a beam of photons to sequentially illuminate regions of one or more objects; multiple cameras that are spaced-apart with each camera having an array of pixels to detect photons; and one or more processor devices that execute stored instructions to perform actions of a method, including: directing the transmitter to sequentially illuminate regions of one or more objects with the beam of photons; for each of the regions, receiving, from the cameras, an array position of each pixel that detected photons of the beam reflected or scattered by the region of the one or more objects; and, for each of the regions detected by the cameras, determining a position of the regions using the received array positions of the pixels that detected the photons of the beam reflected or scattered by that region. 1. A system to determine a position of one or more objects, comprising: a transmitter that emits a beam of photons to sequentially illuminate for a predetermined period of time a plurality of regions that include the one or more objects; a plurality of cameras to detect one or more reflected or scattered beams of photons for each voxel in the plurality of regions, wherein each camera comprises an array of pixels, and wherein each voxel is a sampled surface element of a three-dimensional shaped surface of the plurality of regions; one or more appendages that are employed to space a position of one or more of the plurality of cameras away from each of the other cameras; one or more memory devices that store instructions; and one or more processor devices that execute the stored instructions to perform actions, including: directing the transmitter to sequentially illuminate each voxel in the plurality of regions with the beam of photons; receiving, from the plurality of cameras, an array position of each pixel in the plurality of cameras that detects the beam of photons reflected or scattered for each voxel for one or more of the plurality of objects; and determining each position of the one or more objects using the received array positions of the array of
...

More details
09-01-2020 publication date

3D SENSOR AND METHOD OF MONITORING A MONITORED ZONE

Number: US20200011656A1
Assignee:

A 3D sensor for monitoring a monitored zone is provided, wherein the 3D sensor has at least one light receiver for generating a received signal from received light from the monitored zone and has a control and evaluation unit that is configured to detect objects in the monitored zone by evaluating the received signal and to determine the shortest distance of the detected objects from at least one reference volume, and to read at least one distance calculated in advance from the reference volume from a memory for the determination of the respective shortest distance of a detected object. 1. A 3D sensor for monitoring a monitored zone, wherein the 3D sensor comprises: at least one light receiver for generating a received signal from received light from the monitored zone and a control and evaluation unit having a memory, the control and evaluation unit detecting objects in the monitored zone by evaluating the received signal, determining the shortest distance of the detected objects from at least one reference volume, and reading at least one distance calculated in advance from the reference volume from a memory for the determination of the respective shortest distance of a detected object. 2. The 3D sensor in accordance with claim 1, wherein the 3D sensor is a 3D camera. 3. The 3D sensor in accordance with claim 1, wherein the reference volume is a hazard zone that secures a machine. 4. The 3D sensor in accordance with claim 1, wherein the control and evaluation unit is implemented as an embedded system. 5. The 3D sensor in accordance with claim 1, wherein the control and evaluation unit reads a distance calculated in advance from an intermediate reference zone from the memory and determines the shortest distance from it. 6. The 3D sensor in accordance with claim 1, wherein the control and evaluation unit calculates shortest distances from the reference volume for different regions of the monitored zone and stores them in the memory. 7.
The 3D sensor in accordance with claim ...
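The precomputation idea above, storing distances from regions of the monitored zone to the reference volume so that run-time checks are lookups, can be sketched on a coarse grid; the grid and metric are illustrative:

```python
import numpy as np

# Illustrative lookup table: dist[y, x] holds the shortest Euclidean distance
# from grid cell (y, x) of the monitored zone to any hazard cell, computed
# once in advance. Grid resolution and metric are assumptions.

def precompute_distances(grid_shape, hazard_cells):
    ys, xs = np.indices(grid_shape)
    dist = np.full(grid_shape, np.inf)
    for (hy, hx) in hazard_cells:
        dist = np.minimum(dist, np.hypot(ys - hy, xs - hx))
    return dist

table = precompute_distances((8, 8), [(0, 0)])
print(table[3, 4])  # 5.0 (3-4-5 triangle)
```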

More details
10-01-2019 publication date

Near Touch Interaction

Number: US20190012041A1
Author: El Dokor Tarek
Assignee:

A near-touch interface is provided that utilizes stereo cameras and a series of targeted structured light tessellations, emanating from the screen as a light source and incident on objects in the field-of-view. After radial distortion from a series of wide-angle lenses is mitigated, a surface-based spatio-temporal stereo algorithm is utilized to estimate initial depth values. Once these values are calculated, a subsequent refinement step may be applied in which light source tessellations are used to flash a structure onto targeted components of the scene, where initial near-interaction disparity values have been calculated. The combination of a spherical stereo algorithm, and smoothing with structured light source tessellations, provides for a very reliable and fast near-field depth engine, and resolves issues that are associated with depth estimates for embedded solutions of this approach. 1. A system for determining near touch interaction, comprising: a display for displaying one or more elements thereon; a plurality of stereo cameras positioned adjacent to the display, at least one of the plurality of cameras being active in a visible light portion of the spectrum, and at least one of the plurality of cameras being active in a portion of the spectrum other than the visible light spectrum; and a processor for causing the display to flash one or more structured light sequences on a user pointer placed adjacent to the display, determining a depth map in accordance with information acquired by the stereo camera pair, and if desired, employing the one or more structured light sequences to possibly refine the depth map, and determining a location corresponding to one or more of the one or more elements displayed on the display to be selected in accordance with the determined depth map; wherein the at least one of the plurality of cameras that is active in the visible light portion of the spectrum operating when ambient light is determined to be sufficient for visible ...

More details
09-01-2020 publication date

DISTANCE MEASUREMENT DEVICE

Number: US20200011972A1
Author: MASUDA Kozo
Assignee:

A distance measurement device has a light emitting unit, a light receiving unit, and a distance calculation unit, and outputs distance data for each pixel position to the subject. A saturation detection unit detects that the light reception level in the light receiving unit is saturated. In a case in which the saturation is detected, an interpolation processing unit performs an interpolation process using the distance data of a non-saturation region close to a saturation region on the distance data of the saturation region among the distance data output from the distance calculation unit. In the interpolation process, the distance data is replaced with the distance data of one pixel of the non-saturation region, or linear interpolation or curve interpolation is performed using the distance data of a plurality of pixels. 1. A distance measurement device that measures a distance to a subject by a flight time of light, the distance measurement device comprising: a light emitting unit that irradiates the subject with light generated from a light source; a light receiving unit that detects light reflected from the subject by an image sensor in which pixels are arranged in a two-dimensional shape; a distance calculation unit that calculates the distance to the subject for each pixel position from a detection signal of the light receiving unit and outputs distance data; a saturation detection unit that detects that a light reception level of the image sensor in the light receiving unit is saturated; an interpolation processing unit that performs an interpolation process using the distance data of a non-saturation region close to a saturation region on the distance data of the saturation region among the distance data output from the distance calculation unit when the saturation detection unit detects the saturation; and an image processing unit that generates a distance image of the subject on the basis of the distance data output from the interpolation processing unit. 2. The ...
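The linear-interpolation variant of the interpolation process can be sketched per image row with `np.interp`: saturated samples are filled from the nearest non-saturated neighbours. Names and data are illustrative:

```python
import numpy as np

# Linear interpolation across a saturated run in one row of distance data.
# Single-neighbour replacement or curve fitting, also named in the claims,
# would be drop-in alternatives. Data values are illustrative.

def repair_row(distance_row, saturated_mask):
    idx = np.arange(len(distance_row))
    good = ~saturated_mask
    out = distance_row.astype(float)
    # Fill saturated samples from the nearest valid neighbours.
    out[saturated_mask] = np.interp(idx[saturated_mask], idx[good], distance_row[good])
    return out

row = np.array([2.0, 2.1, 0.0, 0.0, 2.5])          # 0.0 marks saturated pixels
mask = np.array([False, False, True, True, False])
fixed = repair_row(row, mask)
print(np.round(fixed, 3))  # gap filled linearly between 2.1 and 2.5
```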

More
11-01-2018 publication date

DISTANCE IMAGE ACQUISITION APPARATUS AND DISTANCE IMAGE ACQUISITION METHOD

Number: US20180012372A1
Assignee: FUJIFILM Corporation

A distance image acquisition apparatus includes a projection unit which projects a first pattern of structured light in a plurality of wavelength bandwidths, an imaging unit which is provided in parallel with and apart from the projection unit by a baseline length, performs imaging with sensitivities to a plurality of wavelength bandwidths, and generates a plurality of captured images corresponding to a plurality of wavelength bandwidths, a determination unit which determines whether or not a second pattern of structured light projected from another distance image acquisition apparatus is included in the captured images, and a pattern extraction unit which extracts the first pattern from a captured image determined as the second pattern being not included by the determination unit, and a distance image acquisition unit which acquires a distance image indicating a distance of a subject within a distance measurement region based on the first pattern. 1. A distance image acquisition apparatus comprising:a projection unit which projects a first pattern of structured light distributed in a two-dimensional manner with respect to a subject within a distance measurement region in a plurality of wavelength bandwidths;an imaging unit which is provided in parallel with and apart from the projection unit by a baseline length, performs imaging with sensitivities to the plurality of wavelength bandwidths, and generates a plurality of captured images including the first pattern reflected from the subject and corresponding to the plurality of wavelength bandwidths;a determination unit which determines whether or not a second pattern of structured light projected from another distance image acquisition apparatus is included in the captured images;a pattern extraction unit which extracts the first pattern from at least a captured image determined as the second pattern being not included by the determination unit; anda distance image acquisition unit which acquires a distance image 
...
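Since the abstract ties distance acquisition to a projector and imaging unit separated by a baseline length, the underlying geometry is standard projector-camera triangulation. A hedged sketch; the function and its parameters are illustrative, not taken from the patent:

```python
def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Pinhole triangulation for a projector-camera pair:
    Z = f * B / d, where d is the pixel shift of a pattern feature
    between its reference position and its observed position."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

With a 0.1 m baseline and a 500 px focal length, a 10 px disparity corresponds to a 5 m depth.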

More
10-01-2019 publication date

APPLICATION TO DETERMINE READING/WORKING DISTANCE

Number: US20190012784A1
Author: Wiley William F.
Assignee:

A method of measuring working distance between a handheld digital device and eyes of a user, including capturing an image of at least eyes of a user via an onboard camera of the handheld digital device while the user is viewing a display of the handheld digital device and comparing an apparent angular size of a structure of the eyes or face of the user to a previously captured image of the structure of the eyes or the face that was taken in the presence of an object of known size. The method further includes calculating a working distance based on the apparent angular size of the structure of the eyes or the face; and saving at least the working distance to memory or reporting out the calculated working distance on the display. A handheld digital device programmed with an algorithm to perform the method is also included. 1. A computer implemented method of assessing reading ability of a user , comprising:presenting a reading stimulus to patient on a display of a handheld digital device;determining a working distance between the handheld digital device and eyes of the user; andsaving at least the working distance determined to memory, reporting out the determined working distance on the display or both.2. The computer implemented method as claimed in claim 1 , further comprising calibrating a processor of the handheld digital device by capturing an image of eye structures claim 1 , facial structures or both while including an object of known size in the captured image.3. The computer implemented method as claimed in claim 2 , further comprising determining a size of the eye structures claim 2 , facial structures or both relative to the object of known size in the captured image.4. The computer implemented method as claimed in claim 1 , further comprising determining the working distance directly or on the basis of comparison to the object of known size.5. The computer implemented method as claimed in claim 1 , further comprising making a record of font size of ...
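The similar-triangles arithmetic behind this comparison is simple: calibrate once with an object of known size in the frame, then use the fact that the apparent (angular) size of a fixed facial structure is inversely proportional to distance. A sketch under pinhole-camera assumptions; the function names and parameters are mine, not the patent's:

```python
def calibrate_ref_distance(object_size_m, object_pixels, focal_px):
    """Distance at which the calibration image was taken, recovered
    from an object of known physical size: D_ref = f * S / p."""
    return focal_px * object_size_m / object_pixels

def working_distance(ref_pixels, ref_distance_m, cur_pixels):
    """Apparent size of a fixed structure (e.g. iris diameter in px)
    scales inversely with distance: D = D_ref * p_ref / p_cur."""
    if cur_pixels <= 0:
        raise ValueError("apparent size must be positive")
    return ref_distance_m * ref_pixels / cur_pixels
```

If a structure spanned 80 px at the calibrated 0.5 m and now spans 40 px, the working distance has doubled to 1.0 m.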

More
14-01-2021 publication date

Depth Image Processing Method and Apparatus, and Electronic Device

Number: US20210012516A1
Author: KANG Jian
Assignee:

The present disclosure provides a depth image processing method and apparatus, and an electronic device. The method includes: acquiring a first image acquired by a depth sensor and a second image acquired by an image sensor; determining a scene type according to the first image and the second image; and performing a filtering process on the first image according to the scene type. 1. A method for depth image processing , comprising:acquiring a first image acquired by a depth sensor and a second image acquired by an image sensor;determining a scene type according to the first image and the second image; andperforming a filtering process on the first image according to the scene type.2. The method according to claim 1 , wherein determining the scene type according to the first image and the second image claim 1 , comprises:identifying a region of interest from the second image;determining a depth and a confidence coefficient of the depth corresponding to each pixel unit in the region of interest according to the first image; anddetermining the scene type according to the depth and the confidence coefficient of the depth corresponding to each pixel unit in the region of interest.3. 
The method according to claim 2 , wherein determining the scene type according to the depth and the confidence coefficient of the depth corresponding to each pixel unit in the region of interest claim 2 , comprises:performing statistical analysis on the depths corresponding to respective pixel units in the region of interest to obtain a depth distribution, and performing statistical analysis on the confidence coefficients to obtain a confidence coefficient distribution; anddetermining the scene type according to the depth distribution and the confidence coefficient distribution;wherein the depth distribution is configured to indicate a proportion of pixel units in each depth interval, and the confidence coefficient distribution is configured to indicate a proportion of pixel units in each ...
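The claim's scene-type decision from depth and confidence proportions can be sketched as a toy classifier. Everything here is an assumption for illustration: the function name, the thresholds, and the three scene labels are not from the patent, which only requires that proportions of pixel units per depth interval and per confidence interval drive the decision:

```python
import numpy as np

def classify_scene(depths, confidences,
                   far_thresh=3.0, conf_thresh=0.5,
                   far_ratio=0.8, low_conf_ratio=0.8):
    """Classify a region of interest from the statistical distribution
    of its per-pixel depths and depth-confidence coefficients."""
    depths = np.asarray(depths, float)
    confidences = np.asarray(confidences, float)
    frac_far = np.mean(depths > far_thresh)        # proportion of far pixels
    frac_low = np.mean(confidences < conf_thresh)  # proportion of unreliable pixels
    if frac_far > far_ratio and frac_low > low_conf_ratio:
        return "distant"          # mostly far and mostly unreliable
    if frac_low > low_conf_ratio:
        return "low-reflectance"  # near but still unreliable
    return "normal"
```

The scene label can then select the filtering strategy applied to the depth image.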

More
14-01-2021 publication date

THREE-DIMENSIONAL (3D) DEPTH IMAGING SYSTEMS AND METHODS FOR DYNAMIC CONTAINER AUTO-CONFIGURATION

Number: US20210012522A1
Assignee:

Three-dimensional (3D) depth imaging systems and methods are disclosed for dynamic container auto-configuration. A 3D-depth camera captures 3D image data of a shipping container located in a predefined search space during a shipping container loading session. An auto-configuration application determines a representative container point cloud and (a) loads an initial pre-configuration file that defines a digital bounding box having dimensions representative of the predefined search space and an initial front board area; (b) applies the digital bounding box to the container point cloud to remove front board interference data from the container point cloud based on the initial front board area; (c) generates a refined front board area based on the shipping container type; (d) generates an adjusted digital bounding box based on the refined front board area; and (e) generates an auto-configuration result comprising the adjusted digital bounding box containing at least a portion of the container point cloud. 1. A three-dimensional (3D) depth imaging system for dynamic container auto-configuration, the 3D depth imaging system comprising: a 3D-depth camera configured to capture 3D image data, the 3D-depth camera oriented in a direction to capture 3D image data of a shipping container located in a predefined search space during a shipping container loading session, the shipping container having a shipping container type; and a container auto-configuration application (app) configured to execute on one or more processors and to receive the 3D image data, the container auto-configuration app configured to determine, based on the 3D image data, a container point cloud representative of the shipping container, wherein the container auto-configuration app is further configured to execute on the one or more processors to: (a) load an initial pre-configuration file corresponding to the predefined search space, the pre-configuration file defining a digital bounding box having ...
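Step (b) of the abstract, applying the digital bounding box to the container point cloud, is in essence a crop of the cloud to a box. A minimal sketch; the function name and the axis-aligned-box simplification are assumptions for illustration:

```python
import numpy as np

def crop_to_bounding_box(points, box_min, box_max):
    """Keep only the points of an (N, 3) point cloud that fall inside
    the axis-aligned bounding box [box_min, box_max]; points outside
    (e.g. front board interference data) are discarded."""
    points = np.asarray(points, float)
    box_min = np.asarray(box_min, float)
    box_max = np.asarray(box_max, float)
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    return points[inside]
```

Adjusting the bounding box (steps (c) and (d)) and re-cropping yields the refined container point cloud.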

More
09-01-2020 publication date

IMAGE DEVICE FOR GENERATING VELOCITY MAPS

Number: US20200013173A1
Author: Lee Chi-Feng
Assignee:

An embodiment of the present invention provides an image device for generating velocity maps. The image device includes an image capturing group, a depth map generator, an optical flow generator, and a velocity map generator. The image capturing group includes at least one image capturer, each image capturer of the image capturing group captures a first image at a first time and a second image at a second time. The depth map generator generates a first depth map according to the first image and a second depth map according to the second image. The optical flow generator generates first optical flow according to the first image and the second image. The velocity map generator generates a first velocity map according to the first depth map, the second depth map and the first optical flow, wherein the first velocity map corresponds to the first image. 1. An image device for generating velocity maps , comprising:an image capturing group comprising at least one image capturer, each image capturer of the image capturing group captures a first image at a first time and a second image at a second time;a depth map generator coupled to the image capturing group for generating a first depth map according to the first image and a second depth map according to the second image;an optical flow generator generating first optical flow according to the first image and the second image; anda velocity map generator coupled to the depth map generator and the optical flow generator for generating a first velocity map according to the first depth map, the second depth map and the first optical flow, wherein the first velocity map corresponds to the first image, and the first velocity map comprises information corresponding to a velocity of each pixel of the first image.2. 
The image device of claim 1 , wherein the first optical flow is used for indicating a vector of the each pixel of the first image from the second time to the first time claim 1 , and the second time is before the first ...
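Combining two depth maps with the optical flow between the frames gives a per-pixel 3-D velocity. The sketch below is one plausible reading, not the patent's method: it assumes a pinhole camera with the principal point at the origin, that `flow[y, x] = (dx, dy)` locates where the pixel of the first image was at the second (earlier) time, and nearest-neighbour lookup of the earlier depth.

```python
import numpy as np

def velocity_map(depth1, depth2, flow, dt, focal_px):
    """Per-pixel 3-D velocity between the second (earlier) and first
    (later) frame. depth1/depth2: (H, W) depth maps at times t1/t2;
    flow: (H, W, 2) displacement of each first-image pixel since t2."""
    h, w = depth1.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    x_prev = xs - flow[..., 0]
    y_prev = ys - flow[..., 1]
    # nearest-neighbour lookup of the earlier depth, clamped to the frame
    xi = np.clip(np.rint(x_prev), 0, w - 1).astype(int)
    yi = np.clip(np.rint(y_prev), 0, h - 1).astype(int)
    z_prev = depth2[yi, xi]
    z_now = depth1
    # back-project to camera coordinates: X = x * Z / f, Y = y * Z / f
    vx = (xs * z_now - x_prev * z_prev) / focal_px / dt
    vy = (ys * z_now - y_prev * z_prev) / focal_px / dt
    vz = (z_now - z_prev) / dt
    return np.stack([vx, vy, vz], axis=-1)
```

A static scene (identical depths, zero flow) yields an all-zero velocity map, as expected.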

More
09-01-2020 publication date

INFORMATION PROCESSING APPARATUS, METHOD OF PROCESSING DISTANCE INFORMATION, AND RECORDING MEDIUM RECORDING DISTANCE INFORMATION PROCESSING PROGRAM

Number: US20200013185A1
Author: Yoshimura Kazuhiro
Assignee: FUJITSU LIMITED

An information processing apparatus, includes: a memory; and a processor coupled to the memory and configured to: generate, based on three-dimensional point cloud data indicating three-dimensional coordinates of each point on a three-dimensional object, image data in which two-dimensional coordinates of each point and a depth of each point are associated with each other; specify, as a target point, a point of the three-dimensional point cloud data corresponding to an edge pixel included in an edge portion of the image data, and specifies, as a neighbor point, a point of the three-dimensional point cloud data corresponding to a neighbor pixel of the edge pixel; and eliminate the target point based on a number of the neighbor points at which a distance to the target point is less than a predetermined distance. 1. An information processing apparatus , comprising:a memory; anda processor coupled to the memory and configured to:generate, based on three-dimensional point cloud data indicating three-dimensional coordinates of each point on a three-dimensional object, image data in which two-dimensional coordinates of each point and a depth of each point are associated with each other;specify, as a target point, a point of the three-dimensional point cloud data corresponding to an edge pixel included in an edge portion of the image data, and specifies, as a neighbor point, a point of the three-dimensional point cloud data corresponding to a neighbor pixel of the edge pixel; andeliminate the target point based on a number of the neighbor points at which a distance to the target point is less than a predetermined distance.2. The information processing apparatus according to claim 1 , wherein the processor is configured to obtain the three-dimensional point cloud data from a distance sensor claim 1 , whereinthe distance sensor generates the three-dimensional point cloud data by executing a raster scan on the three-dimensional object and obtaining the three-dimensional ...
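The elimination rule in this abstract (drop an edge point when too few of its neighbour points are close to it in 3-D, i.e. a "flying pixel") can be sketched directly. The function name, thresholds, and the index-list interface are illustrative assumptions:

```python
import numpy as np

def keep_edge_targets(points, target_idx, neighbor_idx,
                      max_dist=0.05, min_neighbors=3):
    """Return the indices of target points (points projecting to edge
    pixels) that survive: a target is kept only if at least
    `min_neighbors` of the neighbour points lie within `max_dist` of
    it; otherwise it is eliminated as depth noise."""
    points = np.asarray(points, float)
    kept = []
    for t in target_idx:
        d = np.linalg.norm(points[neighbor_idx] - points[t], axis=1)
        if np.count_nonzero(d < max_dist) >= min_neighbors:
            kept.append(t)
    return kept
```

A point whose 3-D position is far from all of its image-space neighbours is exactly the edge artifact the method removes.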

More
03-02-2022 publication date

SELECTION OF ENVIRONMENT SENSORS FOR AUTONOMOUS VEHICLE MANEUVERING

Number: US20220032913A1
Assignee: PACCAR INC

A method of autonomously maneuvering a vehicle using environment sensors comprises calculating first coordinate data based on information received from environment sensors mounted on a first portion of the vehicle; determining, based on the first coordinate data, a first target at a first location; determining a path to maneuver the vehicle to the first location; determining and transmitting commands to autonomously control the vehicle to maneuver to the first location; calculating second coordinate data based on information received from environment sensors mounted on a second portion of the vehicle; determining based on the second coordinate data, a second target at a second location, determining a path to maneuver the vehicle from the first location to the second location; and determining and transmitting commands to autonomously control the vehicle to maneuver from the first location to the second location. Suitably configured vehicles (e.g., tractor units) are also described. 1. A method of autonomously maneuvering a vehicle using environment sensors , the method comprising:by an autonomous driving module of the vehicle:calculating first coordinate data based on information received from a first set of one or more environment sensors mounted on a first portion of the vehicle;determining, based at least in part on the first coordinate data, a first target at a first location;determining a path to maneuver the vehicle to the first location;determining first commands to components of the vehicle to autonomously control the vehicle to maneuver along the determined path to the first location,transmitting the first commands to the components of the vehicle;calculating second coordinate data based on information received from a second set of one or more environment sensors mounted on a second portion of the vehicle that differs from the first portion of the vehicle;determining based at least in part on the second coordinate data, a second target at a second location; 
...

More
09-01-2020 publication date

Projector, electronic device having projector and associated manufacturing method

Number: US20200014172A1
Assignee: Himax Technologies Ltd

The present invention provides a projector including a substrate, a laser module and a lens module. The laser module is positioned on the substrate, and a laser diode of the laser module is not packaged within a can. The lens module is arranged for receiving a laser beam from the laser diode of the laser module to generate a projected image of the projector.

More
03-02-2022 publication date

Facing and Quality Control in Microtomy

Number: US20220034768A1
Assignee:

The present disclosure also relates to systems and methods for quality control in histology systems. In some embodiments, a method is provided that includes receiving a tissue block comprising a tissue sample embedded in an embedding material, imaging the tissue block to create a first imaging data of the tissue sample in a tissue section on the tissue block, removing the tissue section from the tissue block, the tissue section comprising a part of the tissue sample, imaging the tissue section to create a second imaging data of the tissue sample in the tissue section, and comparing the first imaging data to the second imaging data to confirm correspondence in the tissue sample in the first imaging data and the second imaging data based on one or more quality control parameters. 1. A method for quality control in histology system comprising:receiving a tissue block comprising a tissue sample embedded in an embedding material;imaging the tissue block, prior to removing one or more tissue sections, to generate a baseline imaging data of the tissue sample;imaging the tissue block to create a first imaging data of the tissue sample in a tissue section of the one or more tissue sections on the tissue block;removing the tissue section from the tissue block, the tissue section comprising a part of the tissue sample;imaging the tissue section to create a second imaging data of the tissue sample in the tissue section;comparing the first imaging data to the second imaging data to confirm correspondence in the tissue sample in the first imaging data and the second imaging data based on one or more quality control parameter; andcomparing the first imaging data, the second imaging data or both to the baseline imaging data.2. The method of claim 1 , wherein the tissue section is non-confirming if there is no correspondence in one or more quality control parameters in the tissue sample in the first imaging data and the second imaging data.3. 
The method of claim 2 , wherein the one ...

More
03-02-2022 publication date

Facing and Quality Control in Microtomy

Number: US20220034769A1
Assignee:

The present disclosure relates to systems and methods for tracking and printing within a histology system. In some embodiments, a system is provided that includes an information reader configured to read identifying data associated with a tissue block, a microtome configured to cut one or more tissue sections from the tissue block, one or more slides for receiving the one or more tissue sections, and a printer configured to receive the identifying data and print, after the one or more tissue sections are cut from the tissue block, one or more labels for the one or more slides, the one or more labels comprising information associating the one or more tissue sections on the one or more slides with the tissue block. 1. A system comprising: an information reader configured to read identifying data associated with a tissue block; a microtome configured to cut one or more tissue sections from the tissue block; one or more slides for receiving the one or more tissue sections; a printer configured to receive the identifying data and print, after the one or more tissue sections are cut from the tissue block, one or more labels for the one or more slides, the one or more labels comprising information associating the one or more tissue sections on the one or more slides with the tissue block; and a visualization system configured to track the one or more tissue sections from the microtome to the one or more slides as the one or more tissue sections are being transferred from the microtome to the one or more slides. 2. The system of claim 1, wherein the visualization system is configured to make a comparison between the one or more tissue sections on the one or more slides with one or more images of the tissue block or the image of the section on a transfer medium. 3. The system of claim 2, wherein the visualization system is configured to make a comparison between the one or more tissue sections on the one or more slides, on the tissue block or the transfer medium with a ...

More
03-02-2022 publication date

SYSTEMS, METHODS, AND MEDIA FOR DIRECTLY RECOVERING PLANAR SURFACES IN A SCENE USING STRUCTURED LIGHT

Number: US20220036118A1
Author: Gupta Mohit, Lee Jongho
Assignee: WISCONSIN ALUMNI RESEARCH FOUNDATION

In accordance with some embodiments, systems, methods and media for directly recovering planar surfaces in a scene using structured light are provided. In some embodiments, a system comprises: a light source; an image sensor; a processor programmed to: cause the light source to emit a pattern comprising a pattern feature with two line segments that intersect on an epipolar line; cause the image sensor to capture an image including the pattern; identify an image feature in the image, the image feature comprising two intersecting line segments that intersect at a point in the image that corresponds to the first epipolar line; estimate a plane hypothesis associated with the pattern feature based on properties of the pattern feature and properties of the image feature, the plane hypothesis associated with a set of parameters characterizing a plane; and identify a planar surface in the scene based on the plane hypothesis. 1. A system for recovering planes in a scene , the system comprising:a light source;an image sensor comprising an array of pixels; cause the light source to emit a two-dimensional light pattern toward the scene, wherein the two-dimensional light pattern comprises a pattern feature that is disposed on a first epipolar line, the pattern feature comprising two intersecting line segments that intersect at a point that is located on the first epipolar line;', 'cause the image sensor to capture an image of the scene including at least a portion of the light pattern;', 'identify an image feature in the image, the image feature comprising two intersecting line segments that intersect at a point in the image that corresponds to the first epipolar line;', 'identify at least the pattern feature as potentially corresponding to the image feature based on the image feature and the pattern feature both being associated with the first epipolar line;', 'estimate a plane hypothesis associated with the pattern feature based on properties of the pattern feature and ...

More
18-01-2018 publication date

A DETECTION SYSTEM FOR DETECTING AND DETERMINING AN INTEGRITY OF PHARMACEUTICAL/PARAPHARMACEUTICAL ARTICLES

Number: US20180017506A1
Author: MONTI Giuseppe
Assignee:

A detection system (S) for detecting and determining an integrity of pharmaceutical/parapharmaceutical articles includes a conveyor device (), for conveying and advancing articles having an advancement section () along which the articles are advanced on a flat plane, in a line one after another in an advancement direction (A). The system (S) has a processor (E) for data processing; at least one colour matrix video camera () for acquiring images of the articles advancing along the advancement section, a laser projector (P) able to emit and project a laser beam (L) so that the laser beam (L) crosses the advancement section () and a high-speed linear three-dimensional video camera () for acquiring the images of the cut profiles of the articles crossing the laser beam. 1. A detection system for detecting and determining an integrity of pharmaceutical/parapharmaceutical articles , comprising:a conveyor device, for conveying and advancing articles, the conveyor device comprising an advancement section in which the articles are advanced on a flat plane, in a line one after another in an advancement direction;a processor for data processing;at least a colour matrix video camera connected to the processor and arranged above the advancement section of the conveyor device and arranged so that the lens thereof is facing towards the advancement section so that the visual field thereof for capturing the images is at the advancement section, along which the articles are advanced in a line, one after another, so that the colour matrix video camera can capture a series of images of an outline and a colour of each article, during the advancement thereof along the advancement section across the visual field of the colour matrix video camera, the processor being programmed to receive the images captured by the colour matrix video camera and to process the images so as to supply, for each article, a first datum relative to the effective shape of the outline and colour of the article;a 
...

More
03-02-2022 publication date

DYNAMIC STRUCTURED LIGHT FOR DEPTH SENSING SYSTEMS BASED ON CONTRAST IN A LOCAL AREA

Number: US20220036571A1
Assignee:

A depth camera assembly (DCA) determines depth information. The DCA projects a dynamic structured light pattern into a local area and captures images including a portion of the dynamic structured light pattern. The DCA determines regions of interest in which it may be beneficial to increase or decrease an amount of texture added to the region of interest using the dynamic structured light pattern. For example, the DCA may identify the regions of interest based on contrast values calculated using a contrast algorithm, or based on the parameters received from a mapping server including a virtual model of the local area. The DCA may selectively increase or decrease an amount of texture added by the dynamic structured light pattern in portions of the local area. By selectively controlling portions of the dynamic structured light pattern, the DCA may decrease power consumption and/or increase the accuracy of depth sensing measurements. 1. A depth camera assembly (DCA) comprising:a structured light (SL) projector configured to project one or more SL patterns into a local area in accordance with illumination instructions;a camera assembly configured to capture images of a portion of the local area including the one or more SL patterns; and provide first illumination instructions to the SL projector, wherein the first illumination instructions cause the SL projector to project a first SL pattern having a first pattern element density into a first region of interest based on a first amount of contrast in the first region of interest; and', 'provide second illumination instructions to the SL projector, wherein the second illumination instructions cause the SL projector to project a second SL pattern having a second pattern element density into a second region of interest based on a second amount of contrast in the second region of interest, wherein the second amount of contrast is greater than the first amount of contrast., 'a controller configured to2. 
The DCA of claim 1 , ...
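The contrast-driven choice of pattern element density can be sketched as two small helpers: measure contrast in a region of interest, then map low contrast (little natural texture) to a dense pattern and high contrast to a sparse one to save projector power. The RMS-contrast measure, the thresholds, and the linear falloff are all illustrative assumptions, not the patent's contrast algorithm:

```python
import numpy as np

def local_contrast(gray, x0, y0, x1, y1):
    """RMS contrast (std / mean of intensity) of a rectangular region
    of interest in a grayscale image."""
    roi = np.asarray(gray, float)[y0:y1, x0:x1]
    mean = roi.mean()
    return roi.std() / mean if mean > 0 else 0.0

def pattern_density_for(contrast, low=0.05, high=0.25,
                        dense=1.0, sparse=0.25):
    """Map measured contrast to a structured-light pattern element
    density: textureless regions get the dense pattern, well-textured
    regions a sparse one, with a linear falloff in between."""
    if contrast < low:
        return dense
    if contrast > high:
        return sparse
    t = (contrast - low) / (high - low)
    return dense + t * (sparse - dense)
```

A perfectly flat region has zero contrast and therefore receives the densest pattern.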

More
17-01-2019 publication date

DETECTION DEVICE, DETECTION SYSTEM, DETECTION METHOD, AND STORAGE MEDIUM

Number: US20190017812A1
Assignee: NIKON CORPORATION

A detection device includes: a detector that detects an object from a first viewpoint; an information calculator that calculates first model information including shape information on the object from the first viewpoint by using detection results of the detector; a light source calculator that calculates light source information on the light source by using a first taken image obtained by imaging a space including a light source that irradiates the object with illumination light and including the object; and a position calculator that calculates a positional relation between the first viewpoint and the object by using the light source information as information used to integrate the first model information and second model information including shape information obtained by detecting the object from a second viewpoint different from the first viewpoint. 1. A detection device , comprising:a detector that detects an object from a first viewpoint;an information calculator that calculates first model information including shape information on the object from the first viewpoint by using detection results of the detector;a light source calculator that calculates light source information on the light source by using a first taken image obtained by imaging a space including a light source that irradiates the object with illumination light and including the object; anda position calculator that calculates a positional relation between the first viewpoint and the object by using the light source information as information used to integrate the first model information and second model information including shape information obtained by detecting the object from a second viewpoint different from the first viewpoint.2. 
The detection device according to claim 1 , wherein the position calculator calculates the positional relation between the first viewpoint and the object by using an irradiation direction of the illumination light with respect to the object as the light source ...

More
17-01-2019 publication date

DEFORMATION PROCESSING SUPPORT SYSTEM AND DEFORMATION PROCESSING SUPPORT METHOD

Number: US20190017815A1
Assignee: KAWASAKI JUKOGYO KABUSHIKI KAISHA

A deformation processing system acquires target shape data of a work including a reference line, acquires intermediate shape data from the work having an intermediate shape having a reference line drawn thereon, puts these two pieces of data side by side by positioning the reference lines relative to each other, and calculates a necessary deformation amount of the work based on the difference between the two pieces of data put side by side. 1. A deformation processing support system that calculates a necessary deformation amount necessary for deforming from an intermediate shape to a target shape of a work based on a difference between the intermediate shape and the target shape thereof in deformation processing of the work , the deformation processing support system , comprising:a target shape data acquiring part that acquires target shape data of the work whose surface has a reference line disposed thereon;an intermediate shape data acquiring part that acquires intermediate shape data from the work having the intermediate shape whose surface has a reference line drawn thereon, during the deformation processing; anda necessary deformation amount calculating part that puts the target shape data and the intermediate shape data side by side by positioning the reference lines relative to each other, to calculate the necessary deformation amount for each of plural positions on the work based on a difference between the target shape data and the intermediate shape data put side by side.2. 
The deformation processing support system according to claim 1 , whereina first and a second reference lines are disposed on each of the surface of the target shape data and the surface of the work, and whereinthe necessary deformation amount calculating part positions the first reference lines relative to each other and the second reference lines relative to each other, of the target shape data and the intermediate shape data such that the first reference line on the target shape data ...
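Putting the two data sets side by side by their reference lines and taking the difference can be sketched in a few lines. This is a deliberately simplified illustration (the function name is mine, and alignment is reduced to a translation that matches one reference point per shape, whereas the claims align whole reference lines):

```python
import numpy as np

def necessary_deformation(target_pts, intermediate_pts,
                          target_ref, intermediate_ref):
    """Translate the intermediate shape so its reference point matches
    the target's, then return the per-point displacement still needed
    to reach the target shape (the necessary deformation amount)."""
    target_pts = np.asarray(target_pts, float)
    intermediate_pts = np.asarray(intermediate_pts, float)
    shift = np.asarray(target_ref, float) - np.asarray(intermediate_ref, float)
    aligned = intermediate_pts + shift
    return target_pts - aligned
```

If the intermediate shape already matches the target up to the alignment translation, the necessary deformation is zero everywhere.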

More
03-02-2022 publication date

MULTI-SENSOR DATA FUSION-BASED AIRCRAFT DETECTION, TRACKING, AND DOCKING

Number: US20220036750A1
Assignee:

Tracking aircraft in and near a ramp area is described herein. One method includes receiving camera image data of an aircraft while the aircraft is approaching or in the ramp area, receiving LIDAR/Radar sensor data of an aircraft while the aircraft is approaching or in the ramp area, merging the camera image data and the LIDAR/Radar sensor data into a merged data set, and wherein the merged data set includes at least one of: data for determining the position and orientation of the aircraft relative to the position and orientation of the ramp area, data for determining speed of the aircraft, data for determining direction of the aircraft, data for determining proximity of the aircraft to a particular object within the ramp area, and data for forming a three dimensional virtual model of at least a portion of the aircraft from the merged data. 1. A method for tracking an aircraft , comprising:receiving camera image data of an aircraft while the aircraft is approaching or in the ramp area;receiving LIDAR/Radar sensor data of an aircraft while the aircraft is approaching or in the ramp area;merging the camera image data and the LIDAR/Radar sensor data into a merged data set; andwherein the merged data set includes at least one of: data for determining the position and orientation of the aircraft relative to the position and orientation of the ramp area, data for determining speed of the aircraft, data for determining direction of the aircraft, data for determining proximity of the aircraft to a particular object within the ramp area, and data for forming a three dimensional virtual model of at least a portion of the aircraft from the merged data.2. The method of claim 1 , wherein the method includes:comparing the three dimensional virtual model to one or more aircraft reference models stored in memory of a computing device to find a match between the three dimensional virtual model and one of the aircraft reference models to determine an aircraft type.3. 
The method of ...
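The merged data set above is said to support determining position, speed, and direction of the aircraft. As a rough illustrative sketch only (the patent does not disclose a fusion rule; `merge_detections`, the 0.3 weight, and `speed_and_heading` are invented here), blending the two sensor estimates might look like:

```python
import math

def merge_detections(camera_xy, lidar_xyz, camera_weight=0.3):
    """Blend a camera (x, y) estimate with a LIDAR/Radar (x, y, z) estimate.
    The 0.3 weight is an arbitrary placeholder, not from the patent."""
    cx, cy = camera_xy
    lx, ly, lz = lidar_xyz
    x = camera_weight * cx + (1 - camera_weight) * lx
    y = camera_weight * cy + (1 - camera_weight) * ly
    return {"x": x, "y": y, "z": lz}  # height comes from LIDAR/Radar only

def speed_and_heading(p0, p1, dt):
    """Speed (m/s) and heading (degrees from +y, clockwise) from two
    successive merged positions taken dt seconds apart."""
    dx, dy = p1["x"] - p0["x"], p1["y"] - p0["y"]
    return math.hypot(dx, dy) / dt, math.degrees(math.atan2(dx, dy))
```

Two consecutive merged positions then give the speed and direction terms of the claimed data set.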

17-01-2019 publication date

OPTICAL PROJECTOR HAVING SWITCHABLE LIGHT EMISSION PATTERNS

Number: US20190018137A1
Assignee: Microsoft Technology Licensing, LLC

An optical projector comprises a collimated light source, a pattern generating optical element, and a variable optical element positioned optically between the collimated light source and the pattern generating optical element. The variable optical element is configured to adjust a divergence of a light beam incident on the pattern generating optical element. The pattern generating optical element is configured to emit patterned light when the variable optical element is in a first state, and to emit non-patterned light when the variable optical element is in a second state. 1. An optical projector , comprising:a light source;a pattern generating optical element; anda variable optical element positioned optically between the light source and the pattern generating optical element, the variable optical element configured to adjust a divergence of a light beam incident on the pattern generating optical element, and wherein the pattern generating optical element is configured to emit non-patterned light when the variable optical element is in a first state, and to emit patterned light when the variable optical element is in a second state.2. The optical projector of claim 1 , wherein the light source is a collimated light source.3. The optical projector of claim 1 , wherein the variable optical element comprises a switchable diffuser.4. The optical projector of claim 3 , wherein the switchable diffuser includes a diffractive element positioned on a surface of a substrate layer.5. The optical projector of claim 4 , wherein the diffractive element is encapsulated in a liquid crystal layer.6. The optical projector of claim 5 , further comprising a controller configured to apply differing voltages to the liquid crystal layer during the first and second states.7. The optical projector of claim 6 , wherein a refractive index of the liquid crystal layer matches a refractive index of the substrate during the second state.8. The optical projector of claim 7 , wherein a ...

16-01-2020 publication date

ENERGY OPTIMIZED IMAGING SYSTEM WITH SYNCHRONIZED DYNAMIC CONTROL OF DIRECTABLE BEAM LIGHT SOURCE AND RECONFIGURABLY MASKED PHOTO-SENSOR

Number: US20200018592A1
Assignee:

An energy optimized imaging system that includes a light source that has the ability to illuminate specific pixels in a scene, and a sensor that has the ability to capture light with specific pixels of its sensor matrix, temporally synchronized such that the sensor captures light only when the light source is illuminating pixels in the scene. 1. A system for capturing images of a scene comprising: a directable light source for illuminating a portion of the scene; a sensor configurable for capturing light from the illuminated portion of the scene; and a controller performing the functions of: determining a sequence of portions of the scene to be illuminated; directing the sensor to sequentially capture light from the sequence of portions; and directing the directable light source to sequentially illuminate the sequence of portions; wherein the sensor is a time-of-flight sensor. 2. The system of wherein the controller performs the further function of: temporally synchronizing the directable light source and the sensor such that the sensor captures light when the directable light source is illuminating the scene. 3. The system of wherein the controller performs the further function of: spatially synchronizing the directable light source and the sensor such that the directable light source illuminates a portion of the scene that the sensor is configured to capture. 4. The system of wherein the directable light source and the photosensor are substantially arranged in a rectified stereo configuration. 5. The system of wherein the sensor measures distance based on direct or indirect time-of-flight. 6. The system of wherein the sequence of portions comprises a sequence of lines of pixels of the scene. 7. The system of wherein the sequence of portions includes only a subset of the lines of pixels comprising the scene. 8. The system of wherein the directable light source is a scanning projector. 9. The system of wherein: the scene is sensed by an array of pixels; the illuminated ...
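The temporal and spatial synchronization described above can be sketched as a controller loop that illuminates each portion of the scene only while the sensor is configured to capture that same portion. This is a toy model, not the patented implementation; `illuminate` and `capture` are hypothetical callbacks:

```python
def synchronized_scan(portions, illuminate, capture):
    """Sequence a directable light source and a configurable sensor so
    each portion is lit only while the sensor captures it."""
    frames = []
    for portion in portions:
        illuminate(portion)              # spatial sync: light on this portion
        frames.append(capture(portion))  # temporal sync: capture while lit
        illuminate(None)                 # light off between portions
    return frames
```

Energy is saved because the source never illuminates a portion the sensor is not reading.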

21-01-2021 publication date

SYSTEMS AND METHODS FOR IMPROVED RADAR SCANNING COVERAGE AND EFFICIENCY

Number: US20210018608A1
Assignee:

System and methods for obtaining sizing of an irregularly shaped object. The methods comprise: displaying, by a display of a handheld scanner system, a 3D optical figure of a subject; displaying a grid superimposed over the 3D optical figure of the subject, the grid comprising a plurality of marks arranged in rows and columns; generating, by a radar module of the handheld scanner system, range data specifying a sensed spacing between the radar module and at least a surface of a first portion of the irregularly shaped object which is covered by at least one item; receiving, by a processor of the handheld scanner system, the range data from the radar module as the subject is being scanned; and causing at least one first visual characteristic of a first mark of said plurality of marks to change when corresponding range data is successfully received by the processor. 1. A method for obtaining sizing of an irregularly shaped object , comprising:displaying, by a display of a handheld scanner system, a 3D optical figure of a subject;displaying a grid superimposed over the 3D optical figure of the subject, the grid comprising lines defining a plurality of cells arranged in rows and columns;providing a mark within each of the plurality of cells;generating, by a radar module of the handheld scanner system, range data specifying a sensed spacing between the radar module and at least a surface of a first portion of the irregularly shaped object which is covered by at least one item;receiving, by a processor of the handheld scanner system, the range data from the radar module as the subject is being scanned;analyzing, by the processor, the range data to identify at least one first mark of the marks provided in the plurality of cells that corresponds to a location on the subject for which radar depth data was likely successfully acquired; andcausing, by the processor, generation of an optimized radar pattern by varying at least one first visual characteristic of the first 
mark.2. ...

21-01-2021 publication date

Method and Apparatus for Determining Relative Motion between a Time-of-Flight Camera and an Object in a Scene Sensed by the Time-of-Flight Camera

Number: US20210018627A1
Assignee:

A method for determining relative motion between a time-of-flight camera and an object in a scene sensed by the time-of-flight camera is provided. The method includes receiving at least two sets of raw images of the scene from the time-of-flight camera, each set including at least one raw image. The raw images are based on correlations of a modulated reference signal and measurement signals of the time-of-flight camera. The measurement signals are based on a modulated light signal emitted by the object. The method includes determining, for each set of raw images, a value indicating a respective phase difference between the modulated light and reference signals based on the respective set of raw images, and determining information about relative motion between the time-of-flight camera and object based on the values indicating the phase differences. The method includes outputting the information about relative motion between the time-of-flight camera and the object. 1. A method for determining relative motion between a time-of-flight camera and an object in a scene sensed by the time-of-flight camera, wherein the object emits a modulated light signal, the method comprising: receiving at least two sets of raw images of the scene from the time-of-flight camera, wherein the at least two sets of raw images each comprise at least one raw image, wherein the raw images are based on correlations of a modulated reference signal and measurement signals of the time-of-flight camera, and wherein the measurement signals are based on the modulated light signal emitted by the object; determining, for each set of raw images, a value indicating a respective phase difference between the modulated light signal and the modulated reference signal based on the respective set of raw images; determining information about relative motion between the time-of-flight camera and the object based on the values indicating the phase differences; and outputting the information about relative motion
...
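In ToF pipelines of this kind, the per-set phase value is conventionally recovered from four correlation samples, and a change in distance (hence relative motion) follows from the change in phase between sets. A sketch under the usual 4-phase, round-trip convention; the patent itself leaves the demodulation formula open, so the functions below are illustrative:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_from_samples(a0, a1, a2, a3):
    """Phase offset from four correlation samples taken at 0, 90, 180
    and 270 degrees of the modulated reference signal."""
    return math.atan2(a3 - a1, a0 - a2)

def radial_velocity(phase_t0, phase_t1, dt, f_mod):
    """Relative radial velocity implied by the phase change between two
    raw-image sets captured dt seconds apart (round-trip convention)."""
    dphi = (phase_t1 - phase_t0 + math.pi) % (2 * math.pi) - math.pi
    return C * dphi / (4 * math.pi * f_mod) / dt
```

The wrap-around handling in `radial_velocity` keeps the phase difference in (-pi, pi], mirroring the ambiguity of the wrapped phase itself.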

17-01-2019 publication date

ACTIVE ILLUMINATION 3D ZONAL IMAGING SYSTEM

Number: US20190019302A1
Assignee:

An active illumination range camera comprising illumination and imaging systems that is operable to provide a range image of a scene in the imaging system's field of view (FOV) by partitioning the range camera FOV into sub-FOVs, and controlling the illumination and imaging systems to sequentially illuminate and image portions of the scene located in the respective sub-FOVs. 1. An active illumination range camera operable to determine distances to features in a scene, the range camera comprising: an imaging system characterized by a field of view (FOV) and comprising a photosensor having light sensitive pixels, and an optical system configured to collect light from a scene in the FOV and image the collected light onto pixels of the photosensor; an illumination system controllable to generate and direct a field of illumination (FOI) to illuminate at least a portion of the FOV; and a controller operable to: partition the at least a portion of the FOV into a plurality of zones; control the illumination system to generate and direct a FOI to sequentially illuminate the zones in turn and thereby features of the scene within the zones; based at least in part on a given zone being illuminated by light transmitted in the FOI, activate pixels in a corresponding region of the photosensor on which the imaging system images light from the features in the zone to accumulate photocharge responsive to light reflected by the features from the transmitted light, and inactivate pixels on which light from the features is not imaged; and determine and use data based on the photocharge accumulated by the pixels to determine distances to features in the scene and provide a range image for the scene. 2. The active illumination range camera according to wherein sequentially illuminating the zones in turn comprises illuminating each zone a plurality of times to acquire data sufficient to determine distances to features of the scene in the zone before illuminating a next zone. 3.
The ...

17-01-2019 publication date

Volumetric depth video recording and playback

Number: US20190019303A1
Assignee:

Embodiments generally relate to a machine-implemented method of automatically adjusting the range of a depth data recording executed by at least one processing device. The method comprises determining, by the at least one processing device, at least one position of a subject to be recorded; determining, by the at least one processing device, at least one spatial range based on the position of the subject; receiving depth information; and constructing, by the at least one processing device, a depth data recording based on the received depth information limited by the at least one spatial range. 1.-83. (canceled) 84. A machine-implemented method of combining at least a first depth data recording and a second depth data recording executed by at least one processing device, the method comprising: determining, by the processing device, a first viewing angle of a first depth data recording; determining, by the processing device, a second viewing angle of a second depth data recording; and generating a combined depth data recording comprising each of the first and second depth data recordings, wherein the visibility of each of the first and second depth data recordings within the combined depth data recording is determined by the processing device according to the first and second viewing angles. 85. The method of claim 84, wherein each depth data recording comprises a plurality of depth data points. 86. The method of claim 85, wherein the plurality of depth data points are arranged into at least one set of plurality of depth data points that share a common viewing angle. 87. The method of claim 85, wherein the visibility of the each of the first and second depth data recordings within the combined depth data recording is altered by the processing device adjusting the display size of at least one of the plurality of depth data points. 88.
The method of claim 84 , wherein generating the combined depth data recording comprises generating a virtual object by reconstructing data ...
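Limiting a depth recording by a spatial range derived from the subject position can be sketched as a per-frame box filter over the depth points. The axis-aligned cube and the function name are assumptions for illustration, not the patent's method:

```python
def clip_depth_frame(points, center, half_extent):
    """Keep only depth points inside an axis-aligned cube around the
    tracked subject; everything outside the spatial range is dropped."""
    cx, cy, cz = center
    return [
        (x, y, z)
        for (x, y, z) in points
        if abs(x - cx) <= half_extent
        and abs(y - cy) <= half_extent
        and abs(z - cz) <= half_extent
    ]
```

Clipping each incoming frame this way keeps the recording bounded to the subject rather than the whole sensed volume.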

03-02-2022 publication date

METHOD AND SYSTEM FOR HIGH-SPEED DUAL-VIEW BAND-LIMITED ILLUMINATION PROFILOMETRY

Number: US20220038676A1
Assignee:

A system and a method for 3D imaging of an object, the method comprising projecting sinusoidal fringe patterns onto the object using a projecting unit and capturing fringe patterns deformed by the object, alternately by at least a first camera and a second camera, and recovering a 3D image of the object pixel by pixel from mutually incomplete images provided by the first camera and the second camera, by locating a point in images of the second camera that matches a selected pixel of the first camera; determining estimated 3D coordinates and wrapped phase based on calibration of the cameras, determining a horizontal coordinate on the plane of a projector of the projecting unit based on calibration of the projector, and using a wrapped phase value to recover a 3D point of 3D coordinates (x, y, z). 1. A system for 3D imaging of an object, the system comprising: a projection unit, said projection unit comprising a light source and a projector; and at least two projection unit cameras, said cameras being positioned on a same side of said projector; wherein said projection unit projects sinusoidal fringe patterns onto the object and the cameras alternately capture, point by point, fringe patterns deformed by the object, depth information being encoded into the phase of the deformed fringe patterns, and the object being recovered by phase demodulation and reconstruction. 2. The system of claim 1, wherein said light source is a high-coherent light source of a power of at least 50 mW. 3. The system of claim 1, wherein said cameras have an imaging speed of at least 2 k frames/second, and image resolution at least 1000×800 pixels. 4. The system of claim 1, wherein said projection unit comprises a spatial light modulator, and said spatial light modulator has a refreshing rate of at least 4 kHz and on board memory of at least 1 MB. 5. The system of claim 1, wherein said projection unit comprises a spatial light modulator, and said spatial light ...
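The phase demodulation step referred to above is commonly done with N-step phase shifting. A minimal per-pixel sketch, assuming the textbook fringe model I_k = A + B*cos(phi + 2*pi*k/N) rather than anything specific to this patent:

```python
import math

def wrapped_phase(intensities):
    """Wrapped phase at one pixel from N fringe images, each shifted by
    2*pi/N, assuming I_k = A + B*cos(phi + 2*pi*k/N) with B > 0."""
    n = len(intensities)
    s = sum(i * math.sin(2 * math.pi * k / n) for k, i in enumerate(intensities))
    c = sum(i * math.cos(2 * math.pi * k / n) for k, i in enumerate(intensities))
    return math.atan2(-s, c)  # wrapped to (-pi, pi]
```

The DC term A cancels in both sums, which is why the method tolerates background illumination; unwrapping and triangulation against the projector calibration then yield the (x, y, z) point.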

17-01-2019 publication date

DETECTOR DEVICE WITH MAJORITY CURRENT AND ISOLATION MEANS

Number: US20190019821A1
Assignee: Sony Depthsensing Solutions SA/NV

The present disclosure relates to a detector device assisted by majority current, comprising a semiconductor layer of a first conductivity type, at least two control regions of the first conductivity type, at least one detection region of a second conductivity type opposite to the first conductivity type and a source for generating a majority carrier current associated with an electrical field, wherein it further comprises isolation means formed in the semiconductor layer and located between said two control regions, for deflecting the first majority carrier current generated by the first source between said two control regions and, hence, increasing the length of the first majority current path, reducing the amplitude of said first majority carrier current and, therefore, reducing the power consumption of the detector device. 2. The detector device according to claim 1, wherein the isolation means comprise at least one trench isolation region. 3. The detector device according to claim 1, wherein the thickness of the semiconductor layer is adapted for back side illumination and wherein the detection region, the control regions and the isolation means are formed in the front side of the semiconductor layer. 4. The detector device according to claim 1, further comprising a second source for generating at least one second majority carrier current in the semiconductor layer between the front side of the semiconductor layer and the backside of semiconductor layer, said second majority carrier current being associated with a respective second electrical field, the generated minority carriers being directed towards the front side of the semiconductor layer under the influence of the second electrical field respectively associated with the at least one second majority carrier current. 5.
The detector device according to claim 4 , further comprising a passivation layer formed on the backside of the semiconductor layer and ...

18-01-2018 publication date

IMAGING SYSTEM, IMAGING DEVICE, METHOD OF IMAGING, AND STORAGE MEDIUM

Number: US20180020207A1
Assignee: NIKON CORPORATION

An imaging system, including: a first body; a first imager that is provided in the first body and images an object; a first information calculator that is provided in the first body and calculates first model information including at least one of shape information and texture information of the object based on an imaging result of the first imager; a pattern setter that sets a reference pattern indicating at least a part of the first model information calculated by the first information calculator; a first projector that projects the reference pattern toward the object; a second body; a second imager that is provided in the second body and images the object onto which the reference pattern is projected; a second information calculator that is provided in the second body and calculates second model information including at least one of shape information and texture information of the object based on an imaging result of the second imager; and a pattern extractor that extracts the reference pattern projected by the first projector from the imaging result of the second imager. 1. 
An imaging system , comprising:a first body;a first imager that is provided in the first body and images an object;a first information calculator that is provided in the first body and calculates first model information including at least one of shape information and texture information of the object based on an imaging result of the first imager;a pattern setter that sets a reference pattern indicating at least a part of the first model information calculated by the first information calculator;a first projector that projects the reference pattern toward the object;a second body;a second imager that is provided in the second body and images the object onto which the reference pattern is projected;a second information calculator that is provided in the second body and calculates second model information including at least one of shape information and texture information of the object based on ...

16-01-2020 publication date

Updating online store inventory based on physical store inventory

Number: US20200019754A1
Assignee: Trax Technology Solutions Pte Ltd

A system for identifying products and tracking inventory in a retail store based on analysis of image data is provided. The system may comprise at least one processor configured to receive image data from a plurality of image sensors mounted in a retail store; analyze the image data to estimate a current inventory of at least one product type in the retail store; receive product supply information associated with the at least one product type in the retail store; determine that an online order from a virtual store made during a first time period will be fulfilled by an employee of the retail store during a second time period; determine a predicted inventory of the at least one product type during the second time period; and provide information to the virtual store regarding the predicted inventory of the at least one product type during the second time period.
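The predicted-inventory step can be illustrated with a deliberately simple stock balance: current shelf count, minus units already promised to online orders, plus supply arriving before the fulfillment window. The formula and name are assumptions for illustration; the patent does not commit to them:

```python
def predicted_inventory(current, pending_online_orders, scheduled_restock):
    """Stock expected at fulfillment time: shelf count estimated from
    image analysis now, minus units promised to online orders placed in
    the first time period, plus supply due before the second time
    period (floored at zero)."""
    return max(0, current - pending_online_orders + scheduled_restock)
```

The virtual store would compare this prediction against the new order's quantity before accepting it.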

16-01-2020 publication date

MULTI-FREQUENCY HIGH-PRECISION OBJECT RECOGNITION METHOD

Number: US20200019808A1
Assignee:

A multi-frequency high-precision object recognition method is disclosed, wherein a multi-frequency light emitting unit is used to emit lights of different frequencies onto an object-to-be-tested, and a multi-frequency image sensor unit is used to fetch the image of lights reflected from the object-to-be-tested. The X axis and the Y axis form a single-piece planar image, while lights of different frequencies are used to form image depth in a Z axis. The sample light in the Z axis includes two infrared light narrow range image signals, each having wavelength between 850 nm and 1050 nm, and wavelength width between 10 nm and 60 nm. Calculate to obtain a plurality of single-piece planar images in the X axis and the Y axis as sampled by different wavelength widths in the Z axis, then superimpose the plurality of single-piece planar images into a 3-dimension stereoscopic relief image for precise comparison and recognition. 1. A multi-frequency high-precision object recognition method, comprising the following steps: providing a recognition hardware mechanism contained in a recognition system, the recognition hardware mechanism having at least a multi-frequency light emitting unit and at least a multi-frequency image sensor unit; irradiating lights of different frequencies emitted by the at least a multi-frequency light emitting unit onto an object-to-be-tested, the lights emitted by the multi-frequency light emitting unit containing at least two infrared lights, having their wavelength ranges between 850 nm to 1050 nm; fetching by the multi-frequency image sensor unit images of the object-to-be-tested irradiated by lights of different frequencies, such that the multi-frequency image sensor unit fetches respective narrow range image signals contained in the at least two reflected infrared lights respectively, the wavelength ranges of the narrow range image signals are between 850 nm to 1050 nm corresponding to that of the multi-frequency light emitting unit, and a wavelength width
...

16-01-2020 publication date

REDUCING TEXTURED IR PATTERNS IN STEREOSCOPIC DEPTH SENSOR IMAGING

Number: US20200020120A1
Author: Grunnet-Jepsen Anders
Assignee: Intel Corporation

Systems, devices, and techniques related to removing infrared texture patterns used for depth sensors are discussed. Such techniques may include applying a color correction transform to raw input image data including a residual infrared texture pattern to generate output image data such that the output image data has a reduced IR texture pattern residual with respect to the raw input image data. 1. (canceled)2. A device comprising:an infrared (IR) projector to project an IR texture pattern onto a scene;an image sensor comprising a plurality of sub-pixels, and a color filter array adjacent to the image sensor, the color filter array comprising red, green, and blue color filter elements, each of the color filter elements corresponding to one of the sub-pixels, and the image sensor to generate raw input image data based on an image capture of the scene comprising a projection of the IR texture pattern, wherein the raw input image data comprises an IR texture pattern residual from the IR texture pattern received at the image sensor through at least some of the red, green, and blue color filter elements;an image signal processor coupled to the image sensor, the image signal processor to receive the raw input image data and to apply a color correction transform to the raw input image data or image data corresponding to the raw input image data to generate output image data, wherein the color correction transform is to correct for the IR texture pattern residual such that the output image data has a reduced IR texture pattern residual with respect to the raw input image data.3. The device of claim 2 , wherein the color filter array consists of the red claim 2 , green claim 2 , and blue color filter elements.4. 
The device of claim 2 , wherein a plurality of pixel positions within the raw input image data comprise the IR texture pattern residual and the output image data having a reduced IR texture pattern residual with respect to the raw input image data comprises the raw ...
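A color correction transform of the kind described is, at heart, a per-pixel 3x3 matrix multiply. The sketch below uses a toy crosstalk model (each channel picks up a fraction ALPHA of the total signal as IR texture residual, i.e. raw = (I + ALPHA*J) @ true with J the all-ones matrix), whose inverse has the closed form I - ALPHA/(1 + 3*ALPHA) * J. The model and constant are invented for illustration, not Intel's calibration:

```python
ALPHA = 0.2  # assumed fractional IR leakage per channel (invented)

def mix(pixel):
    """Toy sensor model: every channel gains ALPHA * (r + g + b) of IR
    texture residual, i.e. raw = (I + ALPHA * J) @ true."""
    t = sum(pixel)
    return tuple(p + ALPHA * t for p in pixel)

def correct(pixel):
    """Color-correction transform: closed-form inverse of the toy model,
    inv(I + ALPHA*J) = I - ALPHA / (1 + 3*ALPHA) * J."""
    t = sum(pixel)
    k = ALPHA / (1.0 + 3.0 * ALPHA)
    return tuple(p - k * t for p in pixel)
```

Because the residual enters linearly in this model, one fixed matrix removes it from every pixel; a real ISP would fit the matrix from calibration captures.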

16-01-2020 publication date

HIGH SPEED STRUCTURED LIGHT SYSTEM

Number: US20200020130A1
Assignee: Cognex Corporation

The present disclosure provides a high resolution structured light system that is also capable of maintaining high throughput. The high resolution structured light system includes one or more image capture devices, such as a camera and/or an image sensor, a projector, and a blurring element. The projector is configured to project a binary pattern so that the projector can operate at high throughput. The binary projection pattern is subsequently filtered by the blurring element to remove high frequency components of the binary projection pattern. This filtering smoothes out sharp edges of the binary projection pattern, thereby creating a blurred projection pattern that changes gradually from the low value to the high value. This gradual change can be used by the structured light system to resolve spatial changes in the 3D profile that could not otherwise be resolved using a binary pattern. 1. A structured light system comprising:a projector configured to generate a projection pattern comprising a first plurality of intensity levels;an optical element in a light path between the projector and a plane of interest, wherein the optical element is configured to blur the projection pattern to provide a blurred projection pattern onto the plane of interest, wherein the blurred projection pattern comprises a second plurality of intensity levels, wherein the second plurality is greater than the first plurality; andone or more image capture devices configured to capture an image of the blurred projection pattern projected onto the plane of interest.2. The structured light system of claim 1 , wherein the projection pattern comprising the first plurality of intensity levels comprises a binary projection pattern comprising a first intensity level and a second intensity level.3. 
The structured light system of claim 2 , wherein the binary projection pattern comprises a periodic square-wave pattern having a sharp edge between the first intensity level and the second intensity level. ...
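The effect of the blurring element on a binary projection pattern can be reproduced numerically: convolving a two-level square wave with a Gaussian yields a pattern with many intermediate intensity levels that ramp gradually across each edge. A 1D sketch (kernel radius and sigma are arbitrary choices, not values from the patent):

```python
import math

def gaussian_kernel(radius, sigma):
    """Normalized 1D Gaussian kernel of length 2*radius + 1."""
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur(signal, kernel):
    """1D convolution with edge clamping (models the blurring element)."""
    r = len(kernel) // 2
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            acc += w * signal[min(max(i + j - r, 0), n - 1)]
        out.append(acc)
    return out

# One binary (two-level) projection pattern, then its blurred version.
square = ([0.0] * 8 + [1.0] * 8) * 4
blurred = blur(square, gaussian_kernel(3, 1.5))
```

The blurred pattern's extra intensity levels are what let the system resolve sub-period spatial detail a pure binary pattern cannot.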

21-01-2021 publication date

Coding Distance Topologies for Structured Light Patterns for 3D Reconstruction

Number: US20210019896A1
Assignee: Cognex Corp

Methods, systems, and devices for 3D measurement and/or pattern generation are provided in accordance with various embodiments. Some embodiments include a method of pattern projection that may include projecting one or more patterns. Each pattern from the one or more patterns may include an arrangement of three or more symbols that are arranged such that for each symbol in the arrangement, a degree of similarity between said symbol and a most proximal of the remaining symbols in the arrangement is less than a degree of similarity between said symbol and a most distal of the remaining symbols in the arrangement. Some embodiments further include: illuminating an object using the one or more projected patterns; collecting one or more images of the illuminated object; and/or computing one or more 3D locations of the illuminated object based on the one or more projected patterns and the one or more collected images.
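The arrangement property above can be stated as a small checker: for every symbol, its similarity to the nearest remaining symbol(s) must be lower than its similarity to the most distal one. Symbol values and the similarity function below are toy stand-ins (1/(1 + |a - b|)), and ties at equal distance are handled worst-case; none of this is from the patent:

```python
def similarity(a, b):
    """Toy symbol similarity: 1 for identical values, falling off with
    the difference between the symbol values."""
    return 1.0 / (1.0 + abs(a - b))

def has_distance_topology(arrangement):
    """True if every symbol is less similar to its nearest neighbour(s)
    in the arrangement than to its farthest one (ties worst-case)."""
    n = len(arrangement)
    for i, s in enumerate(arrangement):
        others = [j for j in range(n) if j != i]
        d_near = min(abs(j - i) for j in others)
        d_far = max(abs(j - i) for j in others)
        near_sim = max(similarity(s, arrangement[j]) for j in others if abs(j - i) == d_near)
        far_sim = min(similarity(s, arrangement[j]) for j in others if abs(j - i) == d_far)
        if not near_sim < far_sim:
            return False
    return True
```

Placing dissimilar symbols close together is what makes local windows of the projected pattern easy to tell apart during correspondence search.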

10-02-2022 publication date

DEPTH AND CONTOUR DETECTION FOR ANATOMICAL TARGETS

Number: US20220039632A1
Assignee:

Techniques for detecting depth contours of an anatomical target and for enhancing imaging of the anatomical target are provided. In an example, a reference pattern of light can be projected across an anatomical target and an image of the reflected light pattern upon the anatomical target can be captured. The captured light pattern can be analyzed to determine contour information, which can then be used to provide 3D cues to enhance a 2-dimensional image of the anatomical target. 1. An image enhancement system to enhance a 2-dimensional display image , the image enhancement system comprising:a first illumination source;a first optical path configured to project a first pattern of light provided by the first illumination source at a surface of an anatomical target;a sensor configured to detect light reflected from the surface and transmit an image signal, the image signal based on the light reflected from the surface; andan imaging system configured to receive the image signal, to detect a second pattern of light reflected from the surface and to determine contour information of the surface of the anatomical target based on the second pattern of light.2. The image enhancement system of claim 1 , wherein the first illumination source is configured to generate coherent light.3. The image enhancement system of claim 1 , wherein the first illumination source is configured to generate polarized light.4. The image enhancement system of claim 3 , wherein the first optical path is configured to project at least two distinct beams of light toward the surface; andwherein the first pattern is an interference pattern of the at least two distinct beams at the surface.5. The image enhancement system of claim 4 , wherein the first illumination light source is configured to project a single beam of light; andwherein the first optical path includes a beam splitter to provide the at least two distinct beams.6. 
The image enhancement system of claim 1 , wherein the first optical path is ...

17-01-2019 publication date

Pixel Circuit and Image Sensing System

Number: US20190020837A1
Author: Lo Yen-Hung, Yang Meng-Ta
Assignee:

The present application provides a pixel circuit, applied in an image sensing system. The pixel circuit is coupled to a first collection node and a second collection node. The pixel circuit includes a first capacitor; a second capacitor; a first shutter switch coupled between the first capacitor and the first collection node; a second shutter switch coupled between the second capacitor and the second collection node; a third shutter switch coupled between the second capacitor and the first collection node; a fourth shutter switch coupled between the first capacitor and the second collection node; and a common mode reset module coupled to the first capacitor and the second capacitor. 1. A pixel circuit , applied in an image sensing system , wherein the pixel circuit is coupled to a first collection node and a second collection node , characterized in that , the pixel circuit comprises:a first capacitor;a second capacitor;a first shutter switch, coupled between the first capacitor and the first collection node;a second shutter switch, coupled between the second capacitor and the second collection node;a third shutter switch, coupled between the second capacitor and the first collection node;a fourth shutter switch, coupled between the first capacitor and the second collection node; anda common mode reset module, coupled to the first capacitor and the second capacitor.2. The pixel circuit as claim 1 , characterized in that claim 1 , within a first time claim 1 , the first shutter switch and the second shutter switch are conducted claim 1 , the third shutter switch and the fourth shutter switch are cutoff; and within a second time claim 1 , the first shutter switch and the second shutter switch are cutoff claim 1 , the third shutter switch and the fourth shutter switch are conducted.3. The pixel circuit as claim 2 , characterized in that claim 2 , the first time and the second time have the same time length.4. 
The pixel circuit as claim 1, characterized by ...
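The two-phase switching in claims 2 and 3 (straight pairing in the first time, cross pairing in the second) can be illustrated with a toy charge-accumulation model. The function name, phase encoding, and charge values below are invented for illustration; this is not the patent's circuit.

```python
def integrate(phases, qa, qb):
    """Accumulate charge qa/qb from the two collection nodes onto two
    capacitors; phase 1 pairs (C1<-A, C2<-B), phase 2 swaps the pairing."""
    c1 = c2 = 0.0
    for phase in phases:
        if phase == 1:      # first/second shutter switches conducted
            c1 += qa
            c2 += qb
        else:               # third/fourth shutter switches conducted
            c1 += qb
            c2 += qa
    return c1, c2

# equal-length alternating phases (claim 3): static node charge accumulates
# equally on both capacitors, i.e. the common mode builds up symmetrically
c1, c2 = integrate([1, 2, 1, 2], qa=3.0, qb=1.0)
```

With a single phase the pairing is visible as a differential charge: `integrate([1], 3.0, 1.0)` gives `(3.0, 1.0)`.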

Подробнее
21-01-2021 дата публикации

SYSTEM AND METHOD FOR GUIDING CARD POSITIONING USING PHONE SENSORS

Номер: US20210021305A1
Принадлежит: Capital One Services, LLC

A position alignment system facilitates positioning of a contactless card in a ‘sweet spot’ in a target volume relative to a contactless card reading device. Alignment logic uses information captured from available imaging devices such as infrared proximity detectors, cameras, infrared sensors, dot projectors, and the like to guide the card to a target location. The captured image information is processed to identify a card position, trajectory and predicted location using one or both of a machine learning model and/or a Simultaneous Localization and Mapping logic. Trajectory adjustment and prompt identification may be intelligently controlled and customized using machine-learning techniques to customize guidance based on the preference and/or historical behavior of the user. As a result, the speed and accuracy of contactless card alignment is improved and received NFC signal strength is maximized, thereby reducing the occurrence of dropped transactions. 1. A method for authenticating a transaction for a device includes the steps of: receiving a request to authenticate the device; displaying, on the device, a prompt to communicatively couple a card with the device; detecting, by a proximity sensor of the device, that the card is proximate to the device, the card storing authentication data for the device; capturing, by a camera of the device, a series of images as the card approaches the device; processing the series of images to determine a position and a trajectory of the card as the card approaches the device; detecting, based on the series of images, that the card is at a target location relative to the device; triggering an interface of the device to request an authentication cryptogram from the card when the card is detected at the target location; and receiving the authentication cryptogram from the card in response to the request, the authentication cryptogram formed from authentication data stored by the card. 2. The method of wherein the step of processing the ...
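The position/trajectory step in the claim (predict where the card is heading from a series of images, then prompt the user toward the target) can be sketched with simple linear extrapolation. The function names, tolerance, and prompt strings are invented; a real system would use the ML model or SLAM logic named in the abstract.

```python
def predict_next(positions):
    """Linear extrapolation of the card centre from the last two frames."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    return (2 * x1 - x0, 2 * y1 - y0)

def prompt(position, target, tol=5.0):
    """Very rough guidance prompt steering the predicted position to target."""
    dx = target[0] - position[0]
    dy = target[1] - position[1]
    if abs(dx) <= tol and abs(dy) <= tol:
        return "hold"
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

nxt = predict_next([(0, 0), (10, 5)])   # card centre per frame (pixels)
advice = prompt(nxt, target=(20, 40))
```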

Подробнее
16-01-2020 дата публикации

CONTROL METHOD, CONTROL DEVICE AND COMPUTER DEVICE

Номер: US20200021729A1

A control method is provided. The control method includes that: brightness of a scene is acquired; light emission power of a structured light projector is determined based on the brightness of the scene; and the structured light projector is controlled to emit light at the light emission power. A control device and a computer device are also provided. 1. A control method, applied to a structured light projector, comprising: acquiring brightness of a scene; determining light emission power of the structured light projector based on the brightness of the scene; and controlling the structured light projector to emit light at the light emission power. 2. The control method of claim 1, wherein the brightness of the scene is detected by a light sensor. 3. The control method of claim 1, wherein acquiring the brightness of the scene comprises: acquiring a shot image of the scene; and calculating the brightness of the scene based on the shot image of the scene. 4. The control method of claim 1, wherein the method further comprises: presetting a mapping relationship between the light emission power and multiple preset brightness ranges, wherein each piece of the light emission power corresponds to a respective one of the multiple preset brightness ranges, and wherein determining the light emission power of the structured light projector based on the brightness of the scene comprises: determining a preset brightness range within which the brightness of the scene falls; and determining the light emission power corresponding to the preset brightness range according to the preset mapping relationship. 5. The control method of claim 1, wherein the higher the brightness of the scene is, the higher the light emission power is. 6. The control method of claim 3, wherein calculating the brightness of the scene based on the shot image of the scene comprises: calculating an average of pixel values of the shot image of the scene to obtain the brightness of the scene. 7.
A control device ...
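The brightness-to-power lookup of claims 4–6 (mean pixel value of a shot image, matched against preset brightness ranges, each mapped to one power level, higher brightness giving higher power) can be sketched as follows. The range bounds and power values are invented for the example.

```python
# assumed mapping: (brightness range [lo, hi)) -> emission power fraction
BRIGHTNESS_TO_POWER = [
    ((0, 50), 0.2),     # dark scene  -> low power
    ((50, 150), 0.5),   # medium scene
    ((150, 256), 1.0),  # bright scene -> high power (claim 5's monotonicity)
]

def scene_brightness(image):
    """Average of pixel values of the shot image (claim 6)."""
    flat = [px for row in image for px in row]
    return sum(flat) / len(flat)

def emission_power(brightness):
    """Look up the preset range containing the brightness (claim 4)."""
    for (lo, hi), power in BRIGHTNESS_TO_POWER:
        if lo <= brightness < hi:
            return power
    raise ValueError("brightness out of range")

power = emission_power(scene_brightness([[10, 20], [30, 40]]))  # mean 25
```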

Подробнее
21-01-2021 дата публикации

Three-dimensional computational imaging method and apparatus based on single-pixel sensor, and non-transitory computer-readable storage medium

Номер: US20210021799A1
Принадлежит: Beijing Institute of Technology BIT

The present disclosure proposes a three-dimensional computational imaging method and apparatus based on a single-pixel sensor, and a storage medium. The method includes the following. A stripe coding is combined with a two-dimensional imaging coding through a preset optical coding to generate a new optical coding, and the new optical coding is loaded into a spatial light modulator (SLM); a two-dimensional spatial information and depth information of a scene are coupled into a one-dimensional measurement value by using a single-pixel detector and the SLM loaded with the new optical coding; and the two-dimensional spatial information and the depth information of the scene are reconstructed, from the one-dimensional measurement value through a decoupling algorithm, for three-dimensional imaging.
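The core single-pixel idea (scene information coupled into one-dimensional measurements by coded patterns on the SLM, then decoded) can be illustrated in miniature with orthogonal Hadamard patterns on a 4-element "scene". The ±1 patterns and the simple inverse `x = H·y/4` are stand-ins for the patent's stripe/2-D coding and decoupling algorithm.

```python
# 4x4 Walsh-Hadamard patterns: symmetric and orthogonal, H.H = 4*I
H = [
    [1,  1,  1,  1],
    [1, -1,  1, -1],
    [1,  1, -1, -1],
    [1, -1, -1,  1],
]

def measure(scene):
    """One single-pixel reading per displayed pattern: sum(pattern*scene)."""
    return [sum(h * s for h, s in zip(row, scene)) for row in H]

def reconstruct(y):
    """Decoupling step: since H.H = 4I, the scene is H.y / 4."""
    return [sum(h * v for h, v in zip(row, y)) / 4 for row in H]

scene = [0.0, 1.0, 0.5, 2.0]
recovered = reconstruct(measure(scene))
```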

Подробнее
28-01-2016 дата публикации

Overlapping pattern projector

Номер: US20160025993A1
Принадлежит: Apple Inc

An optoelectronic device includes a semiconductor substrate, an array of optical emitters arranged on the substrate in a two-dimensional pattern, a projection lens and a diffractive optical element (DOE). The projection lens is mounted on the semiconductor substrate and is configured to collect and focus light emitted by the optical emitters so as to project optical beams containing a light pattern corresponding to the two-dimensional pattern of the optical emitters on the substrate. The DOE is mounted on the substrate and is configured to produce and project multiple overlapping replicas of the pattern.

Подробнее
10-02-2022 дата публикации

Facing and Quality Control in Microtomy

Номер: US20220042887A1
Принадлежит:

The present disclosure relates to systems and methods for facing a tissue block. In some embodiments, a method is provided for facing a tissue block that includes imaging a tissue block to generate imaging data of the tissue block, the tissue block comprising a tissue sample embedded in an embedding material, estimating, based on the imaging data, a depth profile of the tissue block, wherein the depth profile comprises a thickness of the embedding material to be removed to expose the tissue sample to a pre-determined criteria, and removing the thickness of the embedding material to expose the tissue to the pre-determined criteria. 1. A method for facing a tissue block comprising: imaging a tissue block to generate imaging data of the tissue block, the tissue block comprising a tissue sample embedded in an embedding material; estimating, based on the imaging data, a depth profile of the tissue block, wherein the depth profile comprises a thickness of the embedding material to be removed to expose the tissue sample to a predetermined criteria; and removing the thickness of the embedding material to expose the tissue sample to the predetermined criteria, wherein the tissue block is imaged with a structured light to determine the depth profile. 2. The method of claim 1, further comprising: progressively removing one or more sections from a tissue block comprising a tissue sample embedded in an embedding material; imaging the one or more sections to generate imaging data associated with the one or more sections; and confirming, based on the imaging data, that the tissue sample is exposed to the predetermined criteria. 3. A method for facing a tissue block comprising: illuminating the tissue block with a structured light in a UV range; imaging the tissue block, prior to removing the one or more sections, to generate a baseline imaging data of the tissue sample; progressively removing one or more sections from a tissue block comprising a tissue sample embedded in an embedding ...

Подробнее
28-01-2021 дата публикации

METHODS AND SYSTEMS FOR AUTOMATICALLY ANNOTATING ITEMS BY ROBOTS

Номер: US20210023716A1
Автор: TAKAOKA Yutaka
Принадлежит: Toyota Research Institute, Inc.

A robot automatically annotates items by training a semantic segmentation module. The robot includes one or more imaging devices, and a controller comprising machine readable instructions. The machine readable instructions, when executed by one or more processors, cause the controller to capture an image with the one or more imaging devices, identify a target area in the image in response to one or more points on the image designated by a user, obtain depth information for the target area, calculate a center of an item corresponding to the target area based on the depth information, rotate the imaging device based on the center, and capture an image of the item at a different viewing angle in response to rotating the view of the imaging device. 1. A robot comprising: one or more imaging devices; and a controller comprising machine readable instructions, the machine readable instructions, when executed by one or more processors, causing the controller to: capture an image with the one or more imaging devices; identify a target area in the image in response to one or more points on the image designated by a user; obtain depth information for the target area; calculate a center of an item corresponding to the target area based on the depth information; rotate the one or more imaging devices based on the center; and capture another image of the item at a different viewing angle in response to rotating the one or more imaging devices. 2. The robot of claim 1, further comprising: an arm; and a gripping assembly rotatably coupled to the arm and comprising the one or more imaging devices, wherein the machine readable instructions, when executed by one or more processors, cause the controller to rotate the one or more imaging devices by rotating the gripping assembly. 3.
The robot of claim 1, further comprising: a locomotion device configured to move the robot, wherein the machine readable instructions, when executed by one or more processors, cause the controller ...
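The "calculate a center of an item based on the depth information" step can be sketched as pinhole back-projection of the target-area pixels to 3-D camera coordinates followed by a centroid. The intrinsics (`fx`, `fy`, `cx`, `cy`) and the sample pixels are invented values, not the patent's.

```python
def backproject(u, v, z, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Pinhole back-projection of pixel (u, v) with depth z to camera coords."""
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)

def item_center(pixels_with_depth):
    """Centroid of the target area's 3-D points (the rotation target)."""
    pts = [backproject(u, v, z) for u, v, z in pixels_with_depth]
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(3))

# three target-area pixels, all 1 m away
center = item_center([(320, 240, 1.0), (420, 240, 1.0), (320, 340, 1.0)])
```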

Подробнее
28-01-2021 дата публикации

ROAD GRADIENT DETERMINING METHOD AND APPARATUS, STORAGE MEDIUM, AND COMPUTER DEVICE

Номер: US20210024074A1
Автор: LIU CHUN

A road gradient determining method includes obtaining a three-dimensional road image formed by a two-dimensional road image of a road and laser point cloud data of the road and selecting a plurality of nodes from the three-dimensional road image as control points. The method further includes generating, according to the control points, a first spline curve indicating a road elevation and converting the first spline curve into a second spline curve indicating a road gradient. Finally, the method includes obtaining location information and determining a first road gradient according to the location information and the second spline curve. Apparatus and non-transitory computer-readable storage medium counterpart embodiments are also contemplated. 1. A road gradient determining method, applied to a computer device, the method comprising: obtaining, by processing circuitry of the computer device, a three-dimensional road image formed by a two-dimensional road image of a road and laser point cloud data of the road; selecting, by the processing circuitry of the computer device, a plurality of nodes from the three-dimensional road image as control points; generating, by the processing circuitry of the computer device, according to the control points, a first spline curve indicating a road elevation; converting, by the processing circuitry of the computer device, the first spline curve into a second spline curve indicating a road gradient; obtaining, by the processing circuitry of the computer device, location information; and determining, by the processing circuitry of the computer device, a first road gradient according to the location information and the second spline curve. 2.
The method according to claim 1, wherein the obtaining the three-dimensional road image comprises: obtaining the two-dimensional road image and the laser point cloud data of the road; selecting, from the laser point cloud data, a laser point closest to a point on the two-dimensional road image; obtaining ...
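The elevation-to-gradient conversion (first spline curve differentiated into a second curve, then queried at the vehicle's location) can be sketched with a piecewise-linear elevation profile and a finite-difference slope; the patent's cubic splines are replaced here by this simpler stand-in, and the chainage/elevation values are invented.

```python
import bisect

def gradient_at(distances, elevations, x):
    """Slope (rise over run) of the elevation profile at chainage x."""
    i = bisect.bisect_right(distances, x)       # segment containing x
    i = min(max(i, 1), len(distances) - 1)      # clamp to valid segments
    return (elevations[i] - elevations[i - 1]) / (distances[i] - distances[i - 1])

# control points of a steady 5 % uphill road: 1 m rise per 20 m travelled
g = gradient_at([0, 20, 40, 60], [0.0, 1.0, 2.0, 3.0], 30.0)
```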

Подробнее
24-01-2019 дата публикации

Apparatus for Three-Dimensional Measurement of an Object, Method and Computer Program with Image-based Triggering

Номер: US20190025049A1
Принадлежит:

An apparatus for three-dimensional measurement of an object includes a trigger configured to obtain image information from a measurement camera and to trigger, in dependence on image content of the image information, a measurement output or an evaluation of the image information by an evaluator for determining measurement results. Further, a respective method and a respective computer program are described. 1. Apparatus for three-dimensional measurement of an object, comprising: a trigger configured to acquire image information from a measurement camera and to trigger, in dependence on image content of the image information, forwarding of the image information to an evaluator for determining measurement results or an evaluation of the image information by an evaluator for determining measurement results; wherein the trigger is configured to detect when the image content has shifted with respect to a reference image content by at least a predetermined shift or by more than a predetermined shift and to trigger, in dependence on the detection of a shift, forwarding of the image information or the evaluation of the image information by the evaluator for determining measurement results. 2. Apparatus according to claim 1, wherein the trigger is configured to trigger, in dependence on the detection of a shift, forwarding of the image information or the evaluation of the image information by the evaluator for determining measurement results, in order to generate measurement results at a specific spatial distance or to obtain measurement results at equal spatial distances. 3. Apparatus according to claim 1, wherein triggering the measurement output is performed exclusively based on the image content. 4. Apparatus according to claim 1, wherein the trigger is configured to perform image analysis and to trigger the measurement output or the evaluation of the image information in dependence on the image analysis. 5. Apparatus according to claim 1, ...

Подробнее
25-01-2018 дата публикации

INTERACTIVE SECURITY ALERT SYSTEM

Номер: US20180025600A1
Принадлежит:

A security alert system includes a light emitting unit, an image capturing unit, a distance arithmetic unit, a multimedia player unit and an alarm generating unit, wherein the image capturing unit and the distance arithmetic unit capture and analyze varying images of a viewer and of an optical pattern emitted from the light emitting unit to determine whether the viewer is within the range of an exhibition/performance activity, such that the multimedia player unit displays a corresponding exhibition/performance activity to be seen or heard by the viewer; if the viewer approaches a boundary line of the display content, the alarm generating unit generates an alarm signal or sound to be seen or heard by the viewer, thereby providing interactive effects between the exhibition/performance activity and the viewer. 1. A security alert system adapted to be installed in an exhibition hall to exhibit a display content, comprising: a light emitting unit adapted to install in the exhibition hall for emitting a light beam consisting of an optical pattern onto a viewer, wherein said optical pattern varies depending on a distance between the viewer and the exhibition/performance activity; an image capturing unit adapted to install in the exhibition hall corresponding to said light emitting unit, including an optical lens for capturing images of said optical pattern and the viewer and a photosensitive element coupled with said optical lens for sensing said images; a distance arithmetic unit adapted to install in the exhibition hall and coupled electrically with said image capturing unit for receiving said images from said image capturing unit and calculating a distance between the viewer and the exhibition/performance activity based on variation of said images; a multimedia player unit adapted to install in the exhibition hall and coupled electrically with said distance arithmetic unit such that when said distance between the viewer and the exhibition/performance activity is smaller than
a ...

Подробнее
24-01-2019 дата публикации

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM

Номер: US20190025681A1
Автор: IDA Kentaro, Ikeda Takuya
Принадлежит: SONY CORPORATION

The present technology relates to an information processing apparatus, an information processing method, and a program for enabling stabilization of a position of an image projected by a projector. The information processing apparatus includes an acquisition unit configured to acquire projection area information that is information regarding a range of a projection area of a projector, and an image control unit configured to control a display image range that is a range of contents to be displayed in the projection area on the basis of the projection area information. The present technology can be applied to an audio visual (AV) system using a drive-type projector or a handy-type projector, for example. 1. An information processing apparatus comprising: an acquisition unit configured to acquire projection area information that is information regarding a range of a projection area of a projector; and an image control unit configured to control a display image range that is a range of contents to be displayed in the projection area on the basis of the projection area information. 2. The information processing apparatus according to claim 1, wherein the projector projects an image of a first color corresponding to one image that configures an image of the contents, and an image of a second color corresponding to the one image and different from the first color at different times, and the image control unit controls a position of the image of the first color or the image of the second color on the basis of the projection area information. 3. The information processing apparatus according to claim 2, wherein the image control unit performs control such that positions of the image of the first color and the image of the second color, the positions being viewed from the projector, are further separated as a speed at which the projection area of the projector moves becomes faster. 4. The information processing apparatus according to claim 1, wherein the projector projects an ...

Подробнее
24-01-2019 дата публикации

IMAGE RECOGNITION DEVICE, IMAGE RECOGNITION METHOD AND IMAGE RECOGNITION UNIT

Номер: US20190025986A1
Автор: YAMAUCHI Taisuke
Принадлежит: SEIKO EPSON CORPORATION

The image recognition device includes a measurement point determination section adapted to detect a finger located between a camera and a screen from an image obtained by the camera, and determine a fingertip of the finger, a linear pattern display section adapted to make a projector display a linear pattern on an epipolar line which passes through the fingertip and is determined from a positional relationship between the camera and a projector, and a pattern determination section adapted to determine, from an image including the linear pattern obtained by the camera, a difference between the linear pattern included in that image, and the linear pattern in a case in which the fingertip is absent. 1. An image recognition device used in an image display unit including an imaging device adapted to image an image display surface, and a detecting image display device adapted to display a detecting image on the image display surface, the image recognition device comprising: a measurement point determination section adapted to detect an object located between the imaging device and the image display surface from an image obtained by the imaging device to determine a measurement target point of the object; a linear pattern display section adapted to make the detecting image display device display a linear pattern provided with a periodic pattern on an epipolar line which is determined from a positional relationship between the imaging device and the detecting image display device, and passes through the measurement target point; and a pattern determination section adapted to determine, from an image obtained by the imaging device and including the linear pattern, a difference between the linear pattern included in the image and the linear pattern in a case in which the object is absent. 2. The image recognition device according to claim 1, wherein the pattern determination section determines continuity of the linear pattern. 3.
The image recognition device according to claim 1 ...
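The epipolar line through the fingertip, "determined from a positional relationship between the camera and the projector", is conventionally computed as l' = F·x from the fundamental matrix F of the camera/projector pair. The sketch below uses the rectified-stereo F (pure horizontal baseline), for which epipolar lines are simply the matching image rows; F and the fingertip coordinates are invented.

```python
def epipolar_line(F, x):
    """l' = F.x in homogeneous coords; line (a, b, c) means a*u' + b*v' + c = 0."""
    u, v, w = x
    return tuple(F[i][0] * u + F[i][1] * v + F[i][2] * w for i in range(3))

def on_line(line, point, tol=1e-9):
    return abs(sum(l * p for l, p in zip(line, point))) < tol

# rectified pair (pure horizontal translation): epipolar lines are v' = v
F = [[0, 0, 0],
     [0, 0, -1],
     [0, 1, 0]]

fingertip = (100.0, 80.0, 1.0)
line = epipolar_line(F, fingertip)   # the row v' = 80 in the other view
```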

Подробнее
10-02-2022 дата публикации

MACHINE LEARNING AND VISION-BASED APPROACH TO ZERO VELOCITY UPDATE OBJECT DETECTION

Номер: US20220044424A1
Принадлежит: HONEYWELL INTERNATIONAL INC.

Techniques for detecting motion of a vehicle are disclosed. Optical flow techniques are applied to the entirety of the received images from an optical sensor mounted to a vehicle. Motion detection techniques are then imposed on the optical flow output to remove image portions that correspond to objects moving independent from the vehicle and determine the extent, if any, of movement by the vehicle from the remaining image portions. Motion detection can be performed via a machine learning classifier. In some aspects, motion can be detected by extracting the depth of received images in addition to optical flow. In additional or alternative aspects, the optical flow and/or motion detection techniques can be implemented by at least one artificial neural network. 1. A method of determining if a vehicle is moving, the method comprising: acquiring a first image corresponding to a first time period; acquiring a second image corresponding to a second time period, wherein the first image and the second image are taken from a camera from a frame of reference of the vehicle; detecting, via an optical flow algorithm implemented over the entirety of the first and second images, a difference between one or more portions of the first image and a respective one or more portions of the second image, wherein the one or more image portions correspond to an object moving independent of the vehicle; removing the one or more image portions of the first and second images based on the difference; and determining movement of the vehicle based on the remaining image portions via a machine learning algorithm. 2. The method of claim 1, wherein detecting a difference between the image portions between the first and second images further comprises detecting via a depth algorithm implemented over the entirety of the first and second images. 3. The method of claim 2, wherein the depth algorithm and the optical flow algorithm are implemented by at least one artificial neural network. 4. The method of ...
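The claimed pipeline — remove flow regions belonging to independently moving objects, then judge ego-motion from the remaining regions — can be sketched on per-region flow magnitudes using a median/MAD outlier filter in place of the ML classifier. The thresholds and flow values are invented.

```python
from statistics import median

def vehicle_motion(flow_magnitudes, outlier_factor=3.0, eps=0.5):
    """Drop regions whose flow deviates strongly from the dominant motion
    (independent movers), then call the vehicle moving if the surviving
    median flow exceeds eps."""
    med = median(flow_magnitudes)
    spread = median(abs(f - med) for f in flow_magnitudes) or 1e-6
    kept = [f for f in flow_magnitudes
            if abs(f - med) <= outlier_factor * spread]
    return median(kept) > eps, kept

# stationary vehicle, one object passing through the scene
moving, kept = vehicle_motion([0.1, 0.0, 0.2, 6.0, 0.1])
# moving vehicle, again with one independent mover
moving2, _ = vehicle_motion([2.0, 2.1, 1.9, 2.2, 8.0])
```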

Подробнее
23-01-2020 дата публикации

METHOD AND APPARATUS FOR REMOTE SENSING OF OBJECTS UTILIZING RADIATION SPECKLE

Номер: US20200025552A1
Автор: Shirley Lyle G.
Принадлежит:

Disclosed are systems and methods to extract information about the size and shape of an object by observing variations of the radiation pattern caused by illuminating the object with coherent radiation sources and changing the wavelengths of the source. Sensing and image-reconstruction systems and methods are described for recovering the image of an object utilizing projected and transparent reference points and radiation sources such as tunable lasers. Sensing and image-reconstruction systems and methods are also described for rapid sensing of such radiation patterns. A computational system and method is also described for sensing and reconstructing the image from its autocorrelation. This computational approach uses the fact that the autocorrelation is the weighted sum of shifted copies of an image, where the shifts are obtained by sequentially placing each individual scattering cell of the object at the origin of the autocorrelation space. 1. An imaging apparatus comprising: a first source of radiation configured to create a first beam of radiation having a wavelength and illuminate an object to produce a projected reference spot at a first location on the object; a second source of radiation configured to create a second beam of radiation that is mutually coherent with the first beam of radiation and simultaneously illuminate the object to produce a wider-area illumination beam at a second location on the object; a controller configured to control the wavelength; at least one sensor at a location relative to the object configured to determine at least one speckle pattern intensity for at least two instances of the wavelength; a processor configured to receive information from the at least one sensor; and the processor configured to utilize the projected reference spot and the at least two instances of the wavelength to produce a three-dimensional representation of a region of the second location on the object. 2.
The imaging apparatus of wherein the second source of ...
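The abstract's key computational fact — the autocorrelation is a weighted sum of shifted copies of the image, each copy shifted so that one cell sits at the origin — can be checked numerically in 1-D. Both functions below are illustrative; the values are invented.

```python
def autocorrelation(x):
    """Direct definition: r[k] = sum_i x[i] * x[i+k] for non-negative lags."""
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) for k in range(n)]

def shifted_copy_sum(x):
    """Same r, built as a weighted sum of shifted copies of x: for each
    cell j, shift x so cell j is at the origin and weight the copy by x[j]."""
    n = len(x)
    r = [0.0] * n
    for j, w in enumerate(x):
        for k in range(n - j):
            r[k] += w * x[j + k]
    return r

x = [1.0, 2.0, 3.0]
r1 = autocorrelation(x)
r2 = shifted_copy_sum(x)
```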

Подробнее
10-02-2022 дата публикации

POINT CLOUD ANNOTATION FOR A WAREHOUSE ENVIRONMENT

Номер: US20220044430A1
Принадлежит:

A system is provided for automatic identification and annotation of objects in a point cloud in real time. The system can automatically annotate a point cloud that identifies coordinates of objects in three-dimensional space while data is being collected for the point cloud. The system can train models of physical objects based on training data, and apply the models to point clouds that are generated by various point cloud generating devices to annotate the points in the point clouds with object identifiers. The automatically annotated point cloud can be used for various applications, such as blueprints, map navigation, and determination of robotic movement in a warehouse. 1. A computer-implemented method comprising: receiving, from a vehicle, optical scan data and image data in real time as the vehicle moves in a warehouse; recognizing objects that are represented in the image data; determining identifiers that represent the objects; annotating the optical scan data with the identifiers to generate annotated point cloud data; and transmitting the annotated point cloud data to a display device configured to present a point cloud based on the annotated point cloud data in real time. 2. The computer-implemented method of claim 1, wherein recognizing objects includes: training an object identification model; and identifying the objects using the object identification model. 3. The computer-implemented method of claim 2, wherein training an object identification model includes: training the object identification model based on the annotated point cloud data. 4. The computer-implemented method of claim 1, further comprising: identifying first annotated objects that are represented in the annotated point cloud data; and filtering data indicative of the first annotated objects from the annotated point cloud data, wherein the point cloud is presented without the first annotated objects. 5. The computer-implemented method of claim 4, wherein the first annotated objects ...
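The annotation step of claim 1 (attach recognized object identifiers to the scan points) can be sketched as a point-in-box test: each 3-D point gets the identifier of the detection box that contains it, or `None`. The box representation and object id are invented.

```python
def annotate(points, detections):
    """Label each (x, y, z) point with the id of the first axis-aligned
    3-D box that contains it; detections maps id -> (lo_corner, hi_corner)."""
    labeled = []
    for x, y, z in points:
        label = None
        for obj_id, (lo, hi) in detections.items():
            if all(l <= c <= h for c, l, h in zip((x, y, z), lo, hi)):
                label = obj_id
                break
        labeled.append(((x, y, z), label))
    return labeled

detections = {"pallet_7": ((0, 0, 0), (2, 2, 1))}   # hypothetical detection
cloud = annotate([(1.0, 1.0, 0.5), (5.0, 5.0, 0.5)], detections)
```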

Подробнее
23-01-2020 дата публикации

METHOD AND ARRANGEMENT FOR OPTICALLY CAPTURING AN OBJECT WITH A LIGHT PATTERN PROJECTION

Номер: US20200025557A1
Принадлежит:

A method and an arrangement for optically capturing an object with a light pattern projection are provided. The method includes projecting a predetermined number of light patterns on the object, capturing at least one image of the object when each of a respective light pattern is projected to obtain position location dependent image intensity values for a respective projected light pattern, determining a linear combination of the predetermined number of light patterns, and generating a synthesized image intensity value for at least one image location in the image plane which corresponds to an area of the local region. The synthesized image intensity value at the image location is determined by a linear combination of the image intensity values which includes the linear combination of the projection intensity values for the local region, and the projection intensity values are replaced by the image intensity values at the image location. 1. A method for optically capturing an object with a light pattern projection, the method comprising: projecting a predetermined number of light patterns onto the object, each of the light patterns including location dependent projection intensity values in a projection plane; capturing at least one image of the object when a respective light pattern is projected onto the object to obtain position location dependent image intensity values for the respective light pattern in an image plane; determining a linear combination of the predetermined number of light patterns, which yields a predetermined distribution of the projection intensity values in a local region of the projection plane, the local region including an area of the projection plane including at least one location; and generating a synthesized image intensity value for at least one image location in the image plane which corresponds to an area of the local region, the synthesized image intensity value at the at least one image location being determined by a linear
combination ...
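The synthesis step — take the linear combination determined for the projected patterns and apply it pixel-wise to the captured images, with image intensities substituted for projection intensities — reduces to a per-pixel weighted sum. The coefficients and image values below are invented; (1, −1) is a common choice because it cancels any intensity term shared by the two captures.

```python
def synthesize(images, coeffs):
    """Apply the pattern linear combination pixel-wise to captured images."""
    n = len(images[0])
    return [sum(c * img[p] for c, img in zip(coeffs, images)) for p in range(n)]

# two captured images (one per projected pattern) over three image locations
i1 = [10.0, 12.0, 20.0]
i2 = [10.0, 10.0, 14.0]
synth = synthesize([i1, i2], [1.0, -1.0])   # difference image
```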

Подробнее
23-01-2020 дата публикации

Imager for Detecting Visual Light and Projected Patterns

Номер: US20200025561A1
Автор: Konolige Kurt
Принадлежит:

Methods and systems for depth sensing are provided. A system includes a first and second optical sensor each including a first plurality of photodetectors configured to capture visible light interspersed with a second plurality of photodetectors configured to capture infrared light within a particular infrared band. The system also includes a computing device configured to (i) identify first corresponding features of the environment between a first visible light image captured by the first optical sensor and a second visible light image captured by the second optical sensor; (ii) identify second corresponding features of the environment between a first infrared light image captured by the first optical sensor and a second infrared light image captured by the second optical sensor; and (iii) determine a depth estimate for at least one surface in the environment based on the first corresponding features and the second corresponding features. 1. A system comprising: a first optical sensor and a second optical sensor, wherein each optical sensor comprises a first plurality of photodetectors configured to capture visible light interspersed with a second plurality of photodetectors configured to capture infrared light within a particular infrared band, and wherein the first optical sensor is separate from the second optical sensor; a light source configured to project infrared light of a wavelength within the particular infrared band onto an environment; and a computing device configured to: identify first corresponding features of the environment between a first visible light image captured by the first optical sensor and a second visible light image captured by the second optical sensor, wherein identifying the first corresponding features comprises determining a first correlation surface from the first visible light image and the second visible light image; identify second corresponding features of the environment between a first infrared light image captured by the first optical sensor and a second
...
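At its core, the depth estimation described above is two-view triangulation: once features are matched between the two sensors (in the visible band, the infrared band, or both), each disparity converts to a depth via the pinhole stereo relation Z = f·B/d. A minimal Python sketch under an idealized rectified-pair assumption; `fused_depth` and its 50/50 weighting are illustrative, not the patent's method of combining the two bands:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo triangulation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def fused_depth(vis_disp_px, ir_disp_px, focal_px, baseline_m, w_vis=0.5):
    """Blend the visible-band and infrared-band disparity estimates
    (hypothetical equal weighting) before triangulating."""
    d = w_vis * vis_disp_px + (1.0 - w_vis) * ir_disp_px
    return depth_from_disparity(focal_px, baseline_m, d)

# Example: 700 px focal length, 10 cm baseline, 35 px disparity -> 2.0 m
print(depth_from_disparity(700.0, 0.10, 35.0))
```

Averaging disparities (rather than depths) keeps the fusion linear in the matched image coordinates; a real system would instead weight by per-band correlation confidence.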

28-01-2021 publication date

Device and Method for Three-dimensionally Measuring Linear Object

Number: US20210025698A1
Assignee: KURASHIKI BOSEKI KABUSHIKI KAISHA

This device for three-dimensionally measuring a linear object includes a stereo camera, a transmission-light illuminator, and an arithmetic device. The stereo camera images a linear object. The transmission-light illuminator faces the stereo camera so that the linear object is placed between the transmission-light illuminator and the stereo camera. The arithmetic device acquires a three-dimensional shape of the linear object. The stereo camera acquires a transmitted-light image of the linear object captured while the linear object is illuminated by the transmission-light illuminator. The arithmetic device acquires the three-dimensional shape of the linear object based on the transmitted-light image. 1. A device for three-dimensionally measuring a linear object , the device comprising:a stereo camera that images a linear object;a transmission-light illuminator facing the stereo camera so that the linear object is placed between the transmission-light illuminator and the stereo camera; andan arithmetic device that acquires a three-dimensional shape of the linear object, whereinthe stereo camera acquires a transmitted-light image of the linear object captured while the linear object is illuminated by the transmission-light illuminator, andthe arithmetic device acquires the three-dimensional shape of the linear object based on the transmitted-light image.2. The device for three-dimensionally measuring a linear object according to claim 1 , whereinthe stereo camera acquires a reflected-light image of the linear object captured while the linear object is not illuminated by the transmission-light illuminator, andthe arithmetic device acquires the three-dimensional shape of the linear object based on the transmitted-light image and the reflected-light image.3. 
The device for three-dimensionally measuring a linear object according to claim 1 , whereinthe stereo camera includes a first camera and a second camera,the stereo camera acquires a first reflected-light image and a ...

28-01-2021 publication date

APPARATUS AND METHOD FOR CHECKING TYRES

Number: US20210025833A1
Assignee: PIRELLI TYRE S.P.A.

Apparatus for checking tyres, comprising: a support frame; a flange; and an acquisition system of three-dimensional images of a surface of a tyre, the acquisition system being mounted on the support frame and comprising: a matrix camera, a linear laser source, and a reflecting surface which intersects the propagation axis of the linear laser beam and the optical axis of the matrix camera, wherein a first angle formed between a first section and a second section of the optical axis mutually symmetrical with respect to a normal to the reflecting surface in the respective point of incidence to the reflecting surface, is obtuse, and wherein a second angle formed between a first section and a second section of the propagation axis mutually symmetrical with respect to a normal to the reflecting surface in the respective point of incidence to the reflecting surface, is obtuse. 1-36. (canceled) 37. A method for checking tyres, comprising:arranging a tyre to be checked; a support frame; a flange fixed on the support frame for attaching the support frame to a movement member of the apparatus; and a matrix camera with an optical axis, a linear laser source to emit a linear laser beam having a propagation plane and a propagation axis, and the first section and the second section of the propagation axis are rectilinear sections incident on the reflecting surface in a respective point of incidence and mirror each other with respect to a line perpendicular to the reflecting surface in the respective point of incidence; the first section and the second section of the optical axis are rectilinear sections incident on the reflecting surface in a respective point of incidence and mirror each other with respect to a line perpendicular to the reflecting surface in the respective point of incidence; the first section of the propagation axis is situated on the side of the matrix camera with respect to the point of ...

24-01-2019 publication date

Determining a mark in a data record with three-dimensional surface coordinates of a scene, captured by at least one laser scanner

Number: US20190026899A1
Author: Florian Seeleitner

A method for determining a mark in a data record with three-dimensional surface coordinates of a scene includes ascertaining a first collection of edge points in a three-dimensional coordinate system of the data record, fitting an equalization area into at least a subset of the edge points of the first collection of edge points to permit the edge points in the three-dimensional coordinate system to be partly positioned on a first side of the equalization area and partly positioned on a second side, lying opposite the first side, of the equalization area, displacing edge points of the first collection of edge points into the equalization area to permit a corrected collection of edge points to be formed, and determining the mark in the three-dimensional coordinate system based on the corrected collection of edge points or the corrected closed circumferential edge line.
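The "equalization area" can be read as a best-fit surface through the edge points, with the correction step displacing each point onto that surface. A self-contained sketch, assuming the area is a plane of the form z = ax + by + c fitted by least squares, and that displacement along the z axis is acceptable (the patent fixes neither choice):

```python
def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through 3-D edge points,
    via the normal equations solved with Cramer's rule."""
    sxx = sxy = sx = syy = sy = n = 0.0
    sxz = syz = sz = 0.0
    for x, y, z in points:
        sxx += x * x; sxy += x * y; sx += x
        syy += y * y; sy += y; n += 1.0
        sxz += x * z; syz += y * z; sz += z
    m = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    r = [sxz, syz, sz]

    def det3(t):
        return (t[0][0] * (t[1][1] * t[2][2] - t[1][2] * t[2][1])
                - t[0][1] * (t[1][0] * t[2][2] - t[1][2] * t[2][0])
                + t[0][2] * (t[1][0] * t[2][1] - t[1][1] * t[2][0]))

    d = det3(m)
    sol = []
    for i in range(3):
        mi = [row[:] for row in m]
        for j in range(3):
            mi[j][i] = r[j]
        sol.append(det3(mi) / d)
    return tuple(sol)  # (a, b, c)

def project_to_plane(points, plane):
    """Displace each edge point onto the plane (the 'corrected collection')."""
    a, b, c = plane
    return [(x, y, a * x + b * y + c) for x, y, _ in points]

# Points lying partly above and partly below the plane z = -0.2*x + 0.1
pts = [(0, 0, 0.1), (1, 0, -0.1), (0, 1, 0.1), (1, 1, -0.1)]
a, b, c = fit_plane(pts)
```

Projecting along z keeps the sketch short; a closer reading of the displacement step would project each point orthogonally onto the plane instead.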

24-01-2019 publication date

DUAL-PATTERN OPTICAL 3D DIMENSIONING

Number: US20190026912A1
Author: FENG CHEN, Xian Tao
Assignee:

An optical dimensioning system includes one or more light emitting assemblies configured to project a predetermined pattern on an object; an imaging assembly configured to sense light scattered and/or reflected off the object, and to capture an image of the object while the pattern is projected; and a processing assembly configured to analyze the image of the object to determine one or more dimension parameters of the object. A method for optical dimensioning includes illuminating an object with at least two identical patterns; capturing at least one image of the illuminated object; and calculating dimensions of the object by analyzing pattern separation of the elements comprising the projected patterns. The patterns can be produced by one or more pattern generators. 1. A dimensioning assembly, comprising:a camera module having one or more image sensors and an imaging lens assembly;a first pattern generator, disposed near the camera module, and having a first laser diode and a first pattern projection assembly;a second pattern generator, disposed near the camera module and spaced apart from the first pattern generator, and having a second laser diode and a second pattern projection assembly, wherein the first and the second pattern projection assemblies are configured to generate identical patterns; anda processing system configured to detect and analyze positions of elements of the generated patterns.2. The assembly according to claim 1, wherein the first and second pattern generators are equidistant from the camera module.3. The assembly according to claim 1, wherein the first and/or second laser diode comprises a vertical-cavity surface-emitting laser.4. The assembly according to claim 1, wherein the first and/or second laser diode comprises an edge-emitting laser.5. The assembly according to claim 1, wherein the first and/or second pattern projection assembly includes a projection lens and a pattern die and/or a collimating lens and a diffractive optical ...
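In the idealized case, "analyzing pattern separation" reduces to triangulation between the two emitters: identical patterns projected from emitters a distance s apart overlap in the image with a pixel separation proportional to f·s/Z, so the separation encodes range and the range in turn scales pixel extents to metric dimensions. A sketch under that parallel-projector assumption; the function names and numbers are illustrative, not from the patent:

```python
def range_from_pattern_separation(focal_px, emitter_spacing_m, separation_px):
    """Idealized model: identical patterns from emitters spaced s apart are
    imaged with a separation of f*s/Z pixels at range Z, so Z = f*s/separation."""
    if separation_px <= 0:
        raise ValueError("pattern elements must be resolvably separated")
    return focal_px * emitter_spacing_m / separation_px

def object_extent(range_m, focal_px, extent_px):
    """Back-project a pixel extent to a metric dimension at the measured range."""
    return extent_px * range_m / focal_px

# Example: f = 800 px, 5 cm emitter spacing, 20 px observed separation -> 2.0 m;
# an object spanning 160 px at that range is then 0.4 m wide.
z = range_from_pattern_separation(800.0, 0.05, 20.0)
width = object_extent(z, 800.0, 160.0)
```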

23-01-2020 publication date

METHOD OF RELEASING SECURITY USING SPATIAL INFORMATION ASSOCIATED WITH ROBOT AND ROBOT THEREOF

Number: US20200026845A1
Author: JEON Chanyong
Assignee:

Provided is a method of releasing security using spatial information associated with a robot, the method including capturing the robot using a camera of a security device, acquiring first release information including at least one of security release information and the spatial information based on an image acquired by capturing the robot, comparing the first release information to predetermined second release information, and determining to release the security when the first release information matches the second release information, wherein the spatial information is information on a position of the robot in which a security release function is executed. In addition, a security device operating based on a security releasing method and a non-transitory computer readable recording medium including a computer program for performing the security releasing method are provided. 1. A method of releasing security of a security device , using spatial information associated with a robot , the method comprising:capturing the robot using a camera of the security device;acquiring first release information including at least one of security release information and the spatial information based on an image acquired by capturing the robot;comparing the first release information to predetermined second release information; anddetermining to release the security when the first release information matches the second release information,wherein the spatial information is information on a position of the robot in which a security release function is executed.2. The method of claim 1 , wherein the acquiring of the first release information comprises:acquiring the spatial information based on at least one of a position of the robot and a captured size of the robot in the image obtained using the camera.3. The method of claim 1 , wherein the capturing of the robot comprises capturing the robot to acquire depth information andthe acquiring of the first release information comprises ...

23-01-2020 publication date

COMPENSATING OPTICAL COHERENCE TOMOGRAPHY SCANS

Number: US20200027227A1
Author: Holmes Jonathan Denis
Assignee:

A method of processing optical coherence tomography (OCT) scans, comprising: receiving OCT data comprising an OCT signal indicative of the level of scattering in a sample, the OCT data including the OCT signal for at least one scan through the sample, with the OCT signal having been measured at varying depth and position through the sample in each scan; processing the OCT data for each scan with depth to produce an indicative depth scan representative of the OCT signal at each depth through all of the scans; fitting a curve to the indicative depth scan, the curve comprising a first term which exponentially decays with respect to the depth and a second term which depends on the noise in the OCT signal; and calculating a compensated intensity for the OCT signal at each point through each scan, the compensated intensity comprising a ratio of a term comprising a logarithm of the OCT signal to a term comprising the logarithm of the fitted curve. 1. A method of processing optical coherence tomography (OCT) scans, comprising:receiving OCT data comprising an OCT signal indicative of the level of scattering in a sample, the OCT data including the OCT signal for at least one scan through the sample, with the OCT signal having been measured at varying depth and position through the sample in each scan;processing the OCT data for each scan with depth to produce an indicative depth scan representative of the OCT signal at each depth through all of the scans;fitting a curve to the indicative depth scan, the curve comprising a first term which exponentially decays with respect to the depth and a second term which depends on the noise in the OCT signal; andcalculating a compensated intensity for the OCT signal at each point through each scan, the compensated intensity comprising a ratio of a term comprising a logarithm of the OCT signal to a term comprising the logarithm of the fitted curve.2.
The method of claim 1, comprising generating an image in which each pixel of the image ...
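Concretely, the fitted curve has the form I(z) ≈ A·exp(-b·z) + N, and the compensation divides the log of each measured sample by the log of the fitted value at that depth. A simplified sketch, assuming the indicative profile is already averaged over scans, the noise floor N is taken from the deepest samples, and A and b come from a linear regression on log(I − N); the patent text does not prescribe this particular fitting scheme:

```python
import math

def fit_depth_profile(profile, noise_tail=10):
    """Fit I(z) ~ A*exp(-b*z) + N to an indicative depth scan.
    N is estimated from the deepest samples; A and b come from a linear
    regression on log(I - N), restricted to samples well above the floor."""
    noise = sum(profile[-noise_tail:]) / noise_tail
    pts = [(z, math.log(i - noise)) for z, i in enumerate(profile) if i > 2 * noise]
    m = len(pts)
    zbar = sum(z for z, _ in pts) / m
    ybar = sum(y for _, y in pts) / m
    b = -sum((z - zbar) * (y - ybar) for z, y in pts) / \
        sum((z - zbar) ** 2 for z, _ in pts)
    A = math.exp(ybar + b * zbar)
    return A, b, noise

def compensate(scan, A, b, noise):
    """Compensated intensity: log of the signal over log of the fitted curve."""
    return [math.log(i) / math.log(A * math.exp(-b * z) + noise)
            for z, i in enumerate(scan)]

# Synthetic profile with A=100, b=0.3, noise floor N=2
profile = [100.0 * math.exp(-0.3 * z) + 2.0 for z in range(80)]
A, b, noise = fit_depth_profile(profile)
```

On this synthetic profile the fit recovers the generating parameters, and compensating the profile against its own fit yields values near 1, which is the flat baseline the ratio is designed to produce.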

23-01-2020 publication date

METHOD OF RECONSTRUCTING THREE DIMENSIONAL IMAGE USING STRUCTURED LIGHT PATTERN SYSTEM

Number: US20200027228A1
Assignee:

A method of reconstructing a three dimensional image using a structured light pattern system is provided as follows. A class identifier of an observed pixel on a captured image by a camera is extracted. The observed pixel has a coordinate (x, y) on the captured image. A first relative position of the x coordinate of the observed pixel in a tile domain of the captured image is calculated. A second relative position of one of a plurality of dots in a tile domain of a reference image using the extracted class identifier is calculated. A disparity of the observed pixel using the first relative position and the second relative position is calculated. 1. A method of reconstructing a three dimensional image using a structured light pattern system, the method comprising:extracting a class identifier of an observed pixel on a captured image by a camera, wherein the observed pixel has a coordinate (x, y) on the captured image;calculating a first relative position of the x coordinate of the observed pixel in a tile domain of the captured image;calculating a second relative position of one of a plurality of dots in a tile domain of a reference image using the extracted class identifier;andcalculating a disparity of the observed pixel using the first relative position and the second relative position.2. The method of claim 1, wherein the calculating of the first relative position includes:translating the x coordinate of the observed pixel into the tile domain of the captured image to generate the first relative position in the tile domain of the captured image.3.
The method of claim 1, further comprising:generating a projected image to be projected onto an object from the reference image, wherein the reference image includes a plurality of tiles,wherein each of the tiles includes the plurality of dots each of which is assigned to one of a plurality of class identifiers and wherein the extracted class identifier is one of the plurality of class identifiers; andgenerating the ...
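The two "relative positions" live in the tile domain, i.e. modulo the tile width, so once the class identifier is decoded the disparity computation is a table lookup plus a modular subtraction. A minimal sketch; the tile width, the class-id table, and the wrap-around convention are all illustrative assumptions rather than the patent's exact formula:

```python
TILE_W = 16  # hypothetical tile width in pixels

def relative_in_tile(x, tile_w=TILE_W):
    """First relative position: where x falls within its tile."""
    return x % tile_w

# Hypothetical reference table: class identifier -> dot x-position in the
# reference tile (the second relative position).
REF_DOTS = {7: 3, 12: 9}

def disparity(x_obs, class_id, tile_w=TILE_W, ref_dots=REF_DOTS):
    """Disparity from the two tile-relative positions; the modulo keeps the
    result non-negative within one tile period."""
    r_obs = relative_in_tile(x_obs, tile_w)
    r_ref = ref_dots[class_id]
    return (r_obs - r_ref) % tile_w

# Example: observed pixel x=37 sits at position 5 of its tile; the class-7
# reference dot sits at position 3, giving a disparity of 2 pixels.
```

Because both positions are reduced modulo the tile width, the scheme only resolves disparities up to one tile period; the class identifier is what disambiguates which tile the match came from.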

23-01-2020 publication date

Annotation cross-labeling for autonomous control systems

Number: US20200027229A1
Author: Anting Shen
Assignee: Tesla Inc

An annotation system uses annotations for a first set of sensor measurements from a first sensor to identify annotations for a second set of sensor measurements from a second sensor. The annotation system identifies reference annotations in the first set of sensor measurements that indicates a location of a characteristic object in the two-dimensional space. The annotation system determines a spatial region in the three-dimensional space of the second set of sensor measurements that corresponds to a portion of the scene represented in the annotation of the first set of sensor measurements. The annotation system determines annotations within the spatial region of the second set of sensor measurements that indicates a location of the characteristic object in the three-dimensional space.

23-01-2020 publication date

INDEXATION OF MASSIVE POINT CLOUD DATA FOR EFFICIENT VISUALIZATION

Number: US20200027248A1
Assignee: MY VIRTUAL REALITY SOFTWARE AS

A method for pre-processing point clouds comprising large amounts of point data. The method comprises converting the points' coordinates to Morton indices, sorting the Morton indices and sequentially determining intervals based on predefined criteria, which intervals define the leaf nodes and form the basis and starting point for the generation of a tree index structure comprising the leaf nodes, nodes, branches and nodes connecting the branches. Point data contained within a node or sub-trees of a node are quantizable. 3. The method according to claim 2, wherein the tree index structure is generated from the determined leaf nodes by determining a branch node from a determined leaf node to a previously determined leaf node and deriving therefrom the nodes and the branches of the tree index structure, thereby forming the tree index structure.6. The method according to claim 1, wherein the leaf nodes are determined based on the Morton indices by determining a last Morton index comprised within a corresponding interval and said last Morton index defines the corresponding leaf node.7. The method according to claim 1, wherein the leaf nodes are determined based on the corresponding point coordinates using a bitwise OR operator.8. The method according to claim 1, wherein the points of the point cloud are scattered onto processing buckets such that the points within a processing bucket can be pre-processed in parallel to points within another processing bucket.10.
The method according to claim 2 , the method further comprising quantizing points comprised within nodes or sub-trees of a node byrandomly choosing a sub-set of points out of the points comprised within a node or sub-tree of a node, wherein the points of the sub-set represent quantized points, oremploying either a regular or random raster on the points wherein the raster comprises vertices and wherein a vertex of said vertices is said to be empty or occupied depending on whether a point of the ...
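Converting point coordinates to Morton indices is a bit-interleaving step: after quantizing x, y, z to integers, every third bit of the index comes from one axis, so sorting by index groups spatially nearby points together. A sketch of the conversion (standard 21-bit-per-axis magic-number spreading) and of the sequential cut of the sorted indices into leaf intervals; `max_leaf` is an illustrative stand-in for the patent's unspecified interval criterion:

```python
def part1by2(v):
    """Spread the bits of a 21-bit integer so they occupy every third bit."""
    v &= (1 << 21) - 1
    v = (v | v << 32) & 0x1F00000000FFFF
    v = (v | v << 16) & 0x1F0000FF0000FF
    v = (v | v << 8)  & 0x100F00F00F00F00F
    v = (v | v << 4)  & 0x10C30C30C30C30C3
    v = (v | v << 2)  & 0x1249249249249249
    return v

def morton3(x, y, z):
    """Interleave quantized x/y/z bits into a single Morton index."""
    return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)

def leaf_intervals(sorted_codes, max_leaf=2):
    """Sequentially cut the sorted Morton codes into intervals of at most
    max_leaf points; each interval becomes a leaf node."""
    leaves, start = [], 0
    for i in range(max_leaf, len(sorted_codes), max_leaf):
        leaves.append(sorted_codes[start:i])
        start = i
    leaves.append(sorted_codes[start:])
    return leaves

# Four quantized points -> sorted Morton indices -> leaf intervals
codes = sorted(morton3(x, y, z)
               for x, y, z in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)])
```

Because the Morton order follows a Z-curve, each interval of the sorted codes covers a compact spatial region, which is what makes the intervals usable directly as leaf nodes of the tree index.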
