Total found: 5918. Displayed: 200.
Publication date: 04-02-2021

METHOD, DEVICE AND STREAM FOR FORMATTING IMMERSIVE VIDEO FOR LEGACY AND IMMERSIVE RENDERING DEVICES

Number: RU2742344C2

The invention relates to the field of video encoding and decoding. The technical result is improved encoding of backward-compatible immersive video. The stream carries data representing an immersive video composed of a frame organized according to a layout comprising a first region encoded according to a rectangular mapping, a second region encoded according to a mapping that transitions from the rectangular mapping to an immersive mapping, and a third region encoded according to the immersive mapping. For backward compatibility, the stream further contains first information representing the size and location of the first region within the video frame, and second information containing at least the type of the selected layout, the field of view of the first part, the size of said second region within the video frame, and a reference direction. 6 independent and 9 dependent claims, 11 figures.

Publication date: 03-11-2021

METHOD FOR PROCESSING LINKED SPECIAL EFFECTS FOR VIDEO, DATA STORAGE MEDIUM AND TERMINAL

Number: RU2758910C1

The invention relates to the field of information processing, in particular to a method for processing linked special effects for video, as well as to a data storage medium and a terminal. The technical result is increased efficiency in replacing the first special effect and/or the second special effect. The proposed method for processing linked special effects for video comprises: obtaining a reference video containing a first special effect; obtaining a second special effect linked to the first special effect; and obtaining a video containing the second special effect by processing images of the reference video on the basis of the second special effect. 5 independent and 6 dependent claims, 2 figures.

Publication date: 20-02-2011

METHOD AND DEVICE FOR DATA PROCESSING

Number: RU2009130339A

... 1. A data processing method for pixels (7) of a view (5), in which the view comprises a portion of a digital map (1) to be displayed and includes a plurality of pixels (7), the digital map (1) comprises a plurality of data tiles (3), each of which includes at least one data point, and the view (5) includes a plurality of said data tiles (3), the method characterized in that: (i) for said pixel (7), the data tile (3) in which said pixel (7) lies is identified (504); (ii) the boundary (9) of said data tile (3) that lies within said view (5) is located (506-512); (iii) all pixels (7) of said view (5) that lie within said boundary (9) are processed (522) to provide a processed data tile; (iv) a location (504-512) is determined for each of all unprocessed data tiles (3) within the view (5) that are adjacent ...
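Step (i) of the claim, identifying the data tile in which a given view pixel lies, reduces to integer division when the tiles form a regular grid. A minimal sketch (the tile size, coordinates and grouping helper are illustrative assumptions, not taken from the application):

```python
def tile_for_pixel(px, py, tile_w, tile_h):
    """Return the (column, row) index of the tile containing pixel (px, py)."""
    return (px // tile_w, py // tile_h)

def group_pixels_by_tile(pixels, tile_w, tile_h):
    """Group view pixels by the tile they fall in, so that each tile's
    pixels can be processed together (steps (i)-(iii) of the claim)."""
    tiles = {}
    for px, py in pixels:
        tiles.setdefault(tile_for_pixel(px, py, tile_w, tile_h), []).append((px, py))
    return tiles
```

Grouping by tile lets the later steps process one tile's pixels as a batch before moving to an adjacent unprocessed tile.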

Publication date: 27-05-2009

DEPTH PERCEPTION

Number: RU2007142457A

... 1. A rendering unit (300) for rendering an output image (102) comprising output pixels on the basis of an input image (100) comprising input pixels and on the basis of depth-related data elements corresponding to the respective input pixels, the rendering unit (300) comprising: a shift computation unit (302) for computing shift values to be applied to the input pixels on the basis of the respective depth-related data elements; and an interpolation unit (304) for computing the output pixels on the basis of shifting the input pixels by the respective shift values, characterized in that the shift computation unit (302) is arranged to output, as a first one of the shift values, a value substantially equal to zero if the corresponding first one of the input pixels corresponds to an overlay, regardless of the corresponding depth-related data element. 2. The rendering unit (300) ...
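The characterizing clause can be sketched as: shift each input pixel in proportion to its depth-related value, except pixels flagged as overlay (e.g. subtitles or menus), which are pinned at zero shift. The linear depth-to-shift mapping and the scale factor below are illustrative assumptions:

```python
def compute_shifts(depths, overlay_mask, scale=0.1):
    """Per-pixel shift from depth; overlay pixels are pinned at zero shift
    so on-screen graphics stay put (the claimed exception)."""
    return [0.0 if overlay else scale * d
            for d, overlay in zip(depths, overlay_mask)]
```

The interpolation unit would then resample the input pixels at positions displaced by these values.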

Publication date: 10-09-2003

Refraction mapping system and method

Number: RU2001124858A

... 1. A refraction mapping system having a device for perspective projection of an object and for obtaining its two-dimensional texture for a texture plane, and a device for determining a refracted texture address to which at least part of the texture is moved in accordance with the phenomenon of refraction, in which the refracted texture address is determined by the angle between the normal vector and the line-of-sight vector, or by the component of the normal vector that is parallel to the texture plane. 2. The system according to claim 1, in which, when the normal vector taken at the point where the line-of-sight vector intersects the interface between at least two different media is denoted n, and the unit line-of-sight vector is denoted v, a variable k is determined at which the sum (n + kv) of the normal vector n and the quantity kv, equal to the product of the line-of-sight vector v and the variable k, becomes parallel to the texture plane, whereby the refracted texture address is determined ...
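Claim 2 defines the variable k implicitly. Assuming the texture plane has a known unit normal p (an assumption; the claim only requires that n + kv be parallel to the texture plane), k has a closed form: (n + kv)·p = 0, so k = -(n·p)/(v·p). A sketch:

```python
def parallel_component_offset(n, v, p):
    """Solve for k such that n + k*v is parallel to the texture plane with
    unit normal p, i.e. (n + k*v). p = 0, giving k = -(n . p) / (v . p)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return -dot(n, p) / dot(v, p)
```

With n = (0.3, 0.2, 1.0), v = (0, 0, -1) and p = (0, 0, 1), this gives k = 1, and n + kv = (0.3, 0.2, 0) indeed lies in the texture plane; that in-plane component drives the refracted texture address.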

Publication date: 27-09-2001

IMAGE PROCESSING METHOD

Number: RU99126413A

... 1. An image processing method, in particular for converting a two-dimensional image of a three-dimensional real object into a three-dimensional representation of the same three-dimensional real object, in a system where the object consists of elements within the two-dimensional image and where the two-dimensional image is obtained by means of a camera, characterized by the following steps: defining a reference plane corresponding to the focal plane of the camera and lying as close as possible to the focal plane of the camera, the reference plane containing elements corresponding to the elements within the two-dimensional image; defining the color parameters (hue, saturation and brightness) for each element lying in the reference plane; creating a reference scale by determining the values of the color parameters by means of a sequence of individual images, each of which depicts the object in different given focal planes, whereby the differences of the color parameters between the respective focal planes ...

Publication date: 23-12-2024

DIFFRACTION MODELLING BASED ON GRID PATH FINDING

Number: RU2832227C1

The invention relates to a method of processing audio content for rendering sound in a three-dimensional audio scene while taking into account diffraction effects caused by elements of the scene. In particular, the invention relates to a method of acoustic diffraction modelling based on grid path finding, and further to a corresponding apparatus and computer-readable storage medium. The technical result is a method and apparatus for processing audio content that provide realistic sound rendering in virtual three-dimensional audio scenes containing occluding elements. In the claimed method, the audio content contains a sound source at a source position. The method includes obtaining a voxelized representation of the three-dimensional audio scene, the voxelized representation indicating the volume elements in which sound can propagate and the volume elements ...
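The grid path finding the title refers to can be sketched as a breadth-first search over the free voxels of the voxelized scene, so that the found path bends around occluding elements, as a diffracted sound path would. A 2-D toy version (the grid contents and positions are illustrative assumptions):

```python
from collections import deque

def shortest_grid_path(free, start, goal):
    """BFS over free voxels (True = sound can propagate here).
    Returns the length of the shortest 4-connected path, or None."""
    rows, cols = len(free), len(free[0])
    dist = {start: 0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return dist[(r, c)]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and free[nr][nc] \
                    and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return None  # goal unreachable: fully occluded
```

The excess of the grid-path length over the straight-line distance is one plausible input for attenuating and redirecting the diffracted sound.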

Publication date: 28-10-2010

Method and software for image transformation

Number: DE112008002083T5
Assignee: ATELIER VISION LTD

An image processing method comprising: selecting a fixation point in an image, the fixation point being a focus point of the image; selecting a fixation region in the image, the fixation region comprising a space around the fixation point; and disordering the image outside the fixation region as a function of distance.

Publication date: 10-03-2011

POST-PROCESSING OF DISPLAY CHANGES

Number: DE0060239067D1
Assignee: INTELLOCITY USA INC

Publication date: 23-09-1998

Display techniques for three dimensional virtual reality

Number: GB0009816367D0

Publication date: 02-03-1983

SIMULATING FIELD OF VIEW FOR WEAPON TRAINING

Number: GB0008302644D0

Publication date: 18-05-1983

IMAGE GENERATOR

Number: GB0008309868D0

Publication date: 18-02-1998

METHOD AND APPARATUS FOR OVERLAYING A BIT MAP IMAGE ON AN ENVIRONMENT MAP

Number: GB0002316257A

The computer system of the present invention generates a view of a scene by storing in memory colour values associated with elements of an environment map representing the scene and colour values associated with elements of a bit map image that is separate from the environment map. The bit map image is orientated with respect to the coordinate system of the environment map. The environment map is projected onto a view window that comprises an array of pixels. For at least one pixel of the view window covered by the bit map image, the element of the bit map image that corresponds to the pixel of the view plane is determined and a colour value of the pixel is derived based upon the colour value of the corresponding element of the bit map image. The derived colour value of the pixel of the view window is stored for display. The computer system may also store in memory depth values associated with the elements of the bit map image and depth values associated with pixels of the view window.
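The per-pixel rule in the abstract, taking the bit map's colour where the bit map covers a view-window pixel and the projected environment map's colour elsewhere, can be sketched as follows (depth testing is omitted, and the array-of-rows representation is an illustrative assumption):

```python
def composite_view(env_colours, bitmap_colours, coverage):
    """For each view-window pixel, take the overlaid bit map's colour where
    the bit map covers the pixel, else the projected environment map's colour."""
    return [[bitmap_colours[r][c] if coverage[r][c] else env_colours[r][c]
             for c in range(len(row))]
            for r, row in enumerate(env_colours)]
```

The stored depth values mentioned at the end of the abstract would refine this rule to a per-pixel depth comparison rather than a simple coverage test.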

Publication date: 16-05-2012

Shear displacement depth of field

Number: GB0002460994B
Assignee: PIXAR [US]

Publication date: 24-07-2019

Topology preservation in a graphics pipeline

Number: GB0002570304A
Author: John Howson

A graphics processing engine comprising a geometry shading stage with two modes of operation is described. In the first mode of operation, each primitive output by the geometry shading stage is independent, whereas in the second mode of operation, connectivity between input primitives is maintained by the geometry shading stage. The mode of operation of the geometry shading stage can be determined based on the value of control state data, which may be generated at compile time for a geometry shader based on analysis of that geometry shader. During processing, if it is determined that primitive connectivity is not preserved, the shader code is output as a geometry shader, whereas if primitive connectivity is preserved, that shader code is included in a previous stage of the graphics pipeline. The application aims to reduce the processing load of primitive processing where primitives share common vertices.

Publication date: 29-01-2020

Method and system for providing at least a portion of content having six degrees of freedom motion

Number: GB0002575932A

The present invention provides a method for providing at least a portion of content having six degrees of freedom in a virtual environment, comprising: receiving the portion of content for the virtual environment and associating it with at least one of a first geometric shape and a second geometric shape. The portion of content is then projected onto a first point of a surface of the first geometric shape, and, based on this projection, a first outcome relating to the portion of content at the first position in the virtual environment is determined. The portion of content is then projected onto a second point of the surface of the first geometric shape or of a surface of the second geometric shape, the second point being different from the first point, and, based on this projection, a second outcome relating to the portion of content at a second position in the virtual environment is determined, ...

Publication date: 02-03-1983

FIELD OF VIEW SIMULATION FOR WEAPONS TRAINING

Number: GB0008302645D0

Publication date: 24-07-1991

IMAGE GENERATION SYSTEM FOR 3-D SIMULATIONS

Number: GB0009112073D0

Publication date: 21-11-2018

Visualisation system for needling

Number: GB0002562502A

Scan data representative of an interior portion of a body is received from a scanning portion S200-S202; a first set of positional and orientational data of at least one user is received, and a second set in relation to the scanning portion is received; rendered views are generated based on the scan, positional and orientational data S204-S214; the rendered views are modified based on a user input and combined into a scene for display S216. The scan data may be three-dimensional ultrasound data, and the system may further comprise a needle. The system may enable a surgeon to identify the position of a needle and the position of organs while performing a procedure. A support portion may support and move the scanning portion. The scene may be displayed on at least one headset; a rendered view may be generated for each of a plurality of headsets based on their respective positional data.

Publication date: 16-05-2018

Task assembly

Number: GB0002555929A

A graphics processing system (300) cache (336) stores graphics data items for use in rendering primitives (317). Task entries are stored for respective tasks to which computation instances (CIs), e.g. for shading and generating data items, can be allocated in a task assembly unit (340) of the system. It is determined whether items relating to primitives to be rendered are present in the cache; a CI may be for generating graphics data items that are not present in the cache. The task entries indicate which CIs have been allocated to the respective tasks, and the task entries are associated with characteristics (e.g. shader type, state) of the CIs allocated to the respective tasks. A computation instance to be executed is allocated to a task based on the CI characteristics, i.e. shader type (vertex, hull, domain or geometry shader). SIMD processing logic (346) executes the CIs of a task output from the task assembly unit to thereby determine graphics data items for storage in the cache, which are used in ...

Publication date: 15-04-2008

PORTABLE VIRTUAL REALITY

Number: AT0000391312T

Publication date: 15-08-2010

METHOD OF AND UNIT FOR SCALING A THREE-DIMENSIONAL MODEL, AND DISPLAY

Number: AT0000475153T

Publication date: 15-01-2009

OVERLAY APPARATUS AND METHOD FOR TRANSMITTING THREE-DIMENSIONAL GRAPHICS OVER A COMPUTER IMAGE

Number: AT0000418767T

Publication date: 15-09-2000

VIRTUAL REALITY GENERATOR FOR FINANCIAL MESSAGES

Number: AT0000196024T

Publication date: 26-02-1997

Method and apparatus for span and subspan sorting rendering system

Number: AU0006714796A

Publication date: 02-09-2021

Generating technical drawings from building information models

Number: AU2020221451A1

The present disclosure is directed to a software tool that facilitates the presentation of a three-dimensional view of a construction project as well as the generation of various types of two-dimensional technical drawings based on this three-dimensional view. In one implementation, the software tool causes a computing device to engage in the following operations. The computing device may receive an indication of a desired clip height of a three-dimensional view at which to generate a two-dimensional technical drawing; identify a subset of meshes that intersect with a two-dimensional plane at the desired clip height; determine respective portions of each mesh that intersect the two-dimensional plane at the desired clip height; compile a dataset that defines the two-dimensional drawing; and render the two-dimensional drawing using the compiled dataset.
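The clip-height step, finding where each mesh triangle crosses the horizontal plane z = h, can be sketched per triangle by intersecting each edge with that plane (the triangle representation is an illustrative assumption):

```python
def triangle_plane_section(tri, h):
    """Intersect a 3-D triangle (three (x, y, z) vertices) with the
    horizontal plane z = h; returns the (x, y) segment endpoints, or []."""
    points = []
    for i in range(3):
        (x0, y0, z0), (x1, y1, z1) = tri[i], tri[(i + 1) % 3]
        if (z0 - h) * (z1 - h) < 0:  # edge strictly crosses the plane
            t = (h - z0) / (z1 - z0)  # parametric position of the crossing
            points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return points
```

Collecting these segments over every triangle that straddles the clip plane yields the line work of the two-dimensional drawing; vertices lying exactly on the plane would need extra handling.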

Publication date: 17-04-2001

Method and apparatus for rendering images with refractions

Number: AU0006875500A

Publication date: 03-04-2014

A VIRTUAL 3D PAPER

Number: AU2013200630A1
Author: SO, KA YAN

The invention discloses a virtual 3D paper, comprising a data reader (1) for obtaining data, a multi-touch gesture recognition engine (2) for receiving and recognizing multi-touch signals, an event dispatching engine (3) for dispatching events according to the action of the multi-touch gesture recognition engine (2), an editing module (4) for editing data obtained by the data reader (1), a rendering module (5) for rendering data edited by the editing module (4), a display monitor (6) for displaying the rendered results of the rendering module (5), and a data exporter (7) for exporting the rendered results. The virtual 3D paper supports multi-point touch, can recognize many kinds of gestures and read different types of files, and is thus more practical, more realistic and offers a much better user experience.

Publication date: 29-07-2021

Portable photogrammetry studio

Number: AU2016267397B2
Author: Hilton, John

A portable photogrammetry studio for topographical digitisation of stationary surfaces, such as human body surfaces, including at least one high-resolution digital camera to photograph said surface; one or more light sources capable of illuminating said surface; and a mechanised viewing system capable of providing different views of said surface to said camera, such that multiple pictures may be taken of said surface from multiple different positions by each said camera. The human body surface is prepared, preferably by coating with a texturiser.

Publication date: 08-04-2021

Localization determination for mixed reality systems

Number: AU2018210015B2

To enable shared user experiences using augmented reality systems, shared reference points must be provided to have consistent placement (position and orientation) of virtual objects. Furthermore, the position and orientation (pose) of the users must be determinable with respect to the same shared reference points. However, without highly sensitive and expensive global positioning system (GPS) devices, pose information can be difficult to determine to a reasonable level of accuracy. Therefore, what is provided is an alternative approach to determining pose information for augmented reality systems, which can be used to perform location based content acquisition and sharing. Further, what is provided is an alternative approach to determining pose information for augmented reality systems that uses information from already existing GPS devices.

Publication date: 15-07-2021

Floor Plan Bot

Number: AU2021102857A4
Author: YU, WENYANG

Floor Plan Bot is a 3D scanning and modeling app: a platform for creating and processing high-precision 3D models of real-world objects on the go. The technology constructs three-dimensional patterns quickly in the app by measuring individuals' movements via infrared rays. People can hold their mobiles and walk through the physical space to determine the length and width of objects via infrared. The app turns people's ideas into accurate, real 3D models, which can be used in architecture and interior design. It is accurate, time-saving on site, cost-saving, and flexible in use. Multi-dimensional drawing types can be generated according to the customer's choice of style, and any desired furniture can be added at the chosen spatial locations.

Publication date: 08-07-1999

Multiple suppression in geophysical data

Number: AU0000707279B2

Publication date: 12-07-2001

Method for image processing

Number: AU0000735613B2

Publication date: 06-04-2006

SYSTEM AND METHOD FOR PROCESSING VIDEO IMAGES

Number: CA0002581273A1

Publication date: 05-10-2017

INTERACTIONS WITH 3D VIRTUAL OBJECTS USING POSES AND MULTIPLE-DOF CONTROLLERS

Number: CA0003018758A1

A wearable system can comprise a display system configured to present virtual content in a three-dimensional space, a user input device configured to receive a user input, and one or more sensors configured to detect a user's pose. The wearable system can support various user interactions with objects in the user's environment based on contextual information. As an example, the wearable system can adjust the size of an aperture of a virtual cone during a cone cast (e.g., with the user's poses) based on the contextual information. As another example, the wearable system can adjust the amount of movement of virtual objects associated with an actuation of the user input device based on the contextual information.

Publication date: 26-07-2018

LOCALIZATION DETERMINATION FOR MIXED REALITY SYSTEMS

Number: CA0003047013A1
Assignee: RICHES, MCKENZIE & HERBERT LLP

To enable shared user experiences using augmented reality systems, shared reference points must be provided to have consistent placement (position and orientation) of virtual objects. Furthermore, the position and orientation (pose) of the users must be determinable with respect to the same shared reference points. However, without highly sensitive and expensive global positioning system (GPS) devices, pose information can be difficult to determine to a reasonable level of accuracy. Therefore, what is provided is an alternative approach to determining pose information for augmented reality systems, which can be used to perform location based content acquisition and sharing. Further, what is provided is an alternative approach to determining pose information for augmented reality systems that uses information from already existing GPS devices.

Publication date: 06-05-1995

METHOD AND SYSTEM FOR CONSTRUCTING AND INTERACTING WITH DATALANDSCAPES

Number: CA0002123734A1

A data visualization system comprises data landscapes including a series of signs, which relate data to one or more visual attributes, and which are grouped within cells in the landscape. The user may navigate about the landscape to identify particular cells of interest and may conduct interactive animated filtering of data to observe the effect of changing one or more parameters of the data. A set of tools is provided to allow the user to construct landscapes and assign various visual attributes to diverse types of data.

Publication date: 02-01-2001

WEATHER SIMULATION SYSTEM

Number: CA0002174090C

A weather simulation system that generates and distributes weather data to simulation subsystems for the real-time simulation of weather conditions from three-dimensional real-world data. A real-world database (11) is accessed to obtain a dataspace of weather data elements, each having a set of various weather-related parameters. For "out-the-window" weather displays, these data elements are preprocessed (14) to obtain color and transparency values for each data element. The preprocessed data elements are further processed to obtain a prioritized display list of those data elements that are in a field of view. Each data element in this list is assigned a graphics primitive, whose alignment is determined by the wind vector of that data element. Pixel values are assigned to the graphics primitives using the color and transparency values of the associated data elements.

Publication date: 23-05-1997

HIGH PERFORMANCE/LOW COST VIDEO GAME SYSTEM WITH MULTI-FUNCTIONAL PERIPHERAL PROCESSING SUBSYSTEM

Number: CA0002190933A1
Assignee: GOWLING LAFLEUR HENDERSON LLP

A video game system includes a game cartridge which is pluggably attached to a main console having a main processor, a 3D graphics generating coprocessor, expandable main memory and player controllers. A multifunctional peripheral processing subsystem external to the game microprocessor and coprocessor is described which executes commands for handling player controller input/output, to thereby lessen the processing burden on the graphics processing subsystem. The player controller processing subsystem is used both for controlling player controller input/output processing and for performing game-authenticating security checks continuously during game play. The peripheral interface includes a microprocessor for controlling various peripheral interface functions, a read/write random access memory, a boot ROM, a coprocessor command channel interface, a player controller channel interface, etc., which components interact to efficiently process player controller commands while also performing ...

Publication date: 14-03-1996

APPARATUS AND METHOD FOR REAL-TIME VOLUME VISUALIZATION

Number: CA0002198611A1

Real-time processing of voxels and real-time volume visualization of objects and scenes in a highly parallel and pipelined manner, including a three-dimensional skewed memory (22), two-dimensional skewed buffers (24), 3-D interpolation and shading of data points, and signal compositing, implementing ray-casting, a powerful volume rendering technique. Viewing rays are cast from the viewing position into a cubic frame buffer (40), and beams of voxels which are parallel to the face of the cubic frame buffer (40) are accessed. At evenly spaced sample points along each viewing ray, each sample point is trilinearly interpolated using the values of surrounding voxels. Using a gradient and the interpolated sample values, a local shading model (30) is applied and a sample opacity is assigned. Finally, the samples along each ray are composited into pixel values and provided to a display device (44) to produce an image.
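The final step, compositing the shaded ray samples into pixel values, is standard front-to-back alpha compositing. A sketch with scalar colours (the patent's pipeline performs this in parallel hardware; the early-termination threshold is an illustrative assumption):

```python
def composite_ray(samples):
    """Front-to-back compositing of (colour, opacity) samples along a ray.
    Accumulation stops early once the ray is effectively opaque."""
    colour, alpha = 0.0, 0.0
    for c, a in samples:
        colour += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:  # early ray termination
            break
    return colour, alpha
```

Each sample's contribution is weighted by the transparency accumulated in front of it, so nearer samples dominate the final pixel value.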

Publication date: 04-02-1999

DISPLAY TECHNIQUES FOR THREE DIMENSIONAL VIRTUAL REALITY

Number: CA0002242166A1

A limitation of a three-dimensional world in which distant objects may be represented in only two dimensions, as a video on a screen, occurs when a computer graphics object, e.g. one in front of, to the side of, above, or below the video screen, undergoes a trajectory that takes it to a location in the world that is not represented as computer graphics but instead lies within the field represented by the video, because such an object would disappear from the viewer's sight. This limitation is overcome by representing such an object as video on the screen, rather than as computer graphics. Thus, the computer graphics object "goes into the video" as video and remains visible to a viewer in front of the video screen, rather than becoming invisible because it would be blocked from view by the video screen if it were generated at its proper location using computer graphics techniques.

Publication date: 23-08-2011

GRAPHICAL USER INTERFACES FOR COMPUTER VISION SYSTEMS

Number: CA0002258025C
Assignee: CRITICOM CORPORATION

Computer vision systems provide a user with a view of a scene whereby an image of the scene may have been augmented with information generated by a computer. Graphical user interfaces (131) operably interact with geometric constructs of a user environment, objects within a scene, the perspective of the scene, and image features of a signal which represents the scene, among others (131).

Publication date: 16-07-1998

PIXEL REORDERING FOR IMPROVED TEXTURE MAPPING

Number: CA0002275237A1

A system and method for reordering memory references for pixels to improve bandwidth and performance in texture mapping systems and other graphics systems by improving memory locality in conventional page-mode memory systems. Pixel memory references are received from a client graphics engine and placed in a pixel priority heap (202). The pixel priority heap (202) reorders the pixel memory references so that references requiring a currently open page are, in general, processed before references that require page breaks. Reordered pixel memory references are transmitted to a memory controller (204) for accessing memory (205, 207).
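The core idea, issuing references that hit the currently open memory page before references that would force a page break, can be sketched with a simple greedy loop (the page size and the greedy policy are illustrative assumptions; the patent uses a priority heap rather than a linear scan):

```python
def reorder_references(addresses, page_size=4096):
    """Greedily reorder memory references so that references to the
    currently open page are issued before ones that would break the page."""
    pending = list(addresses)
    order, open_page = [], None
    while pending:
        # Prefer a reference on the open page; otherwise open a new page.
        hit = next((a for a in pending if a // page_size == open_page),
                   pending[0])
        pending.remove(hit)
        open_page = hit // page_size
        order.append(hit)
    return order
```

Grouping same-page references this way reduces the number of row activations a page-mode DRAM must perform.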

Publication date: 30-06-1987

DEVICE FOR SIMULATING A FREE VIEW BY MEANS OF OPTICAL EQUIPMENT

Number: CH0000661132A5
Assignee: HONEYWELL GMBH

Publication date: 14-01-2000

Electro-optical apparatus for producing an image

Number: CH0000689904A5
Author: ELLENBY, JOHN
Assignee: CRITICOM CORP

Publication date: 07-12-2018

Method for customizing furniture based on automatic measurement

Number: CN0108961379A
Author: HU WENQUAN

Publication date: 22-03-2017

Feature extraction for radar

Number: CN0106537181A
Author: ROSE ALEC

Publication date: 09-01-2018

Genetic algorithm-based triangle collapse simplification method for triangle mesh models

Number: CN0107564088A

Publication date: 24-03-2020

Method and device for determining a three-dimensional model of a region

Number: CN0110910504A

Publication date: 08-12-2010

Method of and scaling unit for scaling a three-dimensional model

Number: CN0001973304B

Publication date: 30-08-2006

Three-dimensional image processing method

Number: CN0001272749C

Publication date: 14-02-1986

METHOD FOR CREATING AND MODIFYING A SYNTHETIC IMAGE

Number: FR0002569020A

The invention concerns a method for creating and, above all, modifying the contents of an image memory (7) ("mapped memory") by means of objects to be displayed that are described in an object memory, or store (5). When an object is placed in the image (7), the image elements that were within the object's perimeter before it was placed are saved in the store (5), at the address of the object's description and taking its place. To move an object, it then suffices to exchange the image elements between the store and the image to recreate the initial image without the object, then place the object elsewhere by the same exchange process. Application to video games and personal computers.
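The save-and-restore scheme can be sketched on a one-dimensional pixel buffer: before the object is drawn, the covered background pixels are stashed; moving the object restores them and repeats the process at the new position. The buffer and sprite contents are illustrative assumptions:

```python
def draw_object(image, sprite, x):
    """Draw a 1-D sprite at offset x, returning the saved background
    pixels that the sprite overwrote (the store of the method)."""
    saved = image[x:x + len(sprite)]
    image[x:x + len(sprite)] = sprite
    return saved

def move_object(image, sprite, old_x, saved, new_x):
    """Restore the background at the old position, then redraw at the new one."""
    image[old_x:old_x + len(sprite)] = saved
    return draw_object(image, sprite, new_x)
```

Because only the pixels under the object are exchanged, moving a small object never requires redrawing the whole image, which is the point of the method.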

Publication date: 27-04-1984

IMPROVEMENT TO COMPUTER-GENERATED-IMAGE DISPLAY SYSTEMS

Number: FR0002466061B1

Publication date: 17-01-2020

METHOD FOR SIMULATING A DENTAL SITUATION

Number: FR0003083898A1

Publication date: 10-11-1995

Device and method for simulating an examination or a surgical operation carried out on a simulated organ

Number: FR0002719690A1

This is a device (1) and a method using a device for simulating an intervention on an organ. The device comprises binoculars (3) for observing the organ, means for simulating a slit lamp, means (8) for simulating magnifying optics, one or two miniaturized video screens fixed to the binoculars, computing means capable of generating a video image of the simulated organ to be examined and of projecting said image onto the miniaturized video screen(s), and means for simulating an examination lens, said simulation means and said computing means being arranged so that changing any one of the parameters relating to the movement of the binoculars, the adjustment of the slit lamp, the magnification or the examination lens modifies the video image of the simulated organ in real time or substantially in real time.

27-03-1981 дата публикации

IMPROVEMENT TO DISPLAY SYSTEMS OF THE COMPUTER-GENERATED IMAGE TYPE

Номер: FR0002466061A
Принадлежит:

THE DISPLAY SYSTEM IS OF THE COMPUTER-GENERATED IMAGE TYPE FOR A GROUND-BASED FLIGHT SIMULATOR, PROVIDING A PERSPECTIVE DISPLAY OF TEXTURED SURFACES. IT COMPRISES A FLIGHT SIMULATION COMPUTER (104) CONTROLLED BY THE REACTIONS OF THE PILOT (100). THIS COMPUTER DRIVES THE SCAN CONTROL (108) OF A TEXTURED SURFACE HELD IN THE MEMORY (111), WHOSE DIGITAL DATA ARE CONVERTED INTO ANALOG VALUES FOR PROJECTION BY THE DEVICE (119) OF THE PERSPECTIVE IMAGE (101). THE SYSTEM APPLIES EQUALLY TO NIGHT AND DAY VIEWING CONDITIONS THANKS TO THE ILLUMINATION VARIATION GENERATOR (114).

Подробнее
04-11-2019 дата публикации

CURVED DISPLAY APPARATUS AND OPERATION METHOD THEREOF

Номер: KR0102021363B1
Автор:
Принадлежит:

Подробнее
14-03-2007 дата публикации

CONVERGENT MULTI-PHOTOGRAPHING SYSTEM, ESPECIALLY FOR APPLYING A DIGITAL-IMAGE-BASED JOINT CHARACTERIZATION ANALYSIS TECHNIQUE TO TUNNEL BORING WORK

Номер: KR0100695018B1
Принадлежит:

PURPOSE: A convergent multi-photographing system is provided to suggest a correct method for analyzing joint characterization by identifying how a tunnel is affected by the joint characterization and by investigating the effect of very dominant discontinuous surfaces such as dislocations and fracture zones. CONSTITUTION: A measuring unit measures relative spatial coordinates of three adjustment points set on a vertical work surface and two assistant ground points set on the ground. A photographing device(63) is placed at a position on two extended straight lines which connect a central point of the three adjustment points to the two assistant ground points. A photographing-device control apparatus(62) includes a fixing part which serves as a stand for the photographing device, a rotation part for rotating the fixing part and the photographing device together in the vertical and horizontal directions, and a control part for controlling the degree of rotation of the rotation part. An image reception ...

Подробнее
25-09-2020 дата публикации

METHOD AND APPARATUS FOR GENERATING 3-DIMENSIONAL DATA OF MOVING OBJECT

Номер: KR0102160340B1
Автор:
Принадлежит:

Подробнее
15-12-2000 дата публикации

IMAGE GENERATION APPARATUS, IMAGE GENERATION METHOD, GAME MACHINE USING THE METHOD

Номер: KR0100276549B1
Принадлежит:

Подробнее
24-01-2007 дата публикации

PHOTOGRAPHING DEVICE AND METHOD FOR GENERATING IMAGE INCLUDING DEPTH INFORMATION BY USING VARIABLE FOCUS LENS

Номер: KR1020070010306A
Автор: JANG, BRENT
Принадлежит:

PURPOSE: A photographing device and a method for generating an image including depth information are provided to easily generate an image including depth information by using a plurality of 2D images having different focal distances, which are obtained by using a single variable focus lens. CONSTITUTION: A photographing device(100) includes a variable focus lens unit(110), an image processor(130), and an image storage unit(140). The variable focus lens unit includes a lens whose focal distance is controlled in response to a distance between an object and the lens. The image processor respectively extracts a plurality of objects respectively corresponding to different focal distances from a plurality of first images having the different focal distances, which are obtained from the same scene through the variable focus lens unit, and synthesizes the objects based on the focal distances so as to generate a second image including depth information. The image storage unit stores the first images ...

Подробнее
29-08-2018 дата публикации

EVOLVED MODELER USING TARGET DB

Номер: KR1020180096373A
Принадлежит:

Provided is an evolved modeler using a target DB, which derives the characteristics of a target by using a target DB and generates an infrared image for the target based on the characteristics. The evolved modeler according to the present invention comprises a case detection part detecting a case meeting an input condition; a material/property value estimation part estimating material information and property information based on the detected case; and a modeling part generating a three-dimensional model of the target based on input information, the material information, and the property information. COPYRIGHT KIPO 2018 (210) Case detection part (220) Material/property value estimation part (230) Infrared modeling part (240) Target database (AA) Input ...

Подробнее
23-03-2023 дата публикации

VESSEL EQUIPPED WITH AN OBJECT DETECTION DEVICE

Номер: KR20230040487A
Автор: 하영열
Принадлежит:

Provided is a vessel equipped with an object detection device. The vessel comprises: an acoustic acquisition module installed on the vessel to acquire acoustic information about an object; an image acquisition module installed on the vessel to acquire image information about the object; and a position detection module that detects the position of the object based on the acoustic information and the image information. The acoustic acquisition module includes a first acoustic processing unit that acquires first acoustic information about the object, and a second acoustic processing unit, adjacent to the first, that acquires second acoustic information about the object. The image acquisition module includes a first image processing unit that acquires first image information about the object, and a second image processing unit, adjacent to the first, that acquires second image information about the object. The position detection module detects the object based on the first image information, the second image information, the first acoustic information, and the second acoustic information.

Подробнее
20-03-2018 дата публикации

System and method using three- and two-dimensional digital images

Номер: BR112012010225A2
Принадлежит:

Подробнее
02-09-2021 дата публикации

APPARATUS, METHOD, AND SYSTEM FOR PROVIDING A THREE-DIMENSIONAL TEXTURE USING UV REPRESENTATION

Номер: WO2021173489A1
Принадлежит:

An approach is provided for generating three-dimensional textures (3D) using UV map representation. The approach, for example, involves receiving at least one input image depicting a subject. The at least one image comprises a standard representation of the subject. The approach also involves determining at least one depth representation of the at least one input image. The approach further involves causing, at least in part, a creation of a UV map representation of the subject based on the at least one input image and the at least one depth representation. The UV map representation includes, for instance, a UV map-geometry representation of a three dimensional shape of the subject and a UV map-visual representation of at least one visual characteristic (e.g., texture) of the at least one subject.

Подробнее
21-10-2021 дата публикации

APPARATUS AND METHOD FOR PROCESSING POINT CLOUD INFORMATION

Номер: WO2021210725A1
Принадлежит:

An apparatus and a method for processing point cloud information are disclosed. An apparatus for processing point cloud information, according to one embodiment, comprises: a point cloud information acquisition unit for acquiring 3D point cloud information about a 3D space; an additional information acquisition unit for acquiring at least one additional image obtained by photographing at least a part of the 3D space, and direction information indicating the direction of gravity in a coordinate system in which the at least one additional image is captured; and a processing unit for transforming, on the basis of the at least one additional image and the direction information, the coordinate system of the 3D point cloud information so that one axis thereof coincides with the direction of gravity, and displaying the 3D point cloud information by using the transformed coordinate system of the 3D point cloud information.

Подробнее
03-01-2002 дата публикации

SYSTEM AND METHOD FOR MEDIAN FUSION OF DEPTH MAPS

Номер: WO0000201502A3
Автор: NISTER, David
Принадлежит:

The present invention is directed toward providing a system for constructing a graphical representation of an image, as perceived from a desired view, from a plurality of calibrated views. An optimum, median fused depth map of a new, desired view of an object or a scene is constructed from a plurality of depth maps of the object or scene, utilizing median values for each of the image pixels within the image domain of the new view. Each of the known depth maps is rendered into the new view, and the pixels of the rendered image are processed one at a time until a median value for each pixel in the new view of the image of the object or scene is calculated. The calculated median values are then assembled into a median fused depth map of the new view of the object or scene, with the median fused depth map available to construct new two- or three-dimensional images and models of the object or scene as perceived from the new view.
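Per-pixel median fusion as described can be sketched in a few lines of Python. The names are assumptions; the depth maps are taken as equal-size 2D lists already rendered into the new view:

```python
def median_fuse(depth_maps):
    """Fuse several rendered depth maps by taking the per-pixel median."""
    rows, cols = len(depth_maps[0]), len(depth_maps[0][0])
    fused = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = sorted(m[r][c] for m in depth_maps)
            n = len(vals)
            # middle element for odd n, mean of the two middle ones for even n
            fused[r][c] = vals[n // 2] if n % 2 else (vals[n // 2 - 1] + vals[n // 2]) / 2
    return fused
```

The median makes the fused map robust to outlier depths contributed by any single view.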

Подробнее
21-03-2002 дата публикации

MULTIDIMENSIONAL DATABASE ANALYSIS TOOL

Номер: WO0002023402A3
Автор: MITCHELL, Althea
Принадлежит:

The invention includes a multidimensional database analysis tool. The tool may be formed by a method that includes gathering data for objects, each object having three variables and an object image based on one of the three variables. Data may be supplied to the three variables of each object, where the three variables of each object may be associated as individual members of a coordinate number set. The coordinate number set of each object may be compared with the coordinate number set of each remaining object so as to produce an outcome. From this outcome, the objects may be incorporated into a three-dimensional object cluster at a particular coordinate within the object cluster.

Подробнее
12-06-1997 дата публикации

IMAGE GENERATION APPARATUS, IMAGE GENERATION METHOD, GAME MACHINE USING THE METHOD, AND MEDIUM

Номер: WO1997021194A1
Принадлежит:

In conventional shooting game machines, the viewpoint of the picture on the display screen and the enemies move in simple ways, so the game images lack variety and the games are consequently not exciting. A game machine of this invention includes image generation means for selecting one of a plurality of enemies moving in the game space and generating an image in which this enemy is viewed from a viewpoint in the three-dimensional virtual space, another image generation means for executing processing for attacking the enemy in accordance with the operation of a gun unit, and viewpoint moving processing means for detecting the condition of the enemy and controlling the movement of the viewpoint.

Подробнее
27-10-2005 дата публикации

GHOST ARTIFACT REDUCTION FOR RENDERING 2.5D GRAPHICS

Номер: WO2005101324A1
Принадлежит:

An image processing system for performing a transformation of an input image associated with an input viewpoint to an output image associated with an output viewpoint. The input image is a pre-filtered 2D representation of 3D objects as seen from the input viewpoint, and comprises for each input pixel an associated input pixel value and an associated input pixel depth. Additional to the input image a hidden image is received, being another 2D representation of the 3D objects and comprising information, which information is occluded from the input viewpoint. The system comprises a video processor being operative to create the output image by transforming each input pixel to a transformed input pixel. The transformation is a function of the input pixel depth. The output image is created, based on the transformed input pixels, using hidden image pixels for filling de-occluded areas and for at least one pixel position adjacent to the de-occluded areas. As a consequence ghost line artifacts, ...
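A hedged 1-D sketch of the warp-and-fill idea above (assumed names; integer disparities stand in for the depth-dependent shift, and holes left by de-occlusion are filled from the hidden image):

```python
def warp_row(colors, disparities, hidden):
    """Shift each pixel of one scanline by its disparity; fill de-occluded
    holes from the corresponding hidden-image pixels."""
    out = [None] * len(colors)
    for x, c in enumerate(colors):
        nx = x + disparities[x]
        if 0 <= nx < len(out):
            out[nx] = c            # later pixels may overwrite earlier ones
    # positions never written to are de-occluded: take them from the hidden image
    return [hidden[i] if v is None else v for i, v in enumerate(out)]
```

A real renderer would also resolve overwrite order by depth and, as the abstract notes, use hidden pixels for at least one position adjacent to each hole to suppress ghost lines.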

Подробнее
29-04-2021 дата публикации

METHOD AND SYSTEM FOR SYNTHESIZING NOVEL VIEW IMAGE ON BASIS OF MULTIPLE 360 IMAGES FOR 6-DEGREES OF FREEDOM VIRTUAL REALITY

Номер: WO2021080096A1
Принадлежит:

The present invention relates to a method and a system for synthesizing a novel view image on the basis of multiple 360 images for 6-degrees of freedom (DoF) virtual reality, wherein multiple 360 images are used to construct a large-scale 6-DoF virtual environment and synthesize a scene at a new viewpoint. The method comprises the steps of: performing a 3D reconfiguration procedure on 360 images to reconstruct 3D geometric information and reconfigure a virtual data map integrating the multiple 360 images into one image; using a reference image closest to a viewpoint extracted from the virtual data map to apply a projection & vertex warping view synthesis algorithm so as to create view images corresponding to a user viewpoint; and mixing the view images for 6-DoF through a section formula relating to internal partition based on the distance between the position of the reference image and the position of the viewpoint.
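The "section formula relating to internal partition" amounts to a distance-weighted linear blend: the reference image nearer the viewpoint receives the larger weight. A minimal sketch, with assumed names and flat per-pixel lists:

```python
def blend_views(img_a, img_b, dist_a, dist_b):
    """Internal-division blend of two synthesized views: weights are
    proportional to the distance to the *other* reference position."""
    w_a = dist_b / (dist_a + dist_b)
    w_b = dist_a / (dist_a + dist_b)
    return [w_a * pa + w_b * pb for pa, pb in zip(img_a, img_b)]
```

At `dist_a == dist_b` this reduces to a plain average; as the viewpoint approaches one reference, that reference's contribution approaches 1.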

Подробнее
12-11-1998 дата публикации

METHOD FOR IMAGE PROCESSING

Номер: WO1998050889A1
Автор: HOYDAL, Finn
Принадлежит:

A method for image processing, especially for converting a two-dimensional image of a three-dimensional real subject into a three-dimensional representation of the same three-dimensional real subject, wherein the subject is composed of elements, each of which is represented by a pixel in the two-dimensional image. The image's colours are subjected to an analysis with regard to colour quality, and the individual colour points' orientation in space is localised by utilizing a colour temperature scale, a colour saturation scale and a contrast scale, with the result that each colour obtains its perspective place relative to the image's other colours.

Подробнее
11-02-1999 дата публикации

METHOD AND APPARATUS FOR ATTRIBUTE INTERPOLATION IN 3D GRAPHICS

Номер: WO1999006957A1
Принадлежит:

An image processing method and apparatus are described for rendering two-dimensional pixel images composed of triangular image primitives. Prior to their projection into the image plane, each triangle is parameterised with a respective two-dimensional coordinate system with the coordinate axes (s, t) concurrent with respective edges of the triangle and the origin (0,0) coincident with the vertex (V.0) between those edges. A generalised interpolation function, applied in terms of the parameterising (s, t) coordinate system, determines parameter values at positions (P) within the triangle in terms of the two-dimensional coordinate system. These parameter values determine contributions from stored values for one or more attributes, such as surface normal or texturing, stored for each vertex, to give attribute values at each pixel. In a final stage, the per pixel attribute values from all triangles are used to jointly determine an output colour for each pixel.
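With the (s, t) axes along two triangle edges and the origin at the shared vertex, linear attribute interpolation reduces to one multiply-add per axis. A sketch under those assumptions (scalar attributes; names are illustrative):

```python
def interpolate_attribute(a0, a1, a2, s, t):
    """Attribute at (s, t) in the edge-aligned parameterization:
    (0, 0) -> vertex 0, (1, 0) -> vertex 1, (0, 1) -> vertex 2."""
    return a0 + s * (a1 - a0) + t * (a2 - a0)
```

Vector attributes such as surface normals or texture coordinates interpolate the same way, component by component.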

Подробнее
10-05-2001 дата публикации

IMPROVEMENTS RELATING TO COMPUTER GRAPHICS

Номер: WO2001033509A1
Автор: CLARKE, Timothy, John
Принадлежит:

The present invention relates to a method and a device for producing a graphical representation of at least part of an object located in a hierarchically subdivided portion of space, from a plurality of stored graphical representations corresponding to sub-portions of space of different sizes. The method comprises the following steps: determining a viewing distance between the object and a viewpoint from which the object is to be viewed; and using the determined viewing distance to select, from the plurality of stored graphical representations, a graphical representation of a sub-portion of space in which at least part of the object is located, the size of the selected sub-portion corresponding to the computed viewing distance, thereby determining the level of detail of the graphical image of the part(s) of the object to be produced. The method finds ...

Подробнее
12-07-2018 дата публикации

MIXED-REALITY ARCHITECTURAL DESIGN ENVIRONMENT

Номер: US20180197341A1
Принадлежит: Dirtt Environmental Solutions, Ltd.

A computer system for managing multiple distinct perspectives within a mixed-reality design environment loads a three-dimensional architectural model into memory. The three-dimensional architectural model is associated with a virtual coordinate system. The three-dimensional architectural model comprises at least one virtual object that is associated with an independently executable software object that comprises independent variables and functions that are specific to a particular architectural element that is represented by the at least one virtual object. The computer system associates the virtual coordinate system with a physical coordinate system within a real-world environment. The computer system transmits to each device of multiple different devices rendering information. The rendering information comprises three-dimensional image data for rendering the three-dimensional architectural model and coordinate information that maps the virtual coordinate system to the physical coordinate ...

Подробнее
18-12-2007 дата публикации

Efficient graphics pipeline with a pixel cache and data pre-fetching

Номер: US0007310100B2

An efficient graphics pipeline with a pixel cache and data pre-fetching. By combining the use of a pixel cache in the graphics pipeline and the pre-fetching of data into the pixel cache, the graphics pipeline of the present invention is able to take best advantage of the high bandwidth of the memory system while effectively masking the latency of the memory system. More particularly, advantageous reuse of pixel data is enabled by caching, which when combined with pre-fetching masks the memory latency and delivers high throughput. As such, the present invention provides a novel and superior graphics pipeline over the prior art in terms of more efficient data access and much greater throughput. In one embodiment, the present invention is practiced within a computer system having a processor for issuing commands; a memory sub-system for storing information including graphics data; and a graphics sub-system for processing the graphics data according to the commands from the processor. The graphics ...

Подробнее
11-12-2008 дата публикации

EXTRAPOLATION OF NONRESIDENT MIPMAP DATA USING RESIDENT MIPMAP DATA

Номер: US20080303841A1
Принадлежит:

A multi-threaded graphics processor is configured to extrapolate low resolution mipmaps stored in physical memory to produce extrapolated texture values while high resolution nonresident mipmaps are retrieved from a high latency storage resource and converted into resident mipmaps. The extrapolated texture values provide an improved image that appears sharper compared with using the low resolution mipmap level texture data in place of the temporarily unavailable high resolution mipmap level texture data. An extrapolation threshold LOD is used to determine when extrapolated magnification or minification texture filtering is used. The extrapolation threshold LOD may be used to smoothly transition from using extrapolated filtering to using interpolated filtering when a nonresident mipmap is converted to a resident mipmap.
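The abstract does not specify how the extrapolated values are computed; one plausible reading, stated here purely as an assumption, is to extrapolate the detail difference between the two finest resident mip levels across the missing LOD gap:

```python
def extrapolated_texel(resident_fine, resident_coarse, lod_gap):
    """Assumed linear extrapolation: carry the fine-minus-coarse detail
    forward by `lod_gap` missing levels beyond the finest resident level."""
    detail = resident_fine - resident_coarse
    return resident_fine + detail * lod_gap
```

Once the nonresident mipmap becomes resident, the extrapolated result would be blended back toward ordinary interpolated filtering, matching the smooth transition the abstract describes.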

Подробнее
02-01-1996 дата публикации

Method and apparatus for displaying a line passing through a plurality of boxes

Номер: US0005481658A1
Автор: Megiddo; Nimrod

A method and apparatus determine a line that passes through a set of rectangular, axial boxes defined by vertices in n-dimensional space in O(n) time using linear programming methods to obtain solutions, if they exist. The line is easily converted to a parametric representation by a suitable change of variables and is displayed in a two-dimensional representation. The method and apparatus are especially suited to the digital computer representation of objects as boxes and the problem of finding a line-of-sight through the boxes.

Подробнее
28-04-1998 дата публикации

System and method for rendering images

Номер: US0005745636A1
Принадлежит: GenTech Corp.

An image processing system comprising an image recording system and an image rendering system. The image recording system records images of a scene, and comprises a recording device for recording images of the scene on a series of frames, each frame including image information reflecting the scene as illuminated at the time the frame was recorded, a plurality of individually-energizable light sources each for illuminating the scene, and a synchronizer connected to the recording device and the light sources for synchronizing the separate energization of the light sources and the recording by the recording device of the separate frames in the series. The image rendering system generates a rendered image which reflects a desired light source position. The image rendering system specifically comprises a frame store for storing the image information for each of the series of frames, and a rendered image store for receiving rendered image information ...

Подробнее
31-10-2000 дата публикации

Three-dimensional image processing apparatus with enhanced automatic and user point of view control

Номер: US0006139434A1
Принадлежит: Nintendo Co., Ltd.

A video game system includes a game cartridge which is pluggably attached to a main console having a main processor, a coprocessor, expandable main memory and player controllers. A multifunctional peripheral processing subsystem external to the game microprocessor and coprocessor is described which executes commands for handling player controller input/output to thereby lessen the processing burden on the graphics processing subsystem. The video game methodology features camera perspective, or point of view, controls. The system changes the "camera" angle (i.e., the displayed point of view in the three-dimensional world) automatically based upon various conditions and in response to actuation of a plurality of distinct controller keys/buttons/switches, e.g., four "C" buttons in the exemplary embodiment. The control keys allow the user at any time to move in for a close up or pull back for a wide view or pan the camera to the right and left to change the apparent camera angle. Such ...

Подробнее
04-08-1987 дата публикации

Method and apparatus for combining multiple video images in three dimensions

Номер: US0004684990A1
Автор: Oxley; Leslie J.
Принадлежит: Ampex Corporation

A plurality of input video signals (including background) are combined in accordance with priority. Each video signal comprises data samples corresponding to respective discrete locations on a viewing plane. The signals are preferably from an ADO transformation system wherein such data samples correspond to elements of an image lying in an image plane displaced from the viewing plane. Input key signals corresponding to respective locations are associated with respective input signals. Priority is shown determined from respective sets of plane defining signals as a sequence of depth signals corresponding to the depth coordinates of the respective image plane at the respective locations. The depth signals are used to produce respective weighting signals. The weighting signals and respective input key signals are used to produce in respect to each input video signal a set of processed key signals corresponding to respective coordinate locations on the viewing plane in the respective sequence ...

Подробнее
18-05-1999 дата публикации

Method and apparatus for adaptive nonlinear projective rendering

Номер: US5905500A
Автор:
Принадлежит:

In three-dimensional graphics rendering, a method of texture mapping, or shading, applies to triangle-based graphical objects having undergone a perspective transformation. The present invention makes use of linear interpolation for determining the appropriate mapping for the interior points of each triangle, thus reducing the computation-intensive mathematical calculations otherwise required. In order to minimize visual artifacts due to high interpolation errors, the borders of each triangle are tested against a predetermined threshold, and the triangle subdivided if any of the borders contain a maximum error which exceeds the threshold. The subdivision continues until all triangle sides have maximum errors that are less than the threshold value. Linear interpolation is then used to determine all mappings for the sides and interior points of the triangle. In alternative embodiments, the triangle is subdivided without using recursive methods. In one non-recursive method, the entire triangle ...
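The subdivide-until-below-threshold loop can be sketched for a single edge. This is a 1-D simplification with assumed names (the patent tests all borders of a triangle); it compares perspective-correct interpolation against the cheap linear estimate at each midpoint:

```python
def subdivide_edge(u0, w0, u1, w1, threshold):
    """Return parameter splits of [0, 1] such that linear interpolation of
    the texture coordinate u is within `threshold` of the perspective-correct
    value on every sub-interval. (u, w): texcoord and homogeneous w."""
    def persp(t):   # perspective-correct value at parameter t
        return ((1 - t) * u0 / w0 + t * u1 / w1) / ((1 - t) / w0 + t / w1)

    def recurse(t0, t1, out):
        mid = (t0 + t1) / 2
        linear = (persp(t0) + persp(t1)) / 2   # linear estimate at midpoint
        if abs(linear - persp(mid)) > threshold:
            recurse(t0, mid, out)
            out.append(mid)
            recurse(mid, t1, out)

    splits = []
    recurse(0.0, 1.0, splits)
    return splits
```

When `w0 == w1` the mapping is affine, linear interpolation is exact, and no subdivision occurs; strong perspective (large w ratio) forces splits until each piece is nearly linear.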

Подробнее
18-10-2018 дата публикации

APPARATUS AND METHOD FOR EFFICIENTLY MERGING BOUNDING VOLUME HIERARCHY DATA

Номер: US20180300939A1
Принадлежит:

An apparatus and method for efficiently reconstructing a BVH. For example, one embodiment of a method comprises: constructing an object bounding volume hierarchy (BVH) for each object in a scene, each object BVH including a root node and one or more child nodes based on primitives included in each object; constructing a top-level BVH using the root nodes of the individual object BVHs; performing an analysis of the top-level BVH to determine whether the top-level BVH comprises a sufficiently efficient arrangement of nodes within its hierarchy; and reconstructing at least a portion of the top-level BVH if a more efficient arrangement of nodes exists, wherein reconstructing comprises rebuilding the portion of the top-level BVH until one or more stopping criteria have been met, the stopping criteria defined to prevent an entire rebuilding of the top-level BVH.
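The two-level construction, building a top-level BVH over the root bounding boxes of per-object BVHs, can be sketched with a simple median split. The structures are assumptions; the patent's efficiency analysis and selective rebuilding of the top level are omitted:

```python
class Node:
    """Top-level BVH node; leaves point at an object's own BVH root."""
    def __init__(self, box, left=None, right=None, obj=None):
        self.box, self.left, self.right, self.obj = box, left, right, obj

def union(a, b):
    """Union of two AABBs given as (minx, miny, minz, maxx, maxy, maxz)."""
    return (min(a[0], b[0]), min(a[1], b[1]), min(a[2], b[2]),
            max(a[3], b[3]), max(a[4], b[4]), max(a[5], b[5]))

def build_top_level(roots):
    """roots: list of (box, object_bvh) pairs; median split on widest axis."""
    if len(roots) == 1:
        box, obj = roots[0]
        return Node(box, obj=obj)
    bound = roots[0][0]
    for box, _ in roots[1:]:
        bound = union(bound, box)
    axis = max(range(3), key=lambda i: bound[i + 3] - bound[i])
    ordered = sorted(roots, key=lambda r: r[0][axis] + r[0][axis + 3])
    mid = len(ordered) // 2
    return Node(bound, build_top_level(ordered[:mid]), build_top_level(ordered[mid:]))
```

A production builder would use a surface-area heuristic rather than a plain median split; the point here is only the separation between per-object BVHs and the top-level hierarchy over their roots.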

Подробнее
03-10-2019 дата публикации

SYSTEMS FOR SECURE COLLABORATIVE GRAPHICAL DESIGN USING SECRET SHARING

Номер: US20190303620A1
Принадлежит:

Systems and methods are disclosed for secret sharing for secure collaborative graphical design. Graphical secret shares are generated from a three-dimensional graphical design and distributed to one or more contributor devices. Contributor graphical designs modifying graphical secret shares may be received from contributor devices. Various corresponding and related systems, methods, and software are described.

Подробнее
03-11-2020 дата публикации

Previewing 3D content using incomplete original model data

Номер: US0010825234B2
Принадлежит: Resonai Inc.

A preview system previews 3D content without providing complete 3D model content data. The preview system includes at least one processor that receives a request for a 3D model for preview. The processor generates at least one representation of the 3D model based on a portion of the 3D model content data associated with the requested 3D model. The processor generates a preview scene by combining the at least one representation of the 3D model with a scene from a user image environment. The preview scene includes incomplete 3D model content data. The processor outputs the preview scene for display in the user image environment.

Подробнее
18-04-2013 дата публикации

Apparatus and method for correcting lesion in image frame

Номер: US20130094766A1
Принадлежит: SAMSUNG ELECTRONICS CO LTD

An apparatus for extracting a candidate image frame includes a generating unit configured to generate at least one lesion value that represents a characteristic of a lesion included in each of a plurality of 2-dimensional image frames that form a 3-dimensional image, and an extracting unit configured to extract, from the image frames, at least one candidate image frame usable for correcting a boundary of the lesion based on the at least one lesion value.

Подробнее
23-05-2013 дата публикации

DISPLAY APPARATUS AND DISPLAY METHOD THEREOF

Номер: US20130127843A1
Принадлежит: SAMSUNG ELECTRONICS CO., LTD.

A display method and apparatus are provided. The method includes selecting at least one content to be focused of the plurality of content being displayed, adjusting a disparity value so that the selected content has a different depth value from other content, and displaying the content of which the disparity value has been adjusted.

1. A method of displaying a plurality of content, the method comprising: selecting at least one content to be focused of the plurality of content being displayed; adjusting a disparity value and changing the selected content to a different depth value from other content; and displaying the content of which the disparity value has been adjusted.
2. The method as claimed in claim 1, wherein adjusting the disparity value further comprises: calculating crossed disparity values of the plurality of content; and setting the crossed disparity value of the selected content to be different from the calculated crossed disparity value.
3. The method as claimed in claim 2, wherein the crossed disparity value is a distance for which a left-eye image and a right-eye image, which constitute the selected content, are spaced apart in left and right directions from a reference position, and the reference position is an average position of the left-eye image and the right-eye image on an x-axis.
4. The method as claimed in claim 4, wherein adjusting the disparity value sets the crossed disparity value of the selected content to be larger than the calculated crossed disparity value so that the selected content appears to project in a user direction.
5. The method as claimed in claim 4, wherein adjusting the disparity value moves the left-eye image from the reference position so that the left-eye image has a larger value than the calculated crossed disparity value, and moves the right-eye image from the reference position so that the right-eye image has a smaller value than the calculated crossed disparity value when the ...

Подробнее
18-07-2013 дата публикации

Apparatus and Method for Processing Three-Dimensional Image

Номер: US20130181984A1
Принадлежит: MStar Semiconductor, Inc.

An apparatus for processing a three-dimensional (3D) image is provided. The apparatus includes a motion estimation module and a motion interpolation module. The motion estimation module estimates a motion vector between a first object in a first-eye image and a second object in a second-eye image. The first object is the same as or similar to the second object. The motion interpolation module multiplies the motion vector by a first shift ratio to generate a first motion vector. The motion interpolation module generates a shifted first object by interpolation according to the first motion vector and the first object.

1. A three-dimensional (3D) image processing apparatus, comprising: a motion estimation module, for estimating a motion vector between a first object in a first-eye image and a second object in a second-eye image, the first object being identical or similar to the second object; and a motion interpolation module, for multiplying the motion vector by a first shift ratio to generate a first motion vector, and generating a modified first-eye image by interpolation according to the first motion vector and the first-eye image.
2. The apparatus according to claim 1, wherein the motion interpolation module further multiplies the motion vector by a second shift ratio to generate a second motion vector, and generates a modified second-eye image by interpolating the second motion vector and the second-eye image.
3. The apparatus according to claim 2, wherein the first shift ratio and the second shift ratio are set to be the same or different.
4. The apparatus according to claim 2, wherein the modified first-eye image comprises a shifted first object corresponding to the first object, the modified second-eye image comprises a shifted second object corresponding to the second object, the first-eye image is a left-eye image, and the second-eye image is a right-eye image; in response to a zoom-in request, the motion ...
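Multiplying the estimated motion vector by a shift ratio, as described above, is a one-liner; a sketch with assumed names and 2D integer-pixel vectors:

```python
def scaled_shift(motion_vector, shift_ratio):
    """Scale an estimated motion vector (dx, dy) by a shift ratio."""
    dx, dy = motion_vector
    return (dx * shift_ratio, dy * shift_ratio)

def shift_position(position, motion_vector, shift_ratio):
    """Move an object's position by the scaled motion vector to synthesize
    the shifted object of the modified eye image."""
    sdx, sdy = scaled_shift(motion_vector, shift_ratio)
    return (position[0] + sdx, position[1] + sdy)
```

Different ratios for the two eyes change the apparent disparity, and hence the perceived depth, of the object.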

Подробнее
19-09-2013 дата публикации

Transposing apparatus, transposing method, and computer product

Номер: US20130241924A1
Автор: Tomoki Katou
Принадлежит: Fujitsu Ltd

A transposing apparatus is configured by a computer controlling a computing device having computing elements arranged into a matrix and memory devices connected to the computing elements. The computing device executes an electromagnetic field analysis process on latticed three-dimensional analysis subject data present in a three-dimensional coordinate system. The computer is configured to detect the number of lined-up lattices in a direction of a first axis, in a direction of a second axis, and in a direction of a third axis of the coordinate system, through detection on the three-dimensional analysis subject data; transpose a group of lattices of the three-dimensional analysis subject data, based on the detected numbers of lined-up lattices and on the number of lined-up computing elements in a row direction and in a column direction; and output to the computing device, the three-dimensional analysis subject data transposed.

Подробнее
26-09-2013 дата публикации

Point cloud data hierarchy

Номер: US20130249899A1
Принадлежит: Willow Garage LLC

One embodiment is directed to a method for presenting views of a very large point data set, comprising: storing data on a storage system that is representative of a point cloud comprising a very large number of associated points; automatically and deterministically organizing the data into an octree hierarchy of data sectors, each of which is representative of one or more of the points at a given octree mesh resolution; receiving a command from a user of a user interface to present an image based at least in part upon a selected viewing perspective origin and vector; and assembling the image based at least in part upon the selected origin and vector, the image comprising a plurality of data sectors pulled from the octree hierarchy.
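The distance-dependent resolution selection in this scheme can be sketched as a mapping from a sector's distance to the viewing origin to an octree refinement level. The formula and constants below are illustrative assumptions, not values from the patent.

```python
import math

def sector_level(distance, max_level=8, base=10.0):
    """Pick an octree mesh resolution level for a data sector: sectors
    nearer the selected viewing origin get a higher level (finer mesh).
    Illustrative heuristic; `base` is an assumed distance scale."""
    level = max_level - int(math.log2(max(distance / base, 1.0)))
    return max(0, min(max_level, level))

print(sector_level(5.0))    # close to the viewing origin: finest level
print(sector_level(640.0))  # far from the viewing origin: coarse level
```

Assembling the image then amounts to pulling, for each visible sector, the hierarchy node at the level this function returns.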

Подробнее
21-11-2013 дата публикации

Three-Dimensional Display of Specifications in a Scalable Feed Forward Network

Номер: US20130307849A1
Принадлежит: The Boeing Company

Technologies are described herein for generating a three-dimensional display. Some technologies are adapted to retrieve a model defining a feed-forward network related to a development process. The technologies generate a first three-dimensional shape representing each internal product according to the model. The technologies also generate a second three-dimensional shape representing each dependency of each internal product corresponding to each first three-dimensional shape. The technologies further generate a third three-dimensional shape representing each component of each dependency corresponding to each second three-dimensional shape. 1. A computer-implemented method for generating a three-dimensional display, the method comprising computer-implemented operations for: retrieving a model defining a feed-forward network related to a development process; generating a first three-dimensional shape representing each internal product according to the model; generating a second three-dimensional shape representing each dependency of each internal product corresponding to each first three-dimensional shape; and generating a third three-dimensional shape representing each component of each dependency corresponding to each second three-dimensional shape. 2. The computer-implemented method of claim 1, wherein the first three-dimensional shape comprises a rectangular cuboid. 3. The computer-implemented method of claim 1, wherein each internal product comprises a product aggregate or an aggregated product. 4. The computer-implemented method of claim 1, wherein the second three-dimensional shape comprises a cone. 5. The computer-implemented method of claim 1, wherein the third three-dimensional shape comprises a disk-shaped cylinder. 6. The computer-implemented method of claim 1, further comprising computer-implemented operations for: generating first lines connecting each second three-dimensional shape to the corresponding third three-dimensional shapes; generating second lines ...

Подробнее
13-02-2014 дата публикации

FACETTED BROWSING

Номер: US20140043325A1
Принадлежит:

Concepts and technologies are described herein for facetted browsing. In accordance with the concepts and technologies disclosed herein, data can be obtained at a computer system. The data can include data values and geographic information. The computer system can generate a geospatial visualization of the data based, at least partially, upon the data values and the geographic location information. The computer system can also generate an overlay visualization of the data based, at least partially, upon the data values. The computer system can also output the geospatial visualization and the overlay visualization. 1. A computer-implemented method for facetted browsing , the computer-implemented method comprising performing computer-implemented operations for:obtaining, at a computer system executing a visualization component, data including data values and geographic location information;generating, by the computer system, a geospatial visualization of the data based, at least partially, upon the data values and the geographic location information;outputting, by the computer system, the geospatial visualization;generating, by the computer system, an overlay visualization of the data based, at least partially, upon the data values; andoutputting, by the computer system, the overlay visualization.2. The method of claim 1 , wherein the data further comprises temporal data.3. The method of claim 1 , wherein the geospatial visualization comprises a three-dimensional visualization and the overlay visualization comprises a two-dimensional visualization.4. The method of claim 1 , further comprising:receiving, at the computer system, an input via the overlay visualization to perform an action; andin response to the input, updating, by the computer system, the geospatial visualization.5. The method of claim 4 , wherein the action comprises a brushing action performed over a portion of the overlay visualization claim 4 , and wherein updating the geospatial visualization ...

Подробнее
20-02-2014 дата публикации

DATA PLOT PROCESSING

Номер: US20140049538A1
Принадлежит:

A method, system, and/or computer program product processes a data plot comprising a plurality of data points for inclusion of additional information content. A space of the data plot is divided into subspaces, where each subspace contains at least one data point of the data plot. An available area for each subspace is computed, and then a compressed information representation for each subspace is computed based on information about said at least one data point contained in said each subspace and a computed available area for said each subspace. 1. A method of processing a data plot comprising a plurality of data points for inclusion of additional information content , the method comprising:dividing, by one or more processors, a space of the data plot into subspaces, wherein each subspace contains at least one data point of the data plot;computing, by one or more processors, an available area for each subspace; andcomputing, by one or more processors, a compressed information representation for each subspace based on information about said at least one data point contained in said each subspace and a computed available area for said each subspace.2. The method of claim 1 , further comprising:dividing, by one or more processors, the space of the data plot into non-overlapping subspaces using a Voronoi decomposition, wherein, for each subspace, a Voronoi site for the subspace is based on said at least one data point contained in said each subspace.3. The method of claim 1 , further comprising:in response to the size of a subspace being smaller than a predetermined value, merging, by one or more processors, said each subspace with a neighbouring subspace.4. The method claim 3 , further comprising:identifying, by one or more processors, subspaces that have a size smaller than the predetermined value; andmerging, by one or more processors, the subspaces determined to have a size smaller than the predetermined value with the neighbouring subspace, wherein an order of ...
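The merging step of claims 3 and 4 — folding any subspace whose available area falls below a threshold into a neighbouring subspace, smallest first — can be sketched with a plain dictionary model. The data layout and merge order here are assumptions for illustration, not the patent's representation.

```python
def merge_small_subspaces(areas, min_area, neighbours):
    """Merge each subspace whose available area is below min_area into its
    neighbouring subspace, processing the smallest subspaces first.
    `areas` maps subspace id -> available area; `neighbours` maps
    subspace id -> the id of its designated neighbour. Illustrative model."""
    merged = dict(areas)
    for sid in sorted(areas, key=areas.get):  # smallest available area first
        if sid in merged and merged[sid] < min_area:
            nb = neighbours[sid]
            if nb in merged:
                merged[nb] += merged.pop(sid)  # fold area into the neighbour
    return merged

areas = {"A": 0.5, "B": 4.0, "C": 3.0}
result = merge_small_subspaces(areas, min_area=1.0,
                               neighbours={"A": "B", "B": "C", "C": "B"})
print(result)  # subspace A is merged into its neighbour B
```

After merging, each surviving subspace's area bounds the size of the compressed information representation computed for it.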

Подробнее
06-03-2014 дата публикации

APPARATUS AND METHOD FOR DISPLAY

Номер: US20140063005A1
Принадлежит: SAMSUNG ELECTRONICS CO., LTD.

Display apparatus and method are provided. The display apparatus may include a receiver for receiving an image; a grouper for analyzing the received image and grouping a plurality of frames of the received image based on the analysis; a depth allocator for determining at least two key frames from a plurality of frames grouped into at least one group, and allocating a depth per object in the determined key frames; and an image generator for generating a 3D image corresponding to other frames excluding the key frames based on a depth value allocated to the key frames. Hence, the display apparatus can allocate the depth value of a higher quality to the object in the frames of the received image. 1. A display apparatus comprising:a receiver to receive an image;a grouper to analyze the received image and to group a plurality of frames of the received image based on the analysis into at least one group;a depth allocator to determine at least two key frames from the plurality of frames of the at least one group, and to allocate a depth value per object in the determined key frames; andan image generator to generate a 3D image corresponding to other frames excluding the key frames based on the depth value allocated to the key frames using at least one processor.2. The display apparatus of claim 1 , further comprising:an image analyzer to detect motion information of an object in the frames grouped into the at least one group,wherein the image generator generates the 3D image corresponding to the other frames excluding the key frames based on the detected motion information and the depth value allocated to the key frames.3. The display apparatus of claim 2 , wherein the image generator comprises:a position determiner to determine an object position in the other frames based on the detected motion information;a frame generator to estimate a depth value of the positioned object based on the depth value allocated to the key frames, and to generate the 3D image frame ...
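The core of generating 3D frames between key frames — estimating an object's depth in a non-key frame from the depth values allocated to the two bracketing key frames — can be sketched as a linear interpolation. This is a minimal sketch under assumed data shapes, not the apparatus's actual estimator (which also uses detected motion information).

```python
def interpolate_depth(key_depths, frame_index):
    """Estimate an object's depth in a non-key frame by linearly
    interpolating between the two key frames that bracket it.
    key_depths maps key-frame index -> allocated depth. Sketch only."""
    keys = sorted(key_depths)
    lo = max(k for k in keys if k <= frame_index)
    hi = min(k for k in keys if k >= frame_index)
    if lo == hi:
        return key_depths[lo]
    t = (frame_index - lo) / (hi - lo)
    return (1 - t) * key_depths[lo] + t * key_depths[hi]

# Depth allocated at key frames 0 and 10; estimate frame 5 in between.
print(interpolate_depth({0: 10.0, 10: 20.0}, 5))  # 15.0
```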

Подробнее
13-03-2014 дата публикации

Multi-core geometry processing in a tile based rendering system

Номер: US20140071122A1
Автор: John W. Howson
Принадлежит: Imagination Technologies Ltd

A method and an apparatus are provided for combining multiple independent tile-based graphic cores. A block of geometry, containing a plurality of triangles, is split into sub-portions and sent to different geometry processing units. Each geometry processing unit generates a separate tiled geometry list that contains interleave markers that indicate an end to a sub-portion of a block of geometry overlapping a particular tile, processed by that geometry processing unit, and an end marker that identifies an end to all geometry processed for a particular tile by that geometry processing unit. The interleave markers are used to control an order of presentation of geometry to a hidden surface removal unit for a particular tile, and the end markers are used to control when the tile reference lists, for a particular tile, have been completely traversed.
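The role of the interleave and end markers — controlling the order in which each core's geometry sub-portions for a tile are presented to hidden surface removal — can be sketched by round-robin consumption of per-core tile lists. The list encoding (marker strings in Python lists) is an assumption for illustration only.

```python
def merge_tile_lists(core_lists):
    """Interleave per-core tiled geometry lists for one tile: consume one
    sub-portion (up to an interleave marker) from each core in turn, until
    every core's end marker has been reached. Markers are modelled as
    strings; real lists would use binary markers. Illustrative sketch."""
    iters = [iter(lst) for lst in core_lists]
    done = [False] * len(core_lists)
    out = []
    while not all(done):
        for i, it in enumerate(iters):
            if done[i]:
                continue
            for item in it:
                if item == "INTERLEAVE":
                    break          # sub-portion finished: next core's turn
                if item == "END":
                    done[i] = True  # this core's tile list fully traversed
                    break
                out.append(item)    # geometry reference for this tile
    return out

core0 = ["g0", "g1", "INTERLEAVE", "g4", "END"]
core1 = ["g2", "g3", "INTERLEAVE", "END"]
print(merge_tile_lists([core0, core1]))  # ['g0', 'g1', 'g2', 'g3', 'g4']
```

The output order preserves original submission order across cores, which is what the hidden surface removal unit requires.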

Подробнее
20-03-2014 дата публикации

Virtual 3D Paper

Номер: US20140078135A1
Автор: KA YAN SO
Принадлежит: SKY88 TECHNOLOGY LIMITED

The invention discloses a virtual 3D paper, comprising a data reader () for obtaining data, a multi touch gesture recognition engine () for receiving and recognizing multi touch signals, an event dispatching engine () for dispatching events according to the action of the multi touch gesture recognition engine (), an editing module () for editing data obtained by the data reader (), a rendering module () for rendering data edited by the editing module (), a display monitor () for displaying the rendered results of the rendering module (), and a data exporter () for exporting the rendered results. The virtual 3D paper supports multi-point touch, can recognize various kinds of gestures and read different types of files, and thus is more practical, more realistic and offers a much better user experience. 1. A virtual 3D paper, wherein it comprises a data reader () for obtaining data, a multi touch gesture recognition engine () for receiving and recognizing multi-point touch signals, an event dispatching engine () for dispatching events in accordance with the action of the multi touch gesture recognition engine (), an editing module () for editing data obtained by the data reader (), a rendering module () for rendering data edited by the editing module (), a display monitor () for displaying the rendered results of the rendering module (), and a data exporter () for exporting the rendered results. 2. The virtual 3D paper of claim 1, wherein the editing module () comprises a content editor () for editing content, and a graphic editor () for editing graphics. 3. The virtual 3D paper of claim 1, wherein the virtual 3D paper also comprises a geometry transformation engine () for geometrically transforming graphics and a geometry deformation engine () for geometrically deforming graphics. 4. The virtual 3D paper of claim 3, wherein the rendering module () comprises a live content rendering engine () for rendering content and a 3D graphic rendering engine () for 3D rendering of graphics. 5. The virtual 3D paper of claim ...

Подробнее
01-01-2015 дата публикации

SPACE CARVING BASED ON HUMAN PHYSICAL DATA

Номер: US20150002507A1
Принадлежит:

Technology is described for (3D) space carving of a user environment based on movement through the user environment of one or more users wearing a near-eye display (NED) system. One or more sensors on the near-eye display (NED) system provide sensor data from which a distance and direction of movement can be determined. Spatial dimensions for a navigable path can be represented based on user height data and user width data of the one or more users who have traversed the path. Space carving data identifying carved out space can be stored in a 3D space carving model of the user environment. The navigable paths can also be related to position data in another kind of 3D mapping like a 3D surface reconstruction mesh model of the user environment generated from depth images. 1. A method for three dimensional (3D) space carving of a user environment based on movement through the user environment of one or more users wearing a near-eye display (NED) system comprising:identifying by one or more processors one or more navigable paths traversed by one or more users wearing the NED system in a user environment based on sensor data from one or more sensors on the near-eye display (NED) system;merging overlapping portions of the one or more navigable paths traversed by the one or more users; andstoring position and spatial dimensions for the one or more navigable paths as carved out space in human space carving data in a 3D space carving model of the user environment.2. The method of further comprising retrieving a stored 3D mapping of the user environment and relating positions of the carved out space to the retrieved 3D mapping.3. The method of further comprising generating a 3D space carved mapping of the user environment.4. The method of wherein generating a 3D space carved mapping of the user environment further comprises:detecting one or more object boundaries by the one or more processors by distinguishing carved out space and uncarved space based on the human space ...

Подробнее
06-01-2022 дата публикации

SYSTEM AND METHOD FOR EFFICIENT MULTI-GPU RENDERING OF GEOMETRY BY SUBDIVIDING GEOMETRY

Номер: US20220005146A1
Автор: Cerny Mark E.
Принадлежит:

A method for graphics processing. The method including rendering graphics for an application using graphics processing units (GPUs). The method including using the plurality of GPUs in collaboration to render an image frame including a plurality of pieces of geometry. The method including during the rendering of the image frame, subdividing one or more of the plurality of pieces of geometry into smaller pieces, and dividing the responsibility for rendering these smaller portions of geometry among the plurality of GPUs, wherein each of the smaller portions of geometry is processed by a corresponding GPU. The method including for those pieces of geometry that are not subdivided, dividing the responsibility for rendering the pieces of geometry among the plurality of GPUs, wherein each of these pieces of geometry is processed by a corresponding GPU. 1. A method for graphics processing, comprising: rendering graphics for an application using a plurality of graphics processing units (GPUs); using the plurality of GPUs in collaboration to render an image frame including a plurality of pieces of geometry; during the rendering of the image frame, subdividing one or more of the plurality of pieces of geometry into smaller pieces, and dividing the responsibility for rendering these smaller portions of geometry among the plurality of GPUs, wherein each of the smaller portions of geometry is processed by a corresponding GPU; and for those pieces of geometry that are not subdivided, dividing the responsibility for rendering the pieces of geometry among the plurality of GPUs, wherein each of these pieces of geometry is processed by a corresponding GPU. 2. The method of claim 1, wherein a process for rendering the image frame includes a geometry analysis phase of rendering, or a Z pre-pass phase of rendering, or a geometry pass phase of rendering. 3.
The method of claim 2 , further comprising:during the geometry analysis phase of rendering, or Z pre-pass phase of ...

Подробнее
07-01-2016 дата публикации

METHOD AND DEVICE FOR ENRICHING THE CONTENT OF A DEPTH MAP

Номер: US20160005213A1
Принадлежит:

A method and device for enriching the content associated with a first element of a depth map, the depth map being associated with a scene according to a point of view. Thereafter, at least a first information representative of a variation of depth in the first element in the space of the depth map is stored into the depth map. 1-15. (canceled) 16. A method for generating a depth map associated with a scene, wherein depth information is associated with each first element of a plurality of first elements of the depth map, the method comprising storing at least a first information in the depth map in addition to the depth information, said at least a first information being associated with said each first element and representative of a variation of depth in said each first element in the space of the depth map. 17. The method according to claim 16, wherein the at least a first information is established from a single surface element of the scene. 18. The method according to claim 17, wherein the at least a first information is established from said depth information associated with said each first element and from depth information associated with at least a second element, said each first element and the at least a second element belonging to said single surface element of the scene projected into the depth map. 19. The method according to claim 18, wherein said each first element and the at least a second element are adjacent. 20. The method according to claim 18, wherein the at least a first information is established by computing the ratio of the difference between the depth information associated with said each first element and the depth information associated with the at least a second element to the distance between said each first element and the at least a second element. 21.
The method according to claim 17 , wherein the at least a first information is established from an equation of said single surface element of the scene projected into the ...
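The computation stated in claim 20 — the ratio of the depth difference between a first element and an adjacent second element to the distance between them, i.e. a per-element depth slope — can be written directly. The function name is an assumption; the formula follows the claim wording.

```python
def depth_variation(depth_first, depth_second, distance):
    """First information per claim 20: (depth of first element - depth of
    adjacent second element) / distance between the two elements in the
    space of the depth map. Sketch of the stated formula."""
    return (depth_first - depth_second) / distance

# Adjacent depth-map elements one unit apart, depths 2.0 and 1.5.
print(depth_variation(2.0, 1.5, 1.0))  # 0.5 depth units per map element
```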

Подробнее
13-01-2022 дата публикации

SYSTEMS AND METHODS FOR ADAPTIVE VISUAL AND TEMPORAL QUALITY OF TIME-DYNAMIC (4D) VOLUME RENDERING

Номер: US20220012937A1
Принадлежит:

Systems, methods, devices, and non-transitory media of various embodiments enable rendering of a time-dynamic (4D) volume dataset. Various embodiments may provide a method for responsive and high quality rendering of time-dynamic hierarchical level-of-detail voxel datasets. Various embodiments may provide a prioritization system that balances visual quality and temporal responsiveness even with slow network or filesystem speeds. Various embodiments may provide a compact and efficient storage format for time-dynamic and mixed-resolution voxel rendering on a graphics processing unit (GPU). 1. A method for rendering at least a portion of a time-dynamic (4D) volume dataset on a two-dimensional (2D) display , comprising:requesting one or more keyframe nodes associated with one or more spatial nodes in a sparse voxel octree of the 4D volume dataset based at least in part on a keyframe node prioritization, wherein the keyframe node prioritization is based at least in part on a screen-space-error (SSE) priority value, a temporal priority value, and a random selection priority value;storing received keyframe node data in a three-dimensional (3D) texture atlas storing voxel data of the 4D volume dataset;populating an array encoded sparse voxel octree of spatial nodes to be rendered with two keyframe nodes per spatial node to be rendered from the 3D texture atlas; andsending the array encoded sparse voxel octree for rendering on the 2D display.2. The method of claim 1 , wherein requesting the one or more keyframe nodes associated with the one or more spatial nodes in the sparse voxel octree of the 4D volume dataset based at least in part on the keyframe node prioritization comprises:determining for each spatial node in the sparse voxel octree a respective list of one or more keyframe nodes, wherein each keyframe node in each respective list references a unique point in time and has a same spatial location and same level of detail as its associated spatial node;storing the ...
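The keyframe node prioritization described above combines a screen-space-error (SSE) term, a temporal term, and a random selection term. A minimal sketch is a weighted sum; the weights below are assumptions for illustration, not values from the patent.

```python
import random

def keyframe_priority(sse, temporal, rng=random):
    """Combine the three priority terms named in claim 1 (screen-space
    error, temporal closeness, random tie-breaker) into one scalar used to
    order keyframe-node requests. Weights are illustrative assumptions."""
    w_sse, w_temporal, w_random = 0.6, 0.3, 0.1
    return w_sse * sse + w_temporal * temporal + w_random * rng.random()

# (node id, normalized SSE priority, normalized temporal priority)
requests = [("nodeA", 0.9, 0.2), ("nodeB", 0.1, 0.9)]
ranked = sorted(requests, key=lambda r: -keyframe_priority(r[1], r[2]))
print(ranked[0][0])  # nodeA: its screen-space error dominates at these weights
```

The random term keeps low-priority nodes from starving entirely when the network is slow, which is the responsiveness/quality balance the abstract describes.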

Подробнее
04-01-2018 дата публикации

DEVICES AND METHODS FOR GENERATING ELEMENTARY GEOMETRIES

Номер: US20180005427A1
Принадлежит:

Elementary geometries for rendering objects of a 3D scene are generated from input geometry data sets. Instructions of a source program are transformed into a code executable in a rendering pipeline by at least one graphics processor, by segmenting the source program into sub-programs, each adapted to process the input data sets, and by ordering the sub-programs in function of the instructions. Each ordered sub-program is configured in the executable code for being executed only after the preceding sub-program has been executed for all input data sets. Launching the execution of instructions to generate elementary geometries includes determining among the sub-programs a starting sub-program, deactivating all sub-programs preceding it and activating it as well as all sub-programs following it. Modularity is thereby introduced in generating elementary geometries, allowing time-efficient lazy execution of grammar rules. 1. An execution pipeline device comprising at least one graphics processor configured to launch the execution of instructions adapted to generate elementary geometries usable for rendering at least one object of a 3D scene , from input geometry data sets ,said instructions being grouped into at least two ordered sub-programs, each comprising a part of said instructions and being adapted to process said input geometry data sets according to rules associated with a node of a dataflow graph, and each of said sub-programs that follows a preceding of said sub-programs being arranged for being executed only after said preceding sub-program has been executed for all said input geometry data sets,said execution pipeline device further comprising a transform feedback module-implementing a transform feedback mechanism configured to associate each of said ordered sub-programs with a respective Vertex Buffer Object in order to execute said sub-programs that follows a preceding of said sub-programs being arranged for being executed only after said preceding sub- ...

Подробнее
02-01-2020 дата публикации

IMAGING SYSTEM AND METHOD PROVIDING SCALABLE RESOLUTION IN MULTI-DIMENSIONAL IMAGE DATA

Номер: US20200005452A1
Принадлежит:

An imaging system and method acquire first ultrasound image data of a body at a first acquisition quality level, display one or more two-dimensional images of the body using the image data at the first acquisition quality level, and create second ultrasound image data at a reduced, second acquisition quality level. The second ultrasound image data is created from the first ultrasound image data that was acquired at the first acquisition quality level. The system and method also display a rendered multi-dimensional image of the body using the second ultrasound image data at the reduced, second acquisition quality level. 1. A method comprising:acquiring first ultrasound image data of a body at a first acquisition quality level;displaying one or more two-dimensional images of the body using the image data at the first acquisition quality level;creating second ultrasound image data at a reduced, second acquisition quality level, the second ultrasound image data created from the first ultrasound image data that was acquired at the first acquisition quality level; anddisplaying a rendered multi-dimensional image of the body using the second ultrasound image data at the reduced, second acquisition quality level.2. The method of claim 1 , wherein the first acquisition quality level is a first spatial resolution of the first ultrasound image data and the second acquisition quality level is a reduced claim 1 , second spatial resolution of the first ultrasound image data.3. The method of claim 1 , wherein creating the second ultrasound image data includes reducing a spatial resolution of the first ultrasound image data.4. The method of claim 1 , wherein creating the second ultrasound image data includes downsampling the first ultrasound image data.5. The method of claim 1 , wherein creating the second ultrasound image data includes averaging values of one or more of pixels or voxels in the first ultrasound image data.6. The method of claim 1 , wherein creating the second ...
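One of the reductions the claims list — creating the second, lower-quality data by averaging pixel or voxel values of the first data — can be sketched as block averaging. The 2D case is shown for brevity; the voxel case adds one axis. Function name and the even-divisibility assumption are illustrative.

```python
import numpy as np

def downsample_by_averaging(img, factor):
    """Create reduced-quality image data by averaging factor x factor pixel
    blocks of the first-quality data, per the claims' averaging option.
    Assumes both image dimensions divide evenly by `factor`."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)  # stand-in ultrasound frame
print(downsample_by_averaging(img, 2))
# [[ 2.5  4.5]
#  [10.5 12.5]]
```

The rendered multi-dimensional view would then be built from this reduced data while the 2D views keep the original acquisition quality.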

Подробнее
04-01-2018 дата публикации

METHOD OF HIDING AN OBJECT IN AN IMAGE OR VIDEO AND ASSOCIATED AUGMENTED REALITY PROCESS

Номер: US20180005448A1
Принадлежит:

A method for generating a final image from an initial image including an object suitable to be worn by an individual. The presence of the object in the initial image is detected. A first layer is superposed on the initial image. The first layer includes a mask at least partially covering the object in the initial image. The appearance of at least one part of the mask is modified. The suppression of all or part of an object in an image or a video is enabled. Also, a process of augmented reality intended to be used by an individual wearing a vision device on the face, and a try-on device for a virtual object. 1. A method for generating a final image from an initial image comprising a physical object suitable to be worn by an individual , comprising the steps of:acquiring an image of the individual wearing the physical object on the individual's face, said image being the initial image;detecting a presence of said physical object in the initial image;generating a mask at least partially covering the physical object in the initial image;superposing a first layer on the initial image, the first layer including the mask at least partially covering the physical object in the initial image;generating a texture reproducing element in a background of the physical object to suppress all or part of the image of the physical object in the final image; andmodifying an appearance of at least one part of the mask by applying to the mask the generated texture.2. The image generation method according to claim 1 , wherein the modification of the appearance of the mask comprises a step of substitution of a texture of all or part of the object in the final image.3. (canceled)4. The image generation method according to claim 1 , wherein the mask also covers all or part of a shadow cast by the object.5. 
The image generation method according to claim 1 , further comprising the steps of superposing a second layer on the initial image over the first layer claim 1 , the second layer including ...

Подробнее
02-01-2020 дата публикации

POINT CLOUD DATA HIERARCHY

Номер: US20200005533A1
Принадлежит: WILLOW GARAGE, INC.

One method embodiment comprises storing data on a storage system that is representative of a point cloud comprising a very large number of associated points; organizing the data into an octree hierarchy of data sectors, each of which is representative of one or more of the points at a given octree mesh resolution; receiving a command from a user of a user interface to present an image based at least in part upon a selected viewing perspective origin and vector; and assembling the image based at least in part upon the selected origin and vector, the image comprising a plurality of data sectors pulled from the octree hierarchy, the plurality of data sectors being assembled such that sectors representative of points closer to the selected viewing origin have a higher octree mesh resolution than that of sectors representative of points farther away from the selected viewing origin. 1. A method for presenting multi-resolution views of a very large point data set , comprising:a. storing data on a storage system that is representative of a point cloud comprising a very large number of associated points;b. organizing the data into an octree hierarchy of data sectors, each of which is representative of one or more of the points at a given octree mesh resolution;c. receiving a command from a user of a user interface to present an image based at least in part upon a selected viewing perspective origin and vector; andd. assembling the image based at least in part upon the selected origin and vector, the image comprising a plurality of data sectors pulled from the octree hierarchy, the plurality of data sectors being assembled such that sectors representative of points closer to the selected viewing origin have a higher octree mesh resolution than that of sectors representative of points farther away from the selected viewing origin.2. The method of claim 1 , wherein storing comprises accessing a storage cluster.3. 
The method of claim 1 , further comprising using a network to ...

Подробнее
03-01-2019 дата публикации

METHOD AND DEVICE FOR GENERATING DESKTOP EFFECT AND ELECTRONIC DEVICE

Номер: US20190005708A1
Автор: Shao Wenbin

The present disclosure provides a method and a device for generating a desktop effect and an electronic device. The method includes: obtaining an image captured by a camera; loading the image as a texture map to a quadrilateral object to generate a first object; loading the first object to a container object to generate a second object; adding the second object to a preset service; and replacing a desktop wallpaper service with the preset service, and displaying a desktop effect corresponding to the preset service. 1. A method for generating a desktop effect, comprising: obtaining an image captured by a camera; loading the image as a texture map to a quadrilateral object to generate a first object; loading the first object to a container object to generate a second object; adding the second object to a preset service; and replacing a desktop wallpaper service with the preset service, and displaying a desktop effect corresponding to the preset service. 2. The method according to claim 1, further comprising: receiving a triggering operation from a user after displaying the desktop effect corresponding to the preset service; and displaying a corresponding animation effect according to the triggering operation. 3. The method according to claim 1, wherein loading the image as a texture map to a quadrilateral object to generate a first object comprises: converting the image into the texture map based on a 3D engine; and loading the texture map to the quadrilateral object via a shader program to generate the first object, wherein the first object is a 3D object. 4. The method according to claim 1, further comprising: judging whether the camera is successfully started before obtaining the image captured by the camera; when the camera is successfully started, obtaining the image captured by the camera; when the camera fails to start, obtaining a default image. 5.
The method according to claim 1 , further comprising:defining a plurality of attributes of the camera, wherein the attributes ...

Подробнее
09-01-2020 дата публикации

VIRTUAL REALITY IMAGING OF THE BRAIN

Номер: US20200008679A1
Автор: SETTY Yaakov
Принадлежит:

A method for imaging the brain of a living patient includes creating and displaying an unconstrained 3D virtual reality image of the brain of the living patient based on a plurality of 3D images captured by magnetic resonance imaging (MRI). An administration of a drug into brain tissue is simulated. The simulation includes displaying a simulated diffusion of the drug in the 3D VR image of the brain; displaying simulated brain tissue uptake of the drug in the 3D VR image of the brain; displaying a simulated stimulation of individual neurons in the 3D VR image of the brain; and analyzing a simulated activity of the individual neurons based on at least one predetermined property of the drug. The method further includes determining a brain treatment protocol based at least in part on the simulated administration of the drug into the brain. 1. A method of imaging a brain of a living patient, comprising: simulating an administration of a drug into brain tissue of a living patient, the simulation including: displaying a simulated diffusion of the drug in a three-dimensional virtual reality image of the brain of the living patient; displaying simulated brain tissue uptake of the drug in the three-dimensional virtual reality image of the brain of the living patient; displaying a simulated stimulation of individual neurons in the three-dimensional virtual reality image of the brain of the living patient; and analyzing a simulated activity of the individual neurons based on at least one predetermined property of the drug; and determining a brain treatment protocol based at least in part on the simulated administration of the drug into the brain of the living patient. 2. The method of claim 1, wherein the three-dimensional virtual reality image of the brain is created and displayed in real-time. 3. The method of claim 1, wherein the simulated administration of the drug into the brain is intranasal. 4.
The method of claim 1, wherein the simulated administration of the drug ...

Подробнее
12-01-2017 дата публикации

METHODS AND SYSTEMS FOR THREE-DIMENSIONAL VISUALIZATION OF DEVIATION OF VOLUMETRIC STRUCTURES WITH COLORED SURFACE STRUCTURES

Номер: US20170011546A1
Принадлежит:

Embodiments of the present disclosure are directed to methods and computer systems for converting datasets into three-dimensional (“3D”) mesh surface visualizations, displaying the mesh surface on a computer display, comparing two three-dimensional mesh surface structures by blending two different primary colors to create a secondary color, and computing the distance between two three-dimensional mesh surface structures converted from two closely-matched datasets. For qualitative analysis, the system includes a three-dimensional structure comparison control engine that is configured to convert a dataset with a three-dimensional structure into three-dimensional surfaces with mesh surface visualization. The control engine is also configured to assign a color and translucency value to the three-dimensional surface for the user to perform qualitative comparison analysis. For quantitative analysis, the control engine is configured to compute the distance field between two closely-matched datasets. 1. A computer-implemented method for blending two three-dimensional volumetric structures, comprising: associating a first primary color and a first translucency to a first dataset, and a second primary color and a second translucency to a second dataset, each of the first dataset and the second dataset having a respective three-dimensional surface representation; and blending the first and second datasets by visualizing the translucent surfaces of the first and second datasets to produce a blended representation of the first and second translucent surface datasets simultaneously, the blended representation including the first primary color, the second primary color, and a secondary color. 2. The method of claim 1, after the blending step, further comprising transforming the first and second three-dimensional datasets, the first and second three-dimensional datasets having a geometric relationship. 3. The method of claim 1, wherein the three-dimensional surface ...
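The blending of two translucent, differently colored surfaces described above can be sketched with the standard "over" compositing operator; this is one plausible blending rule, not necessarily the exact one used by the disclosed control engine.

```python
def over(src_rgb, src_a, dst_rgb, dst_a):
    """Composite a translucent front surface over a translucent back one.

    Where the two surfaces overlap, both primary colors survive in the
    result, producing a secondary (mixed) color for visual comparison.
    """
    out_a = src_a + dst_a * (1.0 - src_a)
    out_rgb = tuple(
        (s * src_a + d * dst_a * (1.0 - src_a)) / out_a
        for s, d in zip(src_rgb, dst_rgb)
    )
    return out_rgb, out_a

# A half-transparent red surface in front of a half-transparent blue one:
# the overlap region shows a purple secondary tone.
rgb, alpha = over((1.0, 0.0, 0.0), 0.5, (0.0, 0.0, 1.0), 0.5)
```

Regions where only one primary color appears then indicate where the two datasets deviate, which is the qualitative comparison the record describes.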

Подробнее
11-01-2018 дата публикации

METHOD FOR DEPICTING AN OBJECT

Номер: US20180012394A1

The invention relates to technologies for visualizing a three-dimensional (3D) image. According to the claimed method, a 3D model is generated, images of an object are produced, the 3D model is visualized, the 3D model together with a reference pattern and also coordinates of texturing portions corresponding to polygons of the 3D model are stored in a depiction device, at least one frame of the image of the object is produced, the object in the frame is identified on the basis of the reference pattern, a matrix of conversion of photo image coordinates into dedicated coordinates is generated, and elements of the 3D model are coloured in the colours of the corresponding elements of the image by generating a texture of the image sensing area using the coordinate conversion matrix and data interpolation, with subsequent designation of the texture of the 3D model. 1.-16. (canceled) 17. A method of displaying a virtual object on a computing device comprising a memory, a camera, and a display, the memory being adapted to store at least one reference image and at least one 3D model, wherein each reference image is associated with one 3D model, the method comprising: acquiring an image from the camera; recognizing the virtual object on the acquired image based upon a reference image; forming a 3D model associated with the reference image; forming a transformation matrix for juxtaposing coordinates of the acquired image with coordinates of the 3D model; juxtaposing coordinates of texturized sections of the acquired image to corresponding sections of the 3D model; painting the sections of the 3D model using colors and textures of the corresponding sections of the acquired image; and displaying the 3D model over a video stream using augmented reality tools and/or computer vision algorithms. 18. The method of claim 17, wherein the 3D model is represented by polygons. 19.
The method of claim 18 , wherein the transformation matrix is adapted to juxtapose coordinates of the texturized ...
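The transformation matrix that juxtaposes acquired-image coordinates with model coordinates can be sketched as a 3×3 homography applied to a point; the matrix values below are hypothetical, standing in for one estimated from the reference image.

```python
def apply_h(H, x, y):
    """Map photo-image coordinates (x, y) through a 3x3 homography H
    (row-major nested lists) into model texture coordinates."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w  # homogeneous divide

# Hypothetical matrix: scale the photo by 0.5 and shift by (10, 20).
H = [[0.5, 0.0, 10.0],
     [0.0, 0.5, 20.0],
     [0.0, 0.0, 1.0]]
u, v = apply_h(H, 100.0, 60.0)  # -> (60.0, 50.0)
```

Each texturized section of the model is then painted by sampling the acquired image at the mapped coordinates, interpolating between pixels as the record describes.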

Подробнее
14-01-2021 дата публикации

DEEP NOVEL VIEW AND LIGHTING SYNTHESIS FROM SPARSE IMAGES

Номер: US20210012561A1
Принадлежит:

Embodiments are generally directed to generating novel images of an object having a novel viewpoint and a novel lighting direction based on sparse images of the object. A neural network is trained with training images rendered from a 3D model. Utilizing the 3D model, training images, ground truth predictive images from particular viewpoint(s), and ground truth predictive depth maps of the ground truth predictive images, can be easily generated and fed back through the neural network for training. Once trained, the neural network can receive a sparse plurality of images of an object, a novel viewpoint, and a novel lighting direction. The neural network can generate a plane sweep volume based on the sparse plurality of images, and calculate depth probabilities for each pixel in the plane sweep volume. A predictive output image of the object, having the novel viewpoint and novel lighting direction, can be generated and output. 1. A computer-implemented method for generating a novel image of an object , the method comprising:generating, by at least one processor with a first portion of a neural network, a plurality of feature maps based on a plurality of images, wherein each image of the plurality of images corresponds to one of a plurality of viewpoints and one of a plurality of lighting directions;generating, by the at least one processor with a second portion of the neural network, a sweeping plane volume and a cost volume, wherein the sweeping plane volume is generated based on a size of the object, a plurality of distances from the object to the plurality of viewpoints, and a number of depth planes in a plurality of depth planes;generating, by the at least one processor with a third portion of the neural network, a relit image for each depth plane of the plurality of depth planes based on the sweeping plane volume and a novel lighting direction, wherein each relit image includes a corresponding plurality of pixels;calculating, by the at least one processor with a 
...
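The plane sweep volume described above is built by slicing the scene into a fixed number of depth planes between the nearest and farthest candidate depths. A common choice, sketched below under that assumption (the record does not fix the spacing rule), is to space the planes uniformly in inverse depth, which samples near geometry more densely.

```python
def sweep_depths(near, far, n):
    """Depth-plane positions for a plane sweep volume, spaced uniformly
    in inverse depth (disparity). Requires n >= 2."""
    inv_near, inv_far = 1.0 / near, 1.0 / far
    return [1.0 / (inv_near + (inv_far - inv_near) * i / (n - 1))
            for i in range(n)]

# Five planes between 1 m and 100 m: dense near the camera, sparse far away.
planes = sweep_depths(1.0, 100.0, 5)
```

The network then scores, per pixel and per plane, how well the input views agree when reprojected onto that plane, yielding the depth probabilities used to assemble the relit output image.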

Подробнее
14-01-2021 дата публикации

PROBE-BASED DYNAMIC GLOBAL ILLUMINATION

Номер: US20210012562A1
Принадлежит:

Global illumination in computer graphics refers to the modeling of how light is bounced off of one or more surfaces in a computer generated image onto other surfaces in the image (i.e. indirect light), rather than simply determining the light that hits a surface in an image directly from a light source (i.e. direct light). Rendering accurate global illumination effects in such images makes them more believable. However, simulating physically-based global illumination with offline numerical solvers has traditionally been time consuming and/or noisy and has not adapted well for dynamic scenes. The present disclosure provides a probe-based dynamic global illumination technique for computer generated scenes. 1. A method, comprising: computing an irradiance field probe of a plurality of irradiance field probes placed in a volume of a scene by: computing, for the irradiance field probe, a diffuse irradiance and statistics of a distance distribution, and encoding the irradiance field probe with the diffuse irradiance and the statistics of the distance distribution. 2. The method of claim 1, wherein the method is performed at scene initialization. 3. The method of claim 1, wherein the irradiance field probe stores information about a point in the scene. 4. The method of claim 1, wherein at least a subset of the plurality of irradiance field probes are volumes of different resolutions. 5. The method of claim 1, wherein encoding the irradiance field probe includes packing the diffuse irradiance and the statistics of the distance distribution as square probe textures into a single two-dimensional (2D) texture atlas with duplicated gutter regions. 6. The method of claim 1, wherein the irradiance field probe is encoded by applying a perception-based exponential encoding to probe irradiance values. 7. The method of claim 1, further comprising: updating one or more irradiance field probes of the plurality of irradiance field probes based on changes to the scene. 8.
The method ...
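The "perception-based exponential encoding" of probe irradiance mentioned in the claims can be sketched as a power-curve encode/decode pair; the exponent below is illustrative, since the record does not fix a value.

```python
GAMMA = 5.0  # illustrative exponent; the disclosure does not specify one

def encode_irradiance(e):
    """Compress irradiance before storing it in a low-precision probe
    texture: dark values, where changes are most visible, get a larger
    share of the representable range."""
    return e ** (1.0 / GAMMA)

def decode_irradiance(t):
    """Invert the encoding when sampling the probe at shading time."""
    return t ** GAMMA

encoded = encode_irradiance(0.01)  # a dark probe texel
```

The encode/decode pair round-trips exactly up to floating-point error, so shading reads back the irradiance the probe update wrote.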

Подробнее
19-01-2017 дата публикации

3D DIGITAL PAINTING

Номер: US20170018112A1
Автор: Vaganov Vladimir
Принадлежит:

A method of digital continuous and simultaneous three-dimensional painting and three-dimensional drawing with steps of providing a digital electronic canvas having at least one display and capable of presenting two pictures for a right eye and a left eye; providing means for continuous changing of a virtual distance between the digital electronic canvas and a painter by digitally changing a horizontal disparity (shifting) between images for the right eye and the left eye on the digital electronic canvas corresponding to instant virtual canvas position; providing at least one multi-axis input control device allowing digital painting or drawing on the digital electronic canvas; painting on the digital electronic canvas for any instant virtual positions of the digital electronic canvas providing simultaneous appearance of a similar stroke on the images for the right eye and the left eye. 1. A method of digital continuous and simultaneous three-dimensional painting and three-dimensional drawing , (and three-dimensional cursor navigating) said method comprising:providing a digital electronic canvas having a screen or display configured for presenting two pictures for a right eye and a left eye;providing means for three-dimensional digital vision;providing means for three-dimensional image presentation comprising a processor;providing means for continuous changing of a virtual distance between the digital electronic canvas and a painter by digitally changing a horizontal (shifting) disparity between images for the right eye and the left eye on the digital electronic canvas corresponding to instant virtual canvas position; wherein a resolution Δ of continuity of changing of the virtual distance Z between the digital electronic canvas and the painter is defined by a size p of a pixel on the digital electronic canvas in horizontal direction and by a distance d between pupils of a painter's eyes according to an expression: Δ≈2p Z/d;providing at least one input control device 
...
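The claim's resolution expression Δ ≈ 2pZ/d relates a one-pixel change of horizontal disparity to the resulting shift of the perceived canvas distance. A direct sketch of that formula, with hypothetical but plausible display parameters:

```python
def depth_step(pixel_size_m, distance_m, pupil_distance_m):
    """Resolution of the virtual canvas distance per the claim's
    expression delta ~ 2 * p * Z / d: a one-pixel change of horizontal
    disparity moves the perceived canvas by delta at viewing distance Z."""
    return 2.0 * pixel_size_m * distance_m / pupil_distance_m

# Hypothetical setup: 0.25 mm pixels, canvas at 1 m, 65 mm pupil distance.
delta = depth_step(0.25e-3, 1.0, 65e-3)  # about 7.7 mm
```

So with these assumed numbers the virtual canvas can be repositioned in steps of roughly 8 mm at one metre, which is what makes the "continuous" distance change of the claim effectively smooth.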

Подробнее
03-02-2022 дата публикации

IMAGE DISPLAY METHOD, IMAGE DISPLAY APPARATUS, AND STORAGE MEDIUM STORING DISPLAY CONTROL PROGRAM

Номер: US20220036862A1
Автор: Yamada Yusuke
Принадлежит:

An image display method includes: displaying a first image having a first image surface on a display surface in a three-dimensional fashion; in response to a reception of an instruction of rotating the first image around an axis different from any axis in the display surface, rotating the first image around a first imaginary axis, the first imaginary axis being vertical to the first image surface and different from an axis vertical to the display surface; and displaying the rotated first image. 1. An image display method comprising:displaying a first image having a first image surface on a display surface in a three-dimensional fashion;in response to a reception of an instruction of rotating the first image around an axis different from any axis in the display surface, rotating the first image around a first imaginary axis, the first imaginary axis being vertical to the first image surface and different from an axis vertical to the display surface; anddisplaying the rotated first image.2. The image display method according to claim 1 , further comprising:displaying, in a two-dimensional fashion, an enlarged image related to the first image displayed in the three-dimensional fashion;in response to the reception of the instruction, rotating the enlarged image around the axis vertical to the display surface; anddisplaying the rotated, enlarged image.3. The image display method according to claim 2 , whereinin response to the reception of the instruction, the first image and the enlarged image are rotated in conjunction with each other.4. The image display method according to claim 1 , further comprising:displaying a second image having a second image surface on the display surface in the three-dimensional fashion;in response to the reception of the instruction, rotating the second image around a second imaginary axis, the second imaginary axis being vertical to the second image surface and different from the axis vertical to the display surface; anddisplaying the ...

Подробнее
18-01-2018 дата публикации

Techniques for Built Environment Representations

Номер: US20180018502A1
Принадлежит:

Described are techniques for indoor mapping and navigation. A reference mobile device includes sensors to capture range, depth and position data and processes such data. The reference mobile device further includes a processor that is configured to process the captured data to generate a 2D or 3D mapping of localization information of the device that is rendered on a display unit, execute object recognition to identify types of installed devices of interest in a part of the 2D or 3D device mapping, and integrate the 3D device mapping in the built environment to objects in the environment through capturing point cloud data along with 2D image or video frame data of the built environment. 1. A system for indoor mapping and navigation comprises: a reference mobile device including sensors to capture range, depth and position data, with the mobile device including a depth perception unit, a position estimator, a heading estimator, and an inertial measurement unit to process data received by the sensors from an environment, the reference mobile device further including a processor configured to: process the captured data to generate a 2D or 3D mapping of localization information of the device that is rendered on a display unit; execute object recognition to identify types of installed devices of interest in a part of the 2D or 3D device mapping; and integrate the 3D device mapping in the built environment to objects in the environment through capturing point cloud data along with 2D image or video frame data of the built environment. 2. The system of claim 1, wherein the 2D or 3D object recognition technique is part of the 3D mapping process. 3. The system of claim 1, wherein reference device models are images or 3D data models or Building Information Modelling (BIM) data. 4. The system of wherein the processor is further configured to: load RGB/RGB-D (three color + one depth) image/point cloud data set of a scene; choose interest points; compute scene ...

Подробнее
18-01-2018 дата публикации

GENERATING PREVIEWS OF 3D OBJECTS

Номер: US20180018810A1
Автор: Morovic Jan, Morovic Peter

Methods and apparatus relating to previews of objects are described. In one example, control data for generating a three-dimensional object specifying, for voxels of the object, at least one print material to be deposited in that voxel during object generation is obtained. A viewing frustum may be determined, and visible voxels within the viewing frustum identified. A number of preview display pixels to display the preview may be determined and the set of voxels to be represented by each preview display pixel identified. At least one voxel appearance parameter of a voxel may be determined from the control data. For each preview display pixel, at least one pixel appearance parameter may be determined by combining voxel appearance parameters for the set of voxels to be represented by that preview display pixel. Preview display pixels may be controlled to display a preview of the object according to the at least one pixel appearance parameter. 1. A method comprising:obtaining control data for generating a three-dimensional object, the control data specifying, for a plurality of voxels of the object, at least one print material to be deposited in that voxel during object generation;determining a viewing frustum;determining voxels visible within the viewing frustum;determining a number of preview display pixels to display a preview of the object;identifying a set of voxels to be represented by each preview display pixel;determining at least one appearance parameter of a voxel from the control data;determining, for each preview display pixel, at least one pixel appearance parameter by combining appearance parameters for the set of voxels to be represented by that preview display pixel;controlling the preview display pixels to display a preview of the object according to the at least one pixel appearance parameter.2. 
A method according to claim 1, in which determining the visible voxels comprises determining a transparency of a first visible voxel and further determining ...
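The per-pixel step of the preview method above, combining the appearance parameters of the set of voxels a preview display pixel represents, can be sketched as follows. A plain average is one simple combination rule; the claims leave the combination function open.

```python
def pixel_appearance(voxel_colors):
    """Combine the (r, g, b) appearance parameters of all voxels
    represented by one preview display pixel into a single pixel
    appearance parameter, here by averaging per channel."""
    n = len(voxel_colors)
    return tuple(sum(c[i] for c in voxel_colors) / n for i in range(3))

# Two voxels behind one preview pixel: white print material over mid-grey.
px = pixel_appearance([(1.0, 1.0, 1.0), (0.5, 0.5, 0.5)])  # -> (0.75, 0.75, 0.75)
```

A transparency-aware variant, as claim 2 hints, would instead weight nearer voxels by their opacity before falling back to deeper ones.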

Подробнее
17-01-2019 дата публикации

SYSTEMS AND METHODS FOR CREATING AND DISPLAYING INTERACTIVE 3D REPRESENTATIONS OF REAL OBJECTS

Номер: US20190019327A1
Автор: Popov Konstantin S.
Принадлежит:

Systems and methods are disclosed for generating a 3D view of an object. At least a 360 degree view of an object is recorded by rotating the object or moving a camera around the object. The data can be used to generate a 3D view that allows users to rotate an item to see the corresponding images. 1. A method for generating a 3D view of an object, the method comprising: capturing image data from a plurality of viewpoints around an object; analyzing the image data for quality; creating a dataset of images based on the image data; filtering the dataset of images; generating data reference parameters; and uploading the dataset of images through a network to a server. 2. The method of claim 1, wherein the image data includes a video. 3. The method of claim 1, wherein the image data includes a plurality of pictures. 4. The method of claim 1, wherein capturing the image data includes rotating the object while capturing the image data using a stationary camera. 5. The method of claim 1, wherein capturing the image data includes moving a camera in an orbit around the object. 6. The method of claim 1, wherein analyzing the image data for quality includes detecting blurriness or artifacts in images included in the image data to identify low quality images. 7. The method of claim 6, further comprising excluding the low quality images from the dataset of images. 8. The method of claim 1, further comprising compensating for non-constant relative rotation of the object and a camera capturing the image data. 9. The method of claim 1, further comprising normalizing a scaling of the object in the image data by resizing at least one image. 10. The method of claim 1, further comprising creating a zoom image dataset including images that are higher resolution versions of images included in the dataset of images. 11. The method of claim 1, wherein the dataset of images includes: a plurality of images of the object from different viewpoints around the object; and for each of the plurality of ...

Подробнее
16-01-2020 дата публикации

SURFACE PATCH TECHNIQUES FOR COMPUTATIONAL GEOMETRY

Номер: US20200019651A1
Автор: Rockwood Alyn P.
Принадлежит:

A method and system for computer aided design (CAD) is disclosed for designing geometric objects, wherein interpolation and/or blending between such objects is performed while deformation data is being input. Thus, a designer obtains immediate feedback to input modifications without separately entering a command(s) for performing such deformations. A novel N-sided surface generation technique is also disclosed herein to efficiently and accurately convert surfaces of high polynomial degree into a collection of lower degree surfaces. E.g., the N-sided surface generation technique disclosed herein subdivides parameter space objects (e.g., polygons) of seven or more sides into a collection of subpolygons, wherein each subpolygon has a reduced number of sides. More particularly, each subpolygon has 3 or 4 sides. The present disclosure is particularly useful for designing the shape of surfaces. Thus, the present disclosure is applicable to various design domains such as the design of, e.g., bottles, vehicles, and watercraft. Additionally, the present disclosure provides for efficient animation via repeatedly modifying surfaces of an animated object such as a representation of a face. 1. obtaining a plurality of object space boundary curves defining the geometric surface patch, wherein each of the boundary curves is represented as a weighted sum, such that: (a) a first plurality of terms of the weighted sum, wherein each term includes a product of: (i) a point L_i on a first tangent to the boundary curve at a first end point of the boundary curve, and (ii) a corresponding weighting for L_i, wherein the point L_i and its corresponding weighting is determined according to a function that is of degree one in its domain space and monotonically varies between a predetermined first value and a predetermined second value, the first value less than the second value; (b) a second plurality of terms of the weighted sum, wherein each term of the ...

Подробнее
22-01-2015 дата публикации

TRIANGLE RASTERIZATION

Номер: US20150022525A1
Принадлежит:

Techniques are disclosed for deriving a list of pixels contained within a projected triangle in a way that is computationally efficient. In particular, the recursive techniques disclosed herein are particularly well-suited for implementation on modern multi-processor computer systems, and enable a list of pixels contained within a projected triangle to be derived quickly and efficiently. For example, in certain embodiments a network of projected triangles is overlaid by a plurality of tiles, which are subsequently divided into an array of sub-tiles, each of which can be processed in parallel by a multi-processor computer system. This recursive process advantageously allows three-dimensional objects to be rendered in a computationally efficient manner. 1. A computer-implemented rasterization method comprising:defining a plurality of projected triangles in a two-dimensional image space, the two-dimensional image space corresponding to a pixel array;dividing the two-dimensional image space into a plurality of tiles, wherein the projected triangles overlap at least a portion of the tiles;processing a first selected tile so as to generate a first list of pixels contained within a first projected triangle overlapping the first selected tile, wherein such processing comprises subdividing the first selected tile into a first plurality of sub-tiles which are classified with respect to at least one edge of the first projected triangle; andprocessing a second selected tile so as to generate a second list of pixels contained within a second projected triangle overlapping the second selected tile, wherein such processing comprises subdividing the second selected tile into a second plurality of sub-tiles which are classified with respect to at least one edge of the second projected triangle;wherein the processing of the first and second selected tiles is performed at least partially simultaneously using respective first and second processor cores of a processor array comprising 
a ...
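The tile processing described above ultimately classifies pixel centres against the three edges of a projected triangle. A minimal sketch of that inner test, using the standard signed-area edge functions (the tile/sub-tile recursion and the parallel dispatch across cores are omitted here for brevity):

```python
def edge(ax, ay, bx, by, px, py):
    """Signed area test: positive when (px, py) lies to the left of
    the directed edge a -> b."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def pixels_in_triangle(tri, x0, y0, size):
    """List the pixel centres of a size x size tile at (x0, y0) that fall
    inside a counter-clockwise projected triangle. A full implementation
    would first test the tile's corners so fully-inside or fully-outside
    sub-tiles can be accepted or rejected without this per-pixel loop."""
    (ax, ay), (bx, by), (cx, cy) = tri
    out = []
    for y in range(y0, y0 + size):
        for x in range(x0, x0 + size):
            px, py = x + 0.5, y + 0.5  # sample at the pixel centre
            if (edge(ax, ay, bx, by, px, py) >= 0 and
                edge(bx, by, cx, cy, px, py) >= 0 and
                edge(cx, cy, ax, ay, px, py) >= 0):
                out.append((x, y))
    return out

tri = ((0.0, 0.0), (8.0, 0.0), (0.0, 8.0))
inside = pixels_in_triangle(tri, 0, 0, 8)  # 36 covered pixels
```

Because each tile's pixel list depends only on that tile and the triangle, different tiles can be handed to different processor cores, which is the parallelism the record claims.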

Подробнее
16-01-2020 дата публикации

Scalable Parallel Tessellation

Номер: US20200020156A1
Автор: Howson John W.
Принадлежит:

Methods and tessellation modules for tessellating a patch to generate tessellated geometry data representing the tessellated patch. Received geometry data representing a patch is processed to identify tessellation factors of the patch. Based on the identified tessellation factors of the patch, tessellation instances to be used in tessellating the patch are determined. The tessellation instances are allocated amongst a plurality of tessellation pipelines that operate in parallel, wherein a respective set of one or more of the tessellation instances is allocated to each of the tessellation pipelines, and wherein each of the tessellation pipelines generates tessellated geometry data associated with the respective allocated set of one or more of the tessellation instances. 1. A method of tessellating a patch to generate tessellated geometry data representing the tessellated patch , the method comprising:processing received geometry data representing a patch to identify tessellation factors of the patch;determining, based on the identified tessellation factors of the patch, tessellation instances to be used in tessellating the patch wherein each of the tessellation instances, determined for the patch, is associated with a portion of tessellated geometry that will be generated when the patch is tessellated so that the tessellated geometry associated with all of the tessellation instances for the patch collectively define the tessellated geometry data for the patch; andallocating the tessellation instances amongst a plurality of tessellation pipelines that operate in parallel, wherein a respective set of one or more of the tessellation instances is allocated to each of the tessellation pipelines, and wherein each of the tessellation pipelines generates the tessellated geometry data associated with the respective allocated set of one or more of the tessellation instances.2. 
The method of claim 1 , wherein said determining tessellation instances to be used in tessellating ...
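The allocation step above, spreading a patch's tessellation instances over parallel pipelines so their outputs collectively cover the tessellated patch, can be sketched with a simple round-robin split (one plausible allocation policy; the record does not mandate a particular one):

```python
def allocate_instances(num_instances, num_pipelines):
    """Round-robin allocation of a patch's tessellation instances across
    parallel tessellation pipelines. Each pipeline later emits the
    tessellated geometry for its own set, and the union of all sets
    covers the whole patch."""
    sets = [[] for _ in range(num_pipelines)]
    for i in range(num_instances):
        sets[i % num_pipelines].append(i)
    return sets

# E.g. tessellation factors implying 10 instances, spread over 4 pipelines:
sets = allocate_instances(10, 4)  # [[0, 4, 8], [1, 5, 9], [2, 6], [3, 7]]
```

Round-robin keeps the per-pipeline instance counts within one of each other, which balances load when instances produce similar amounts of geometry.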

Подробнее
28-01-2016 дата публикации

CLOUD BASED OPERATING SYSTEM AND BROWSER WITH CUBE INTERFACE

Номер: US20160026359A1
Принадлежит:

A user interface and/or browser is provided having a tilted hexagonal cube structure. The cube is rotatable and has advertisements and theme-based window panes on the back and the front of the panes. The cube is rotated in such a fashion that the tilted orientation is maintained constant but the panes revolve about a tilted axis, much as the Earth rotates about its axis. Computer software coordinates the responses of keyboard, mouse and other devices as they interact with the cube itself so as to provide a stimulating 3D graphical user interface. The system is applicable to mobile devices such as smart phones and similar devices. 1. A three dimensional browser interface comprising: a rotatable object having a pane with a category theme. 2. The three dimensional browser interface of claim 1, further comprising: a link within the pane. 3. The three dimensional browser interface of claim 1, further comprising: an icon within the pane. 4. The three dimensional browser interface of claim 3, further comprising: an interactive link associated with the icon. 5. The three dimensional browser interface of claim 1, wherein the rotatable object is a polygon. 6. The three dimensional browser interface of claim 1, further comprising: a plurality of panes connected together wherein each pane is connected to only 2 adjacent panes along edges thereof in a regular polygon structure. 7. A method of creating a graphical user interface GUI comprising the steps of: creating a 3D model of the GUI; generating a 3D animation layout of the GUI; creating a 3D rendering of the GUI; and combining the rendering of the GUI with a user interface routine. 8. The method of creating a graphical user interface GUI of claim 7, wherein the user interface routine is a mouse interaction routine. 9. The method of creating a graphical user interface GUI of claim 7, wherein the user interface routine is a keyboard interaction routine. 10.
The method of creating a graphical user interface GUI of claim 7 , wherein ...

Подробнее
28-01-2016 дата публикации

REAL-TIME IMMERSIVE MEDIATED REALITY EXPERIENCES

Номер: US20160027209A1
Автор: Dalke George, Demirli Oya
Принадлежит:

The invention relates to creating real-time, immersive mediated reality environments using real data collected from a physical event or venue. The invention provides a virtual participant with the ability to control their viewpoint and freely explore the venue, in real time, by synthesizing virtual data corresponding to a requested virtual viewpoint using real images obtained from data collectors or sources at the venue. By tracking and correlating real and virtual viewpoints of virtual participants, physical objects, and data sources, systems and methods of the invention can create photo-realistic images for perspective views for which there is no physically present data source. Systems and methods of the invention also relate to applying effect objects to enhance the immersive experience, including virtual guides, docents, text or audio information, expressive auras, tracking effects, and audio. 1. A system for creating a mediated reality environment, said system comprising a server computing system comprising a processor coupled to a tangible, non-transitory memory, the system operable to: receive, in real-time, real viewpoint information for one or more data collectors located at a venue; receive a virtual viewpoint from a computing device of a virtual participant, said computing device comprising a processor coupled to a tangible, non-transitory memory; receive one or more real-time images from the one or more data collectors where the one or more data collectors have a real viewpoint which intersects the virtual viewpoint, said one or more real-time images comprising a plurality of real pixels; create, using the server's processor, a real-time virtual image comprising a plurality of virtual pixels and corresponding to the virtual viewpoint by using pixel information from the one or more real-time images; and cause the computing device of the virtual participant to display the real-time virtual image. 2.
The system of claim 1, further operable to: identify, using ...
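As a toy illustration of synthesizing a virtual pixel from real collectors whose viewpoints intersect (here, angularly bracket) the requested virtual viewpoint, consider inverse-angular-distance blending; the 1-D viewpoint model, the weighting scheme, and all names are assumptions for illustration, not the patent's actual method:

```python
def blend_virtual_pixel(virtual_angle, real_views):
    """Hypothetical sketch: synthesize one virtual pixel by
    inverse-angular-distance weighting of pixels from real data
    collectors (viewpoints reduced to angles in radians, values 0-255)."""
    weights = []
    for angle, pixel in real_views:
        d = abs(virtual_angle - angle)
        if d < 1e-9:          # a collector sits exactly on the virtual viewpoint
            return pixel
        weights.append((1.0 / d, pixel))
    total = sum(w for w, _ in weights)
    return sum(w * p for w, p in weights) / total

# Virtual viewpoint midway between two collectors -> average of their pixels.
print(blend_virtual_pixel(0.5, [(0.0, 100), (1.0, 200)]))  # -> 150.0
```

A real system would blend per-pixel along epipolar geometry rather than per-viewpoint angle; this only shows the weighting idea.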

Publication date: 25-01-2018

System and method for geometric warping correction in projection mapping

Number: US20180025530A1
Assignee: Christie Digital Systems USA Inc

A system and method for geometric warping correction in projection mapping is provided. A lower resolution mesh is applied to a mesh model, at least in a region of the mesh model misaligned with a corresponding region of a real-world object. One or more points of the lower resolution mesh are moved. In response, one or more corresponding points of the mesh model are moved to increase alignment between the region of the mesh model and the corresponding region of the real-world object. An updated mesh model is stored in a memory, and one or more projectors are controlled to projection-map images corresponding to the updated mesh model onto the real-world object.
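The coarse-to-fine correction described above can be sketched as follows: moving a low-resolution control point drags the fine mesh points by precomputed interpolation weights. The weight scheme and every name here are assumptions for illustration, not the patented algorithm:

```python
def apply_control_offsets(mesh_points, weights, control_offsets):
    """Hypothetical sketch of coarse-to-fine warping: each fine mesh point
    moves by a weighted sum of the coarse control-point offsets.
    weights[i][j] = influence of control point j on fine point i."""
    warped = []
    for (x, y), w in zip(mesh_points, weights):
        dx = sum(wi * ox for wi, (ox, _) in zip(w, control_offsets))
        dy = sum(wi * oy for wi, (_, oy) in zip(w, control_offsets))
        warped.append((x + dx, y + dy))
    return warped

# One fine point influenced equally by two moved control points.
pts = apply_control_offsets([(0.0, 0.0)], [[0.5, 0.5]], [(2.0, 0.0), (0.0, 2.0)])
print(pts)  # -> [(1.0, 1.0)]
```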

Publication date: 10-02-2022

METHOD AND SYSTEM FOR PROVIDING AT LEAST A PORTION OF CONTENT HAVING SIX DEGREES OF FREEDOM MOTION

Number: US20220044351A1
Assignee: KAGENOVA LIMITED

The present invention provides a method for providing at least a portion of content having six degrees of freedom in a virtual environment, comprising: receiving the portion of content for the virtual environment; associating at least one of a first geometric shape and a second geometric shape with the portion of content; projecting the portion of content onto a first point of a surface of the first geometric shape; determining, based on the projecting of the portion of content onto the first point, a first outcome relating to the portion of content at the first position; projecting the portion of content onto a second point of the surface of the first geometric shape or of a surface of the second geometric shape, the second point being different than the first point; determining, based on the projecting of the portion of content onto the second point, a second outcome relating to the portion of content at a second position in the virtual environment, the second position being different than the first position; and reformatting, based on the first outcome and the second outcome, the portion of content to have six degrees of freedom providing rotational motion and positional motion in the virtual environment. 1.
A method for providing at least a portion of content having six degrees-of-freedom in a virtual environment, comprising: a) receiving the portion of content for the virtual environment; b) associating at least one of a first geometric shape and a second geometric shape with the portion of content; c) projecting the portion of content onto a first point of a surface of the first geometric shape; d) based on the output of step c), determining a first outcome relating to the portion of content at the first position; e) projecting the portion of content onto a second point located on the surface of the first geometric shape or a surface of the second geometric shape, the second point being different than the first point; f) based on the output of step e ...

Publication date: 24-04-2014

Viewing Three Dimensional Digital Slides

Number: US20140111509A1
Author: Ole Eichhorn
Assignee: Leica Biosystems Imaging Inc

Systems and methods for providing a view of a digital slide image. In an embodiment, a digital slide image file is accessed. The digital slide image file may comprise a plurality of first image planes representing an image of at least a portion of a slide specimen at varying focal depths. Then, a three-dimensional object is constructed from the digital slide image file. The three-dimensional image object comprises a plurality of second image planes that are derived from one or more of the first image planes and may comprise at least one image plane that has been interpolated from one or more of the first image planes. In addition, a two-dimensional and/or three-dimensional view of the three-dimensional object may be generated.
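An image plane "interpolated from one or more of the first image planes" could, in the simplest case, be a linear blend of two neighboring focal planes. This toy sketch (hypothetical names, nested lists standing in for image arrays) shows the idea, not the actual interpolation used:

```python
def interpolate_plane(plane_a, plane_b, t):
    """Hypothetical sketch: derive an intermediate focal plane by linear
    interpolation between two stored focal planes (t in [0, 1])."""
    return [[(1 - t) * a + t * b for a, b in zip(ra, rb)]
            for ra, rb in zip(plane_a, plane_b)]

# Plane midway between two focal depths.
mid = interpolate_plane([[0, 100]], [[100, 200]], 0.5)
print(mid)  # -> [[50.0, 150.0]]
```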

Publication date: 28-01-2016

METHOD AND APPARATUS FOR CONVERTING TWO-DIMENSIONAL VIDEO CONTENT FOR INSERTION INTO THREE-DIMENSIONAL VIDEO CONTENT

Number: US20160029003A1
Author: Luthra Ajay K.
Assignee:

A method includes receiving a first three-dimensional video content, determining a three-dimensional format of the first three-dimensional video content, and converting a two-dimensional video content into a second three-dimensional video content based on the determined three-dimensional format, wherein converting the two-dimensional video content comprises decimating the two-dimensional video content. The method further includes splicing the second three-dimensional video content into the first three-dimensional video content. 1. A method comprising: receiving a first three-dimensional video content; determining, using a processor, a three-dimensional format of the first three-dimensional video content; converting a two-dimensional video content into a second three-dimensional video content based on the determined three-dimensional format, wherein converting the two-dimensional video content comprises decimating the two-dimensional video content; and splicing the second three-dimensional video content into the first three-dimensional video content. 2. The method of claim 1, wherein the three-dimensional format of the first three-dimensional video content comprises at least one of vertical 3D format, horizontal 3D format, or quincunx 3D format. 3. The method of claim 1, wherein the second three-dimensional video content is in the determined three-dimensional format. 4. The method of claim 1, wherein converting the two-dimensional video content further comprises copying the decimated two-dimensional video content. 5. The method of claim 1, wherein splicing the second three-dimensional video content into the first three-dimensional video content comprises: identifying at least one location within the first three-dimensional video content; and splicing the second three-dimensional video content into the first three-dimensional video content at the at least one location within the first three-dimensional video content. 6. The method of claim 1, further ...
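As an illustration of decimating and copying 2D content into a determined 3D format, here is a minimal sketch assuming a horizontal (side-by-side) target format; the function name and the list-of-lists frame representation are invented for the example:

```python
def two_d_to_side_by_side(frame):
    """Hypothetical sketch: decimate a 2D frame horizontally (drop every
    other column) and copy the half-width result into both halves, yielding
    a side-by-side 3D frame of the original width with zero disparity."""
    half = [row[::2] for row in frame]          # decimation step
    return [left + left[:] for left in half]    # copy into left and right halves

frame = [[1, 2, 3, 4]]
print(two_d_to_side_by_side(frame))  # -> [[1, 3, 1, 3]]
```

Splicing would then be a cut-and-insert of such frames at an identified location in the 3D stream.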

Publication date: 28-01-2016

MOBILE TERMINAL AND METHOD FOR CONTROLLING THE SAME

Number: US20160029007A1
Assignee:

The present disclosure relates to a mobile terminal capable of capturing an image through an array camera, and the mobile terminal may include a display unit, a camera arranged with a plurality of lenses along a plurality of lines to capture a plurality of images, and a controller configured to acquire a plurality of three-dimensional information on different faces of a subject using a plurality of image groups consisting of the plurality of images, and generate information on a stereoscopic shape associated with the subject using the plurality of three-dimensional information. 1. A mobile terminal, comprising: a display; a camera having a plurality of lenses to capture a plurality of images, the plurality of lenses arranged in a matrix form; and a controller to obtain a plurality of three-dimensional (3D) information on different views of an object using a plurality of image groups consisting of the plurality of images, and the controller to provide stereoscopic shape information associated with the object using the plurality of 3D information. 2. The mobile terminal of claim 1, wherein the controller to connect the mobile terminal to a 3D printer, and the controller to transmit, to the 3D printer, the stereoscopic shape information, and the controller to control the 3D printer to output a stereoscopic shape associated with the object. 3. The mobile terminal of claim 1, wherein the controller analyzes a flexion of one view of the object from images captured of the one view of the object, and the controller to determine a scheme of capturing the plurality of images based on a result of the analysis. 4. The mobile terminal of claim 3, wherein the controller separates one view of the object into a plurality of regions according to an extent of flexion of one view of the object as a result of the analysis, and the controller to change the scheme of capturing the plurality of images for each of the separated regions. 5.
...

Publication date: 24-01-2019

REAL-TIME IMMERSIVE MEDIATED REALITY EXPERIENCES

Number: US20190026945A1
Author: Dalke George, Demirli Oya
Assignee:

The invention relates to creating real-time, immersive mediated reality environments using real data collected from a physical event or venue. The invention provides a virtual participant with the ability to control their viewpoint and freely explore the venue in real time, by synthesizing virtual data corresponding to a requested virtual viewpoint using real images obtained from data collectors or sources at the venue. By tracking and correlating real and virtual viewpoints of virtual participants, physical objects, and data sources, systems and methods of the invention can create photo-realistic images for perspective views for which there is no physically present data source. Systems and methods of the invention also relate to applying effect objects to enhance the immersive experience, including virtual guides, docents, text or audio information, expressive auras, tracking effects, and audio. 1. A system for creating a mediated reality environment, said system comprising a server computing system comprising a processor coupled to a tangible, non-transitory memory, the system operable to: receive, in real-time, real viewpoint information for one or more data collectors located at a venue; receive a virtual viewpoint from a computing device of a virtual participant, said computing device comprising a processor coupled to a tangible, non-transitory memory; receive one or more real-time images from the one or more data collectors where the one or more data collectors have a real viewpoint which intersects the virtual viewpoint, said one or more real-time images comprising a plurality of real pixels; create, using the server's processor, a real-time virtual image comprising a plurality of virtual pixels and corresponding to the virtual viewpoint by using pixel information from the one or more real-time images; and cause the computing device of the virtual participant to display the real-time virtual image. 2.
The system of claim 1, further operable to: identify, using ...

Publication date: 23-01-2020

THREE-DIMENSIONAL SHAPE EXPRESSION METHOD AND DEVICE THEREOF

Number: US20200027215A1
Assignee:

The present disclosure provides a three-dimensional shape expression method and device thereof. The method includes the following steps: extracting a hybrid type framework of a three-dimensional shape; obtaining a segmentation of the three-dimensional shape by segmenting the hybrid type framework; obtaining a sub-structure of the three-dimensional shape according to the segmentation of the three-dimensional shape; and establishing an expression of the three-dimensional shape by using a bag-of-words model according to the sub-structure of the three-dimensional shape. The embodiments of the present disclosure can express a three-dimensional shape in a simple and efficient way. 1. A three-dimensional shape expression method, comprising the following steps: extracting a hybrid type framework of a three-dimensional shape; obtaining a segmentation of the three-dimensional shape by segmenting the hybrid type framework; obtaining a sub-structure of the three-dimensional shape according to the segmentation of the three-dimensional shape; and establishing an expression of the three-dimensional shape by using a bag-of-words model according to the sub-structure of the three-dimensional shape. 2. The three-dimensional shape expression method according to claim 1, wherein the step of extracting the hybrid type framework of the three-dimensional shape comprises: obtaining sampling points by sampling surfaces of the three-dimensional shape; and re-expressing the sampling points to obtain the hybrid type framework comprising a one-dimensional curve and a two-dimensional slice. 3.
The three-dimensional shape expression method according to claim 2 , wherein the step of obtaining the segmentation of the three-dimensional shape by segmenting the hybrid type framework comprises:segmenting the hybrid type framework; andobtaining the segmentation of the three-dimensional shape by segmenting the hybrid type framework, according to corresponding relationships between the hybrid type ...
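The bag-of-words step — histogramming sub-structures against a codebook — can be sketched as follows, assuming scalar descriptors and nearest-codeword assignment (both simplifications; the patent does not specify the descriptor or the assignment rule):

```python
def bag_of_words(substructure_descriptors, codebook):
    """Hypothetical sketch: express a 3D shape as a histogram over a
    codebook by assigning each sub-structure descriptor to its nearest
    codeword (descriptors reduced to scalars for the example)."""
    hist = [0] * len(codebook)
    for d in substructure_descriptors:
        nearest = min(range(len(codebook)), key=lambda i: abs(codebook[i] - d))
        hist[nearest] += 1
    return hist

# Two descriptors near codeword 0.0, one near codeword 1.0.
print(bag_of_words([0.1, 0.2, 0.9], [0.0, 1.0]))  # -> [2, 1]
```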

Publication date: 23-01-2020

SYSTEM AND METHOD FOR MAPPING

Number: US20200027274A1
Assignee: Magic Leap, Inc.

A computer implemented method for updating a point map on a system having first and second communicatively coupled hardware components includes the first component performing a first process on the point map in a first state to generate a first change. The method also includes the second component performing a second process on the point map in the first state to generate a second change. The method further includes the second component applying the second change to the point map in the first state to generate a first updated point map in a second state. Moreover, the method includes the first component sending the first change to the second component. In addition, the method includes the second component applying the first change to the first updated point map in the second state to generate a second updated point map in a third state. 1. A computer implemented method for updating a point map on a system having first and second communicatively coupled hardware components , comprising:the first component performing a first process on the point map in a first state to generate a first change;the second component performing a second process on the point map in the first state to generate a second change;the second component applying the second change to the point map in the first state to generate a first updated point map in a second state;the first component sending the first change to the second component; andthe second component applying the first change to the first updated point map in the second state to generate a second updated point map in a third state.2. The method of claim 1 , further comprising:the second component sending the second change to the first component;the first component applying the second change to the point map in the first state to generate the first updated point map in the second state; andthe first component applying the first change to the first updated point map in the second state to generate the second updated point map in the ...
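The update protocol can be sketched with dictionaries standing in for point maps. This assumes the two changes touch disjoint points, so applying them in either order converges to the same third state — an assumption made for the example, not something the claim guarantees:

```python
def apply_change(point_map, change):
    """Apply a change (dict of point id -> position) to a point map,
    returning a new state; hypothetical sketch assuming changes from the
    two components touch disjoint points."""
    updated = dict(point_map)
    updated.update(change)
    return updated

state1 = {"p1": (0, 0), "p2": (1, 1)}   # point map in a first state
change_a = {"p1": (5, 5)}               # first component's change
change_b = {"p2": (9, 9)}               # second component's change

# Second component: own change first, then the received one.
second = apply_change(apply_change(state1, change_b), change_a)
# First component: mirror order.
first = apply_change(apply_change(state1, change_a), change_b)
print(first == second)  # -> True
```

Both components end at the same third state, which is the convergence property the method is after.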

Publication date: 28-01-2021

PACK TILE

Number: US20210027521A1
Author: Kato Toshiaki
Assignee: DreamWorks Animation LLC

A method of facilitating an interactive rendering of a computer image at a remote computer includes: at a first time, obtaining first information of the image, including pixel information of the image at the first time; and, at a second time after the first time, obtaining second information of the image including pixel information of the image at the second time. Delta pixel information is generated by comparing the pixel information of the first information with the pixel information of the second information, to include one or more portions of the pixel information of the second information updated since the first information was obtained, and to exclude one or more portions of the pixel information of the second information unchanged since the first information was obtained. The method further includes: transmitting the delta pixel information in a lossless format to a front-end client to enable reconstruction of the second information. 1. A method of facilitating an interactive rendering of a computer image at a remote computer , the method comprising:at a first time, obtaining first information of the computer image, the first information comprising pixel information of the computer image at the first time;at a second time after the first time, obtaining second information of the computer image, the second information comprising pixel information of the computer image at the second time;generating delta pixel information by comparing the pixel information of the first information with the pixel information of the second information,wherein the delta pixel information is generated to include one or more portions of the pixel information of the second information that are updated since the first information was obtained, andwherein the delta pixel information is generated to exclude one or more portions of the pixel information of the second information that are unchanged since the first information was obtained; andtransmitting the delta pixel information in a 
...
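A minimal sketch of the delta-pixel idea, with flat lists standing in for frames and a dict keyed by pixel position for the delta; the names and representation are invented for illustration:

```python
def delta_pixels(first, second):
    """Hypothetical sketch: keep only pixels changed since the first frame,
    keyed by position, so unchanged pixels are excluded from transmission."""
    return {i: p for i, (q, p) in enumerate(zip(first, second)) if p != q}

def reconstruct(first, delta):
    """Rebuild the second frame losslessly from the first frame plus the delta."""
    return [delta.get(i, p) for i, p in enumerate(first)]

f1 = [10, 20, 30, 40]
f2 = [10, 25, 30, 41]
d = delta_pixels(f1, f2)
print(d)                          # -> {1: 25, 3: 41}
print(reconstruct(f1, d) == f2)   # -> True
```

Because the delta is applied to the previously transmitted frame, the reconstruction is exact, matching the lossless requirement in the claim.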

Publication date: 04-02-2016

THREE DIMENSIONAL RENDERING OF JOB SITE

Number: US20160031681A1
Author: Delplace Jean-Charles
Assignee:

Methods and systems are disclosed for rendering a job site in a three dimensional simulation. A stream of input data is received at a processor about a job site wherein the data pertains to movements and lifts of at least one lifting device associated with the job site and at least one partially constructed building associated with the job site. A three dimensional (3D) simulation is generated, at the processor, of the at least one lifting device and the at least one partially constructed building. The 3D simulation is updated in real time, at the processor, to simulate movements of the at least one lifting device and the at least one partially constructed building. The 3D simulation is sent from the processor to a display. 1. A method for rendering a job site in a three dimensional simulation, said method comprising: receiving a stream of input data, at a processor, about a job site wherein said data pertains to movements and lifts of at least one lifting device associated with said job site and at least one partially constructed building associated with said job site; generating a three dimensional (3D) simulation, at said processor, of said at least one lifting device and said at least one partially constructed building; updating said 3D simulation in real time, at said processor, to simulate movements of said at least one lifting device and said at least one partially constructed building; and sending said 3D simulation from said processor to a display. 2. The method as recited in wherein said 3D simulation comprises an alert which notifies of potential collisions between objects associated with said job site. 3. The method as recited in wherein said alert is a shaded region of said 3D simulation which highlights said potential collisions. 4. The method as recited in wherein said alert comprises an audible sound generated by a speaker associated with said display. 5. The method as recited in wherein said 3D simulation comprises multiple points of view and wherein said ...
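The collision alert could rest on something as simple as axis-aligned bounding-box overlap between the simulated objects; this sketch (hypothetical names and geometry) illustrates that check:

```python
def boxes_collide(a, b):
    """Hypothetical sketch: flag a potential collision when two
    axis-aligned boxes (min_corner, max_corner in 3D) overlap on every axis."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

crane_load = ((0, 0, 10), (2, 2, 12))
building = ((1, 1, 0), (5, 5, 11))
print(boxes_collide(crane_load, building))  # -> True
```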

Publication date: 04-02-2016

System and method for 3d content creation

Number: US20160035126A1
Author: Sy Sen TANG
Assignee: Individual

A system for 3D content creation includes: a theme preparation module configured to prepare a 3D theme for a user; and a 3D rendering module configured to load the 3D theme and apply the 3D theme to 2D content provided by the user. The theme preparation module includes: means for preparing different rendering passes; means for packing the rendering passes to a 3D theme; and means for inputting the 3D theme into the system. The 3D rendering module includes: means for selecting a 3D theme; means for customizing the 3D theme with the 2D content; means for rendering the 2D content with the 3D theme into a plurality of frames in a 3D format; and means for combining the frames into a 3D video and outputting the 3D video. A method for 3D content creation is also provided.

Publication date: 04-02-2016

THREE-DIMENSIONAL IMAGE DISPLAY SYSTEM, SERVER FOR THREE-DIMENSIONAL IMAGE DISPLAY SYSTEM, AND THREE-DIMENSIONAL IMAGE DISPLAY METHOD

Number: US20160035127A1
Author: Ishibashi Yudai
Assignee:

A three-dimensional image display system includes a server and a client cooperating with the server to display a three-dimensional image. The server includes a server memory storing vertex information indicating a vertex position of a polygon that forms a three-dimensional shape; a valid polygon identification unit converting the vertex information into different coordinate systems, determining whether the polygon is a valid polygon for display based on the converted vertex information and viewpoint information transmitted from the client, and generating valid vertex information indicating whether a vertex is valid for rendering the valid polygon; and a server communicator transmitting the valid vertex information to the client. The client includes a client memory storing the vertex information indicating the vertex position of the polygon that forms the three-dimensional shape, a client communicator transmitting the viewpoint information and receiving the valid vertex information, and a polygon rendering unit reading the vertex information for only the valid vertices in the valid vertex information, converting them into different coordinate systems, and generating the three-dimensional image. 1.
A three-dimensional image display system comprising:a server; anda client cooperating with the server to display a three-dimensional image,the server comprising:a server memory configured to store vertex information indicating a vertex position of a polygon that forms a three-dimensional shape;a valid polygon identification unit configured to convert the vertex information stored in the server memory into a different coordinate system, to decide whether the polygon is a valid polygon for displaying based on the converted vertex information and viewpoint information transmitted from the client, and to generate valid vertex information indicating whether the vertex is valid for rendering the valid polygon; anda server communicator configured to transmit the valid vertex information to the client,the client comprising:a client memory ...
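The server-side validity decision can be caricatured as a view transform followed by a front-of-viewer test; a real implementation would use full view/projection matrices and frustum planes, so treat every detail here as an assumption:

```python
def valid_vertex_flags(vertices, viewpoint):
    """Hypothetical sketch of the server's validity check: translate
    vertices into the viewer's coordinate system and mark as valid only
    those in front of the viewpoint (looking along +z)."""
    flags = []
    for x, y, z in vertices:
        vz = z - viewpoint[2]      # trivial view transform along z
        flags.append(vz > 0)       # valid if in front of the viewer
    return flags

verts = [(0, 0, 5), (0, 0, -5)]
print(valid_vertex_flags(verts, (0, 0, 0)))  # -> [True, False]
```

The client would then fetch and transform only the vertices flagged True, which is the bandwidth/compute saving the system is after.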

Publication date: 05-02-2015

METHOD FOR PROCESSING A CURRENT IMAGE OF AN IMAGE SEQUENCE, AND CORRESPONDING COMPUTER PROGRAM AND PROCESSING DEVICE

Number: US20150035828A1
Assignee:

A method for processing a current image of an image sequence is disclosed. According to the invention, the method includes: 2. The method for processing a current image of an image sequence according to claim 1, wherein said method further comprises a construction and restitution of a confidence image representative of said at least one confidence indicator. 3. The method for processing a current image of an image sequence according to claim 1, wherein said method further comprises a restitution of an image to be post-processed, said image to be post-processed being obtained by combining said image to be processed and said confidence image. 4. The method for processing a current image of an image sequence according to claim 1, wherein said identification of said at least one unknown region implements: a separation of said current image into at least one foreground and one background, and/or a changing viewpoint of a device for capturing said current image. 5. The method for processing a current image of an image sequence according to claim 1, wherein said first, second, third and fourth values are weighted according to whether said pixel belongs to a region of interest and/or the salience of said pixel. 6. The method for processing a current image of an image sequence according to wherein said construction of a confidence image comprises: initialisation of said confidence image, association of a constant colour and/or pattern with each pixel of said known region, delivering colour pixels associated with said known region, for each pixel to be constructed of said at least one unknown region, called unknown pixel, association of a colour and/or pattern with the confidence indicator obtained for said unknown pixel, delivering a colour pixel associated with the unknown region, said colour being distinct from said constant colour, formation of said confidence image comprising said colour pixels and/or patterns associated respectively ...

Publication date: 05-02-2015

Method for real-time and realistic rendering of complex scenes on internet

Number: US20150035830A1
Assignee: Shenyang Institute of Automation of CAS

A method for realistic and real-time rendering of complex scenes in an internet environment, comprising: generating sequences of scene-object-multi-resolution models, a scene configuration file, textures and material files, and a scene data list file; compressing the sequences of scene-object-multi-resolution models, the scene configuration file, the textures and material files, and the scene data list file, and uploading the compressed files to a server; downloading, at a client terminal, the scene-object-multi-resolution models, the scene configuration file, the texture and material files, and the scene data list file in ascending order of resolution while rendering the scene simultaneously; and dividing, in rendering the scene, a frustum in parallel into a plurality of partitions, generating a shadow map for each frustum partition, and filtering the shadow maps to obtain an anti-aliasing shadowing effect, wherein the shadow map closest to a viewpoint is updated on a frame-by-frame basis and the updating frequency decreases for shadow maps distant from the viewpoint, and wherein the shadow map closest to the viewpoint has the largest size, and the size of the shadow map decreases for shadow maps distant from the viewpoint.
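The frame-by-frame update for the nearest shadow map versus the reduced frequency for distant ones can be sketched as a power-of-two schedule; the 2**i policy and the names are invented examples, not the method's stated rates:

```python
def cascades_to_update(frame, num_cascades):
    """Hypothetical sketch of the update policy: the shadow-map cascade
    nearest the viewpoint refreshes every frame, cascade i only every
    2**i frames."""
    return [i for i in range(num_cascades) if frame % (2 ** i) == 0]

for f in range(4):
    print(f, cascades_to_update(f, 3))
# -> 0 [0, 1, 2]
#    1 [0]
#    2 [0, 1]
#    3 [0]
```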

Publication date: 17-02-2022

Augmented reality systems and methods incorporating wearable pin badges

Number: US20220051023A1
Author: Caleb John Paullus
Assignee: Pinfinity LLC

Systems and methods disclosed in this application are directed to augmented reality for use with pin badges. Pin badges can be worn, held, or set within view of an AR device having a camera. The AR device sends images or video from its camera to a platform server that determines whether a pin badge exists in view of the camera. If a pin badge exists, it is identified and augmented reality imagery related to the pin badge is transmitted back to the AR device, so that the AR device can incorporate that augmented reality imagery into the video stream from its camera as shown on its display.

Publication date: 17-02-2022

Method for Compressing Image Data Having Depth Information

Number: US20220051445A1
Author: Hillman Peter M.
Assignee:

An image dataset is compressed by combining depth values from pixel depth arrays, wherein combining criteria are based on object data and/or depth variations of depth values in the first pixel image value array and generating a modified image dataset wherein a first pixel image value array represented in a received image dataset by the first number of image value array samples is in turn represented in the modified image dataset by a second number of compressed image value array samples with the second number being less than or equal to the first number. 1. A computer-implemented method for image compression , under control of one or more computer systems configured with executable instructions , the method comprising:obtaining an image dataset in computer-readable form, wherein image data in the image dataset comprises a plurality of pixel image value arrays, wherein a first pixel image value array having a first number of image value array samples each having an image value, a depth value, and an association with an associated pixel position;determining, for the first number of image value array samples, a compressed image;determining, for the first number of image value array samples, a compressed image value array comprising a second number of compressed image value array samples, wherein the second number of compressed image value array samples is less than or equal to the first number of image value array samples and wherein compressed image value array samples are computed based on (1) the first number of image value array samples and (2) combining criteria, wherein the combining criteria are based on object data and/or depth variations of depth values in the first pixel image value array taking into account an error threshold; andgenerating a modified image dataset wherein the first pixel image value array represented in the image dataset by the first number of image value array samples is represented in the modified image dataset by the second number of ...
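Merging depth samples under an error threshold might look like the following run-length-style sketch, assuming per-pixel sample lists of (image value, depth) tuples and simple averaging; the combining criteria here are invented stand-ins for the claim's object-data/depth-variation criteria:

```python
def merge_depth_samples(samples, error_threshold):
    """Hypothetical sketch: combine consecutive (image_value, depth)
    samples of one pixel when their depths differ by no more than the
    threshold, averaging both value and depth."""
    merged = []
    for value, depth in samples:
        if merged and abs(merged[-1][1] - depth) <= error_threshold:
            pv, pd, n = merged[-1]
            merged[-1] = ((pv * n + value) / (n + 1),
                          (pd * n + depth) / (n + 1),
                          n + 1)
        else:
            merged.append((value, depth, 1))
    return [(v, d) for v, d, _ in merged]

# Two near-coincident depth samples collapse; the distant one survives.
samples = [(10, 1.0), (20, 1.5), (30, 5.0)]
print(merge_depth_samples(samples, 0.6))  # -> [(15.0, 1.25), (30, 5.0)]
```

The output array has fewer samples than the input, which is exactly the "second number less than or equal to the first number" condition in the claim.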

Publication date: 04-02-2016

IMAGE PROCESSING SYSTEM AND METHOD

Number: US20160037154A1
Author: HUNG Yi-Ping, YEH Yen-Ting
Assignee: NATIONAL TAIWAN UNIVERSITY

The present invention provides an image processing system and method. The image processing system uses at least two cameras, and the location and number of the cameras around the vehicle can be changed because they are easy to install. The present invention uses an image analysis method to evaluate the depth of objects around the vehicle, and then generates a 3D model with depth information to reduce the distortion of the image. After that, the image is displayed on a wide-area electronic rearview mirror to provide the driver with a more accurate rearview image. 1. An image processing system, comprising: a depth value estimation module, which uses an image behind a vehicle and an image on a rear side of the vehicle to evaluate a depth value around the vehicle, and further transfers the information of the depth value to a three-dimensional (3D) geometric model generating module to prevent the images synthesized by an image processing module from exhibiting ghosting and high distortion; a three-dimensional geometric model generating module, which uses the information of the depth value to generate a 3D geometric model having the information of the depth value of objects around the vehicle; an image processing module, which synthesizes the 3D geometric model having the information of the depth value of objects around the vehicle with the image behind the vehicle and the image on the rear side of the vehicle; a virtual camera, connected to the image processing module, which decides a display mode of the image synthesized by the image processing module; a display module, which displays an image synthesized by the image processing module and the display mode decided by the virtual camera; and a vision angle detecting module, connected to the display module, which gets a sight direction of a driver by detecting an angle between an electronic rearview mirror and the eye position of the driver, and further changes a display content displayed by the ...

Publication date: 31-01-2019

SELECTING POINTS ON AN ELECTROANATOMICAL MAP

Number: US20190035141A1
Assignee:

Described embodiments include a system that includes a display and a processor. The processor is configured to position an indicator, in response to a positioning input from a user, over a particular point on a three-dimensional electroanatomical map that is displayed on the display, and over which are displayed a plurality of markers that mark respective data points. The processor is further configured to expand a contour, subsequently, along a surface of the map, while a selecting input from the user is ongoing, such that all points on the contour remain equidistant, at an increasing geodesic distance with respect to the surface, from the particular point, and to display, on the display, one or more properties of each of the data points that is marked by a respective one of the markers that is inside the contour. Other embodiments are also described. 1. A system, comprising: a display; and a processor operatively connected to the display, configured to: position an indicator, in response to a positioning input from a user, over a particular point on a three-dimensional electroanatomical map that is displayed on the display, and over which are displayed a plurality of markers that mark respective data points; expand a contour, subsequently, along a surface of the map, while a selecting input from the user is ongoing, such that all points on the contour remain equidistant, at an increasing geodesic distance with respect to the surface, from the particular point; and display, on the display, one or more properties of each of the data points that is marked by a respective one of the markers that is inside the contour. 2. The system according to claim 1, wherein the indicator includes a cursor. 3. The system according to claim 1, wherein the selecting input includes a click of a button of a mouse. 4. The system according to claim 1, wherein the processor is further configured to display the contour on the display while expanding the contour. 5. The system ...
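Expanding a contour at increasing geodesic distance amounts to thresholding shortest-path distances over the surface; markers inside the contour are those within the current radius. A Dijkstra sketch on a toy mesh graph (edge weights standing in for surface distances; all names invented):

```python
import heapq

def geodesic_distances(adjacency, start):
    """Hypothetical sketch: Dijkstra over a surface mesh graph
    approximates geodesic distance from the selected point to every
    other vertex."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue                        # stale queue entry
        for nbr, w in adjacency[v].items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

mesh = {"a": {"b": 1.0}, "b": {"a": 1.0, "c": 2.0}, "c": {"b": 2.0}}
d = geodesic_distances(mesh, "a")
# Markers inside the expanding contour = vertices within the current radius.
print(sorted(v for v, dv in d.items() if dv <= 1.5))  # -> ['a', 'b']
```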

30-01-2020 publication date

VISUAL GUIDANCE SYSTEM AND METHOD FOR POSING A PHYSICAL OBJECT IN THREE DIMENSIONAL SPACE.

Number: US20200035122A1
Assignee:

A method and system of visually communicating navigation instructions can use translational and rotational arrow cues (TRAC) defined in an object-centric frame while displaying a single principal view that approximates the human's egocentric view of the actual object. A visual guidance system and method can be used to pose a physical object within three-dimensional (3D) space. Received pose data indicates a current position and orientation of a physical object within 3D space, such that the pose data can provide a view of the physical object used to generate a virtual view of the physical object in 3D space. At least two of six degrees of freedom (6DoF) error can be calculated based on a difference between the current position of the physical object and a target pose of the physical object. The 6DoF error can include a three degrees of freedom (3DoF) position error and a 3DoF orientation error which can be used to determine a translation direction and a rotation direction to move the physical object to align a pose of the physical object with the target pose of the physical object within a tolerance. One or both of a translation cue and a rotation cue can be output to indicate a translation direction or rotation direction to move the physical object in alignment with the target pose. 1. A visual guidance system for posing a physical object within three-dimensional (3D) space, comprising: at least one processor; a memory device including instructions that, when executed by the at least one processor, cause the system to: receive pose data indicating a current position and orientation of the physical object within 3D space; calculate at least two of six degrees of freedom (6DoF) errors based on a difference between the current position and orientation of the physical object and a target pose of the physical object, wherein the at least two of the 6DoF errors are used to determine a translation direction and a rotation direction to move the physical object to ...
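The 6DoF error split described above can be illustrated with a small sketch: the 3DoF position error is a vector difference, and the 3DoF orientation error is the angle of the relative rotation between the two poses. This quaternion-based formulation and all names are assumptions, not the patent's own code:

```python
import math

def pose_error(current, target):
    """Split the pose difference into a 3DoF position error (a vector) and a
    3DoF orientation error (the axis-angle magnitude of the relative rotation).
    Each pose is ((x, y, z), (qw, qx, qy, qz)) with a unit quaternion."""
    (px, py, pz), (qw, qx, qy, qz) = current
    (tx, ty, tz), (rw, rx, ry, rz) = target
    pos_err = (tx - px, ty - py, tz - pz)
    # w component of q_target * conj(q_current) is the quaternion dot product;
    # the rotation angle between the poses is 2*acos(|w|).
    w = rw * qw + rx * qx + ry * qy + rz * qz
    angle = 2.0 * math.acos(min(1.0, abs(w)))
    return pos_err, angle

def within_tolerance(current, target, pos_tol, ang_tol):
    """True when both errors fall inside the alignment tolerance."""
    pos_err, angle = pose_error(current, target)
    return math.sqrt(sum(c * c for c in pos_err)) <= pos_tol and angle <= ang_tol

# Identity pose vs. a pose shifted 1 m in x and rotated 90 degrees about z
cur = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0))
tgt = ((1.0, 0.0, 0.0), (0.7071067811865476, 0.0, 0.0, 0.7071067811865476))
pos_err, angle = pose_error(cur, tgt)
```

The translation cue direction is simply the normalized `pos_err`; the rotation cue sign comes from the vector part of the relative quaternion.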

12-02-2015 publication date

FLOATING 3D IMAGE IN MIDAIR

Number: US20150042640A1
Author: Algreatly Cherif Atia
Assignee:

A first method is disclosed for projecting an image on random surfaces from a movable projection source to give the image the appearance of a floating three-dimensional image relative to a point of view. A second method is also disclosed for projecting a virtual 3D model on a 3D environment wherein certain virtual objects of the virtual 3D model are projected on certain actual objects of the 3D environment while the source of projecting the virtual 3D model is moving. A third method is disclosed for projecting an image on a transparent surface that can be held by a user's hands wherein the content of the image suits the identity of the objects located behind the transparent surface relative to a point of view. 1. A method for projecting an image on random surfaces from a movable projection source to make the image appear as a floating three-dimensional image relative to a point of view, the method comprising: detecting the number, positions, and slopes of the random surfaces located in front of the movable projection source; dividing the image into parts wherein each part of the parts corresponds to a surface of the random surfaces; reforming each part according to the position and parameters of the corresponding surface to generate a reformed part wherein the reformed parts can be projected on the random surfaces to appear as a floating three-dimensional image relative to the point of view; and projecting the reformed parts from the movable projection source on the random surfaces. 2. The method of wherein said movable projection source is a head mounted projector that can be attached to a forehead of a user. 3. The method of wherein said point of view is located at the position of a user's eyes. 4. The method of wherein said detecting is achieved by a 3D scanner that detects the distance between each point of said random surface and said point of view. 5.
The method of wherein said detecting is achieved by a position tracking tool for said point of view connected ...

11-02-2016 publication date

Surface normal estimation for use in rendering an image

Number: US20160042551A1
Assignee:

Relightable free-viewpoint rendering allows a novel view of a scene to be rendered and relit based on multiple views of the scene from multiple camera viewpoints. Image values from the multiple camera viewpoints can be separated into diffuse image components and specular image components, such that an intrinsic colour component of a relightable texture can be determined for a specular scene, by using the separated diffuse image components. Furthermore, surface normals of geometry in the scene can be refined by constructing a height map based on a conservative component of an initial surface normal field and then determining the refined surface normals based on the constructed height map. 1. A method of determining surface normal estimates for a surface of an object which is visible in one or more images of a scene , wherein the object is represented by geometry constructed from the one or more images of the scene , the method comprising:obtaining surface normal estimates for the surface of the object, the surface normal estimates representing a first surface normal field;constructing a height map for said surface of the object based on a conservative component of the first surface normal field;using the height map to determine refined surface normal estimates for the surface of the object, the refined surface normal estimates representing a conservative surface normal field for said surface of the object; andstoring the refined surface normal estimates for subsequent use in rendering an image of the object.2. The method of wherein said obtaining surface normal estimates for the surface of the object comprises:determining a lighting estimate for the scene;determining surface shading estimates for the surface of the object; anddetermining the surface normal estimates for the surface of the object using the determined lighting estimate and the determined surface shading estimates for the surface of the object.3. 
The method of wherein said determining the surface normal ...
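The refinement step above, constructing a height map from the conservative component of a normal field and re-deriving normals from it, is classically done in the Fourier domain (a Frankot–Chellappa-style integrability projection). The sketch below is an illustrative assumption of how such a step can look, not the patented method itself:

```python
import numpy as np

def integrate_normals(nx, ny, nz):
    """Project the gradient field implied by the normals onto its conservative
    (curl-free) component and recover a height map by solving the Poisson
    equation in the Fourier domain (Frankot-Chellappa-style)."""
    p = -nx / nz                       # dz/dx implied by the normals
    q = -ny / nz                       # dz/dy
    h, w = p.shape
    u = np.fft.fftfreq(w)[None, :] * 2 * np.pi   # frequencies along x
    v = np.fft.fftfreq(h)[:, None] * 2 * np.pi   # frequencies along y
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = u ** 2 + v ** 2
    denom[0, 0] = 1.0                  # avoid division by zero at the DC term
    Z = (-1j * u * P - 1j * v * Q) / denom
    Z[0, 0] = 0.0                      # height is recovered up to an offset
    return np.real(np.fft.ifft2(Z))

def normals_from_height(z):
    """Refined (conservative) normal field from the height-map gradients."""
    gy, gx = np.gradient(z)            # derivatives along y then x
    n = np.dstack((-gx, -gy, np.ones_like(z)))
    return n / np.linalg.norm(n, axis=2, keepdims=True)

# A flat normal field should integrate to a flat height map
nx, ny, nz = np.zeros((8, 8)), np.zeros((8, 8)), np.ones((8, 8))
z = integrate_normals(nx, ny, nz)
n = normals_from_height(z)
```

Because the non-conservative (curl) part of the input field has no consistent height map, it is discarded by construction, which is exactly what makes the refined normals conservative.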

11-02-2016 publication date

Generating a Volumetric Projection for an Object

Number: US20160042553A1
Assignee:

Particular embodiments comprise providing a surface mesh for an object, generating a voxel grid comprising volumetric masks for the mesh, and generating a lit mesh, wherein the lit mesh comprises a shaded version of the mesh as positioned in a scene. The voxel grid may be positioned over the lit mesh in the scene, and a first ray may be traced to a position of the voxel grid. If the traced ray passed through the voxel grid and hit a location on the lit mesh, then one or more second rays may be traced to the hit location on the lit mesh. If the traced ray hit a location in the voxel grid but did not hit a location on the lit mesh, then one or more second rays may be traced from the hit location in the voxel grid to the closest locations on the lit mesh. Finally, color sampled at one or more locations proximate to the position of the voxel grid may be blurred outward through the voxel grid to create a volumetric projection. 1. A method comprising , by one or more computing systems:providing a mesh for an object, wherein the mesh describes a surface geometry of the object;generating a voxel grid comprising volumetric masks for the mesh;generating a lit mesh, wherein the lit mesh comprises a shaded version of the mesh as positioned in a scene;positioning the voxel grid over the lit mesh in the scene;tracing a first ray to a position of the voxel grid; andblurring color sampled at one or more locations proximate to the position of the voxel grid outward through the voxel grid to create a volumetric projection.2. The method of claim 1 , further comprising:culling one or more portions of the voxel grid to spatially reduce the volumetric projection with respect to those portions.3. The method of claim 1 , further comprising:rendering the lit mesh as invisible.4. The method of claim 1 , further comprising:calculating viewpoint location and direction with respect to the scene.5. 
The method of claim 1 , further comprising:determining a sampling pattern based on targeted areas ...
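Tracing the first ray to a position of the voxel grid is typically implemented with a grid-traversal (DDA) walk such as Amanatides–Woo, which visits exactly the voxels the ray passes through. The sketch below is an assumed illustration with hypothetical names, not the patent's implementation:

```python
import math

def trace_voxels(origin, direction, grid_min, voxel_size, shape):
    """Amanatides-Woo traversal: return the voxel indices a ray visits,
    starting from an origin already inside the grid. `grid_min` is the
    grid's minimum corner, `shape` its (nx, ny, nz) voxel counts."""
    idx = [int((origin[k] - grid_min[k]) // voxel_size) for k in range(3)]
    step, t_max, t_delta = [], [], []
    for k in range(3):
        d = direction[k]
        if d > 0:
            step.append(1)
            next_boundary = grid_min[k] + (idx[k] + 1) * voxel_size
            t_max.append((next_boundary - origin[k]) / d)
            t_delta.append(voxel_size / d)
        elif d < 0:
            step.append(-1)
            next_boundary = grid_min[k] + idx[k] * voxel_size
            t_max.append((next_boundary - origin[k]) / d)
            t_delta.append(voxel_size / -d)
        else:
            step.append(0)
            t_max.append(math.inf)
            t_delta.append(math.inf)
    visited = []
    while all(0 <= idx[k] < shape[k] for k in range(3)):
        visited.append(tuple(idx))
        k = t_max.index(min(t_max))   # axis whose boundary is crossed next
        idx[k] += step[k]
        t_max[k] += t_delta[k]
    return visited

# Ray along +x through a 4x1x1 grid of unit voxels
visited = trace_voxels((0.5, 0.5, 0.5), (1.0, 0.0, 0.0), (0.0, 0.0, 0.0), 1.0, (4, 1, 1))
```

A hit test against the volumetric masks then checks each visited voxel's mask bit before deciding whether to continue to the lit mesh or trace the secondary rays.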

11-02-2016 publication date

Determining Diffuse Image Component Values for Use in Rendering an Image

Number: US20160042556A1
Assignee:

Relightable free-viewpoint rendering allows a novel view of a scene to be rendered and relit based on multiple views of the scene from multiple camera viewpoints. Image values from the multiple camera viewpoints can be separated into diffuse image components and specular image components, such that an intrinsic colour component of a relightable texture can be determined for a specular scene, by using the separated diffuse image components. Furthermore, surface normals of geometry in the scene can be refined by constructing a height map based on a conservative component of an initial surface normal field and then determining the refined surface normals based on the constructed height map. 1. A method of determining an intrinsic colour component of a relightable texture for a scene which includes specular components , wherein a plurality of images of the scene are captured from a respective plurality of camera viewpoints and projected onto geometry which represents objects in the scene , the method comprising:determining, at each of a plurality of the sample positions of the texture, a diffuse image component value based on the minimum of the image values at the sample position from a set of a plurality of the images;using the diffuse image component values to determine the intrinsic colour component of the relightable texture; andstoring the intrinsic colour component of the relightable texture for subsequent use in rendering an image of the scene from a rendering viewpoint under arbitrary lighting conditions.2. The method of further comprising:receiving the plurality of images of the scene which have been captured from a respective plurality of camera viewpoints;analysing the images of the scene to construct the geometry representing the objects in the scene; andprojecting the images onto the geometry.3. 
The method of wherein said projecting the images onto the geometry comprises warping at least one of the images such that the projected images are better aligned ...
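The per-sample minimum described above can be sketched directly: stacking the registered views and taking the per-texel minimum yields the diffuse estimate, and the per-view remainder approximates the specular component. An illustrative sketch, not the patent's code:

```python
import numpy as np

def diffuse_component(views):
    """Per-texel diffuse estimate: the minimum image value across camera
    views. Specular highlights are view-dependent peaks, so the minimum
    over enough views approximates the view-independent diffuse term."""
    stack = np.stack(views, axis=0)    # (n_views, H, W) or (n_views, H, W, 3)
    return stack.min(axis=0)

def specular_residual(views):
    """Per-view specular component: what remains above the diffuse estimate."""
    d = diffuse_component(views)
    return [v - d for v in views]

# Two toy 1x2 'views' of the same texels; each view sees one highlight
views = [np.array([[1.0, 5.0]]), np.array([[2.0, 3.0]])]
d = diffuse_component(views)
spec = specular_residual(views)
```

The residual is non-negative by construction, which is why the minimum (rather than the mean) is the natural separator here.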

09-02-2017 publication date

SYSTEM AND METHOD FOR AUTOMATIC ALIGNMENT AND PROJECTION MAPPING

Number: US20170039756A1
Assignee:

A system and method for automatic alignment and projection mapping are provided. A projector and at least two cameras are mounted with fields of view that overlap a projection area on a three-dimensional environment. A computing device: controls the projector to project structured light patterns that uniquely illuminate portions of the environment; acquires images of the patterns from the cameras; generates a two-dimensional mapping of the portions between projector and camera space and by processing the images and correlated patterns; generates a cloud of points representing the environment using the mapping and camera positions; determines a projector location, orientation and lens characteristics from the cloud; positions a virtual camera relative to a virtual three-dimensional environment, corresponding to the environment, parameters of the virtual camera respectively matching parameters of the projector; and, controls the projector to project based on a virtual location, orientation and characteristics of the virtual camera. 1. 
A system comprising: a computing device configured to: control the projector to sequentially project one or more structured light patterns configured to uniquely illuminate different portions of the three-dimensional environment; acquire one or more respective images from each of the at least two cameras while the projector is projecting the one or more structured light patterns, each of the one or more respective images correlated with a given respective structured light pattern; generate a two-dimensional mapping of the different portions of the three-dimensional environment between a projector space and a camera space by processing the respective images and correlated given respective structured light patterns; generate a cloud of points representing the three-dimensional environment using the two-dimensional mapping and given positions of the at least two cameras relative to the three-dimensional environment; determine a location, an orientation and lens ...
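Uniquely illuminating portions of the environment is commonly done with Gray-code structured-light patterns: thresholding a camera pixel across the pattern sequence yields one bit per pattern, and decoding the Gray code gives the projector coordinate for the two-dimensional mapping. A hedged sketch (Gray coding is an assumption here; the claim only requires patterns that uniquely illuminate portions):

```python
def gray_to_binary(bits):
    """Convert a Gray-code bit list (MSB first) to the integer it encodes:
    each output bit is the XOR of all Gray bits up to that position."""
    value = bits[0]
    out = [value]
    for b in bits[1:]:
        value ^= b
        out.append(value)
    return int("".join(map(str, out)), 2)

def decode_column(observations, threshold):
    """One camera pixel's intensity per projected pattern becomes one
    Gray-code bit; the decoded integer is that pixel's projector column,
    i.e. one entry of the projector-space/camera-space mapping."""
    bits = [1 if o > threshold else 0 for o in observations]
    return gray_to_binary(bits)

# A pixel that was bright, bright, dark over three patterns: Gray 110 -> column 4
col = decode_column([210, 190, 40], 100)
```

Repeating the decode per pixel, and once more with row-striped patterns, fills the full 2D correspondence used to triangulate the point cloud.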

12-02-2015 publication date

METHODS AND APPARATUS FOR POINT CLOUD DATA PROCESSING

Number: US20150046456A1
Assignee:

Methods and apparatus are provided for processing data representing three-dimensional points organized in a data structure wherein each point has multiple components, the data is organized in a respective layer per component, each layer is segmented in cells of a two-dimensional grid, the cells are arranged such that the components of a given point are contained in corresponding cells of multiple layers, the cells are grouped in patches by layer, and the patches are arranged such that the components of an array of points is represented by corresponding patches of multiple layers. At least one first criterion and at least one second criterion are obtained. Data are retrieved from cells of patches meeting the at least one first criterion and from layers meeting the at least one second criterion. The retrieved data are processed to obtain a derivative data set. 1. A method of processing at least one set of data representing three-dimensional points organized in a data structure wherein , for each set ,each three-dimensional point has multiple components,the data is organized in a respective layer per component,each layer is segmented in cells of a two-dimensional grid,the cells are arranged such that the components of a given point are contained in corresponding cells of multiple layers,the cells are grouped in patches by layer, andthe patches are arranged such that the components of an array of points is represented by corresponding patches of multiple layers, comprising:a. obtaining at least one first criterion;b. obtaining at least one second criterion;c. retrieving data from cells of patches meeting the at least one first criterion of layers meeting the at least one second criterion,d. processing the retrieved data to obtain a derivative data set, ande. storing the derivative data set.2. 
The method of claim 1 , wherein the layers comprise complete layers and preview layers claim 1 , wherein each preview layer has cells containing data from a subset of the cells of ...
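Retrieval steps a–c above can be sketched over a toy layer/patch/cell store: a layer criterion selects components, a patch criterion selects spatial patches, and the matching cells are gathered for processing. All names here are hypothetical illustrations of the described data structure:

```python
def retrieve(layers, patch_criterion, layer_criterion):
    """Walk the layered point-cloud store: keep only layers matching the
    second criterion, then gather cells of patches matching the first
    criterion (mirroring steps a-c of the method)."""
    out = []
    for layer_name, patches in layers.items():
        if not layer_criterion(layer_name):
            continue
        for patch_id, cells in patches.items():
            if patch_criterion(patch_id):
                out.extend(cells)
    return out

# Toy store: one component per layer; cells grouped into patches 0 and 1.
# Corresponding cells across layers hold components of the same points.
store = {
    "x":         {0: [1.0, 2.0], 1: [3.0]},
    "intensity": {0: [9.0],      1: [7.0]},
}
vals = retrieve(store, lambda p: p == 0, lambda l: l == "x")
```

Because components live in separate layers, a derivative data set that needs only one component never touches the other layers' cells.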

24-02-2022 publication date

THREE DIMENSIONAL STRUCTURAL PLACEMENT REPRESENTATION SYSTEM

Number: US20220058862A1
Assignee:

A system for providing a 3D representation of a desired structure on a selected property. A processing arrangement receives a property indication of the selected property. At least one converter converts the property indication to a format representing a 3D graphical representation of the selected property. A mapper maps the selected property to a largest available polygon that will fit on the selected property. The system determines a resultant polygon from the largest available polygon based on local restrictions for the selected property. A user selects the desired structure from available structures and orients the desired structure on the resultant polygon at a permissible orientation. A renderer renders the desired structure on the selected property in 3D graphical form. As a result, users can observe the house at different angles around the property, configure the structure with desired external design features, and view the result on the selected property. 1. A system for providing a three-dimensional representation of a desired structure on a selected property , comprising:a processing arrangement configured to receive a property indication of the selected property;at least one converter configured to convert the property indication to a standardized three-dimensional visual format representing a three-dimensional graphical representation of the selected property;a mapper configured to map the property to a largest available polygon that will fit on the selected property;a zoning setback computation element configured to determine a resultant polygon from the largest available polygon based on local governmental zoning restrictions for the selected property;location hardware configured to enable a user to select the desired structure, the desired structure selectable from one of a plurality of available structures, and orient the desired structure on the resultant polygon at an available and permissible orientation; anda renderer configured to render the 
...
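Deriving the resultant polygon from the largest available polygon can be illustrated, for the simplest axis-aligned case, as shrinking a rectangle by per-side setback distances. Real zoning computations operate on arbitrary polygons; this simplification and all names are assumptions:

```python
def apply_setbacks(lot, setbacks):
    """Resultant buildable rectangle: shrink the largest available rectangle
    (xmin, ymin, xmax, ymax) by (left, bottom, right, top) setback
    distances imposed by local restrictions."""
    xmin, ymin, xmax, ymax = lot
    left, bottom, right, top = setbacks
    out = (xmin + left, ymin + bottom, xmax - right, ymax - top)
    if out[0] >= out[2] or out[1] >= out[3]:
        raise ValueError("setbacks leave no buildable area")
    return out

# A 30 x 40 lot with 3 m side setbacks and 5 m front/rear setbacks
buildable = apply_setbacks((0.0, 0.0, 30.0, 40.0), (3.0, 5.0, 3.0, 5.0))
```

The selected structure's footprint is then tested against this resultant rectangle at each candidate orientation before rendering.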

06-02-2020 publication date

INTERFACE-BASED MODELING AND DESIGN OF THREE DIMENSIONAL SPACES USING TWO DIMENSIONAL REPRESENTATIONS

Number: US20200042082A1
Author: CRISPIN Sterling
Assignee: DAQRI LLC

Interface-based modeling and design of three dimensional spaces using two dimensional representations are provided herein. An example method includes converting a three dimensional space into a two dimensional space using a map projection schema, where the two dimensional space is bounded by ergonomic limits of a human, and the two dimensional space is provided as an ergonomic user interface, receiving an anchor position within the ergonomic user interface that defines a placement of an asset relative to the three dimensional space when the two dimensional space is re-converted back to a three dimensional space, and re-converting the two dimensional space back into the three dimensional space for display along with the asset, within an optical display system. 1. A method , comprising:converting a three dimensional space into a two dimensional space using a map projection schema, wherein the two dimensional space is bounded by ergonomic limits of a human, wherein the two dimensional space is provided as an ergonomic user interface;receiving an anchor position within the ergonomic user interface that defines a placement of an asset relative to the three dimensional space when the two dimensional space is re-converted back to a three dimensional space; andre-converting the two dimensional space back into the three dimensional space for display along with the asset, within an optical display system.2. The method according to claim 1 , further comprising generating an ergonomic user interface that comprises the two dimensional space claim 1 , wherein the ergonomic user interface comprises:the asset overlaid upon the two dimensional space; anda plurality of indicia that indicate viewing angles for a viewer that are based on the ergonomic limits that are selected from any of eye rotation, head rotation, natural lines of sight of a viewer, and any combinations thereof.3. 
The method according to claim 1 , wherein the map projection schema optimizes the two dimensional space ...
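The map projection schema can be illustrated with an equirectangular-style mapping: a 3D view direction becomes a (yaw, pitch) pair, the 2D anchor is clamped to comfortable head/eye rotation limits, and the inverse mapping re-converts the anchor to a 3D placement direction. The limit values and function names below are assumptions, not the patent's figures:

```python
import math

def direction_to_uv(x, y, z):
    """Equirectangular-style projection of a 3D view direction to 2D:
    u is azimuth (head yaw), v is elevation (head/eye pitch);
    (0, 0, -1) is straight ahead."""
    u = math.atan2(x, -z)
    v = math.asin(y / math.sqrt(x * x + y * y + z * z))
    return u, v

def clamp_to_ergonomic(u, v, max_yaw=math.radians(60), max_pitch=math.radians(40)):
    """Bound the 2D anchor position by (assumed) comfortable rotation limits,
    so assets stay within the viewer's natural line of sight."""
    return (max(-max_yaw, min(max_yaw, u)),
            max(-max_pitch, min(max_pitch, v)))

def uv_to_direction(u, v):
    """Re-convert the 2D anchor back to a 3D placement direction."""
    return (math.cos(v) * math.sin(u), math.sin(v), -math.cos(v) * math.cos(u))

u, v = direction_to_uv(0.0, 0.0, -1.0)      # straight ahead maps to the origin
cu, cv = clamp_to_ergonomic(2.0, -1.5)      # out-of-range anchor gets clamped
d = uv_to_direction(0.0, 0.0)
```

The clamping step is what makes the 2D space "bounded by ergonomic limits": anchors can only be placed where the round trip lands inside the comfort cone.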

07-02-2019 publication date

DEEP GEOMETRIC MODEL FITTING

Number: US20190043244A1
Assignee: Intel Corporation

Systems, apparatuses and methods may provide for technology that generates, by a first neural network, an initial set of model weights based on input data and iteratively generates, by a second neural network, an updated set of model weights based on residual data associated with the initial set of model weights and the input data. Additionally, the technology may output a geometric model of the input data based on the updated set of model weights. In one example, the first neural network and the second neural network reduce the dependence of the geometric model on the number of data points in the input data. 1. A semiconductor apparatus comprising: one or more substrates; and logic coupled to the one or more substrates, wherein the logic is implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates to: generate, by a first neural network, an initial set of model weights based on input data; iteratively generate, by a second neural network, an updated set of model weights based on residual data associated with the initial set of model weights and the input data, wherein the second neural network is to learn one or more regularities associated with the input data, and wherein the one or more regularities include motion patterns; and output a geometric model of the input data based on the updated set of model weights, wherein the first neural network and the second neural network are to reduce a dependence of the geometric model on a number of data points in the input data. 2. The apparatus of claim 1, wherein the updated set of model weights are to be generated further based on one or more of line fitting constraints or data correspondence likelihoods. 3. The apparatus of claim 1, wherein the geometric model is to be one or more of a hyperplane fit model, a fundamental matrix estimation model or a homography estimation model. 4. The apparatus of claim 1, ...
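The scheme of an initial weight estimate refined iteratively from residuals has a classical analog in iteratively reweighted least squares (IRLS); the sketch below fits a line while downweighting points with large residuals, standing in for the two networks. This is an analogy to illustrate the weight-update loop, not the patented mechanism:

```python
import numpy as np

def irls_line_fit(x, y, iters=10, eps=1e-6):
    """Classical analog of the two-stage scheme: start from an initial set of
    per-point weights, refit the geometric model (a line y = a*x + b) as a
    weighted least-squares problem, then update the weights from residuals."""
    w = np.ones_like(x)                  # initial model weights
    a = b = 0.0
    for _ in range(iters):
        A = np.vstack((x, np.ones_like(x))).T
        W = np.diag(w)
        a, b = np.linalg.lstsq(W @ A, w * y, rcond=None)[0]
        r = np.abs(y - (a * x + b))      # residual data drives the update
        w = 1.0 / (r + eps)              # downweight outliers
    return a, b

# Four collinear points plus one gross outlier; IRLS recovers the clean line
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0
y[2] += 10.0
a, b = irls_line_fit(x, y)
```

The key property shared with the patent's framing is that the fit cost per iteration depends on the model size, not strongly on the raw point count, since outliers are suppressed rather than enumerated.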

07-02-2019 publication date

View dependent 3d reconstruction mechanism

Number: US20190043253A1
Author: Blake C. LUCAS, Jill Boyce
Assignee: Intel Corp

An apparatus to facilitate encoding of point cloud data is disclosed. The apparatus includes one or more processors to receive point cloud data including a plurality of images and camera parameters, generate encoded point cloud data including a color texture image and a depth image having cropped regions for each of the plurality of images, and metadata to describe the camera parameters and a mapping of the cropped regions to a real coordinate space.

06-02-2020 publication date

AUDIO PROCESSING

Number: US20200042792A1
Assignee:

A method comprising: dividing a virtual space using virtual partitions that affect perception of the virtual space by a user within the virtual space; in response to a first action in the virtual space relative to a first virtual partition by a user, making a first change to how the first virtual partition affects the virtual space perceived by the user. 1.-27. (canceled) 28. An apparatus comprising: at least one processor; and at least one memory comprising computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: divide a virtual space using one or more virtual partitions that affect rendering of the virtual space to affect perception of the virtual space by a user within the virtual space; in response to a first action in the virtual space relative to at least a first virtual partition by a user, make a first change to how at least the first virtual partition affects the virtual space perceived by the user. 29. The apparatus of claim 28, wherein the one or more virtual partitions define a plurality of scenes demarcated at least partly by the one or more virtual partitions, wherein a scene is at least partially separated from an adjacent scene by at least one partition and wherein each scene comprises different sound objects. 30. The apparatus of claim 29, further caused to render audio associated with a current scene to the user at a greater volume than audio associated with any other scene, wherein the current scene is determined by a position of a user in the virtual space, the current scene being the scene within which the user is currently located. 31. The apparatus of further caused to render audio associated with a first scene different to the current scene to the user at an increased volume in dependence upon an increased user proximity to a virtual partition dividing the current scene from the first scene. 32.
The apparatus of ...

15-02-2018 publication date

METHOD AND DEVICE FOR HARDNESS TESTING

Number: US20180045629A1
Author: HOELL Robert
Assignee:

In a method and device for setting one or more measuring points on a specimen in a specimen holder for automated hardness testing in a hardness-testing device, the hardness-testing device has a table, a tool holder with a penetrator and at least one lens. The specimen holder with the specimen is positioned on the table in the x- and y-directions. The table and/or tool holder can be moved in the z-direction, relative to one another. A virtual three-dimensional model of the specimen holder and specimen is selected from data storage, and the model and/or an overview image of the specimen is depicted on a screen. Then, a point is marked in the image, and one or more measuring point is/are automatically defined based on the measuring method selected. To each measuring point, the z-coordinate is automatically assigned in the hardness-testing device based on its x- and y-coordinates and virtual model. 1. Method for setting one or more measuring points on at least one specimen in a specimen holder for automated hardness testing in a hardness-testing device, whereby the hardness-testing device has a table, a tool holder with at least one penetrator and at least one lens, and optionally an overview image camera and a screen, whereby the specimen holder can be positioned on the table in the x- and y-directions, and whereby the table and/or the tool holder of the hardness-testing device can be moved in the z-direction, relative to one another, the method comprising: a) selection and provision of a virtual three-dimensional model of the specimen holder with at least one specimen, arranged thereon, from an electronic data storage device; b) positioning of the specimen holder, equipped with the specimen, on the table; c) automatic depiction of the model and/or an overview image, prepared
...

18-02-2016 publication date

SCHEDULE DISPLAY DEVICE, SCHEDULE DISPLAY METHOD, AND SCHEDULE DISPLAY PROGRAM

Number: US20160048996A1
Author: NAKAO Yusuke
Assignee:

A schedule display device includes a display processing unit for displaying a schedule table by use of an image indicating a 3D object on a screen of a portable terminal, and the display processing unit displays schedule items on the front side of the object with a wider width per unit time. 1. A schedule display device comprising: a display processing unit for displaying a schedule table by use of an image indicating a 3D object on a screen of a portable terminal, wherein the display processing unit displays schedule items on the front side of the object with a wider width per unit time. 2. The schedule display device according to claim 1, wherein the display processing unit makes a part on the front side of a predetermined position in a schedule table into an active state in which schedule items are operable, and makes the other part in the schedule table into an inactive state in which the schedule items are not operable. 3. The schedule display device according to claim 2, wherein when a schedule item in the active state other than its edge is operated, the display processing unit displays more detailed contents of the schedule item than the already-displayed contents thereof. 4. The schedule display device according to claim 2, wherein when an edge of a schedule item in the active state is operated, the display processing unit changes a time zone of the schedule item. 5. The schedule display device according to claim 2, wherein when a part in the inactive state is operated, the display processing unit changes a display form of a 3D object. 6. The schedule display device according to claim 5, wherein the display processing unit displays a schedule table by use of an image indicating a 3D object on a screen of a portable terminal operated via a touch panel, and when a part in the inactive state is operated to trace, rotates the 3D object along the trace direction thereby to change a time zone to be displayed on the front side. 7. A schedule display method comprising: ...

15-02-2018 publication date

VIRTUAL MAPPING OF FINGERPRINTS FROM 3D TO 2D

Number: US20180047206A1
Assignee:

A non-parametric computer implemented system and method for creating a two dimensional interpretation of a three dimensional biometric representation. The method comprises: obtaining with a camera a three dimensional (3D) representation of a biological feature; determining a region of interest in the 3D representation; selecting an invariant property for the 3D region of interest; identifying a plurality of minutiae in the 3D representation; mapping a nodal mesh to the plurality of minutiae; projecting the nodal mesh of the 3D representation onto a 2D plane; and mapping the plurality of minutiae onto the 2D representation of the nodal mesh. The 2D representation of the plurality of minutiae has a property corresponding to the invariant property in the 3D representation; and the value of the corresponding property in the 2D projection matches the invariant property in the 3D representation. 1. A non-parametric computer implemented method for creating a two dimensional interpretation of a three dimensional biometric representation, the method comprising: obtaining a three dimensional (3D) representation of a biological feature; determining a region of interest in the 3D representation; identifying a plurality of minutiae in the 3D region of interest; mapping a nodal mesh to the plurality of minutiae; projecting the nodal mesh of the 3D region of interest onto a 2D plane to create a 2D representation of the nodal mesh; and mapping the plurality of minutiae onto the 2D representation of the nodal mesh; wherein a surface area of the 3D region of interest matches a surface area of the 2D representation of the plurality of minutiae. 2. The method of wherein the 3D representation of a biological feature is obtained from one or more 3D optical scanners. 3. The method of claim 1, wherein the features are at least one of: ridges, valleys and minutiae. 4. The method of claim 1, wherein the identifying step uses linear filtering of either geometric or texture features. 5.
The ...

03-03-2022 publication date

ELECTRONIC DEVICE, METHOD, AND COMPUTER PROGRAM FOR CALCULATING BLEEDING SITE AND TRAJECTORY OF BLOODSTAIN SCATTERED BY IMPACT

Number: US20220067937A1
Assignee:

A method of calculating a bleeding site and a trajectory of a bloodstain scattered by impact includes: obtaining, by an electronic device calculating the bleeding site and the trajectory of the bloodstain scattered by impact, captured image information; analyzing the obtained captured image information, by the electronic device, and calculating a collision angle and measuring a direction angle of the bloodstain scattered by impact; calculating, by the electronic device, a coordinate value of the bleeding site by using a linear trajectory method; calculating, by the electronic device, a trajectory of a drop of blood based on the calculated coordinate value of the bleeding site by using a parabolic trajectory method considering gravity and air resistance; and displaying and outputting, by the electronic device, a linear trajectory of the drop of blood and a parabolic trajectory considering gravity and air resistance of the drop of blood on a 3D space. 1. A method of calculating a bleeding site and a trajectory of a bloodstain scattered by impact , the method comprising:obtaining, by an electronic device calculating the bleeding site and the trajectory of the bloodstain scattered by impact, captured image information including a captured image and a three-dimensional (3D) coordinate value of each of two or more bloodstains scattered by impact;analyzing the obtained captured image information, by the electronic device, and calculating a collision angle and measuring a direction angle of the bloodstain scattered by impact;calculating, by the electronic device, a coordinate value of the bleeding site by using a linear trajectory method;calculating, by the electronic device, a trajectory of a drop of blood based on the calculated coordinate value of the bleeding site by using a parabolic trajectory method considering gravity and air resistance; anddisplaying and outputting, by the electronic device, a linear trajectory of the drop of blood and a parabolic trajectory ...
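The parabolic-trajectory step described above, integrating a drop of blood under gravity and air resistance from the estimated bleeding site, can be sketched with a simple semi-implicit Euler loop. The quadratic drag model and all coefficients below are illustrative assumptions, not values from the method:

```python
import math

def simulate_drop(origin, velocity, mass=5.2e-5, drag_k=1.5e-7,
                  g=9.81, dt=1e-3, floor_z=0.0):
    """Integrate a drop of blood from the bleeding-site estimate under
    gravity and quadratic air resistance until it reaches the floor plane.
    Returns the 3D path (the parabolic trajectory to display)."""
    x, y, z = origin
    vx, vy, vz = velocity
    path = [(x, y, z)]
    while z > floor_z:
        v = math.sqrt(vx * vx + vy * vy + vz * vz)
        # quadratic drag opposes the velocity; gravity acts along -z
        ax = -drag_k * v * vx / mass
        ay = -drag_k * v * vy / mass
        az = -g - drag_k * v * vz / mass
        vx += ax * dt; vy += ay * dt; vz += az * dt   # update velocity first
        x += vx * dt; y += vy * dt; z += vz * dt      # then position
        path.append((x, y, z))
    return path

# A drop released at rest from 1 m falls straight down in roughly 0.45 s
path = simulate_drop((0.0, 0.0, 1.0), (0.0, 0.0, 0.0))
```

The linear-trajectory step, by contrast, needs no integration: it back-projects a straight line from each stain along the measured direction and collision angles, and the parabolic pass then refines the site those lines converge on.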

03-03-2022 publication date

DETERMINING A THREE-DIMENSIONAL REPRESENTATION OF A SCENE

Number: US20220068024A1
Assignee:

One or more images (e.g., images taken from one or more cameras) may be received, where each of the one or more images may depict a two-dimensional (2D) view of a three-dimensional (3D) scene. Additionally, the one or more images may be utilized to determine a three-dimensional (3D) representation of a scene. This representation may help an entity navigate an environment represented by the 3D scene.

25-02-2021 publication date

INTERFACE-BASED MODELING AND DESIGN OF THREE DIMENSIONAL SPACES USING TWO DIMENSIONAL REPRESENTATIONS

Number: US20210055788A1
Author: CRISPIN Sterling
Assignee: RPX Corporation

Interface-based modeling and design of three dimensional spaces using two dimensional representations are provided herein. An example method includes converting a three dimensional space into a two dimensional space using a map projection schema, where the two dimensional space is bounded by ergonomic limits of a human, and the two dimensional space is provided as an ergonomic user interface, receiving an anchor position within the ergonomic user interface that defines a placement of an asset relative to the three dimensional space when the two dimensional space is re-converted back to a three dimensional space, and re-converting the two dimensional space back into the three dimensional space for display along with the asset, within an optical display system. 1. A method, comprising: converting a three dimensional shape space into a two dimensional space; generating, for display, an ergonomic user interface comprising the two dimensional space, wherein the two dimensional space is bounded by ergonomic limits of a human, the ergonomic limits being identified on the two dimensional space using indicia that reference various viewing angles for the human relative to the three dimensional shape space; receiving an anchor position within the ergonomic user interface that defines a placement of an asset relative to the three dimensional shape space when the two dimensional space is re-converted back to a three dimensional space, the anchor position being placed within a space defined by the indicia, the space being indicative of the ergonomic limits of the human, wherein the anchor point ensures that the asset is placed in a line of sight of a viewer; re-converting the two dimensional space back into the three dimensional shape space for display along with the asset, within an optical display system; and displaying the re-converted three dimensional shape space in the optical display system, the re-converted three dimensional shape space comprising the asset located in the ...
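A minimal sketch of the core mapping: clamp a viewing direction to ergonomic head-rotation limits and normalize it into the unit square used as the 2D interface. The ±120°/±55° limits are illustrative assumptions, not values from the application:

```python
def to_ergonomic_2d(yaw_deg, pitch_deg, yaw_limit=120.0, pitch_limit=55.0):
    """Map a viewing direction (yaw/pitch, degrees) to a point in a 2D
    ergonomic interface via a simple equirectangular-style projection,
    clamped to assumed comfortable rotation limits."""
    yaw = max(-yaw_limit, min(yaw_limit, yaw_deg))
    pitch = max(-pitch_limit, min(pitch_limit, pitch_deg))
    # Normalize to [0, 1] x [0, 1] inside the ergonomic bounds.
    return ((yaw + yaw_limit) / (2 * yaw_limit),
            (pitch + pitch_limit) / (2 * pitch_limit))

print(to_ergonomic_2d(0.0, 0.0))    # straight ahead -> (0.5, 0.5)
print(to_ergonomic_2d(200.0, 0.0))  # beyond the limit, clamped -> (1.0, 0.5)
```

An anchor position placed in this square converts back to a yaw/pitch pair by the inverse affine map when the space is re-converted to 3D.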

25-02-2016 publication date

Time-Continuous Collision Detection Using 3D Rasterization

Number: US20160055666A1
Assignee: Individual

We present a technique that utilizes a motion blur (three dimensional) rasterizer to augment the PCS culling technique so that it can be used for continuous collision detection, which to the best of our knowledge has not been done before for motion blur using a graphics processor.

10-03-2022 publication date

PATIENT POSITIONING USING A SKELETON MODEL

Number: US20220071708A1
Assignee:

First and second skeleton model data is determined based on first and second surface data of a patient. Each of the skeleton model data describes geometries of rigid anatomic structures of a patient at a different point in time. Skeleton difference data is determined describing differences between the geometries of the rigid anatomic structures. In a next step, movement instruction data is determined which describes movement to be performed by the rigid anatomic structures to minimize the differences, i.e. to correct the posture of the patient. The movement instruction data is for example determined based on anatomy constraint data which describes anatomical movement constraints for the rigid anatomic structures (e.g. range of motion of a joint). An instruction is displayed (e.g. using augmented reality), guiding the user how to move the rigid anatomic structures so as to correct the patient's posture. 1. A computer-implemented method for determining a movement instruction for adjusting a pose of a body part of an associated patient, the method comprising: acquiring first three-dimensional surface data that describes an outer three-dimensional contour of the body part of the associated patient imaged at a first point in time in a first spatial reference system to generate the first three-dimensional surface data in the first spatial reference system; determining first skeleton model data based on the first three-dimensional surface data, wherein the first skeleton model data describes a first set of geometries of one or more rigid anatomic structures of the patient; acquiring second three-dimensional surface data that describes the outer three-dimensional contour of the body part of the associated patient imaged at a second point in time in a second spatial reference system to generate the second three-dimensional surface data in the second spatial reference system; determining second skeleton model data based on the second three-dimensional surface data, wherein the
...

05-03-2015 publication date

Three-Dimensional Semiconductor Image Reconstruction Apparatus and Method

Number: US20150060669A1
Assignee:

A system comprises an electron beam directed toward a three-dimensional object with one tilting angle and at least two azimuth angles, a detector configured to receive a plurality of scanning electron microscope (SEM) images from the three-dimensional object and a processor configured to calculate a height and a sidewall edge of the three-dimensional object. 1. A system comprising: an electron beam directed toward a three-dimensional object with one tilting angle and at least two azimuth angles; a detector configured to receive a plurality of scanning electron microscope (SEM) images from the three-dimensional object; and a processor configured to calculate a height and a sidewall edge of the three-dimensional object based upon the SEM images. 2. The system of claim 1, wherein: the processor is configured to calculate a sidewall angle of the three-dimensional object using a first function, wherein the first function is θ_SW = tan⁻¹(H/E), where θ_SW is the sidewall angle of the three-dimensional object, H is the height of the three-dimensional object and E is the sidewall edge of the three-dimensional object. 5. The system of claim 1, further comprising: a plurality of multi-directional apertures on the detector, wherein the multi-directional apertures are tilted apertures. 6. The system of claim 1, wherein: the three-dimensional object is a fin region of a FinFET. 7. A system comprising: an electron beam configured to scan across a three-dimensional object with one tilting angle and at least two azimuth angles; a detector configured to receive a plurality of scanning electron microscope (SEM) images from the three-dimensional object; and a processor configured to calculate dimensions of the three-dimensional object based upon the SEM images. 8. The system of claim 7, wherein: the processor comprises a filtering algorithm, an image segmentation algorithm, a corner detection algorithm, a gate reconstruction algorithm and ...
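The first function in claim 2 is a single arctangent; the fin dimensions below are made up for illustration:

```python
import math

def sidewall_angle(height, edge):
    """Sidewall angle from the feature height H and sidewall edge E,
    per the claimed relation theta_SW = tan^-1(H / E). Returns degrees."""
    return math.degrees(math.atan(height / edge))

print(sidewall_angle(1.0, 1.0))             # ~45 degrees
print(round(sidewall_angle(50.0, 5.0), 1))  # near-vertical fin: 84.3
```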

05-03-2015 publication date

CONTOUR GRADIENTS USING THREE-DIMENSIONAL MODELS

Number: US20150062115A1
Author: Asente Paul John
Assignee: ADOBE SYSTEMS INCORPORATED

A method and systems of applying a contour gradient to a two-dimensional path are provided. A three-dimensional polygonal shell may be constructed from the two-dimensional path. Then the three-dimensional polygonal shell may be projected into two dimensions, resulting in a two-dimensional projected model, while saving values for a third dimension for each point in the two-dimensional projected model. Then a range of all values for the third dimension in the two-dimensional projected model is determined from the saved values. The range can then be mapped to a visual attribute. The two-dimensional projected model may be displayed using the mapped visual attribute. 1. A computerized method of applying a contour gradient to a two-dimensional path, the method comprising: constructing a three-dimensional polygonal shell from the two-dimensional path; projecting the three-dimensional polygonal shell into two dimensions, resulting in a two-dimensional projected model, while saving values for a third dimension for each point in the two-dimensional projected model; determining a range of all values for the third dimension in the two-dimensional projected model from the saved values; mapping the range to a visual attribute; and displaying the two-dimensional projected model using the mapped visual attribute. 2. The computerized method of claim 1, wherein the constructing includes creating a bevel at a set angle for the two-dimensional path. 3. The computerized method of claim 2, wherein the set angle is approximately 45 degrees. 4. The computerized method of claim 1, wherein the constructing includes projecting the two-dimensional path around a half-cylinder. 5. The computerized method of claim 1, wherein the projecting includes removing a third dimension value for each point in the three-dimensional polygonal shell. 6. The computerized method of claim 1, wherein the visual attribute is opacity. 7. The computerized method of claim 1, wherein the visual attribute is color. 8. The ...
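The project-save-map pipeline can be sketched directly: drop z for the 2D projection, keep it to compute the range, and linearly map that range to one of the claimed visual attributes (opacity here); the 0.2-1.0 bounds are arbitrary:

```python
def contour_opacity(points_3d, min_opacity=0.2, max_opacity=1.0):
    """Project 3D shell points to 2D, keeping z, then map the z range
    to an opacity per projected point (linear interpolation)."""
    projected = [(x, y) for x, y, z in points_3d]
    zs = [z for _, _, z in points_3d]
    z_min, z_max = min(zs), max(zs)
    span = (z_max - z_min) or 1.0  # avoid division by zero for flat shells
    opacities = [
        min_opacity + (z - z_min) / span * (max_opacity - min_opacity)
        for z in zs
    ]
    return projected, opacities

pts = [(0, 0, 0.0), (1, 0, 0.5), (1, 1, 1.0)]
proj, alpha = contour_opacity(pts)
print(alpha)  # lowest point most transparent (0.2), highest fully opaque (1.0)
```

Swapping the opacity interpolation for a color ramp covers the color variant of claim 7.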

05-03-2015 publication date

SYSTEMS AND METHODS FOR RAPIDLY GENERATING A 3-D MODEL OF A USER

Number: US20150062116A1
Author: Coon Jonathan
Assignee: 1-800 CONTACTS, INC.

A computer-implemented method for generating a three-dimensional (3-D) model of a user. A first set of images is captured. Prior to capturing a second set of images, the first set of images is processed using a 3-D modeling process, resulting in 3-D data and scaled data derived from the first set of images. The second set of images is captured after processing the first set of images. A second 3-D model of the user is generated using the second set of images of the user and the 3-D data derived from processing the first set of images. A feature of the user is tracked in real time based at least in part on the 3-D data derived from processing the first set of images. 1. A computer-implemented method for generating a three-dimensional (3-D) model of a user, the method comprising: capturing a plurality of images of a user; and generating a 3-D model of the user using the captured plurality of images of the user and 3-D data derived from processing a previously captured plurality of images of the user. 2. The method of claim 1, further comprising: tracking a feature of the user in real time based at least in part on the 3-D data derived from processing the previously captured plurality of images. 3. The method of claim 1, further comprising: capturing the previously captured plurality of images, wherein the previously captured plurality of images are processed prior to capturing the plurality of images. 4. The method of claim 3, further comprising: deriving scaling data to scale the 3-D data from a scaling image of the user, the scaling image of the user being captured in conjunction with the capturing of the previously captured plurality of images. 5. The method of claim 4, further comprising: scaling the 3-D model of the user using the scaling data derived from the scaling image of the user. 6. The method of claim 4, further comprising: prior to capturing the plurality of images of the user, performing a 3-D modeling process on the previously captured plurality of images ...

05-03-2015 publication date

METHOD AND DEVICE FOR GENERATING A 3D REPRESENTATION OF A USER INTERFACE IN A VEHICLE

Number: US20150062118A1
Assignee: Audi AG

A method for generating a 3D representation of a user interface in a vehicle, in which a scene, in particular a moving scene, containing at least one 3D object is rendered by a computing device inside the vehicle in order to determine the 3D representation. 1-15. (canceled) 16. A method for generating a 3D representation for a user interface in a vehicle, comprising: using a computing device inside the vehicle to render at least one object and respectively produce at least one 3D object; and using the computing device inside the vehicle to generate a moving scene containing the at least one 3D object in order to determine the 3D representation. 17. The method as claimed in claim 16, wherein a user operating action produces a user input, and upon receipt of the user input, the scene is changed and the at least one 3D object is rendered again. 18. The method as claimed in claim 17, wherein the at least one 3D object is rendered again in real time. 19. The method as claimed in claim 17, wherein the scene is changed by at least one of rotating, translating, scaling and changing opacity of the at least one 3D object and/or by adding or removing a temporary 3D object. 20. The method as claimed in claim 16, wherein the computing device inside the vehicle renders a plurality of 3D objects to generate a 3D representation of each, the user interface comprises a menu having a plurality of menu items, and each menu item is represented by a corresponding 3D object. 21. The method as claimed in claim 20, wherein the 3D objects are presented as if they were on a turntable, as different menu items are activated, the 3D objects are rotated with respect to each other in a turntable fashion, and the 3D objects are rendered again as they rotate to change scaling and angle of illumination. 22. The method as claimed in claim 20, wherein a 3D texture is incorporated into the 3D object corresponding to a currently active menu item. 23. The method as claimed in claim 20, wherein a ...
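Claim 21's turntable behavior reduces to assigning each menu item an angle and rotating the whole set so the active item faces the viewer; this standalone sketch (function name and layout assumed, not from the patent) computes those angles:

```python
def turntable_angles(num_items, active_index):
    """Angles (degrees) for menu-item 3D objects arranged on a virtual
    turntable, rotated so the active item faces the viewer (angle 0)."""
    step = 360.0 / num_items
    return [(i - active_index) * step % 360.0 for i in range(num_items)]

print(turntable_angles(4, 1))  # [270.0, 0.0, 90.0, 180.0]
```

In the claimed scheme the objects would be re-rendered as these angles animate, updating their scaling and angle of illumination each frame.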

05-03-2015 publication date

GENERATING A 3D INTERACTIVE IMMERSIVE EXPERIENCE FROM A 2D STATIC IMAGE

Number: US20150062125A1
Assignee:

A two-dimensional (2D) static image may be used to generate a three-dimensional (3D) interactive immersive experience. An image type of the 2D image may first be identified. The image type may be selected from a set of types such as interior, exterior, people, corridor, landscape, and other. Each image type may have an associated main feature type. The main feature of the 2D image may be identified using the corresponding main feature type. Then, unless the 2D image is of the “other” image type, a 3D object with two or more planes may be generated. The planes may intersect on the identified main feature. A virtual camera may be positioned proximate the 3D object, and a 3D view of the 3D object may be generated and displayed for the user. The user may optionally move the virtual camera, within limits, to view the 3D object from other locations. 1. A computer-implemented method for generating a three-dimensional view from a first two-dimensional image, the method comprising: receiving a first two-dimensional image; identifying, from a plurality of image types, a first image type of the two-dimensional image; identifying, within the first two-dimensional image, a first main feature having a first main feature type associated with the first image type; at a processor, generating a first plane; at the processor, generating a second plane that intersects the first plane at a line on the first main feature; at the processor, applying the first two-dimensional image to the first plane and the second plane to define a first three-dimensional object; at the processor, positioning a virtual camera at a first camera position proximate the first three-dimensional object; at the processor, generating a first three-dimensional view of the first three-dimensional object from the virtual camera; and at a display screen, displaying the first three-dimensional view. 2. The computer-implemented method of claim 1, wherein identifying the first image type comprises, at an input device ...

03-03-2016 publication date

RENDERING APPARATUS AND METHOD

Number: US20160063752A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

A rendering method includes receiving resolution information including an optimal resolution for rendering images constituting a frame, a number of multi-samples, and resolution factors of the respective images, rendering the images at the optimal resolution, and adjusting a resolution of each of the rendered images based on the resolution factors and the number of multi-samples. 1. A rendering method comprising: receiving resolution information comprising an optimal resolution for rendering images constituting a frame, a number of multi-samples, and resolution factors of the respective images; rendering the images at the optimal resolution; and adjusting a resolution of each of the rendered images based on the resolution factors and the number of multi-samples. 2. The rendering method of claim 1, wherein the rendering comprises: shading pixels of the images at the optimal resolution; and performing multi-sampling such that each of the pixels comprises as many samples as the number of multi-samples. 3. The rendering method of claim 1, wherein the adjusting comprises: merging, into a pixel, a predetermined number of samples included in one of the rendered images, based on a corresponding one of the resolution factors and the number of multi-samples. 4. The rendering method of claim 3, wherein the adjusting further comprises: determining an arithmetic average of the predetermined number of samples as a pixel value of the merged pixel. 5. The rendering method of claim 3, wherein the predetermined number of samples is equal to a value obtained by dividing the number of multi-samples by the corresponding one of the resolution factors. 6. A rendering method comprising: receiving required resolutions of respective images constituting a frame; extracting any one or any combination of an optimal resolution for rendering the images, resolution factors of the respective images, and a number of multi-samples, based on the required resolutions; rendering the images at the optimal ...
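Claims 3-5 pin down the arithmetic: each merged pixel averages N samples, where N is the number of multi-samples divided by the image's resolution factor, and the pixel value is the arithmetic average. A flat-list sketch:

```python
def merge_samples(samples, num_multisamples, resolution_factor):
    """Merge rendered samples into pixels: each output pixel averages
    num_multisamples / resolution_factor consecutive samples.
    Assumes len(samples) is a multiple of that merge count."""
    n = num_multisamples // resolution_factor
    return [
        sum(samples[i:i + n]) / n
        for i in range(0, len(samples), n)
    ]

# 4 multi-samples per pixel, resolution factor 2 -> merge 2 samples per pixel.
print(merge_samples([10, 20, 30, 50], 4, 2))  # [15.0, 40.0]
```

With resolution factor 1 the full multi-sample count collapses into one pixel value (a plain multisample resolve), while larger factors keep proportionally more output pixels.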

01-03-2018 publication date

GRAPHICS PROCESSING SYSTEMS AND GRAPHICS PROCESSORS

Number: US20180061115A1
Assignee:

A graphics processing system includes a graphics processing pipeline, which includes a primitive generation stage and a pixel processing stage. The graphics processing system is arranged to process input data in the primitive generation stage to produce first primitive data associated with a first view of a scene and second primitive data associated with a second view of the scene. The graphics processing system is arranged to process the first primitive data in the pixel processing stage to produce first pixel-processed data associated with the first view. The graphics processing system is arranged to determine, for second pixel-processed data associated with the second view, whether to use the first pixel-processed data as the second pixel-processed data or whether to process the second primitive data in the pixel processing stage to produce the second pixel-processed data, and perform additional processing in the graphics processing pipeline based on the determining. 1. A method of operating a graphics processing system, the graphics processing system comprising a graphics processing pipeline comprising a primitive generation stage and a pixel processing stage, the method comprising: processing input data in the primitive generation stage to produce first primitive data associated with a first view of a scene and second primitive data associated with a second view of the scene; processing the first primitive data in the pixel processing stage to produce first pixel-processed data associated with the first view; determining, for second pixel-processed data associated with the second view, whether to use the first pixel-processed data as the second pixel-processed data or whether to process the second primitive data in the pixel processing stage to produce the second pixel-processed data; and performing additional processing in the graphics processing pipeline based on the determining. 2. The method of claim 1, wherein the first primitive data is associated with a ...

20-02-2020 publication date

DENSE THREE-DIMENSIONAL CORRESPONDENCE ESTIMATION WITH MULTI-LEVEL METRIC LEARNING AND HIERARCHICAL MATCHING

Number: US20200058156A1
Assignee:

A method for estimating dense 3D geometric correspondences between two input point clouds by employing a 3D convolutional neural network (CNN) architecture is presented. The method includes, during a training phase, transforming the two input point clouds into truncated distance function voxel grid representations, feeding the truncated distance function voxel grid representations into individual feature extraction layers with tied weights, extracting low-level features from a first feature extraction layer, extracting high-level features from a second feature extraction layer, normalizing the extracted low-level features and high-level features, and applying deep supervision of multiple contrastive losses and multiple hard negative mining modules at the first and second feature extraction layers. The method further includes, during a testing phase, employing the high-level features capturing high-level semantic information to obtain coarse matching locations, and refining the coarse matching locations with the low-level features to capture low-level geometric information for estimating precise matching locations. 1. 
A computer-implemented method executed on a processor for estimating dense three-dimensional (3D) geometric correspondences between two input point clouds by employing a 3D convolutional neural network (CNN) architecture, the method comprising: transforming the two input point clouds into truncated distance function voxel grid representations; feeding the truncated distance function voxel grid representations into individual feature extraction layers with tied weights; extracting low-level features from a first feature extraction layer; extracting high-level features from a second feature extraction layer; normalizing the extracted low-level features and high-level features to obtain unit vector features; and applying deep supervision of multiple contrastive losses and multiple hard negative mining modules at the first and second feature ...
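The first claimed step, converting a point cloud into a truncated distance function (TDF) voxel grid, can be sketched with a brute-force nearest-point search; the grid size, voxel size, and truncation distance here are arbitrary choices, not the application's settings:

```python
import math

def tdf_voxel_grid(points, grid_size=8, voxel=1.0, trunc=2.0):
    """Truncated distance function voxel grid: each voxel stores the distance
    from its center to the nearest point, truncated at `trunc` and normalized
    to [0, 1] (0 = on the surface, 1 = at or beyond the truncation distance)."""
    grid = [[[1.0] * grid_size for _ in range(grid_size)]
            for _ in range(grid_size)]
    for i in range(grid_size):
        for j in range(grid_size):
            for k in range(grid_size):
                center = ((i + 0.5) * voxel, (j + 0.5) * voxel, (k + 0.5) * voxel)
                d = min(math.dist(center, p) for p in points)
                grid[i][j][k] = min(d, trunc) / trunc
    return grid

grid = tdf_voxel_grid([(0.5, 0.5, 0.5)])
print(grid[0][0][0])  # voxel containing the point -> 0.0
```

A real implementation would use a spatial index (k-d tree) instead of the O(voxels × points) scan, and these grids would then be fed to the tied-weight feature extraction layers.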

04-03-2021 publication date

PSEUDO RGB-D FOR SELF-IMPROVING MONOCULAR SLAM AND DEPTH PREDICTION

Number: US20210065391A1
Assignee:

A method for improving geometry-based monocular structure from motion (SfM) by exploiting depth maps predicted by convolutional neural networks (CNNs) is presented. The method includes capturing a sequence of RGB images from an unlabeled monocular video stream obtained by a monocular camera, feeding the RGB images into a depth estimation/refinement module, outputting depth maps, feeding the depth maps and the RGB images to a pose estimation/refinement module, the depth maps and the RGB images collectively defining pseudo RGB-D images, outputting camera poses and point clouds, and constructing a 3D map of a surrounding environment displayed on a visualization device. 1. A computer-implemented method executed on a processor for improving geometry-based monocular structure from motion (SfM) by exploiting depth maps predicted by convolutional neural networks (CNNs), the method comprising: capturing a sequence of RGB images from an unlabeled monocular video stream obtained by a monocular camera; feeding the RGB images into a depth estimation/refinement module; outputting depth maps; feeding the depth maps and the RGB images to a pose estimation/refinement module, the depth maps and the RGB images collectively defining pseudo RGB-D images; outputting camera poses and point clouds; and constructing a 3D map of a surrounding environment displayed on a visualization device. 2. The method of claim 1, wherein common tracked keypoints from neighboring keyframes are employed. 3. The method of claim 2, wherein a symmetric depth transfer loss and a depth consistency loss are imposed. 4. The method of claim 3, wherein the symmetric depth transfer loss is given as: ...

12-03-2015 publication date

COMPUTING DEVICE AND METHOD FOR RECONSTRUCTING CURVED SURFACE OF POINT CLOUD DATA

Number: US20150070354A1
Assignee:

In a method for reconstructing a curved surface of point cloud data using a computing device, point cloud data and a preset point distance are acquired and defined. A neighborhood point set for each point is calculated. The neighborhood point set of each point is fitted to be a plane, and a normal vector of the plane corresponding to each point is calculated. One or more singularity points in the neighborhood point set of each point are confirmed and corrected. A projection point set of each point is obtained by projecting the neighborhood points in the corrected neighborhood point set to the plane of each point. The projection point set of each point is meshed into triangles and the curved surface is reconstructed by integrating the plurality of triangles corresponding to the projection point set of each point. 1. A computer-implemented method for reconstructing a curved surface of point cloud data of an object using a computing device, the method comprising: acquiring the point cloud data from a storage system of the computing device, and defining a preset point distance and a determination parameter of a singularity point in the point cloud data; calculating a neighborhood point set for each point in the point cloud data according to the preset point distance; fitting the neighborhood point set of each point in the point cloud data to be a plane, and calculating a normal vector of the plane corresponding to each point; confirming one or more singularity points in the point cloud data according to the neighborhood point set of each point, the determination parameter, and the normal vector corresponding to each point; correcting the one or more singularity points for the neighborhood point set of each point; obtaining a projection point set of each point by projecting neighborhood points in the corrected neighborhood point set of each point to the plane corresponding to each point, and meshing the projection point set of each point into a plurality of triangles; ...
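The first steps of the method (neighborhood point set by preset distance, then a plane normal) can be sketched as follows; for brevity the normal comes from a cross product of two neighbor offsets rather than the least-squares plane fit the method describes:

```python
import math

def neighborhood_set(points, idx, preset_distance):
    """Neighborhood point set: all points within the preset distance of points[idx]."""
    p = points[idx]
    return [q for i, q in enumerate(points)
            if i != idx and math.dist(p, q) <= preset_distance]

def plane_normal(p, neighbors):
    """Unit normal of the plane spanned by the first two neighbor offsets.
    (A production implementation would least-squares fit all neighbors.)"""
    ax, ay, az = (n - c for n, c in zip(neighbors[0], p))
    bx, by, bz = (n - c for n, c in zip(neighbors[1], p))
    # Cross product of the two offsets, then normalize.
    nx, ny, nz = ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / norm, ny / norm, nz / norm)

pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (5, 5, 5)]
nbrs = neighborhood_set(pts, 0, 1.5)
print(nbrs)                        # [(1, 0, 0), (0, 1, 0)]
print(plane_normal(pts[0], nbrs))  # (0.0, 0.0, 1.0)
```

The later singularity check would compare each point's normal against those of its neighbors before projection and triangulation.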

08-03-2018 publication date

SPINE LABELING AUTOMATION

Number: US20180068067A1
Author: Bronkalla Mark
Assignee:

Carrying forward a spine label between studies is provided. In some embodiments, a first medical image of a subject's spine is provided. With the first image at least one label identifying a feature of the spine is provided. The first medical image is displayed to a user with the at least one label. At least one change is received from the user to the at least one label, yielding at least one updated label. The at least one updated label is applied to a second medical image. A three dimensional representation of the updated label is displayed. 1. A method comprising: providing a first medical image of a subject's spine; providing with the first image at least one label identifying a feature of the spine; displaying to a user the first medical image with the at least one label; receiving from the user at least one change to the at least one label, yielding at least one updated label; applying the at least one updated label to a second medical image of the subject's spine; and displaying a three dimensional representation of the updated label. 2. The method of claim 1, wherein providing the first label comprises automatically generating the label based on the first medical image. 3. The method of claim 1, wherein the first image is part of a first study, and the second image is part of a second study. 4. The method of claim 1, wherein the first image has a first view, and the second image has a second view. 5. The method of claim 4, wherein each of the first and second views is axial, sagittal, or coronal. 6. The method of claim 1, further comprising storing the updated label. 7. The method of claim 6, wherein the updated label is stored in a 3 dimensional representation. 8. The method of claim 1, further comprising registering the updated label to the second image. 9. The method of claim 8, wherein the registration is non-rigid. 10. The method of claim 1, wherein the at least one label corresponds to at least one vertebra of the spine. 11 ...

10-03-2016 publication date

BLOCK-BASED BOUNDING VOLUME HIERARCHY

Number: US20160071312A1
Assignee:

A system, method, and computer program product for implementing a tree traversal operation for a tree data structure divided into compression blocks is disclosed. The method includes the steps of receiving at least a portion of a tree data structure that represents a tree having a plurality of nodes, pushing a root node of the tree data structure onto a traversal stack data structure associated with an outer loop of a tree traversal operation algorithm, and, for each iteration of an outer loop of a tree traversal operation algorithm, popping a top element from the traversal stack data structure and processing, via an inner loop of the tree traversal operation algorithm, the compression block data structure that corresponds with the top element. The tree data structure may be encoded as a plurality of compression block data structures that each include data associated with a subset of nodes of the tree. 1. A method, comprising: receiving at least a portion of a tree data structure that represents a tree having a plurality of nodes, the tree data structure encoded as a plurality of compression block data structures stored in a memory, wherein each compression block data structure includes data associated with a subset of nodes of the tree; pushing a root node of the tree data structure onto a traversal stack data structure associated with an outer loop of a tree traversal operation algorithm that is configured, when executed by a processor, to process compression block data structures that are intersected by a query data structure; and for each iteration of the outer loop: popping a top element from the traversal stack data structure that corresponds with a compression block data structure, and processing, via an inner loop of the tree traversal operation algorithm executed by the processor, the compression block data structure that corresponds with the top element. 2. The method of claim 1, wherein the query data structure comprises a ray data structure that ...
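The outer-loop/inner-loop structure of claim 1 can be sketched with a plain stack and a dict standing in for the compression blocks (identifiers and block layout are assumptions for illustration):

```python
def traverse(blocks, intersects, root=0):
    """Two-level tree traversal: the outer loop pops compression-block ids
    from a stack; the inner loop visits the nodes inside one block and pushes
    the ids of child blocks whose bounding volumes the query intersects.

    blocks: dict mapping block id -> list of (node, child_block_ids) entries.
    intersects: predicate deciding whether the query hits a node's volume.
    """
    visited = []
    stack = [root]                 # outer-loop traversal stack
    while stack:                   # outer loop: one compression block per pop
        block_id = stack.pop()
        for node, children in blocks[block_id]:  # inner loop: nodes in block
            if intersects(node):
                visited.append(node)
                stack.extend(children)
    return visited

# Toy hierarchy: block 0 holds nodes "A","B"; hitting "A" descends into block 1.
blocks = {
    0: [("A", [1]), ("B", [])],
    1: [("C", []), ("D", [])],
}
print(traverse(blocks, lambda n: n != "B"))  # ['A', 'C', 'D']
```

In a ray tracer the predicate would be a ray/bounding-box intersection test and the blocks would be cache-line-sized chunks of a bounding volume hierarchy.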

10-03-2016 publication date

RELATIVE ENCODING FOR A BLOCK-BASED BOUNDING VOLUME HIERARCHY

Number: US20160071313A1
Assignee:

A system, method, and computer program product for implementing a tree traversal operation for a tree data structure is disclosed. The method includes the steps of receiving at least a portion of a tree data structure that represents a tree having a plurality of nodes and processing, via a tree traversal operation algorithm executed by a processor, one or more nodes of the tree data structure by intersecting the one or more nodes of the tree data structure with a query data structure. A first node of the tree data structure is associated with a first local coordinate system and a second node of the tree data structure is associated with a second local coordinate system, the first node being an ancestor of the second node, and the first local coordinate system and the second local coordinate system are both specified relative to a global coordinate system. 1. A method, comprising: receiving at least a portion of a tree data structure that represents a tree having a plurality of nodes; and processing, via a tree traversal operation algorithm executed by a processor, one or more nodes of the tree data structure by intersecting the one or more nodes of the tree data structure with a query data structure, wherein a first local coordinate system associated with a first node of the plurality of nodes and a second local coordinate system associated with a second node of the plurality of nodes are encoded in the tree data structure, wherein the first node is an ancestor of the second node in a hierarchy of the tree data structure, and wherein the first local coordinate system and the second local coordinate system are specified relative to a global coordinate system. 2. The method of claim 1, wherein the tree data structure represents a bounding volume hierarchy (BVH). 3. The method of claim 1, wherein each of the local coordinate systems is encoded using three high-precision values to specify an origin of the local coordinate system relative to the global coordinate system and ...

Подробнее
07-03-2019 дата публикации

TECHNIQUES FOR BUILT ENVIRONMENT REPRESENTATIONS

Номер: US20190073518A1
Принадлежит: Tyco Fire & Security GmbH

Described are techniques for indoor mapping and navigation. A reference mobile device includes sensors to capture range, depth, and position data and processes such data. The reference mobile device further includes a processor that is configured to process the captured data to generate a 2D or 3D mapping of localization information of the device that is rendered on a display unit, execute object recognition to identify types of installed devices of interest in a part of the 2D or 3D device mapping, and integrate the 3D device mapping in the built environment to objects in the environment by capturing point cloud data along with 2D image or video frame data of the built environment. 1. A system for indoor mapping and navigation comprises: one or more computer-readable non-transitory storage media having instructions stored thereon that, when executed by one or more processors, cause the one or more processors to: process captured data to generate a mapping, the mapping indicating one or more objects in a built environment; recognize the one or more objects in the built environment; identify, from the one or more recognized objects, types of installed devices of interest in a part of the mapping; and integrate the mapping of the identified installed devices of interest into the built environment by combining point cloud data indicating location of the identified installed devices of interest with image data of the built environment. 2. The system of claim 1, further comprising a mobile device including sensors to capture range, depth, and position data, with the mobile device including a depth perception unit, a position estimator, a heading estimator, and an inertial measurement unit (IMU) to process data received by the sensors from the environment. 3. The system of claim 1, wherein the instructions cause the one or more processors to recognize the one or more objects in the built environment ...
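The integration step (combining point cloud data that locates recognized devices with image-derived labels) might look like the following sketch. The detection format and the 2-D floor-map output are assumptions for illustration only.

```python
# Illustrative sketch: fuse object-recognition labels with point-cloud
# centroids to place installed devices on a 2-D floor map, as in the
# mapping-integration step described above.

def centroid(points):
    """Mean of a small 3-D point cloud."""
    n = len(points)
    return tuple(sum(p[a] for p in points) / n for a in range(3))

def place_devices(detections):
    """detections: list of (label, point_cloud) -> floor-map entries (x, y)."""
    floor_map = []
    for label, cloud in detections:
        x, y, _z = centroid(cloud)     # drop height for the 2-D floor map
        floor_map.append({"type": label, "pos": (round(x, 2), round(y, 2))})
    return floor_map

devices = place_devices([
    ("smoke_detector", [(1.0, 2.0, 2.9), (1.2, 2.2, 3.1)]),
    ("thermostat",     [(4.0, 0.9, 1.5), (4.2, 1.1, 1.5)]),
])
```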

Подробнее
17-03-2016 дата публикации

Automated Analytics Systems and Methods

Номер: US20160078673A1
Принадлежит: General Electric Co

An automated analytics system can include a sensor system that obtains measurement data by monitoring one or more parameters at each of a number of locations on each of a number of replicated components of an object. A computing device receives the measurement data from the sensor system and uses the measurement data to automatically generate a computerized representation of each of the plurality of replicated components. Thereafter, upon receipt of an input query, the computing device generates a synthesized representation of the object that is specifically directed to a parameter of interest indicated in the query. The synthesized representation may be displayed in a visual format that is interpretable by a human to derive information associated with the parameter of interest.

Подробнее
15-03-2018 дата публикации

Point cloud data hierarchy

Номер: US20180075645A1
Принадлежит: Willow Garage LLC

One embodiment is directed to a method for presenting views of a very large point data set, comprising: storing data on a storage system that is representative of a point cloud comprising a very large number of associated points; automatically and deterministically organizing the data into an octree hierarchy of data sectors, each of which is representative of one or more of the points at a given octree mesh resolution; receiving a command from a user of a user interface to present an image based at least in part upon a selected viewing perspective origin and vector; and assembling the image based at least in part upon the selected origin and vector, the image comprising a plurality of data sectors pulled from the octree hierarchy.
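The "automatically and deterministically organizing" step can be sketched as a pure function from point to sector key: at a given depth, every point lands in exactly one octree cell. The unit-cube region and the key format below are illustrative assumptions.

```python
# Sketch of deterministic octree bucketing: a point maps to a sector key
# (depth, ix, iy, iz) by uniform subdivision of a cubic region. The same
# input always yields the same hierarchy, with no ordering dependence.

def sector_key(point, depth, origin=(0.0, 0.0, 0.0), size=1.0):
    """Deterministically map a point to its octree sector at `depth`."""
    cells = 1 << depth                             # 2**depth cells per axis
    idx = []
    for a in range(3):
        t = (point[a] - origin[a]) / size          # normalize to [0, 1]
        idx.append(min(cells - 1, int(t * cells))) # clamp the far edge
    return (depth,) + tuple(idx)

def build_hierarchy(points, depth):
    """Group points into sectors at one octree level."""
    sectors = {}
    for p in points:
        sectors.setdefault(sector_key(p, depth), []).append(p)
    return sectors

points = [(0.1, 0.1, 0.1), (0.15, 0.2, 0.05), (0.9, 0.9, 0.9)]
sectors = build_hierarchy(points, depth=1)
```

Because the key is derived purely from coordinates, sectors can be computed independently (e.g., sharded across a storage cluster) and still agree.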

Подробнее
15-03-2018 дата публикации

POINT CLOUD DATA HIERARCHY

Номер: US20180075646A1
Принадлежит: WILLOW GARAGE, INC.

One method embodiment comprises storing data on a storage system that is representative of a point cloud comprising a very large number of associated points; organizing the data into an octree hierarchy of data sectors, each of which is representative of one or more of the points at a given octree mesh resolution; receiving a command from a user of a user interface to present an image based at least in part upon a selected viewing perspective origin and vector; and assembling the image based at least in part upon the selected origin and vector, the image comprising a plurality of data sectors pulled from the octree hierarchy, the plurality of data sectors being assembled such that sectors representative of points closer to the selected viewing origin have a higher octree mesh resolution than that of sectors representative of points farther away from the selected viewing origin. 1. A method for presenting multi-resolution views of a very large point data set, comprising: a. storing data on a storage system that is representative of a point cloud comprising a very large number of associated points; b. organizing the data into an octree hierarchy of data sectors, each of which is representative of one or more of the points at a given octree mesh resolution; c. receiving a command from a user of a user interface to present an image based at least in part upon a selected viewing perspective origin and vector; and d. assembling the image based at least in part upon the selected origin and vector, the image comprising a plurality of data sectors pulled from the octree hierarchy, the plurality of data sectors being assembled such that sectors representative of points closer to the selected viewing origin have a higher octree mesh resolution than that of sectors representative of points farther away from the selected viewing origin. 2. The method of claim 1, wherein storing comprises accessing a storage cluster. 3. The method of claim 1, further comprising using a network to ...
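The distance-dependent resolution rule (sectors nearer the viewing origin use a finer octree level than distant ones) might be realized as below. The logarithmic falloff and `max_depth` are illustrative choices, not taken from the claim.

```python
# Sketch of the level-of-detail rule stated in the claim: pick a deeper
# (finer) octree level for sectors close to the viewing origin, a shallower
# (coarser) one for sectors far away.

import math

def level_for_sector(sector_center, view_origin, max_depth=8):
    """Choose an octree level: drop one level per doubling of distance."""
    d = math.dist(sector_center, view_origin)
    level = max_depth - int(math.log2(max(d, 1.0)))
    return max(0, min(max_depth, level))
```

Sectors at unit distance render at full depth; a sector sixteen units away renders four levels coarser, keeping the assembled image size roughly bounded.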

Подробнее
24-03-2022 дата публикации

METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR GENERATING A PATH OF AN OBJECT THROUGH A VIRTUAL ENVIRONMENT

Номер: US20220092844A1
Принадлежит:

A method of generating a path of an object through a virtual environment is provided, the method comprising: receiving image data, at a first instance of time, from a plurality of image capture devices arranged in a physical environment; receiving image data, at at least one second instance of time after the first instance of time, from a plurality of image capture devices arranged in the physical environment; detecting a location of a plurality of points associated with an object within the image data from each image capture device at the first instance of time and the at least one second instance of time; projecting the location of the plurality of points associated with the object within the image data from each image capture device at the first instance of time and the at least one second instance of time into a virtual environment to generate a location of the plurality of points associated with the object in the virtual environment at each instance of time; and generating a path of the object through the virtual environment using the location of the plurality of points associated with the object in the virtual environment, the path being indicative of the position and orientation of the object through the virtual environment. 1. A method of generating a path of an object through a virtual environment, the method comprising: receiving image data, at a first instance of time, from a plurality of image capture devices arranged in a physical environment; receiving image data, at at least one second instance of time after the first instance of time, from a plurality of image capture devices arranged in the physical environment; detecting a location of a plurality of points associated with an object within the image data from each image capture device at the first instance of time and the at least one second instance of time; projecting the location of the plurality of points associated with the object within the image data from each image capture device at the ...
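Once the points are in the virtual environment, the final step, deriving a path indicating position and orientation, could be sketched like this. Taking orientation as the yaw between successive centroids is an assumption; the claim does not specify how orientation is computed.

```python
# Sketch of path generation from per-time point sets already projected into
# the virtual environment: position is the centroid of the object's points,
# orientation (yaw, assumed) is the heading from one centroid to the next.
# Requires at least two time instances.

import math

def generate_path(points_per_time):
    """points_per_time: list of 3-D point sets, one per time instance."""
    centroids = [
        tuple(sum(p[a] for p in pts) / len(pts) for a in range(3))
        for pts in points_per_time
    ]
    yaws = []
    for prev, cur in zip(centroids, centroids[1:]):
        yaws.append(math.atan2(cur[1] - prev[1], cur[0] - prev[0]))
    yaws.append(yaws[-1])          # final sample keeps the last heading
    return list(zip(centroids, yaws))

path = generate_path([
    [(-0.1, 0.0, 0.0), (0.1, 0.0, 0.0)],   # t0: centroid (0, 0, 0)
    [(0.9, 0.0, 0.0), (1.1, 0.0, 0.0)],    # t1: centroid (1, 0, 0)
    [(1.0, 0.9, 0.0), (1.0, 1.1, 0.0)],    # t2: centroid (1, 1, 0)
])
```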

Подробнее
05-03-2020 дата публикации

REAL-TIME SYSTEM AND METHOD FOR RENDERING STEREOSCOPIC PANORAMIC IMAGES

Номер: US20200074716A1
Автор: KAPLAN ADAM
Принадлежит:

A system and method for rendering stereoscopic panoramas. For example, in one embodiment, vertices of geometric primitives are generated for a panoramic image. Vertices of geometric primitives are stored in a 3-D coordinate system. A vertex processor determines a final location, equivalent to latitude and longitude coordinates, for each of the vertices in a panoramic image. A rendering engine renders the panoramic image in accordance with the final location of each of the vertices. 1. A method comprising: receiving vertices of geometric primitives in a 3-D coordinate system; determining a final location, equivalent to latitude and longitude coordinates, for each of the vertices in a panoramic image; and rendering the panoramic image in accordance with the final location of each of the vertices. 2. The method of wherein reverse ray casting techniques are implemented to perform the operation of determining the final location. 3. The method of further comprising: determining a set of one or more latitude angles and longitude angles based on a first ray projected out of a left virtual camera, the left virtual camera to be combined with a right virtual camera and a virtual center camera on a virtual stereo camera. 4. The method of further comprising: rotating the virtual center camera using the set of latitude and longitude angles; and determining first normalized device coordinates (NDC) based on an intersection of a second ray projected from a virtual center camera with a cubemap plane for a cubemap representation of the panoramic image. 5. The method of further comprising: rotating the stereo camera position by one or more of the longitude angles to determine current stereo camera position; and determining second NDCs based on the determined stereo camera position. 6. The method of wherein the first NDCs comprise X and Y NDCs and wherein the second NDCs comprise Z and W NDCs. 7. The method of wherein the reverse ray casting techniques comprise determining an initial direction or ...
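The "final location, equivalent to latitude and longitude coordinates" of a vertex can be illustrated with the standard spherical mapping used for equirectangular panoramas. The axis convention below (y up, camera looking down −z) is an assumption, not necessarily the patent's.

```python
# Sketch of mapping a 3-D vertex to panorama latitude/longitude: treat the
# vertex as a direction from the (virtual center) camera at the origin and
# convert to spherical angles.

import math

def final_location(vertex):
    """Return (latitude, longitude) in radians for a vertex direction."""
    x, y, z = vertex
    r = math.sqrt(x * x + y * y + z * z)
    lat = math.asin(y / r)        # latitude: elevation above the horizon
    lon = math.atan2(x, -z)       # longitude: azimuth; forward is -z
    return lat, lon
```

Scaling latitude and longitude linearly to pixel rows and columns then yields the equirectangular image position at which the vertex is rendered.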

Подробнее
05-03-2020 дата публикации

POINT CLOUD DATA HIERARCHY

Номер: US20200074728A1
Принадлежит: WILLOW GARAGE, INC.

One embodiment is directed to a method for presenting views of a very large point data set, comprising: storing data on a storage system that is representative of a point cloud comprising a very large number of associated points; automatically and deterministically organizing the data into an octree hierarchy of data sectors, each of which is representative of one or more of the points at a given octree mesh resolution; receiving a command from a user of a user interface to present an image based at least in part upon a selected viewing perspective origin and vector; and assembling the image based at least in part upon the selected origin and vector, the image comprising a plurality of data sectors pulled from the octree hierarchy. 1. A method for presenting views of a very large point data set, comprising: a. storing data on a storage system that is representative of a point cloud comprising a very large number of associated points; b. automatically and deterministically organizing the data into an octree hierarchy of data sectors, each of which is representative of one or more of the points at a given octree mesh resolution; c. receiving a command from a user of a user interface to present an image based at least in part upon a selected viewing perspective origin and vector; and d. assembling the image based at least in part upon the selected origin and vector, the image comprising a plurality of data sectors pulled from the octree hierarchy. 2. The method of claim 1, wherein storing comprises accessing a storage cluster. 3. The method of claim 1, further comprising using a network to intercouple the storage system, controller, and user interface. 4. The method of claim 3, wherein at least one portion of the network is accessible to the internet. 5. The method of claim 1, further comprising generating the user interface with a computing system that houses the controller. 6. The method of claim 1, further comprising presenting the user interface to ...

Подробнее
12-06-2014 дата публикации

Apparatus and method for rendering bezier curve

Номер: US20140160125A1
Принадлежит: SAMSUNG ELECTRONICS CO LTD

An apparatus and method for rendering a tile-binned Bezier curve may include a rendering calculator to determine a rendering scheme for at least one tile, with respect to the tile-binned Bezier curve, and a rendering processor to perform rendering with respect to a Bezier curve for the at least one tile, based on the determined rendering scheme. The rendering calculator may suspend the rendering of the Bezier curve at a boundary point between the at least one tile and an adjacent tile while the rendering is being performed, and determine the rendering scheme for a boundary value in which a position of the boundary point is reflected to be used when the adjacent tile is rendered.
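Suspending a Bezier curve at a tile boundary and handing a "boundary value" to the adjacent tile maps naturally onto de Casteljau subdivision: splitting at the boundary parameter yields the control points of the remaining sub-curve, which the neighboring tile can resume from. The sketch below uses 1-D control points for brevity and is not the patent's scheme; the split parameter would come from the tile-edge intersection.

```python
# De Casteljau subdivision: splitting a Bezier at parameter t produces two
# control polygons. The "right" polygon is exactly the state needed to
# resume rendering the rest of the curve in the adjacent tile.

def de_casteljau_split(ctrl, t):
    """Return (left, right) control polygons of a Bezier split at t."""
    pts = list(ctrl)
    left, right = [pts[0]], [pts[-1]]
    while len(pts) > 1:
        # one round of linear interpolation between adjacent control points
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
        left.append(pts[0])
        right.append(pts[-1])
    return left, right[::-1]

# Split a cubic at the (assumed) tile-boundary parameter t = 0.5.
left, right = de_casteljau_split([0, 0, 3, 3], 0.5)
```

The shared endpoint `left[-1] == right[0]` is the curve point at the boundary, so the two tiles join without a gap.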

Подробнее
18-03-2021 дата публикации

APPARATUS, METHOD AND COMPUTER PROGRAM FOR RENDERING A VISUAL SCENE

Номер: US20210082185A1
Принадлежит:

An apparatus for rendering a visual scene includes: a content visualization stage configured: to obtain as a first input a set of images of one or more objects, and to obtain as a second input a geometry representation of the one or more objects in a 3D-space; to obtain a final image representing the visual scene from a perspective of a target position, the visual scene including the one or more objects; to consider at least one of a lighting effect and/or an object interaction effect between the one or more objects and one or more further objects contained in the visual scene; the content visualization stage is configured to obtain a target view image from the set of images irrespective of the geometry representation. The apparatus is configured to map the target view image on the geometry representation under consideration of the target position. 1. Apparatus for rendering a visual scene, the apparatus comprising: a content visualization stage configured to: acquire as a first input a set of images of one or more objects, and to acquire as a second input a geometry representation of the one or more objects in a 3D-space, the geometry representation comprising a position information of the one or more objects within the visual scene; acquire a final image representing the visual scene from a perspective of a target position, the visual scene comprising the one or more objects; and consider at least one of a lighting effect and/or an object interaction effect between the one or more objects and one or more further objects comprised by the visual scene; wherein the content visualization stage comprises a target view synthesis stage configured to acquire a target view image from the set of images irrespective of the geometry representation, the target view image representing the one or more objects from the perspective of the target position, and a texture mapping block being configured to map the target view image on the geometry representation under consideration of the target position; wherein the ...

Подробнее
24-03-2016 дата публикации

Three Dimensional Targeting Structure for Augmented Reality Applications

Номер: US20160086372A1
Принадлежит: Huntington Ingalls Inc

A method is provided for obtaining AR information for display on a mobile interface device. The method comprises placing a three dimensional targeting structure in a target space, the targeting structure comprising a plurality of planar, polygonal facets each having a unique target pattern applied thereto. A position of the targeting structure relative to the target space is then determined. The method further comprises capturing an image of a portion of the target space including the targeting structure and identifying the unique target pattern of one of the plurality of facets visible in the captured image. The method also comprises establishing a pose of the mobile interface device relative to the target space using the captured image and the position of the targeting structure, obtaining AR information associated with the unique target pattern of the particular one of the plurality of facets, and displaying the AR information on the mobile interface device.

Подробнее
31-03-2022 дата публикации

Devices, Methods, and Graphical User Interfaces for Interacting with Three-Dimensional Environments

Номер: US20220101593A1
Принадлежит:

A computer system, while displaying a first computer-generated experience with a first level of immersion, receives biometric data corresponding to a first user. In response to receiving the biometric data: in accordance with a determination that the biometric data corresponding to the first user meets first criteria, the computer system displays the first computer-generated experience with a second level of immersion, wherein the first computer-generated experience displayed with the second level of immersion occupies a larger portion of a field of view of the first user than the first computer-generated experience displayed with the first level of immersion; and in accordance with a determination that the biometric data corresponding to the first user does not meet the first criteria, the computer system continues to display the first computer-generated experience with the first level of immersion. 1. A method, comprising: at a ...: displaying a first computer-generated experience with a first level of immersion; while displaying the first computer-generated experience with the first level of immersion, receiving biometric data corresponding to a first user; and in response to receiving the biometric data corresponding to the first user: in accordance with a determination that the biometric data corresponding to the first user meets first criteria, displaying the first computer-generated experience with a second level of immersion, wherein the first computer-generated experience displayed with the second level of immersion occupies a larger portion of a field of view of the first user than the first computer-generated experience displayed with the first level of immersion; and in accordance with a determination that the biometric data corresponding to the first user does not meet the first criteria, continuing to display the first computer-generated experience with the first level of immersion.

Подробнее
31-03-2022 дата публикации

Inferred shading mechanism

Номер: US20220101597A1
Принадлежит: Intel Corp

An apparatus to facilitate inferred object shading is disclosed. The apparatus comprises one or more processors to receive rasterized pixel data and hierarchical data associated with one or more objects and perform an inferred shading operation on the rasterized pixel data, including using one or more trained neural networks to perform texture and lighting on the rasterized pixel data to generate a pixel output, wherein the one or more trained neural networks uses the hierarchical data to learn a three-dimensional (3D) geometry, latent space and representation of the one or more objects.

Подробнее
02-04-2015 дата публикации

METHOD AND APPARATUS FOR RENDERING IMAGE DATA

Номер: US20150091892A1
Принадлежит: Samsung Electronics Co., Ltd

Provided is a rendering method and apparatuses for rendering image data. The rendering method includes generating a primitive list by performing geometry processing on a current tile to be rendered; determining whether the current tile is identical to a previous tile from among tiles included in a previously rendered frame; and in response to the previous tile being identical to the current tile, generating an image of the current tile by re-using an image of the previous tile. 1. A rendering method comprising: generating a primitive list by performing geometry processing on a current tile to be rendered; determining whether the current tile is identical to a previous tile from among tiles included in a previously rendered frame; and in response to the previous tile being identical to the current tile, generating, at a pixel processor, an image of the current tile by re-using an image of the previous tile. 2. The rendering method of claim 1, wherein the determining comprises determining that the current tile is identical to the previous tile based on at least one of whether the current tile and the previous tile are included in the same render target, whether the current tile and the previous tile have the same tile attributes, or whether the current tile and the previous tile include the same primitive list. 3. The rendering method of claim 2, wherein the tile attributes comprise at least one of coordinates or a size of a tile. 4. The rendering method of claim 1, further comprising, in response to no previous tile being identical to the current tile, generating the image of the current tile by performing pixel processing on the current tile, and generating a final image by combining images generated for all tiles included in a current frame. 5. The rendering method of claim 1, wherein the generating of the current image comprises generating of the current image by copying the image of the previous tile to an image area of the current ...
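The reuse test in claim 2 (same render target, same tile attributes, same primitive list) can be sketched as a cache lookup keyed on exactly those properties. The tile and cache layout below are assumptions for illustration.

```python
# Sketch of tile re-use: a tile's image from the previous frame is re-used
# when render target, tile attributes (coordinates, size) and primitive list
# all match; otherwise full pixel processing runs.

def render_tile(tile, prev_frame_cache, pixel_passes):
    key = (tile["target"], tile["coords"], tile["size"], tuple(tile["prims"]))
    if key in prev_frame_cache:
        return prev_frame_cache[key], True       # re-use previous image
    image = pixel_passes(tile)                   # full pixel processing
    prev_frame_cache[key] = image
    return image, False

# Usage: the second render of an identical tile skips pixel processing.
calls = []
cache = {}
tile = {"target": "rt0", "coords": (2, 3), "size": (32, 32), "prims": [7, 9]}

def full_pixel_pass(t):                          # stand-in for pixel processing
    calls.append(t["coords"])
    return "image@" + str(t["coords"])

img1, reused1 = render_tile(tile, cache, full_pixel_pass)
img2, reused2 = render_tile(tile, cache, full_pixel_pass)
```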

Подробнее
12-03-2020 дата публикации

Methods and Systems for Representing a Scene By Combining Perspective and Orthographic Projections

Номер: US20200082598A1
Принадлежит: VERIZON PATENT AND LICENSING INC

An exemplary virtual reality content generation system manages state data representing a virtual reality scene. Based on the state data, the system generates a scene representation of the virtual reality scene that includes a set of surface data frame sequences each depicting a different projection of the virtual reality scene from a different vantage point. The different projections include a plurality of orthographic projections that are generated based on orthographic vantage points and are representative of a core portion of the virtual reality scene. The different projections also include a plurality of perspective projections that are generated based on perspective vantage points and are representative of a peripheral portion of the virtual reality scene external to the core portion. The system further provides the scene representation to a media player device by way of a network.
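Combining the two projection families might look like the following toy sketch: points in a cubic core region are projected orthographically, while points outside it are projected perspectively from a vantage point. The core size and vantage placement are illustrative assumptions, not taken from the patent.

```python
# Sketch of mixing orthographic and perspective projections: the core of the
# scene uses an orthographic projection (depth dropped directly); the
# periphery uses a pinhole perspective projection from a vantage on +z.

def project(point, core_half_size=1.0, vantage_z=3.0):
    """Return ("ortho" | "persp", (u, v)) for a scene point."""
    x, y, z = point
    if max(abs(x), abs(y), abs(z)) <= core_half_size:
        return ("ortho", (x, y))                  # core: parallel projection
    w = vantage_z - z                             # perspective divide
    return ("persp", (x * vantage_z / w, y * vantage_z / w))
```

Orthographic frames keep the core free of perspective distortion regardless of the viewer's position, while perspective frames cover the periphery cheaply.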

Подробнее