Total found: 6355. Displayed: 100.
Publication date: 15-03-2012

Method for manipulating a dental virtual model, method for creating physical entities based on a dental virtual model thus manipulated, and dental models thus created

Number: US20120065952A1
Author: Avi Kopelman, Eldad Taub
Assignee: Align Technology Inc, Cadent Ltd

A 3D virtual model of an intra-oral cavity in which at least a part of a finish line of a preparation is obscured is manipulated in virtual space by means of a computer or the like to create, recreate or reconstruct finish line data and other geometrical data corresponding to the obscured part. Trimmed virtual models, and trimmed physical models, can then be created utilizing the data thus created. The virtual models and/or the physical models may be used in the design and manufacture of copings or of prostheses.

Publication date: 05-04-2012

Methods and apparatus for rendering applications and widgets on a mobile device interface in a three-dimensional space

Number: US20120081356A1
Assignee: SPB Software Inc

A system represents each of the available applications, including widgets, with a respective image representation on a display associated with the communications device. The system associates each of the image representations with a respective subset of image representations, or panels, that are organized to assist a user to locate and interact with the image representations. The system arranges the panels in a three-dimensional structure on the display. The three-dimensional structure is rendered as a plurality of joined adjacent panels. The system allows the user to access an available application within the three-dimensional structure by manipulating the structure three-dimensionally, where the available applications are accessed via the respective panels.

Publication date: 02-08-2012

Systems and methods for matching, naming, and displaying medical images

Number: US20120194540A1
Assignee: DR Systems Inc

A method of matching medical images according to user-defined match rules. In one embodiment, the matched medical images are displayed according to user-defined display rules such that the matched medical images may be visually compared in a manner that is suitable to the viewer's viewing preferences.

Publication date: 07-02-2013

System and method for animating collision-free sequences of motions for objects placed across a surface

Number: US20130033501A1
Assignee: Autodesk Inc

Embodiments of the invention set forth a technique for animating objects placed across a surface of a graphics object. A CAD application receives a set of motions and initially applies a different motion in the set of motions to each object placed across the surface of the graphics object. The CAD application calculates bounding areas of each object according to the current motion applied thereto, which are subsequently used by the CAD application to identify collisions that are occurring or will occur between the objects. Identified collisions are cured by identifying valid motions in the set of motions that can be applied to a colliding object and then calculating bounding areas for the valid motions to select a valid motion that, when applied to the object, does not cause the object to collide with any other objects.
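
The collision check described above can be sketched in a few lines: compute an axis-aligned bounding box for each object under its current motion, test boxes for overlap, and pick an alternative motion that avoids all neighbours. All function names and the 2D geometry are illustrative assumptions, not Autodesk's actual API.

```python
# Hedged sketch: AABB-based collision curing for animated objects.
# Geometry is 2D for brevity; names are hypothetical.

def bounding_area(position, size):
    """Return an AABB as (min_x, min_y, max_x, max_y) for an object at `position`."""
    x, y = position
    w, h = size
    return (x, y, x + w, y + h)

def overlaps(a, b):
    """True if two AABBs intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def find_valid_motion(obj_pos, size, motions, other_boxes):
    """Pick the first motion whose resulting AABB avoids every other object's box."""
    for motion in motions:
        moved = (obj_pos[0] + motion[0], obj_pos[1] + motion[1])
        box = bounding_area(moved, size)
        if not any(overlaps(box, o) for o in other_boxes):
            return motion
    return None  # no collision-free motion in the set
```

A real implementation would recompute bounding areas per animation frame; this only shows the selection step.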

Publication date: 23-05-2013

Parallax image authoring and viewing in digital media

Number: US20130127826A1
Assignee: Adobe Systems Inc

An authoring tool assigns a first depth value to a first image layer and a second depth value to a second image layer. The first depth value is a first simulated distance from a user. The second depth value is a second simulated distance from the user. The authoring tool composes an image based on the first image layer and the second image layer such that the image is displayed within a page in a scrollable area on a viewing device. The first depth value is utilized to generate a first offset value from a first static position of the first image layer and the second depth value is utilized to generate a second offset value from a second static position of the second image layer based upon a scroll position of the page with respect to a target location in the scrollable area to create a parallax effect.
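
The offset computation described above can be illustrated as: each layer is shifted from its static position by the scroll distance from a target location, scaled by the layer's simulated depth, so near layers move more than far ones. The exact scaling formula and the names below are assumptions for illustration, not the patent's specification.

```python
# Hedged sketch of per-layer parallax offsets; one scroll axis only.

def parallax_offset(scroll_position, target_location, depth, max_depth=100.0):
    """Nearer layers (small depth) shift more than distant ones (assumed linear falloff)."""
    scroll_delta = scroll_position - target_location
    return scroll_delta * (1.0 - depth / max_depth)

def compose_layer_positions(static_positions, depths, scroll_position, target_location):
    """Offset each layer from its static position to create the parallax effect."""
    return [p + parallax_offset(scroll_position, target_location, d)
            for p, d in zip(static_positions, depths)]
```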

Publication date: 18-07-2013

Automatic Plane Alignment in 3D Environment

Number: US20130181971A1
Author: Eric J. Mueller
Assignee: MOTOROLA MOBILITY LLC

In one embodiment, a method determines a first plane in a plurality of planes that is active for placing elements in a three dimensional (3D) space. A changing of a viewing direction of the first plane in the 3D space is detected. The method determines when a second plane in the plurality of planes should be activated for placing elements based on the changing of the viewing direction. The second plane is then activated for placing elements in the 3D space. The second plane is oriented at a different angle than the first plane with respect to the viewing direction.
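
One plausible way to decide when a second plane should become active, as described above, is to score each candidate plane by how directly it faces the current viewing direction and activate the best-scoring one. This is a plain-Python stand-in under that assumption; the patent does not specify this particular scoring.

```python
# Hedged sketch: pick the plane whose normal most directly opposes the view vector.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def most_facing_plane(view_direction, plane_normals):
    """Return the index of the plane whose normal best opposes the viewing direction."""
    scores = [-dot(view_direction, n) for n in plane_normals]
    return max(range(len(scores)), key=scores.__getitem__)
```

As the viewing direction changes, re-running the selection switches the active plane.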

Publication date: 15-08-2013

Routing virtual area based communications

Number: US20130212228A1
Assignee: Social Communications Co

In association with a virtual area, a first network connection is established with a first network node present in the virtual area and a second network connection is established with a second network node present in the virtual area. Based on stream routing instructions, a stream router is created between the first network node and the second network node. The stream router includes a directed graph of processing elements operable to receive network data, process the received network data, and output the processed network data. On the first network connection, an input data stream derived from output data generated by the first network node is received in association with the virtual area. The input data stream is processed through the stream router to produce an output data stream. On the second network connection, the output data stream is sent to the second network node.
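
The stream router above is a directed graph of processing elements; in the simple linear case that reduces to a pipeline where each element transforms the data before passing it on. The sketch below shows only that chain case, with illustrative names.

```python
# Hedged sketch: a stream router as a chain of processing elements.
# The patent describes a general directed graph; this shows the linear special case.

def make_router(*elements):
    """Compose processing elements into a pipeline applied in order."""
    def route(data):
        for element in elements:
            data = element(data)
        return data
    return route
```

For example, a router built from a decoder and a mixer would apply the decoder's output as the mixer's input.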

Publication date: 29-08-2013

3D building modeling

Number: US20130222375A1
Assignee: Hover Inc

Embodiments of the invention relate to the visualization of geographical information and the combination of image information to generate geographical information. Specifically, embodiments of the invention relate to a process and system for correlating oblique image data and terrain data without extrinsic information about the oblique imagery. Embodiments include a visualization tool to allow simultaneous and coordinated viewing of the correlated imagery. The visualization tool may also provide distance measuring, three-dimensional lens, structure identification, path finding, visibility and similar tools to allow a user to determine the distance between imaged objects.

Publication date: 26-09-2013

Virtual aligner

Number: US20130249893A1
Author: Tarun Mehra
Assignee: Individual

The virtual aligner is used to re-align the two virtual components of a three dimensional digital model, such as virtual dental or orthodontic arches. An infinite number of new alignments can be created and saved by the user without altering the original record. The virtual aligner allows the user to move the mandibular virtual arch in relation to a static maxillary virtual arch. Translational movement of the virtual mandible is along each of the x, y, or z axes, left and right, up and down, and back and forth, respectively. Rotational movement of the virtual mandible is around each of the x, y or z axes.
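
The translational and rotational moves described above can be sketched as transforms applied to the mandibular arch's vertices while the maxillary arch stays fixed. The data layout (vertex tuples) and function names are illustrative assumptions, not the product's interface.

```python
# Hedged sketch: translate mandible vertices along x/y/z and rotate about the z axis.

import math

def translate(vertices, dx=0.0, dy=0.0, dz=0.0):
    """Shift every vertex by (dx, dy, dz)."""
    return [(x + dx, y + dy, z + dz) for x, y, z in vertices]

def rotate_z(vertices, angle_rad):
    """Rotate every vertex about the z axis by `angle_rad`."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in vertices]
```

Rotations about the x and y axes follow the same pattern with the other coordinate pairs.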

Publication date: 05-12-2013

Sensor-enhanced localization in virtual and physical environments

Number: US20130321391A1
Assignee: Boeing Co

In one embodiment, a computer-based system comprises a measurement device, a display, a processor, and logic instructions stored in a tangible computer-readable medium coupled to the processor which, when executed by the processor, configure the processor to determine a position and orientation in a real three-dimensional space of the measurement device relative to at least one real object in the three-dimensional space, and to render on the display a perspective view of a virtual image of a virtual object corresponding to the real object in a virtual three-dimensional space, wherein the perspective view of the virtual object corresponds to the perspective view of the real object from the position of the measurement device.

Publication date: 06-01-2022

METHOD OF DESIGNING A SKULL PROSTHESIS, AND NAVIGATION SYSTEM

Number: US20220000555A1
Assignee:

Methods and apparatus for designing a skull prosthesis are disclosed. In one arrangement, imaging data from a medical imaging process is received. The imaging data represents the shape of at least a portion of a skull. The imaging data is used to display on a display device a first virtual representation of at least a portion of the skull. User input defining a cutting line in the first virtual representation is received. A surgical operation of cutting through the skull along at least a portion of the defined cutting line to at least partially disconnect a target portion of the skull from the rest of the skull is simulated. Output data is provided based on the simulation. The output data represents a simulated shape of at least a portion of the skull with the target portion at least partially disconnected from the rest of the skull, thereby defining the shape of an implantation site for a skull prosthesis to be manufactured. 1. A computer-implemented method of designing a skull prosthesis, comprising: receiving imaging data from a medical imaging process, the imaging data representing the shape of at least a portion of a skull; using the imaging data to display on a display device a first virtual representation of at least a portion of the skull; receiving user input defining a cutting line in the first virtual representation; simulating a surgical operation of cutting through the skull along at least a portion of the defined cutting line to at least partially disconnect a target portion of the skull from the rest of the skull; providing output data based on the simulation, the output data representing a simulated shape of at least a portion of the skull with the target portion at least partially disconnected from the rest of the skull, thereby defining the shape of an implantation site for a skull prosthesis to be manufactured. 2. The method of claim 1, wherein the output data comprises a modified version of the received imaging data. 3. The method of claim 2, wherein the ...

Publication date: 02-01-2020

DENTAL ARCH WIDTH MEASUREMENT TOOL

Number: US20200000554A1
Assignee:

Systems and methods for rapidly and reliably determining an arch width of a patient's dental arch. A patient's dentition may be scanned and/or segmented. Arch width may be determined between points of intersection on the occlusal surface and a long axis of each tooth between one or more of: canine, first bicuspid, first primary molar, second bicuspid, second primary molar, and permanent first molar. Arch widths of different modified versions of the patient's dentition may be dynamically compared to the patient's starting dentition, or to each other, and may be dynamically updated as the user modifies or switches between one or more 3D models of the patient's dentition. 1. A computer-implemented method comprising: identifying a first portion of a three-dimensional (3D) model of a patient's dentition corresponding to a first target tooth, the first portion of the 3D model of the patient's dentition being associated with one or more attributes of the target tooth; identifying a second portion of the 3D model of the patient's dentition corresponding to a second target tooth opposing the first target tooth, wherein the second portion of the 3D model of the patient's dentition is associated with one or more attributes of the second target tooth; determining an arch width between the one or more attributes of the first target tooth and the one or more attributes of the second target tooth; and outputting the arch width. 2. The computer-implemented method of claim 1, wherein outputting the arch width comprises overlaying a graphic with the arch width on the 3D model of the patient's dentition. 3. The computer-implemented method of claim 1, further comprising one or more of: taking the 3D model of the patient's teeth, receiving the 3D model of the patient's teeth from an intraoral scanner, and receiving the 3D model from a scan of a mold of the patient's teeth. 4. The computer-implemented method of claim 1, wherein the target tooth comprises a tooth selected from ...
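
Once landmark points on two opposing teeth are known, the arch width reduces to a point-to-point distance, and comparing dentition versions reduces to comparing those distances. The point names below are hypothetical; a real pipeline would derive them from the segmented scan.

```python
# Hedged sketch: arch width as the Euclidean distance between landmark
# points on two opposing teeth of a 3D dentition model.

import math

def arch_width(point_a, point_b):
    """Distance between landmark points on two opposing teeth."""
    return math.dist(point_a, point_b)

def compare_arch_widths(baseline_points, modified_points):
    """Change in arch width between a starting dentition and a modified one."""
    return arch_width(*modified_points) - arch_width(*baseline_points)
```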

Publication date: 13-01-2022

Devices and methods for anatomic mapping for prosthetic implants

Number: US20220008134A1
Assignee: Aortica Corp

A method of generating a patient-specific prosthetic includes receiving anatomic imaging data representative of a portion of a patient's anatomy. A first digital representation of the anatomic imaging data is defined. The first digital representation of the anatomic imaging data is modified. A second digital representation of the portion of the patient's anatomy is defined based on the modifying of the first digital representation of the anatomic imaging data. A patient-specific prosthetic template of the portion of the patient's anatomy is generated based at least in part on the second digital representation of the anatomic imaging data.

Publication date: 07-01-2016

IMAGE PROCESSOR WITH EVALUATION LAYER IMPLEMENTING SOFTWARE AND HARDWARE ALGORITHMS OF DIFFERENT PRECISION

Number: US20160004919A1
Assignee:

An image processor comprises image processing circuitry implementing a plurality of processing layers including at least an evaluation layer and a recognition layer. The evaluation layer comprises a software-implemented portion and a hardware-implemented portion, with the software-implemented portion of the evaluation layer being configured to generate first object data of a first precision level using a software algorithm, and the hardware-implemented portion of the evaluation layer being configured to generate second object data of a second precision level lower than the first precision level using a hardware algorithm. The evaluation layer further comprises a signal combiner configured to combine the first and second object data to generate output object data for delivery to the recognition layer. By way of example only, the evaluation layer may be implemented in the form of an evaluation subsystem of a gesture recognition system of the image processor. 1. An image processor comprising: image processing circuitry implementing a plurality of processing layers including at least an evaluation layer and a recognition layer; the evaluation layer comprising a software-implemented portion and a hardware-implemented portion; the software-implemented portion of the evaluation layer being configured to generate first object data of a first precision level using a software algorithm; the hardware-implemented portion of the evaluation layer being configured to generate second object data of a second precision level lower than the first precision level using a hardware algorithm; wherein the evaluation layer further comprises a signal combiner configured to combine the first and second object data to generate output object data for delivery to the recognition layer. 2. The image processor of wherein the evaluation layer comprises an evaluation subsystem of a gesture recognition system. 3. The image processor of wherein the plurality of processing layers further comprises a ...

Publication date: 13-01-2022

Annotation using a multi-device mixed interactivity system

Number: US20220011924A1
Assignee: Microsoft Technology Licensing LLC

In various embodiments, methods and systems for implementing a multi-device mixed interactivity system are provided. The interactivity system includes paired mixed-input devices for interacting and controlling virtual objects. In operation, a selection profile associated with a virtual object is accessed. The selection profile is generated based on a selection input determined using real input associated with a selection device and virtual input associated with a mixed-reality device. The selection device has a first display and the mixed-reality device has a second display that both display the virtual object. An annotation input for the virtual object based on a selected portion corresponding to the selection profile is received. An annotation profile based on the annotation input is generated. The annotation profile includes annotation profile attributes for annotating a portion of the virtual object. An annotation of the selected portion of the virtual reality object is caused to be displayed.

Publication date: 07-01-2016

Service provision program

Number: US20160005177A1
Author: Sokichi Fujita
Assignee: Fujitsu Ltd

A non-transitory recording medium storing a program that causes a computer to execute a process, the process including: generating a modified image by executing modification processing on an image of a mark affixed to a product; and providing the generated modified image as a determination-use image employable in determination as to whether or not the product affixed with the mark is included in a captured image.

Publication date: 07-01-2016

SYSTEM AND METHOD FOR SEGMENTATION OF LUNG

Number: US20160005193A1
Assignee:

Disclosed are systems, devices, and methods for determining the pleural boundaries of a lung, an exemplary method comprising acquiring image data from an imaging device, generating a set of two-dimensional (2D) slice images based on the acquired image data, determining, by a processor, a seed voxel in a first slice image from the set of 2D slice images, applying, by the processor, a region growing process to the first slice image from the set of 2D slice images starting with the seed voxel using a threshold value, generating, by the processor, a set of binarized 2D slice images based on the region grown from the seed voxel, filtering out, by the processor, connected components of the lung in each slice image of the set of binarized 2D slice images, and identifying, by the processor, the pleural boundaries of the lung based on the set of binarized 2D slice images. 1. A segmentation method for determining pleural boundaries of a lung, comprising: acquiring image data from an imaging device; generating a set of two-dimensional (2D) slice images based on the acquired image data; determining, by a processor, a seed voxel in a first slice image from the set of 2D slice images; applying, by the processor, a region growing process to the first slice image from the set of 2D slice images starting with the seed voxel using a threshold value; generating, by the processor, a set of binarized 2D slice images based on the region grown from the seed voxel; filtering out, by the processor, connected components of the lung in each slice image of the set of binarized 2D slice images; and identifying, by the processor, the pleural boundaries of the lung based on the set of binarized 2D slice images. 2. The segmentation method according to claim 1, wherein the seed voxel is in a portion of the first slice image from the set of binarized 2D slice images corresponding to a trachea of the lung. 3. The segmentation method according to claim 1, wherein the threshold value is greater than or equal to an ...
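
The region-growing step on a single slice can be sketched as a flood fill from the seed over 4-connected pixels whose intensity falls below the threshold. This is a simplified stand-in for the method described above; the seed location, threshold convention, and connectivity are assumptions.

```python
# Hedged sketch: threshold-based region growing on one 2D slice,
# starting from a seed pixel and expanding 4-connected neighbours.

def region_grow(slice_2d, seed, threshold):
    """Return the set of (row, col) pixels connected to `seed` with intensity below `threshold`."""
    rows, cols = len(slice_2d), len(slice_2d[0])
    region, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if slice_2d[r][c] >= threshold:  # outside the low-intensity (air-filled) region
            continue
        region.add((r, c))
        stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region
```

Binarizing the slice then amounts to marking the grown region as foreground and everything else as background.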

Publication date: 03-01-2019

Position Determination and Alignment of a Virtual Reality Headset and Fairground Ride with a Virtual Reality Headset

Number: US20190004598A1
Assignee: VR Coaster GmbH & Co. KG

A method for determining a position and for aligning at least one virtual reality headset in amusement rides. The virtual reality headset is a mobile virtual reality headset and has at least one receiver or at least one apparatus. The receiver receives a position signal of a position transmitter as a received signal, and the apparatus receives an alignment signal of an alignment transmitter. The disclosure additionally relates to an amusement ride with which a method according to the disclosure can be carried out. 1. A method for determining a position of a virtual reality headset in a vehicle moving along a travel stretch of a fairground ride, comprising: receiving a passenger on whom the virtual reality headset is placed during a trip with the vehicle, generating, during an operation of the fairground ride in an actual reality, a virtual reality corresponding to the fairground ride with the vehicle, wherein the virtual reality is represented on the virtual reality headset, wherein the fairground ride comprises a sensor from which a position signal emanates, wherein the virtual reality headset is a mobile virtual reality headset, wherein the virtual reality headset comprises a receiver which evaluates the position signal of the sensor for the determination of the position of the virtual reality headset in the vehicle relative to the sensor, and the virtual reality headset comprises a communication interface via which a data connection takes place between the virtual reality headset and a data processing device of the fairground ride, wherein the communication interface is detachable, wireless, or a combination thereof. 2. The method according to claim 1, further comprising determining the position and an alignment of the virtual reality headset by utilizing a motion capture system. 3. The method according to claim 1, wherein the sensor is a position sensor and an alignment sensor. 4. The method according to claim 1, wherein the fairground ride comprises at ...

Publication date: 07-01-2016

DYNAMIC 3D LUNG MAP VIEW FOR TOOL NAVIGATION INSIDE THE LUNG

Number: US20160005220A1
Assignee:

A method for implementing a dynamic three-dimensional lung map view for navigating a probe inside a patient's lungs includes loading a navigation plan into a navigation system, the navigation plan including a planned pathway shown in a 3D model generated from a plurality of CT images, inserting the probe into a patient's airways, registering a sensed location of the probe with the planned pathway, selecting a target in the navigation plan, presenting a view of the 3D model showing the planned pathway and indicating the sensed location of the probe, navigating the probe through the airways of the patient's lungs toward the target, iteratively adjusting the presented view of the 3D model showing the planned pathway based on the sensed location of the probe, and updating the presented view by removing at least a part of an object forming part of the 3D model. 1. A method for implementing a dynamic three-dimensional (3D) lung map view for navigating a probe inside a patient's lungs, the method comprising: loading a navigation plan into a navigation system, the navigation plan including a planned pathway shown in a 3D model generated from a plurality of CT images; inserting the probe into a patient's airways, the probe including a location sensor in operative communication with the navigation system; registering a sensed location of the probe with the planned pathway; selecting a target in the navigation plan; presenting a view of the 3D model showing the planned pathway and indicating the sensed location of the probe; navigating the probe through the airways of the patient's lungs toward the target; iteratively adjusting the presented view of the 3D model showing the planned pathway based on the sensed location of the probe; and updating the presented view by removing at least a part of an object forming part of the 3D model. 2. The method according to claim 1, wherein iteratively adjusting the presented view of the 3D model includes zooming in when the probe approaches the ...

Publication date: 07-01-2016

Three-dimensional information processing device

Number: US20160005222A1
Assignee: Mitsubishi Electric Corp

A three-dimensional information processing apparatus includes: a bottom surface intersection point calculation unit calculating intersection points between segments formed by a set of vertices for a terrain model generated by a terrain vertex synthesis unit and outer peripheral segments of a bottom surface of a structure model extracted by a structure bottom surface extraction unit; an all-point height calculation unit calculating heights of the terrain model at the intersection points calculated by the calculation unit and at the vertices constituting the bottom surface of the structure model; a reference height calculation unit calculating a reference height in three-dimensional information on a predetermined area from the heights of the terrain model; and a structure height correction unit correcting the height of the structure model using differences between the reference height and the heights of the terrain model.

Publication date: 07-01-2016

METHOD AND SYSTEM FOR AUTOMATICALLY ALIGNING MODELS OF AN UPPER JAW AND A LOWER JAW

Number: US20160005237A1
Assignee:

A method for automatically aligning a model for an upper jaw with a model for a lower jaw, the method including forming models for teeth of the upper jaw and the lower jaw based on images; obtaining a reference bite frame with the teeth in a clenched state; aligning the models for the teeth of the upper jaw and the lower jaw with the reference bite frame, respectively, to determine transform information between the generated models and the reference bite frame; and aligning the model for the teeth of the upper jaw with that of the lower jaw based on the determined transform information. 1. A method for automatically aligning a model for an upper jaw with a model for a lower jaw, including: a. forming a model for teeth of the upper jaw based on respective images; b. forming a model for teeth of the lower jaw based on respective images; c. obtaining a reference bite frame with the teeth of the upper jaw and lower jaw in a clenched state; d. aligning the model for the teeth of the upper jaw and the model for the teeth of the lower jaw with the reference bite frame, respectively, to determine transform information between the generated models and the reference bite frame; e. aligning the model for the teeth of the upper jaw with the model for the teeth of the lower jaw based on the determined transform information. 2. The method of claim 1, wherein step a includes: i. reconstructing three-dimensional surfaces for the teeth of the upper jaw from the respective images; ii. generating a model for the teeth of the upper jaw from the reconstructed three-dimensional surfaces for the teeth of the upper jaw; and wherein step b includes: iii. reconstructing three-dimensional surfaces for the teeth of the lower jaw from the respective images; iv. generating a model for the teeth of the lower jaw from the reconstructed three-dimensional surfaces for the teeth of the lower jaw. 3. The method of claim 1, wherein step c includes: capturing images for a part of all teeth; reconstructing ...
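
The final alignment step can be sketched as transform composition: if one transform places the upper-jaw model in the reference bite frame and another places the lower-jaw model there, the upper model is positioned relative to the lower by going into the bite frame and back out through the inverse of the lower transform. For brevity the sketch uses translation-only transforms; a real system would use full rigid transforms. All names are illustrative.

```python
# Hedged sketch: composing jaw-to-bite-frame transforms, translations only.

def apply(t, point):
    """Apply a translation-only rigid transform t to a 3D point."""
    return tuple(p + d for p, d in zip(point, t))

def upper_to_lower(t_upper, t_lower):
    """Transform placing upper-jaw coordinates in the lower-jaw frame:
    into the bite frame via t_upper, then out via the inverse of t_lower."""
    return tuple(u - l for u, l in zip(t_upper, t_lower))
```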

Publication date: 03-01-2019

Annotation using a multi-device mixed interactivity system

Number: US20190004684A1
Assignee: Microsoft Technology Licensing LLC

In various embodiments, methods and systems for implementing a multi-device mixed interactivity system are provided. The interactivity system includes paired mixed-input devices for interacting and controlling virtual objects. In operation, a selection profile associated with a virtual object is accessed. The selection profile is generated based on a selection input determined using real input associated with a selection device and virtual input associated with a mixed-reality device. The selection device has a first display and the mixed-reality device has a second display that both display the virtual object. An annotation input for the virtual object based on a selected portion corresponding to the selection profile is received. An annotation profile based on the annotation input is generated. The annotation profile includes annotation profile attributes for annotating a portion of the virtual object. An annotation of the selected portion of the virtual reality object is caused to be displayed.

Publication date: 13-01-2022

Electronic device and control method thereof

Number: US20220012940A1
Assignee: SAMSUNG ELECTRONICS CO LTD

An electronic device is disclosed. The electronic device includes a display, a processor electronically connected to the display so as to control the display, and a memory electronically connected to the processor. The memory stores instructions causing the processor to control the display to display a 3D modeling image acquired by applying an input 2D image to a learning network model configured to convert the input 2D image into a 3D modeling image, and the learning network model is obtained by learning using a 3D pose acquired by rendering virtual 3D modeling data and a 2D image corresponding to the 3D pose.

Publication date: 13-01-2022

METHOD, DEVICE AND COMPUTER PROGRAM FOR GENERATING A VIRTUAL SCENE OF OBJECTS

Number: US20220012955A1
Assignee: Inter IKEA Systems B.V.

The present disclosure relates to the field of image analysis, in particular, it relates to a method for generating a virtual scene of objects. The disclosure also relates to a device comprising circuitry configured to carry out the method. The disclosure also relates to a computer program product adapted to carry out the method. 1. A method comprising:identifying one or more objects within a virtual scene;for each object of the one or more objects, determining a subspace within a 3D coordinate space of the virtual scene used by a body of the object;defining empty spaces within the 3D coordinate space;receiving a further object to be placed in one empty space of the empty spaces;determine that a body of the further object does not fit within the one empty space; and identifying at least one of a line or a surface in the virtual scene based on the 3D coordinate space;', 'moving the at least one object along at least one of the line or the surface identified in the 3D coordinate space to increase a size of the one empty space, such that the further object fits in the one empty space; and', 'placing the further object in the one empty space., 'when determined that the body does not fit, rearranging at least one object of the one or more objects in the 3D coordinate space by2. The method of claim 1 , wherein identifying the one or more objects within the virtual scene further comprises applying an image segmentation algorithm to image data of the virtual scene.3. The method of claim 1 , wherein rearranging the at least one object further comprises:identifying one or more surfaces in the virtual scene by analyzing the 3D coordinate space; andmoving the at least one object along the surface in the 3D coordinate space.4. The method of claim 1 , the method further comprising:prior to defining empty spaces, removing a particular object of the one or more objects.5. 
The method of claim 4 , the method further comprising:determining an object type for the particular object, and ...
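The rearrangement step claimed above can be sketched in one dimension, treating each object as a (start, width) interval along a shelf-like line; all names and the compaction strategy below are illustrative assumptions, not the patent's implementation.

```python
# Sketch of the claimed rearrangement idea in one dimension, assuming
# objects are axis-aligned intervals (start, width) along a line.
# All names here are illustrative, not from the patent.

def gaps(objects, line_length):
    """Return (start, width) of empty spaces between sorted objects."""
    result = []
    cursor = 0.0
    for start, width in sorted(objects):
        if start > cursor:
            result.append((cursor, start - cursor))
        cursor = max(cursor, start + width)
    if cursor < line_length:
        result.append((cursor, line_length - cursor))
    return result

def place_with_rearrange(objects, new_width, line_length):
    """Try to place a new object; if no gap fits, slide neighbours
    along the line to merge free space, as in the claimed method."""
    for start, width in gaps(objects, line_length):
        if width >= new_width:
            return objects + [(start, new_width)]
    total_free = sum(w for _, w in gaps(objects, line_length))
    if total_free < new_width:
        return None  # cannot fit even after rearranging
    # Compact all objects to the left, freeing one gap on the right.
    packed, cursor = [], 0.0
    for _, width in sorted(objects):
        packed.append((cursor, width))
        cursor += width
    return packed + [(cursor, new_width)]
```

Here rearranging simply compacts objects toward the origin, which is one of many ways to "move objects along a line to increase a size of the one empty space".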

07-01-2021 publication date

METHOD AND DEVICE FOR DETERMINING THE AMPLITUDE OF A MOVEMENT PERFORMED BY A MEMBER OF AN ARTICULATED BODY

Number: US20210004968A1
Assignee:

A method for determining the amplitude of a movement performed by a member of an articulated body comprises: 1. A method for determining an amplitude of a movement performed by a member of an articulated body , said method comprising:obtaining a segment representative of a positioning of the member in a specific reference frame at an end of said movement;generating a three-dimensional model of the member, positioned in said specific reference frame using the obtained segment;obtaining a cloud of three-dimensional points representing the member in said specific reference frame at the end of said movement, based on depth information provided by a sensor, said depth information defining a three-dimensional scene comprising at least a part of the articulated body including said member;repositioning the three-dimensional model of the member to minimize a predetermined error criterion between the obtained cloud of three-dimensional points and said three-dimensional model, thereby obtaining a new positioning of the three-dimensional model of the member; anddetermining the amplitude of the movement, based on the new positioning of the three-dimensional model of the member.2. The method according to claim 1 , wherein obtaining the segment representative of the member comprises estimating a skeleton of the articulated body, said obtained segment corresponding to a part of the estimated skeleton.3. The method according to claim 2 , wherein obtaining the segment representative of the member comprises:obtaining a two-dimensional image representing at least a part of the articulated body including said member,estimating the skeleton that is a two-dimensional skeleton of the articulated body, based on the obtained two-dimensional image, anddetecting two-dimensional points characterizing the member, using the estimated two-dimensional skeleton, said obtained segment corresponding to a two-dimensional segment linking said two-dimensional points.4. 
The method according to claim ...
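A 2D toy of the amplitude computation: the member is reduced to a segment between two joints, and the amplitude is taken as the angle between that segment before and after the movement. The joint coordinates and this angle-based definition of amplitude are assumptions for illustration, not the patent's method.

```python
import math

# Illustrative sketch (names are mine, not the patent's): the amplitude of
# a limb movement taken as the angle between the segment representing the
# member before and after the movement, each segment given by two joints.

def segment_vector(joint_a, joint_b):
    """Vector from one 2D joint position to another."""
    return (joint_b[0] - joint_a[0], joint_b[1] - joint_a[1])

def movement_amplitude_deg(seg_before, seg_after):
    """Angle in degrees between two 2D segment vectors."""
    ax, ay = seg_before
    bx, by = seg_after
    dot = ax * bx + ay * by
    na = math.hypot(ax, ay)
    nb = math.hypot(bx, by)
    cosang = max(-1.0, min(1.0, dot / (na * nb)))  # clamp rounding error
    return math.degrees(math.acos(cosang))
```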

07-01-2021 publication date

MIXED REALITY SYSTEM USER INTERFACE PLACEMENT

Number: US20210005021A1
Assignee: Microsoft Technology Licensing, LLC

A mixed reality display system determines a shared coordinate system that is understood by a mixed reality application running on the mixed reality display system and an operating system of the mixed reality display system. The operating system can display a system user interface (UI) element in a mixed reality environment. The system UI element can be displayed at a location in a mixed reality environment. The location is specified by the mixed reality application according to the shared coordinate system. A size and orientation for displaying the system UI element may also be specified. Also, the location, size and orientation may be specified through application program interfaces (API) of the operating system. API calls may be made per frame to adjust the location, size or orientation per frame of the displayed mixed reality environment. 1. A computing device for a mixed reality display system , the computing device comprising:at least one memory storing machine readable instructions; andat least one processor to execute the machine readable instructions to:determine a shared coordinate system in a mixed reality environment, wherein the shared coordinate system is shared between a mixed reality application and an operating system;receive, from the mixed reality application, a request to place a system UI element at a location for each frame of a plurality of frames of the mixed reality environment, wherein the location is expressed according to the shared coordinate system; andresponsive to the request for each frame, place, by the operating system, the system UI element at the location in each frame.2. 
The computing device of claim 1 , wherein the at least one processor is to execute the machine readable instructions to:determine an orientation for placing the system UI element at the location based on orientation information provided by the mixed reality application; andcause, by the operating system, the system UI element to be displayed at the location in ...

07-01-2021 publication date

Automatic placement and arrangement of content items in three-dimensional environment

Number: US20210005025A1
Assignee: Microsoft Technology Licensing LLC

Computing devices for automatic placement and arrangement of objects in computer-based 3D environments are disclosed herein. In one embodiment, a computing device is configured to provide, on a display, a user interface containing a work area having a template of a 3D environment and a gallery containing models of two-dimensional (2D) or 3D content items. The computing device can then detect, via the user interface, a user input selecting one of the models from the gallery to be inserted as an object into the template of the 3D environment. In response to detecting the user input, the computing device can render and surface on the display, a graphical representation of the 2D or 3D content item corresponding to the selected model at a location along a circular arc spaced apart from the default viewer position of a viewer of the 3D environment by a preset radial distance.
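The circular-arc placement described above might look like the sketch below; the radius, the 120° span, and the x/z ground-plane convention are assumed values, not taken from the patent.

```python
import math

# Sketch of placing N content items along a circular arc centred on a
# default viewer position, at a preset radial distance (assumed values).

def arc_positions(n_items, radius=2.0, arc_span_deg=120.0, viewer=(0.0, 0.0)):
    """Return (x, z) positions spread evenly on an arc facing the viewer."""
    if n_items == 1:
        angles = [0.0]
    else:
        step = arc_span_deg / (n_items - 1)
        angles = [-arc_span_deg / 2 + i * step for i in range(n_items)]
    vx, vz = viewer
    return [(vx + radius * math.sin(math.radians(a)),
             vz + radius * math.cos(math.radians(a))) for a in angles]
```

Every returned position lies exactly at the preset radial distance from the viewer, with the middle item straight ahead.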

07-01-2021 publication date

CREATION OF A SIMULATION SCENE FROM A SPECIFIED VIEW POINT

Number: US20210005026A1
Author: LESBORDES Rémi
Assignee:

A creation of simulation scenes from a specified view point is provided. The method consists in obtaining digital photographic images of the scene from the specified view point, detecting objects in the digital photographic images, extracting masks of the objects, and associating a distance to the digital photographic image, and a lower distance to the object. The scene thus created provides a photorealistic scene wherein 3D objects can be inserted. According to the distances of the 3D objects, they can be displayed in front of or behind the masks, but always behind the digital photographic images that define the background of the scene. 1. A method comprising:obtaining at least one digital photographic image of a view of a 3D real space;obtaining a position and an orientation of the digital photographic image relative to a specified view point;extracting from the at least one digital photographic image at least one mask representing at least one object having a specified position in the 3D real space in the at least one digital photographic image;associating to the mask an object distance between the object and the specified view point;associating to the digital photographic image a distance higher than the object distance;creating a digital simulation scene comprising the digital photographic image and the mask.2. The method of claim 1 , wherein obtaining the orientation of the digital photographic image relative to the specified view point comprises:retrieving at least geographical coordinates of at least one fixed element from a database;detecting at least one position of the fixed element of said set in the digital photographic image; anddetecting an orientation of the digital photographic image according to: geographical coordinates of said fixed element; geographical coordinates of the specified view point; said position of said fixed element in the digital photographic image.3. The method of claim 1 , wherein obtaining the orientation of the digital ...
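The distance ordering described in the abstract amounts to a painter's algorithm: draw the background photograph first (it carries the largest distance), then masks and inserted 3D objects from far to near. A minimal sketch, with illustrative layer names:

```python
# Sketch of the depth ordering in the created scene: the background photo
# gets a distance larger than every mask, and an inserted 3D object is
# drawn in front of a mask only when it is nearer than that mask.
# Layer names and the rendering order are illustrative.

def render_order(background_dist, masks, objects):
    """Return layer names sorted far-to-near (painter's algorithm).
    masks/objects: dicts mapping name -> distance from the view point."""
    layers = [("background", background_dist)]
    layers += list(masks.items()) + list(objects.items())
    return [name for name, dist in sorted(layers, key=lambda kv: -kv[1])]
```

A 3D object farther than a mask is drawn before it (so the mask occludes it), while a nearer object is drawn after it; the background is always drawn first.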

04-01-2018 publication date

Visual positioning device and three-dimensional surveying and mapping system and method based on same

Number: US20180005457A1
Author: Zheng Qin
Assignee: Beijing Antvr Technology Co ltd

Disclosed are a visual positioning device ( 101 ) and a three-dimensional surveying and mapping system ( 100 ) including at least one visual positioning device ( 101 ). The visual positioning device ( 101 ) includes an infrared light source ( 101 b ), an infrared camera ( 101 a ), a signal transceiver module ( 101 d ) and a visible light camera ( 101 c ). The three-dimensional surveying and mapping system ( 100 ) further includes a plurality of position identification points ( 102 ), a plurality of active signal points ( 103 ) and an image processing server ( 104 ). The image processing server ( 104 ) is configured to cache infrared images and real scene images shot by the infrared camera ( 101 a ) and the visible light camera ( 101 c ) and positioning information thereabout and store a three-dimensional model obtained through reconstruction. The present invention has the advantages of simple structure, no need for a power supply, convenience in use and high precision, etc.

02-01-2020 publication date

Remote Collaboration Methods and Systems

Number: US20200005538A1
Author: Neeter Eduardo J.
Assignee:

Apparatus and associated methods relate to immersive collaboration based on configuring a real scene VRE operable from a real scene and a remote VRE operable remote from the real scene with an MR scene model of the real scene, creating an MR scene in each of the real scene VRE and remote VRE based on augmenting the MR scene model with an object model, calibrating the remote MR scene to correspond in three-dimensional space with the real scene MR scene model, and automatically providing immersive collaboration based on the MR scene in the remote VRE and updating the real scene VRE with changes to the remote VRE. In an illustrative example, the MR scene model of the real scene may be determined as a function of sensor data scanned from the real scene. In some embodiments, the MR scene model may be augmented with an object model identified from the real scene. The object model identified from the real scene may be, for example, selected from a known object set based on matching sensor data scanned from the real scene with an object from a known object set. In some embodiments, the remote MR scene may be calibrated based on applying a three-dimensional transform calculated as a function of the real MR scene and remote MR scene geometries. Some designs may recreate a subset of the real scene in the remote VRE and update the real scene VRE with changes to the remote VRE. Various embodiments may advantageously provide seamless multimedia collaboration based on updates to the remote VRE in response to physical changes to the real scene, and updating the real scene VRE in response to changes in the remote VRE. 1. 
A process to provide immersive collaboration , comprising:configuring a real scene VRE operable from a real scene and a remote VRE operable remote from the real scene with an MR scene model of the real scene;creating an MR scene in each of the real scene VRE and remote VRE based on augmenting the MR scene model with an object model;calibrating the remote MR scene to ...

02-01-2020 publication date

INTRAORAL SCANNING USING ULTRASOUND AND OPTICAL SCAN DATA

Number: US20200005552A1
Author: Furst Gilad
Assignee:

A first multitude of intraoral images of a first portion of a three-dimensional intraoral object are received. A pre-existing model that corresponds to the three-dimensional intraoral object is identified. A first intraoral image of the first multitude of intraoral images is registered to a first portion of the pre-existing model. A second intraoral image of the first multitude of intraoral images is registered to a second portion of the pre-existing model. 1. A method comprising:receiving, by a processing device, a first plurality of intraoral images of a first portion of a three-dimensional intraoral object;identifying a pre-existing model that corresponds to the three-dimensional intraoral object;registering a first intraoral image of the first plurality of intraoral images to a first portion of the pre-existing model; andregistering a second intraoral image of the first plurality of intraoral images to a second portion of the pre-existing model.2. The method of claim 1 , further comprising:generating a virtual model of the three-dimensional intraoral object based on the registration of the first intraoral image and the second intraoral image to the pre-existing model.3. The method of claim 1 , wherein:registering the first intraoral image of the first plurality of intraoral images using the first portion of the pre-existing model comprises determining first rotations and translations based on the registering the first intraoral image to the first portion of the pre-existing model; andregistering the second intraoral image of the first plurality of intraoral images using the second portion of the pre-existing model comprises determining second rotations and translations based on the registering the second intraoral image to the second portion of the pre-existing model.4. 
The method of claim 3 , further comprising:applying the first rotations and translations associated with the first intraoral image to a third intraoral image of a second plurality of ...
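Claims 3-4 reuse the rotations and translations found during one registration on another image. A 2D sketch of that reuse, with illustrative angles and offsets (the real method works on 3D intraoral data):

```python
import math

# 2D sketch of claims 3-4: a registration yields a rotation and a
# translation; the same transform can then be applied to the points of
# another image taken from the same pose (all values illustrative).

def apply_registration(points, angle_deg, tx, ty):
    """Rotate then translate a list of (x, y) points."""
    c = math.cos(math.radians(angle_deg))
    s = math.sin(math.radians(angle_deg))
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]
```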

03-01-2019 publication date

METHOD AND APPARATUS FOR CALCULATING A 3D DENSITY MAP ASSOCIATED WITH A 3D SCENE

Number: US20190005736A1
Assignee:

The present disclosure relates to methods, apparatus or systems for calculating a 3D density map for a 3D scene in which significant objects have been annotated and associated with a significance weight. The 3D density map is computed as a function of the location of the significant objects and the location of at least one virtual camera in the 3D scene. The space of the 3D scene is split into regions and a density is computed for each region according to the significance weights. The 3D density map is transmitted to an external module configured to reorganize the scene according to the 3D density map. 1. A method of reorganizing a second object of a 3D scene comprising a first object , said second object being an animated object or an animated volume , the method comprising:determining a first region of a 3D space of the scene that is situated between a virtual camera of the 3D scene and the first object,determining a second region of the 3D space that is the complementary of the first region,associating a first density value with the first region and a second density value with the second region, the first density value being smaller than or equal to the second density value, andreorganizing said second object by minimizing an occupation density of said second object in regions with a low density.2. 
The method according to claim 1 , further comprising determining a third region within said first region, the third region being the part of the first region in a field of view of said virtual camera, and determining a fourth region that is the complementary of the third region within said first region, a third density value being associated with the third region and a fourth density value being associated with the fourth region, the third density value being smaller than or equal to the first density value and the fourth density value being greater than or equal to the first density value and smaller than or equal to the second density value ...
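A toy 1D version of the density assignment: grid cells lying between the virtual camera and the first object get the low "keep clear" density, everything else the higher one. The 1D cell layout and the two density values are assumptions for illustration.

```python
# Toy version of the density map: cells of a grid lying between the
# virtual camera and a significant object get a low density (keep clear),
# all other cells a higher one. Values and the 1D layout are illustrative.

LOW, HIGH = 0.0, 1.0

def density_line(n_cells, camera_cell, object_cell):
    """Density per cell on a 1D ray through the camera and the object."""
    lo, hi = sorted((camera_cell, object_cell))
    return [LOW if lo < i < hi else HIGH for i in range(n_cells)]
```

An animated object would then be steered toward cells whose density is HIGH, keeping the camera-to-object corridor clear.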

20-01-2022 publication date

SYSTEM AND METHOD FOR GENERATING HIERARCHICAL LEVEL-OF-DETAIL MEASUREMENTS FOR RUNTIME CALCULATION AND VISUALIZATION

Number: US20220020225A1
Assignee:

Systems, methods, devices, and non-transitory media of the various embodiments enable generating at least one hierarchical-level-of-detail (LOD) data structure in order to visualize and traverse measurement data associated with a three-dimensional (3D) model. In various embodiments, generating at least one hierarchical LOD data structure may include establishing a background grid comprising a mathematical grid structure defined in a common coordinate system, building a layout comprising an intermediary data structure, computing measurement data for each tile based at least in part on the height data samples, and storing at least a portion of the computed measurement data for each tile in a metadata file. 1. A method for generating at least one hierarchical-level-of-detail (LOD) data structure for measurement data associated with a three-dimensional (3D) model , the method comprising:establishing a background grid comprising a mathematical grid structure defined in a common coordinate system;building a layout comprising an intermediary data structure, wherein the layout provides a grid view and a hierarchical view of tiles, wherein each tile represents a rectangle of height data samples aligned with the background grid;computing measurement data for each tile based at least in part on the height data samples; andstoring at least a portion of the computed measurement data for each tile in a metadata file.2. The method of claim 1 , wherein the generated at least one hierarchical LOD data structure is configured to enable visualizing the computed measurements in a 3D rendering engine.3. The method of claim 1 , wherein the at least one hierarchical LOD data structure comprises a plurality of LOD data structures that are configured to be aligned, the method further comprising:comparing the plurality of LOD data structures; andcomputing derived measurements from the plurality of LOD data structures.4. 
The method of claim 1 , wherein computing the measurement data ...
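Computing measurement data per tile, as in the claim, can be sketched as summary statistics over rectangles of height samples aligned with the background grid; the min/max/mean choice and the dict-based "metadata file" below are assumptions.

```python
# Sketch of computing per-tile measurement data over a background grid:
# each tile is a rectangle of height samples, and we store min/max/mean
# in a metadata dict. Tile size and the statistics chosen are assumptions.

def tile_measurements(heights, tile_size):
    """heights: 2D list of samples; returns {(tile_row, tile_col): stats}."""
    meta = {}
    rows, cols = len(heights), len(heights[0])
    for r0 in range(0, rows, tile_size):
        for c0 in range(0, cols, tile_size):
            samples = [heights[r][c]
                       for r in range(r0, min(r0 + tile_size, rows))
                       for c in range(c0, min(c0 + tile_size, cols))]
            meta[(r0 // tile_size, c0 // tile_size)] = {
                "min": min(samples),
                "max": max(samples),
                "mean": sum(samples) / len(samples),
            }
    return meta
```

Precomputing such statistics per tile is what lets a runtime answer measurement queries at any level of detail without touching the raw samples.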

08-01-2015 publication date

TERMINAL DEVICE, IMAGE SHOOTING SYSTEM AND IMAGE SHOOTING METHOD

Number: US20150009297A1
Author: Mikuni Shin
Assignee:

A terminal device used for stereo imaging includes: an image shooting unit; a communication unit that receives a first image of a first angular field from an external terminal device; and a determination unit that determines the image shooting range relationship between the first image received by the communication unit and a second image of a second angular field shot by the image shooting unit, the second angular field being wider than the first angular field. 1. A terminal device comprising:an image shooting unit;a communication unit that receives a first image of a first angular field from an external terminal device; anda determination unit that determines an image shooting range relationship between the first image received by the communication unit and a second image of a second angular field shot by the image shooting unit, wherein the second angular field is wider than the first angular field.2. The terminal device according to claim 1 , wherein the determination unit detects an area corresponding to the first image from the second image, and based on the position of the detected area on the second image, determines the image shooting range relationship between the first and second images.3. The terminal device according to claim 2 , further comprising a display unit, wherein the determination unit causes the display unit to superpose and display an image that indicates the range of the detected area and the second image.4. The terminal device according to claim 2 , wherein the determination unit causes the communication unit to transmit a notice indicating the determination result of the image shooting range relationship between the first and second images to the external terminal device.5. The terminal device according to claim 2 , further comprising a clipping unit that clips the detected area from the second image, wherein the determination unit causes the communication unit to transmit the image clipped by the ...

08-01-2015 publication date

METHOD AND APPARATUS FOR MEASURING THE THREE DIMENSIONAL STRUCTURE OF A SURFACE

Number: US20150009301A1
Assignee:

A method includes imaging a surface with at least one imaging sensor, wherein the surface and the imaging sensor are in relative translational motion. The imaging sensor includes a lens having a focal plane aligned at a non-zero angle with respect to an x-y plane of a surface coordinate system. A sequence of images of the surface is registered and stacked along a z direction of a camera coordinate system to form a volume. A sharpness of focus value is determined for each (x,y) location in the volume, wherein the (x,y) locations lie in a plane normal to the z direction of the camera coordinate system. Using the sharpness of focus values, a depth of maximum focus z_m along the z direction in the camera coordinate system is determined for each (x,y) location in the volume, and based on the depths of maximum focus z_m, a three dimensional location of each point on the surface may be determined. 1.-15. (canceled) 16. An apparatus , comprising:an imaging sensor comprising a telecentric lens, wherein the lens has a focal plane aligned at a non-zero viewing angle with respect to an x-y plane in a surface coordinate system, wherein the surface and the imaging sensor are in relative translational motion, and wherein the sensor images the surface to form a sequence of images thereof;a processor that:aligns in each image in the sequence a reference point on the surface to form a registered sequence of images;stacks the registered sequence of images along a z direction in a camera coordinate system to form a volume, wherein each image in the registered sequence of images comprises a layer in the volume;computes a sharpness of focus value for each pixel within the volume, wherein the pixels lie in a plane normal to the z direction in the camera coordinate system;computes, based on the sharpness of focus values, a depth of maximum focus value z_m for each pixel within the volume;determines, based on the depths of maximum focus z_m, a three dimensional location of ...
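The depth-of-maximum-focus step can be sketched per (x, y) location: score each layer of the registered stack with a sharpness measure and take the argmax as z_m. The local-contrast measure below is a stand-in for whatever focus metric the apparatus actually uses.

```python
# Sketch of the depth-from-focus step: for one (x, y) location, pick the
# layer of the registered image stack where a sharpness measure peaks.
# Here sharpness is simple local contrast; the real measure may differ.

def sharpness(window):
    """Local contrast (variance) of a flat list of neighbouring pixels."""
    mean = sum(window) / len(window)
    return sum((v - mean) ** 2 for v in window) / len(window)

def depth_of_max_focus(stack_windows):
    """stack_windows: per-layer pixel windows at one (x, y) location.
    Returns the layer index z_m where the sharpness measure peaks."""
    scores = [sharpness(w) for w in stack_windows]
    return scores.index(max(scores))
```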

14-01-2021 publication date

METHOD FOR EVALUATING A DENTAL SITUATION WITH THE AID OF A DEFORMED DENTAL ARCH MODEL

Number: US20210007834A1
Assignee:

Method for evaluating a dental situation of a patient. The method has the following successive steps: 1) generating an initial model of at least one dental arch of the patient, preferably by means of a scanner; 2) splitting the initial model in order to define a tooth model for at least some of the teeth represented on the initial model and thereby to obtain a split model; 3) determining an initial support curve of the tooth models in the split model; 4) fixing each tooth model virtually on the initial support curve, preferably by computer; 5) modifying the split model by deformation of the initial support curve according to a deformed support curve, so as to obtain a first deformed model, in which the tooth models are aligned according to the deformed support curve; 6) presenting the first deformed model. 1. Method for evaluating a dental situation of a patient , said method having the following successive steps:1) generating an initial model of at least one dental arch of the patient;2) splitting the initial model in order to define a tooth model for each of at least some of the teeth represented on the initial model and thereby to obtain a split model;3) defining an initial support curve in the split model;4) fixing each tooth model virtually to the initial support curve;5) modifying the split model by deformation of the initial support curve until a deformed support curve is obtained, so as to obtain a first deformed model.2. Method according to claim 1 , in which, in step 1), the initial model is generated by means of a scanner.3. Method according to claim 1 , in which, in step 3), the tooth models are aligned according to the initial support curve.4. Method according to claim 1 , having a step 6), subsequent to step 5), in which the first deformed model is presented to an operator, and/or one or more measurements of size or appearance, in particular of color, are taken on the first deformed model, and/ ...

10-01-2019 publication date

Registration of a surgical image acquisition device using contour signatures

Number: US20190008592A1
Assignee: Koninklijke Philips NV

Registration of a surgical image acquisition device (e.g. an endoscope) using preoperative and live contour signatures of an anatomical object is described. A control unit includes a processor configured to compare the real-time contour signature to the database of preoperative contour signatures of the anatomical object to generate a group of potential contour signature matches for selection of a final contour match. Registration of an image acquisition device to the surgical site is realized based upon an orientation corresponding to the selected final contour signature match.

27-01-2022 publication date

METHOD FOR FORMING WALLS TO ALIGN 3D OBJECTS IN 2D ENVIRONMENT

Number: US20220027524A1
Author: Jovanovic Milos
Assignee:

Example systems and methods for virtual visualization of a three-dimensional (3D) model of an object in a two-dimensional (2D) environment. The method may include capturing the 2D environment and adding scale and perspective to the 2D environment. Further, a user may select intersection points on a ground plane of the 2D environment to form walls, thereby converting the 2D environment into a 3D space. The user may further add 3D models of objects on the wall plane such that the objects may remain flush with the wall plane. 1. A method for visualizing a three-dimensional model of an object in a two-dimensional environment , the method comprising:receiving, with a processor via a user interface, from a user, a ground plane input comprising a plurality of ground plane points selected by the user to define a ground plane corresponding to a horizontal plane of the two-dimensional environment;automatically generating, with the processor, and displaying, via a display unit, a three-dimensional environment for the two-dimensional environment based on the ground plane input;automatically generating, with the processor, and displaying, via the display unit, a wall plane, representing a vertical plane of the two-dimensional environment orthogonal to the horizontal plane, in the three-dimensional environment positioned at at least two wall-floor intersection points selected by the user; andsuperimposing, with the processor, and displaying, via the display unit, the three-dimensional model of the object on the three-dimensional environment for the two-dimensional environment based on the ground plane input and the wall-floor intersection points.2. 
The method of claim 1 , further comprising:receiving, with the processor via the user interface, from the user, input comprising a selection of a wall-hidden surface intersection point on the two-dimensional environment, the wall-hidden surface intersection point indicating a second plane behind the wall plane;automatically generating, ...
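Raising a wall plane from two user-selected wall-floor intersection points can be sketched as building a vertical quad through those floor points; the y-up axis convention and the fixed wall height below are assumptions.

```python
# Sketch of raising a vertical wall plane from two user-selected
# wall-floor intersection points on the ground plane (y is up here;
# the axis convention and fixed height are assumptions).

def wall_quad(p1, p2, height):
    """Corners of a vertical wall through floor points p1, p2 = (x, z)."""
    (x1, z1), (x2, z2) = p1, p2
    return [(x1, 0.0, z1), (x2, 0.0, z2),
            (x2, height, z2), (x1, height, z1)]

def wall_normal(p1, p2):
    """Horizontal normal of the wall plane (unnormalised)."""
    dx, dz = p2[0] - p1[0], p2[1] - p1[1]
    return (-dz, 0.0, dx)  # perpendicular to the floor edge, in the floor
```

The normal is what lets an inserted 3D object be kept flush with the wall: its back face is aligned against the plane along this direction.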

27-01-2022 publication date

OVERLAYING 3D AUGMENTED REALITY CONTENT ON REAL-WORLD OBJECTS USING IMAGE SEGMENTATION

Number: US20220028178A1
Assignee: Capital One Services, LLC

Various embodiments are generally directed to techniques of overlaying a virtual object on a physical object in augmented reality (AR). A computing device may receive one or more images of the physical object, perform analysis on the images (such as image segmentation) to generate a digital outline, and determine a position and a scale of the physical object based at least in part on the digital outline. The computing device may configure (e.g., rotate, scale) a 3D model of the physical object to match the determined position and scale of the physical object. The computing device may place or overlay a 3D virtual object on the physical object in AR based on a predefined location relation between the 3D virtual object and the 3D model of the physical object, and further, generate a composite view of the placement or overlay. 1. A computer-implemented method , comprising:determining, by a processor, a position and a scale of a physical object depicted in an image from a viewpoint;identifying, by the processor, a symmetry-based mismatch between a three-dimensional (3D) model of the physical object and a digital outline of the physical object at the viewpoint, the symmetry-based mismatch based at least in part on symmetrical characteristics of the physical object;causing, by the processor, the 3D model of the physical object to match the position and the scale of the physical object based on the symmetry-based mismatch;overlaying, by the processor, a 3D virtual object at a first location on an exterior portion of the physical object in augmented reality based on a predetermined locational relation between the 3D virtual object and the 3D model of the physical object; andgenerating, by the processor, a composite view of the 3D virtual object overlaid on the exterior portion of the physical object, wherein the 3D virtual object is partially obstructed from view by the physical object at the viewpoint, wherein the 3D virtual object is an exterior object and a portion of 
...

27-01-2022 publication date

METHOD FOR MANIPULATING A DENTAL VIRTUAL MODEL, METHOD FOR CREATING PHYSICAL ENTITIES BASED ON A DENTAL VIRTUAL MODEL THUS MANIPULATED, AND DENTAL MODELS THUS CREATED

Number: US20220028532A1
Author: Kopelman Avi, TAUB Eldad
Assignee: Align Technology, Inc.

A system for generating missing data for 3D models of intraoral structures of a patient. The system may include a hand-held intraoral scanner and a computer having instructions that cause the system to scan the intraoral structure of a patient to generate first 3D data of the surface of a prepared tooth, generate a 3D virtual model of the prepared tooth based on the first 3D data, determine that the 3D virtual model is missing a surface associated with the prepared tooth, generate second 3D data approximating the surface that is determined to be missing from the 3D virtual model, and combine the second 3D data with the 3D virtual model such that the 3D virtual model includes the approximated surface, wherein the approximated surface approximates the portion of the prepared tooth associated with the surface determined to be missing from the 3D virtual model. 1. A system for generating missing data for 3D models of intraoral structures of a patient , the system comprising:a hand-held intraoral scanner;a computer having instructions that, when executed, cause the system to:generate a 3D virtual model of a prepared tooth of a patient based on first 3D data generated from an intraoral scan of focused light on the prepared tooth;determine that the 3D virtual model does not include a portion of a surface of the prepared tooth;generate, in an automated manner, second 3D data based on the 3D virtual model, the 3D virtual model including an approximated surface that approximates the portion of the surface of the prepared tooth; andcombine the second 3D data with the 3D virtual model such that the 3D virtual model includes the approximated surface.2. The system of claim 1 , wherein the 3D virtual model that does not include a portion of a surface of the prepared tooth forms an incomplete closed geometrical form.3. The system of claim 1 , wherein the second 3D data is generated based on a cross-sectional profile of the 3D virtual model.4. 
The system of claim 1 , wherein the ...

12-01-2017 publication date

Methods and Apparatus for Sending or Receiving an Image

Number: US20170011064A1
Assignee:

Methods of sending an image include receiving, from a requesting device, a request for an image associated with a geographic area, generating the image by determining a plurality of photograph thumbnails, each photograph thumbnail being associated with a respective location or sub-region within the geographic area, and forming the image from the photograph thumbnails, and sending the image to the requesting device. A method of receiving an image includes sending, to a server, a request for an image associated with a geographic area, and receiving the image from the server, wherein the image comprises an image formed from a plurality of photograph thumbnails, each photograph thumbnail being associated with a respective location or sub-region within the geographic area. 1. A method of sending an image , the method comprising:receiving, from a requesting device, a request for an image associated with a geographic area;generating the image by, for each of a plurality of portions of the image, generating image data for the portion from data representing a geographic location or sub-region that is within the geographic area and associated with the portion, and forming the image from the image data generated for the portions; andsending the generated image to the requesting device.2. The method of claim 1 , further comprising sending at least one map tile to the requesting device, each map tile comprising an image of a portion of a world map wherein at least one of the at least one map tile comprises an image of a portion of the world map corresponding to the geographic area.3. The method of claim 1 , wherein the request includes a zoom level of a user interface on which the image is to be displayed, and generating the image further comprises basing a size and/or resolution of the image at least in part on the zoom level.4. 
The method of claim 1 , wherein the data representing the geographic location or sub-region associated with a portion of the image ...
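The mosaic idea in the abstract above (one photograph thumbnail per sub-region of the geographic area, tiled into a single response image) can be sketched in a few lines of Python. Images are modelled as 2D lists of pixel values; `build_mosaic` and the grid parameters are illustrative names, not from the patent:

```python
def build_mosaic(thumbs, rows, cols, thumb_h, thumb_w):
    """Tile a rows x cols grid of equally sized thumbnails into one image."""
    assert len(thumbs) == rows * cols
    mosaic = [[0] * (cols * thumb_w) for _ in range(rows * thumb_h)]
    for idx, thumb in enumerate(thumbs):
        # Top-left corner of this thumbnail's cell in the mosaic.
        r0, c0 = (idx // cols) * thumb_h, (idx % cols) * thumb_w
        for r in range(thumb_h):
            for c in range(thumb_w):
                mosaic[r0 + r][c0 + c] = thumb[r][c]
    return mosaic

# Four 2x2 thumbnails, one per sub-region, tiled into a 4x4 image.
thumbs = [[[i] * 2] * 2 for i in range(4)]
image = build_mosaic(thumbs, rows=2, cols=2, thumb_h=2, thumb_w=2)
```

A real server would also honour the claimed zoom-level parameter by choosing `rows`, `cols` and the thumbnail resolution accordingly.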

More
08-01-2015 publication date

Discrete objects for building virtual environments

Number: US20150012890A1
Assignee: Microsoft Corp

Described is a virtual environment built by drawing stacks of three-dimensional objects (e.g., discrete blocks) as manipulated by a user. A user manipulates one or more objects, resulting in stack heights being changed, e.g., by adding, removing or moving objects to/from stacks. The stack heights are maintained as sample points, e.g., each point indexed by its associated horizontal location. A graphics processor expands height-related information into visible objects or stacks of objects by computing the vertices for each stack to draw that stack's top surface, front surface and/or side surface based upon the height-related information for that stack. Height information for neighboring stacks may be associated with the sample point, whereby a stack is only drawn to where it is occluded by a neighboring stack, that is, by computing the lower vertices for a surface according to the height of a neighboring stack where appropriate.
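The occlusion rule described above — a stack's front face is only drawn from the neighboring stack's height up to its own top — can be sketched as follows. The function name and the quad representation are illustrative, not from the patent:

```python
def front_quad(x, z, height, neighbor_height):
    """Vertices for a unit-wide stack's front face at grid cell (x, z),
    clipped where the stack in front occludes it: the face spans from the
    neighbor's height up to this stack's top. Returns None when the
    neighbor fully occludes the face."""
    bottom = min(neighbor_height, height)
    if bottom >= height:
        return None
    # Quad as four (x, y, z) corners, counter-clockwise.
    return [(x, bottom, z), (x + 1, bottom, z),
            (x + 1, height, z), (x, height, z)]

quad = front_quad(x=0, z=0, height=5, neighbor_height=2)
hidden = front_quad(x=1, z=0, height=3, neighbor_height=3)
```

Only the visible strip (heights 2 through 5 in the first call) generates geometry, which is the saving the abstract attributes to storing neighbor heights with each sample point.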

More
10-01-2019 publication date

THREE-DIMENSIONAL PRINTING APPARATUS AND THREE-DIMENSIONAL PRINTING METHOD

Number: US20190011902A1
Authors: LU Ting-Yu, SU Ching-Hua
Assignee:

A 3D printing method adapted to a 3D printing apparatus is provided. The 3D printing apparatus is configured to edit a plurality of sliced images, and execute a 3D printing operation according to the edited sliced images. The 3D printing method includes: analyzing a plurality of sliced objects of the sliced images, so as to draw a plurality of sliced object casings according to individual contours of the sliced objects, where the sliced object casings respectively include a part of the sliced objects; and respectively deleting the other parts of the sliced objects outside the sliced object casings, and integrating the sliced object casings of the sliced images to obtain a 3D model casing. Moreover, the 3D printing apparatus applying the 3D printing method is also provided. 1. A three-dimensional printing method , adapted to a three-dimensional printing apparatus , wherein the three-dimensional printing apparatus is configured to horizontally slice a three-dimensional model to obtain a plurality of sliced images , and edit the sliced images to execute a three-dimensional printing operation according to the edited sliced images , the three-dimensional printing method comprising:analyzing a plurality of sliced objects of the sliced images, so as to draw a plurality of sliced object casings according to individual contours of the sliced objects, wherein the sliced object casings respectively comprise a part of the sliced objects; andrespectively deleting the other parts of the sliced objects outside the sliced object casings, and integrating the sliced object casings of the sliced images to obtain a three-dimensional model casing.2. The three-dimensional printing method as claimed in claim 1 , wherein the sliced object casings respectively have a same predetermined thickness.3. The three-dimensional printing method as claimed in claim 1 , further comprising:analyzing an outer three-dimensional contour of the three-dimensional model casing to obtain a plurality of ...
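One way to realize the per-slice casing described above — keep only the part of each sliced object within a fixed thickness of its contour and delete the interior — is a multi-source BFS from the background. This is a toy sketch on binary 2D lists; the patent does not prescribe this particular algorithm:

```python
from collections import deque

def casing(slice_img, thickness):
    """Keep only object cells within `thickness` cells of the background
    (4-connected BFS from all background cells); interior cells are
    deleted, leaving the sliced object casing."""
    h, w = len(slice_img), len(slice_img[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for r in range(h):
        for c in range(w):
            if slice_img[r][c] == 0:
                dist[r][c] = 0
                q.append((r, c))
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return [[1 if slice_img[r][c] and dist[r][c] <= thickness else 0
             for c in range(w)] for r in range(h)]

# A filled 5x5 sliced object inside a 7x7 slice image.
img = [[0] * 7] + [[0] + [1] * 5 + [0] for _ in range(5)] + [[0] * 7]
shell = casing(img, thickness=1)
```

Stacking such casings over all sliced images yields the hollow 3D model casing the claims integrate.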

More
14-01-2016 publication date

High-Quality Stereo Reconstruction Featuring Depth Map Alignment and Outlier Identification

Number: US20160012633A1
Assignee:

A novel stereo reconstruction pipeline that features depth map alignment and outlier identification is provided. One example method includes obtaining a plurality of images depicting a scene. The method includes determining a pose for each of the plurality of images. The method includes determining a depth map for each of the plurality of images such that a plurality of depth maps are determined. Each of the plurality of depth maps describes a plurality of points in three-dimensional space that correspond to objects in the scene. The method includes aligning the plurality of depth maps by transforming one or more of the plurality of depth maps so as to improve an alignment between the plurality of depth maps. The method includes identifying one or more outlying points. The method includes generating a three-dimensional model of the scene based at least in part on the plurality of depth maps. 1. A computer-implemented method for generating three-dimensional models , the method comprising:obtaining, by one or more computing devices, a plurality of images depicting a scene;determining, by the one or more computing devices, a pose for each of the plurality of images;determining, by the one or more computing devices, a depth map for each of the plurality of images such that a plurality of depth maps are determined, wherein each of the plurality of depth maps describes a plurality of points in three-dimensional space that correspond to objects in the scene;aligning, by the one or more computing devices, the plurality of depth maps by transforming one or more of the plurality of depth maps so as to improve an alignment between the plurality of depth maps;after aligning the plurality of depth maps, identifying, by the one or more computing devices, one or more of the plurality of points described by one or more of the plurality of depth maps as one or more outlying points; andgenerating, by the one or more computing devices, a three-dimensional model of the scene based at 
...
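The align-then-flag pipeline above can be illustrated with the simplest possible transform: a least-squares translation between corresponding points, followed by residual thresholding. The patent covers richer transforms; the names and threshold here are illustrative:

```python
def align_and_flag(ref, other, outlier_thresh):
    """Align `other` to `ref` with the least-squares translation (the mean
    per-point difference), then flag indices whose residual exceeds
    `outlier_thresh` as outlying points."""
    n = len(ref)
    t = tuple(sum(r[k] - o[k] for r, o in zip(ref, other)) / n
              for k in range(3))
    moved = [tuple(p[k] + t[k] for k in range(3)) for p in other]
    residual = [sum((m[k] - r[k]) ** 2 for k in range(3)) ** 0.5
                for m, r in zip(moved, ref)]
    outliers = [i for i, d in enumerate(residual) if d > outlier_thresh]
    return t, moved, outliers

ref = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 9)]
# `ref` shifted by (-1, -2, 0), with the last correspondence corrupted.
other = [(1, 2, 0), (2, 2, 0), (1, 3, 0), (1, 2, 0)]
t, moved, outliers = align_and_flag(ref, other, outlier_thresh=3.0)
```

Dropping the flagged points and re-aligning would further improve the fit, matching the "identify outliers after alignment" ordering in the claims.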

More
11-01-2018 publication date

METHODS AND SYSTEMS OF PERFORMING EYE RECONSTRUCTION USING A PARAMETRIC MODEL

Number: US20180012418A1
Assignee:

Systems and techniques for reconstructing one or more eyes using a parametric eye model are provided. The systems and techniques may include obtaining one or more input images that include at least one eye. The systems and techniques may further include obtaining a parametric eye model including an eyeball model and an iris model. The systems and techniques may further include determining parameters of the parametric eye model from the one or more input images. The parameters can be determined to fit the parametric eye model to the at least one eye in the one or more input images. The parameters include a control map used by the iris model to synthesize an iris of the at least one eye. The systems and techniques may further include reconstructing the at least one eye using the parametric eye model with the determined parameters. 1. A computer-implemented method of reconstructing one or more eyes , comprising:obtaining one or more input images, the one or more input images including at least one eye;obtaining a parametric eye model, the parametric eye model including an eyeball model and an iris model;determining parameters of the parametric eye model from the one or more input images, the parameters being determined to fit the parametric eye model to the at least one eye in the one or more input images, wherein the parameters include a control map used by the iris model to synthesize an iris of the at least one eye; andreconstructing the at least one eye using the parametric eye model with the determined parameters.2. The method of claim 1 , wherein the one or more input images include a three-dimensional face scan of at least a portion of a face including the at least one eye, the three-dimensional face scan being from a multi-view scanner.3. The method of claim 2 , wherein the parameters include a shape parameter corresponding to a shape of an eyeball of the at least one eye, and wherein determining the shape parameter includes fitting the ...
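Fitting a shape parameter of the eyeball model to scan points can be illustrated with the crudest possible fit: center as the centroid of surface samples and radius as the mean center-to-point distance. The actual patent fits a full parametric model; this sketch and its sample points are illustrative only:

```python
def fit_eyeball(points):
    """Fit minimal eyeball-model parameters to scanned surface points:
    center = centroid, radius = mean center-to-point distance."""
    n = len(points)
    center = tuple(sum(p[k] for p in points) / n for k in range(3))
    radius = sum(sum((p[k] - center[k]) ** 2 for k in range(3)) ** 0.5
                 for p in points) / n
    return center, radius

# Points sampled on a sphere of radius 2 centred at (1, 0, 0),
# standing in for a multi-view face scan of the eye region.
pts = [(3, 0, 0), (-1, 0, 0), (1, 2, 0), (1, -2, 0), (1, 0, 2), (1, 0, -2)]
center, radius = fit_eyeball(pts)
```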

More
11-01-2018 publication date

Computer-Implemented Method For Positioning Patterns Around An Avatar

Number: US20180012420A1
Assignee:

A computer-implemented method for designing a virtual garment or upholstery (G) in a three-dimensional scene comprising the steps of: a) providing a three-dimensional avatar (AV) in the three-dimensional scene; b) providing at least one pattern (P) of said virtual garment or upholstery in the three-dimensional scene; c) determining a distance field from a surface of the avatar; d) positioning the pattern relative to the avatar by keeping a fixed orientation with respect to said distance field; and e) assembling the positioned pattern or patterns around the avatar to form said virtual garment or upholstery, and draping it onto the avatar. A computer program product, non-volatile computer-readable data-storage medium and Computer Aided Design system for carrying out such a method. Application of the method to the manufacturing of a garment or upholstery. 1. A computer-implemented method for designing a virtual garment or upholstery (G) in a three-dimensional scene comprising the steps of:a) providing at least one pattern (P) of said virtual garment or upholstery in the three-dimensional scene;b) providing a three-dimensional avatar (AV) in the three-dimensional scene;c) computing a distance field from a surface of the avatar, wherein each point of the 3D virtual space containing the avatar is attributed a numerical value expressing its distance from the nearest point of the surface of the avatar;d) positioning the pattern relative to the avatar by keeping a fixed orientation with respect to said distance field; ande) assembling the positioned pattern or patterns around the avatar to form said virtual garment or upholstery, and draping it onto the avatar.2. The computer-implemented method of claim 1, wherein said step d) comprises the sub-steps of:d1) pre-positioning the pattern relative to the avatar using a positioning device (KB, PD); and d3) automatically rotating the pattern so that a normal direction (NP), or average normal direction, to said ...
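Keeping a fixed orientation with respect to the distance field means aligning the pattern's normal with the field's gradient. A toy sketch using a spherical stand-in for the avatar surface and central finite differences (names and the sphere are illustrative, not from the patent):

```python
def sphere_distance(p, center=(0.0, 0.0, 0.0), r=1.0):
    """Signed distance field of a spherical stand-in for the avatar:
    negative inside, zero on the surface, positive outside."""
    d = sum((p[k] - center[k]) ** 2 for k in range(3)) ** 0.5
    return d - r

def field_gradient(f, p, h=1e-5):
    """Central-difference gradient of the distance field at p, normalized;
    a pattern keeps a fixed orientation by pointing its normal along it."""
    g = []
    for k in range(3):
        hi = list(p); hi[k] += h
        lo = list(p); lo[k] -= h
        g.append((f(hi) - f(lo)) / (2 * h))
    n = sum(x * x for x in g) ** 0.5
    return tuple(x / n for x in g)

# A pattern pre-positioned at (2, 0, 0) faces straight away from the avatar.
normal = field_gradient(sphere_distance, (2.0, 0.0, 0.0))
```

On a real avatar mesh the distance field would be precomputed on a grid, but the orientation rule is the same.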

More
10-01-2019 publication date

CLOUD ENABLED AUGMENTED REALITY

Number: US20190012840A1
Assignee:

An augmented reality system generates computer-mediated reality on a client device. The client device has sensors including a camera configured to capture image data of an environment and a location sensor to capture location data describing a geolocation of the client device. The client device creates a three-dimensional (3-D) map with the image data and the location data for use in generating virtual objects to augment reality. The client device transmits the created 3-D map to an external server that may utilize the 3-D map to update a world map stored on the external server. The external server sends a local portion of the world map to the client device. The client device determines a distance between the client device and a mapping point to generate a computer-mediated reality image at the mapping point to be displayed on the client device. 1. A method of generating computer mediated reality data on a client device , the method comprising:capturing image data with a camera integrated in the client device, the image data representing a near real-time view of an environment around the client device;capturing location data with a location sensor integrated in the client device, the location data describing a spatial position of the client device in the environment;generating three-dimensional (3-D) map data based on the image data and the location data, the 3-D map data spatially describing the environment around the client device;transmitting the 3-D map data to an external server;receiving a local portion of world map data at the client device from the external server, wherein the local portion is selected based on the 3-D map data;determining a distance between a mapping point in the local portion of world map data and the spatial position of the client device based on location data and the local portion of world map data;generating a computer mediated reality image at the mapping point in the local portion of world map data based on the image data and the ...
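The client/server exchange above — the server selects the local portion of the world map around the client, and the client measures its distance to a mapping point before rendering there — reduces to two small geometric operations. A minimal sketch with points as 3-D tuples (names and the radius-based selection are illustrative assumptions):

```python
def distance(a, b):
    """Euclidean distance between two 3-D points."""
    return sum((a[k] - b[k]) ** 2 for k in range(3)) ** 0.5

def local_portion(world_points, client_pos, radius):
    """Server side: the portion of the world map within `radius`
    of the client's reported spatial position."""
    return [p for p in world_points if distance(p, client_pos) <= radius]

world = [(0, 0, 0), (3, 0, 0), (50, 0, 0)]   # stored world-map points
client = (1.0, 0.0, 0.0)
local = local_portion(world, client, radius=10.0)
d = distance(client, local[1])               # client-to-mapping-point distance
```

The client would then scale or cull the virtual object at `local[1]` according to `d`.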

More
10-01-2019 publication date

WEARABLE ITEM VISUALIZER

Number: US20190012842A1
Assignee:

Visualizing a wearable item can include: generating a user interface that enables a user to choose a target body for visualizing a wearable item from among a set of available target bodies; and generating a visualization of the wearable item on the target body chosen by the user by deforming the wearable item to fit the target body chosen by the user. 1. A wearable item visualizer , comprising:a user interface mechanism that generates a user interface that enables a user to browse and select among a variety of available target bodies for use in visualizing a wearable item that is adapted to fit a reference body; anda computing mechanism that generates a visualization depicting the wearable item on the available target body selected by the user by deforming the wearable item from fitting the reference body into fitting the available target body selected by the user such that the visualization is presented back to the user via the user interface.2. The wearable item visualizer of claim 1 , wherein the wearable item is deformed by generating a triangle mesh for the wearable item and deforming the triangle mesh to fit the available target body chosen by the user.3. The wearable item visualizer of claim 2 , wherein an internal geometry of the triangle mesh is restored after deformation to an internal geometry of the triangle mesh before deformation.4. The wearable item visualizer of claim 3 , wherein a set of image data for the wearable item is transferred to the triangle mesh after deformation and after restoring the internal geometry.5. The wearable item visualizer of claim 1 , wherein the wearable item is deformed in response to a set of landmarks for the wearable item and a set of landmarks for the available target body chosen by the user.6. The wearable item visualizer of claim 5 , wherein the landmarks for the wearable item are derived from a set of landmarks of the reference body.7. 
The wearable item visualizer of claim 5 , wherein the landmarks for the available ...
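The landmark-driven deformation in claims 5-7 can be approximated by inverse-distance-weighted blending of landmark displacements: each garment vertex moves by a weighted average of how the corresponding landmarks move from the reference body to the target body. This is one common warping scheme, not necessarily the patent's; all names are illustrative:

```python
def deform(vertices, src_landmarks, dst_landmarks, eps=1e-9):
    """Warp mesh vertices from the reference body to a target body by
    inverse-distance-weighted blending of landmark displacements."""
    out = []
    for v in vertices:
        wsum = [0.0, 0.0, 0.0]
        wtot = 0.0
        for s, d in zip(src_landmarks, dst_landmarks):
            dist = sum((v[k] - s[k]) ** 2 for k in range(3)) ** 0.5
            w = 1.0 / (dist + eps)          # nearer landmarks dominate
            wtot += w
            for k in range(3):
                wsum[k] += w * (d[k] - s[k])
        out.append(tuple(v[k] + wsum[k] / wtot for k in range(3)))
    return out

src = [(0, 0, 0), (1, 0, 0)]
dst = [(0, 0, 0), (2, 0, 0)]   # target body is wider at the second landmark
warped = deform([(0, 0, 0), (1, 0, 0), (0.5, 0, 0)], src, dst)
```

Restoring the triangle mesh's internal geometry afterwards (claim 3) would be a separate smoothing pass.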

More
14-01-2021 publication date

Dilated Fully Convolutional Network for 2D/3D Medical Image Registration

Number: US20210012514A1
Assignee:

A method and system for 2D/3D medical image registration. A digitally reconstructed radiograph (DRR) is rendered from a 3D medical volume based on current transformation parameters. A trained multi-agent deep neural network (DNN) is applied to a plurality of regions of interest (ROIs) in the DRR and a 2D medical image. The trained multi-agent DNN applies a respective agent to each ROI to calculate a respective set of action-values from each ROI. A maximum action-value and a proposed action associated with the maximum action value are determined for each agent. A subset of agents is selected based on the maximum action-values determined for the agents. The proposed actions determined for the selected subset of agents are aggregated to determine an optimal adjustment to the transformation parameters and the transformation parameters are adjusted by the determined optimal adjustment. The 3D medical volume is registered to the 2D medical image using final transformation parameters resulting from a plurality of iterations. 1. 
A method for automated computer-based registration of a 3D medical volume to a 2D medical image , comprising:rendering a 2D digitally reconstructed radiograph (DRR) from the 3D medical volume based on current transformation parameters;determining, by an intelligent artificial agent, an action-value for each of a plurality of possible actions based on a region of interest (ROI) in the DRR and a ROI in the 2D medical image, the plurality of possible actions corresponding to predetermined adjustments of the transformation parameters;selecting an action from the plurality of possible actions based on the action-values;adjusting the current transformation parameters by applying the selected action to provide adjusted transformation parameters;repeating the rendering, the determining, the selecting, and the adjusting steps for a plurality of iterations using the adjusted transformation parameters as the current transformation parameters; andregistering ...
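The aggregation step above — each agent proposes its argmax action, the k most confident agents are kept, and their proposals are averaged into one parameter adjustment — is easy to sketch. Action-values here are toy numbers; the function and parameter names are illustrative:

```python
def aggregate(agent_action_values, actions, k):
    """Each agent proposes the action with its maximum action-value; the k
    agents with the highest maxima are kept and their proposed actions
    averaged into one adjustment of the transformation parameters."""
    proposals = []
    for values in agent_action_values:
        best = max(range(len(values)), key=values.__getitem__)
        proposals.append((values[best], actions[best]))
    chosen = sorted(proposals, key=lambda p: p[0], reverse=True)[:k]
    dim = len(actions[0])
    return tuple(sum(a[d] for _, a in chosen) / k for d in range(dim))

# Two possible actions: +1 or -1 on one translation parameter.
actions = [(1.0, 0.0), (-1.0, 0.0)]
values = [[0.9, 0.1],   # confident agent, proposes +x
          [0.8, 0.2],   # confident agent, proposes +x
          [0.3, 0.4]]   # low-confidence agent, proposes -x (dropped)
adjust = aggregate(values, actions, k=2)
```

Iterating render-DRR / evaluate-agents / apply-adjustment until convergence yields the final transformation parameters.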

More
14-01-2021 publication date

Adjusting a digital representation of a head region

Number: US20210012575A1
Assignee:

Methods and devices for generating reference data for adjusting a digital representation of a head region, and methods and devices for adjusting the digital representation of a head region are disclosed. In some arrangements, training data are received. A first machine learning algorithm generates first reference data using the training data. A second machine learning algorithm generates second reference data using the same training data and the first reference data generated by the first machine learning algorithm. 1. A method of generating reference data for adjusting a digital representation of a head region , the method comprising:receiving training data comprising:a set of input patches, each input patch comprising a target feature of a digital representation of a head region prior to adjustment of the digital representation of the head region, wherein the target feature is the same for each input patch; and a set of output patches in one-to-one correspondence with the input patches, each output patch comprising the target feature of the digital representation of the head region after adjustment of the digital representation of the head region; using a first machine learning algorithm to generate first reference data using the training data, the first reference data comprising editing instructions for adjusting the digital representation of the head region for a range of possible digital representations of the head region; andusing a second machine learning algorithm to generate second reference data using the same training data as the first machine learning algorithm and the first reference data generated by the first machine learning algorithm, the second reference data comprising editing instructions for adjusting the digital representation of the head region for a range of possible digital representations of the head region.2. 
The method of claim 1 , wherein:the first reference data comprise first editing instructions for a range of possible configurations ...
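The two-stage training above — a second learner trained on the same patches plus the first learner's output — is a stacking pattern. A scalar toy version, with both "learners" reduced to one-parameter fits purely for illustration (none of these function names or models come from the patent):

```python
def fit_stage1(inputs, outputs):
    """First learner: a single global editing instruction, the mean
    input-to-output shift over the training patches."""
    n = len(inputs)
    shift = sum(o - i for i, o in zip(inputs, outputs)) / n
    return lambda x: x + shift

def fit_stage2(inputs, outputs, stage1):
    """Second learner: trained on the same patches *and* the first
    learner's predictions, here a least-squares scale on those predictions."""
    preds = [stage1(i) for i in inputs]
    scale = sum(p * o for p, o in zip(preds, outputs)) / sum(p * p for p in preds)
    return lambda x: scale * stage1(x)

# Patches reduced to scalars; target adjustment: output = 2*input + 1.
ins, outs = [1.0, 2.0, 3.0], [3.0, 5.0, 7.0]
s1 = fit_stage1(ins, outs)
s2 = fit_stage2(ins, outs, s1)
```

The second stage can only improve on the first because it sees the first stage's residual structure, which mirrors the claimed use of the first reference data when generating the second.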

More
09-01-2020 publication date

AUGMENTING REAL-TIME VIEWS OF A PATIENT WITH THREE-DIMENSIONAL DATA

Number: US20200013224A1
Assignee:

Augmenting real-time views of a patient with three-dimensional (3D) data. In one embodiment, a method may include identifying 3D data for a patient with the 3D data including an outer layer and multiple inner layers, determining virtual morphometric measurements of the outer layer from the 3D data, registering a real-time position of the outer layer of the patient in a 3D space, determining real-time morphometric measurements of the outer layer of the patient, automatically registering the position of the outer layer from the 3D data to align with the registered real-time position of the outer layer of the patient in the 3D space using the virtual morphometric measurements and using the real-time morphometric measurements, and displaying, in an augmented reality (AR) headset, one of the inner layers from the 3D data projected onto real-time views of the outer layer of the patient. 1. A method for augmenting real-time , non-image actual views of a patient with three-dimensional (3D) data , the method comprising:identifying 3D data for the patient, the 3D data including an outer layer of the patient and multiple inner layers of the patient; anddisplaying, in an augmented reality (AR) headset, one of the inner layers of the patient from the 3D data projected onto real-time, non-image actual views of the outer layer of the patient, the projected inner layer of the patient from the 3D data being confined within a volume of a virtual 3D shape.2. The method as recited in claim 1 , wherein:the virtual 3D shape is a virtual box; andthe virtual box includes a top side, a bottom side, a left side, a right side, a front side, and a back side.3. The method of claim 1 , wherein:the virtual 3D shape is configured to be controlled to toggle between displaying and hiding lines of the virtual 3D shape; andthe virtual 3D shape is configured to be controlled to reposition two-dimensional (2D) slices and/or 3D slices of the projected inner layer of the patient from the 3D data.4. 
The ...
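Registering the outer layer from the 3D data to the patient's real-time outer layer using morphometric measurements can be reduced, in the simplest rigid-plus-scale case, to scaling by the ratio of the two measurements and translating the centroid onto the measured real-time centroid. A toy sketch (the measurement model and names are assumptions, not the patent's method):

```python
def register(virtual_points, virtual_size, real_size, real_centroid):
    """Align the outer layer from the 3D data with the real-time outer
    layer: scale by the ratio of morphometric measurements, then move the
    centroid onto the measured real-time centroid."""
    s = real_size / virtual_size
    n = len(virtual_points)
    c = tuple(sum(p[k] for p in virtual_points) / n for k in range(3))
    return [tuple(real_centroid[k] + s * (p[k] - c[k]) for k in range(3))
            for p in virtual_points]

pts = [(0, 0, 0), (2, 0, 0)]   # outer-layer samples from the 3D data
aligned = register(pts, virtual_size=2.0, real_size=4.0,
                   real_centroid=(10.0, 0.0, 0.0))
```

With the outer layers aligned, the inner layers inherit the same transform and can be projected into the AR headset's view.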

More
09-01-2020 publication date

METHOD AND APPARATUS FOR PROCESSING PATCHES OF POINT CLOUD

Number: US20200013235A1

A method and an apparatus for processing patches of a point cloud are provided. The apparatus includes an input/output (I/O) device, a storage device, and a processor. The I/O device is used to receive a bit stream of the point cloud. The storage device is configured to store an index table recording indexes corresponding to a plurality of orientations. The processor is coupled to the I/O device and the storage device and is configured to execute a program to demultiplex the bit stream of the point cloud into a patch image and indexes corresponding to a plurality of patches in the patch image, look up the index table to obtain an orientation of each patch, transform the patch image according to the orientation to recover the plurality of patches of the point cloud, and reconstruct the point cloud by using the recovered patches. 1. An apparatus for processing patches of a point cloud , comprising:an input/output (I/O) device, receiving point cloud data;a storage device, storing an index table recording indexes corresponding to a plurality of orientations;a processor, coupled to the input/output device and the storage device, and configured to execute a program to:generate a plurality of patches of the point cloud, wherein the point cloud comprises a plurality of points in a three-dimensional space, and each of the patches corresponds to a portion of the point cloud;determine an orientation in which each patch is adapted to generate a patch image and transform each patch to generate the patch image according to the orientation; andpack the patch image and look up the index table to obtain the index corresponding to the orientation of each patch.2. 
The apparatus for processing patches of the point cloud as claimed in claim 1 , wherein the processor further determines whether each patch is adapted to rotate to a predetermined orientation, places the patch into the patch image after rotating the patch if the patch is adapted to rotate, and directly ...
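The decoder side of the index-table scheme above amounts to: look up the orientation index for a patch, then apply the inverse transform. A toy sketch with 2D patches and 90-degree rotations as the orientation set (the table contents are illustrative, not the patent's):

```python
def rot90(patch):
    """Rotate a 2D patch (list of rows) 90 degrees counter-clockwise... 
    here implemented as the standard reverse-then-transpose trick."""
    return [list(row) for row in zip(*patch[::-1])]

# Index table: orientation index -> number of 90-degree rotations to undo.
INDEX_TABLE = {0: 0, 1: 1, 2: 2, 3: 3}

def recover_patch(packed, index):
    """Decoder: look up the index table to obtain the patch orientation
    and rotate the packed patch back accordingly."""
    p = packed
    for _ in range(INDEX_TABLE[index]):
        p = rot90(p)
    return p

packed = [[1, 2],
          [3, 4]]
restored = recover_patch(packed, index=1)
```

Keeping only a small integer index per patch, rather than a full transform, is what makes the table worth storing.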

More
15-01-2015 publication date

HYBRID PRECISION TRACKING

Number: US20150016680A1
Assignee:

Disclosed herein are through-the-lens tracking systems and methods which can enable sub-pixel accurate camera tracking suitable for real-time set extensions. That is, the through-the-lens tracking can make an existing lower precision camera tracking and compositing system into a real-time VFX system capable of sub-pixel accurate real-time camera tracking. With this enhanced level of tracking accuracy the virtual cameras can be used to register and render real-time set extensions for both interior and exterior locations. 1. A hybrid through-the-lens tracking system , comprising:a first system that includes a calibrated look-up table and is configured to use the calibrated look-up table to modify search locations for scene markers when using adjustable lenses for motion picture and television visual effects production;the first system is configured to transmit predicted 2D locations of the scene markers to a second system; andthe second system is configured to use a high speed GPU-based scene marker detection scheme that uses a fast geometry based search to calculate the predicted 2D locations of the centers of the scene markers.2. The system of claim 1, wherein the high speed detection is under 10 milliseconds per marker.3. The system of claim 1, wherein the high speed detection is between 2 and 10 milliseconds per marker.4. The system of claim 1, wherein the first system further includes a data combiner operatively connected to the look-up table and a target XY predictor operatively connected to the data combiner.5. The system of claim 1, wherein the system is embodied in a hardware component.6. The system of claim 1, wherein the second system includes a motion blur removal stage, a derivative detector, a parallel line remover, a circle center finder, and a center voting module.7. The system of claim 1, wherein the scene markers are circular markers.8. The system of claim 1, wherein the scene markers are fiducial markers, bar code markers or natural feature markers.9. A hybrid ...
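Using a calibrated look-up table to modify marker search locations typically means interpolating a per-lens-state offset. A sketch with bilinear interpolation over a (zoom, focus) grid of 2D pixel offsets (the table layout and values are hypothetical calibration data, not from the patent):

```python
def lut_offset(lut, zoom, focus):
    """Bilinear interpolation in a calibrated look-up table of 2D search
    offsets, indexed by normalized (zoom, focus) in [0, 1]."""
    n, m = len(lut) - 1, len(lut[0]) - 1
    x, y = zoom * n, focus * m
    i, j = min(int(x), n - 1), min(int(y), m - 1)
    fx, fy = x - i, y - j
    return tuple(
        (1 - fx) * (1 - fy) * lut[i][j][k] + fx * (1 - fy) * lut[i + 1][j][k]
        + (1 - fx) * fy * lut[i][j + 1][k] + fx * fy * lut[i + 1][j + 1][k]
        for k in range(2))

def predict_marker(nominal_xy, lut, zoom, focus):
    """Shift the nominal 2D marker location by the interpolated offset."""
    off = lut_offset(lut, zoom, focus)
    return (nominal_xy[0] + off[0], nominal_xy[1] + off[1])

# 2x2 table of pixel offsets over the zoom/focus range.
LUT = [[(0.0, 0.0), (0.0, 2.0)],
       [(4.0, 0.0), (4.0, 2.0)]]
pred = predict_marker((100.0, 200.0), LUT, zoom=0.5, focus=0.5)
```

The corrected prediction narrows the GPU detector's search window, which is what keeps per-marker detection in the claimed millisecond range.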

More
18-01-2018 publication date

PLANNING SUPPORT DURING AN INTERVENTIONAL PROCEDURE

Number: US20180014884A1
Assignee:

Methods and systems are disclosed herein for improved safer planning support during interventional procedures for inserting stents into a hollow organ of a patient by a guide device. One method includes: providing or recording a three-dimensional image data set of the hollow organ in a first position; segmentation or providing a segmentation of the three-dimensional image data set; providing or recording a two-dimensional image of the guide device introduced into the hollow organ; overlaying the three-dimensional image data set with the two-dimensional image; determining at least one corrected position of one or more section(s) of the hollow organ respectively using the overlaying of the three-dimensional image data set with the two-dimensional image; and determining the respective deformation energy of the hollow organ in the section(s) for the case of removal of the guide device using the previously determined corrected position compared to the first position. 1. A method of planning support for an interventional procedure for introducing a stent into a hollow organ of a patient by a guide device , the method comprising:providing or recording a three-dimensional image data set of the hollow organ in a first position;segmenting or providing a segmentation of the three-dimensional image data set;providing or recording an at least two-dimensional image of the guide device introduced into the hollow organ;overlaying the three-dimensional image data set with the at least two-dimensional image to provide an overlaid image;determining at least one corrected position of one or more sections of the hollow organ respectively using the overlaid image; anddetermining the respective deformation energy of the hollow organ in the one or more section for a case of removal of the guide device using the determined corrected position compared to the first position.2. The method of claim 1 , wherein the first position is an original position.3. The method of claim 1 , wherein the ...
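The deformation energy of a hollow-organ section can be illustrated with the simplest elastic model: energy proportional to the summed squared displacement between the first (pre-guide-device) and corrected centerline positions. The patent does not commit to this spring model; stiffness and positions below are illustrative:

```python
def deformation_energy(first_positions, corrected_positions, stiffness=1.0):
    """Spring-model stand-in for the deformation energy of a vessel
    section: 0.5 * k * sum of squared displacements between the first
    and corrected centerline sample positions."""
    return 0.5 * stiffness * sum(
        sum((a[k] - b[k]) ** 2 for k in range(3))
        for a, b in zip(first_positions, corrected_positions))

first = [(0, 0, 0), (0, 1, 0), (0, 2, 0)]           # original centerline
corrected = [(0, 0, 0), (0.5, 1, 0), (0, 2, 0)]      # middle pushed aside
energy = deformation_energy(first, corrected, stiffness=2.0)
```

A high energy for a section warns the clinician that removing the guide device will let that section spring back strongly.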

More
14-01-2021 publication date

METHOD FOR DETERMINING LISTENER-SPECIFIC HEAD-RELATED TRANSFER FUNCTIONS

Number: US20210014631A1
Assignee:

A method for determining listener-specific head-related transfer functions is described. The method comprises the steps of: A) providing a visual representation of the head and each of the auricles, wherein for each auricle the visual representation includes visual information of the overall shape of the auricle and of anatomical components of the auricle; B) calculating, using said visual representation, three-dimensional polygon meshes, including a head mesh and independent auricle meshes, which respectively model the shapes of the head and auricles, wherein the auricle meshes preferably include shape information of auricle components such as the entry of the ear canal, the concha, the fossa, and the backside of the auricle; C) merging the polygon meshes to a three-dimensional combined mesh, in which the auricle meshes are located at proper locations with respect to the head mesh; D) calculating HRTFs based on the combined mesh. 1. Method for determining head-related transfer functions (HRTFs) , wherein said HRTFs are listener-specific with respect to a specific individual , where said HRTFs correlate with physical characteristics of the individual including the shapes of the individual's head and auricles , the method comprising the steps of:A) providing a visual representation of the head and each of the auricles, wherein the visual representation includes visual information of the overall shape of the auricles and of anatomical components of each of the auricles;B) calculating, using said visual representation, a three-dimensional representation formed by polygon meshes, including auricle meshes and a head mesh which are independent of each other, which respectively model the shapes of the head and auricles, the auricle meshes comprising information about the shape of the mentioned anatomical components of the auricle;C) merging the polygon meshes to a three-dimensional combined mesh, in which the auricle meshes are located at proper locations
...
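Step C above, merging independent head and auricle meshes into one combined mesh, reduces mechanically to concatenating vertex lists and offsetting each mesh's face indices. A minimal sketch (mesh contents are toy data; the representation as (vertices, triangles) pairs is an assumption):

```python
def merge_meshes(meshes):
    """Combine independent polygon meshes (e.g. head plus auricles) into
    one mesh: concatenate vertices, and shift every face's indices by the
    number of vertices already accumulated."""
    vertices, faces = [], []
    for verts, tris in meshes:
        offset = len(vertices)
        vertices.extend(verts)
        faces.extend(tuple(i + offset for i in tri) for tri in tris)
    return vertices, faces

head = ([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
auricle = ([(2, 0, 0), (3, 0, 0), (2, 1, 0)], [(0, 1, 2)])
verts, faces = merge_meshes([head, auricle])
```

In the full method the auricle meshes would first be rigidly transformed to their proper locations on the head mesh; only then is the merged mesh passed to the numerical HRTF computation.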

More
03-02-2022 publication date

REAL-TIME GESTURE RECOGNITION METHOD AND APPARATUS

Number: US20220036050A1
Assignee:

Disclosed are methods, apparatus and systems for real-time gesture recognition. One exemplary method for the real-time identification of a gesture communicated by a subject includes receiving, by a first thread of the one or more multi-threaded processors, a first set of image frames associated with the gesture, the first set of image frames captured during a first time interval, performing, by the first thread, pose estimation on each frame of the first set of image frames including eliminating background information from each frame to obtain one or more areas of interest, storing information representative of the one or more areas of interest in a shared memory accessible to the one or more multi-threaded processors, and performing, by a second thread of the one or more multi-threaded processors, a gesture recognition operation on a second set of image frames associated with the gesture. 1. A method for real-time recognition , using one or more multi-threaded processors , of a gesture communicated by a subject , the method comprising:receiving, by a first thread of the one or more multi-threaded processors, a first set of image frames associated with the gesture, the first set of image frames captured during a first time interval;performing, by the first thread, pose estimation on each frame of the first set of image frames including eliminating background information from each frame to obtain one or more areas of interest;storing information representative of the one or more areas of interest in a shared memory accessible to the one or more multi-threaded processors; andperforming, by a second thread of the one or more multi-threaded processors, a gesture recognition operation on a second set of image frames associated with the gesture, the second set of image frames captured during a second time interval that is different from the first time interval, using a first processor of the one or more multi-threaded processors that implements a first three-dimensional 
...
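The two-thread pipeline above — one thread performs pose estimation and writes areas of interest to shared memory, another consumes them for gesture recognition — can be sketched with Python's `threading` and a shared `queue.Queue`. The "pose estimation" and "recognition" bodies here are toy stand-ins for the real models:

```python
import queue
import threading

shared = queue.Queue()   # shared memory between the two threads
recognized = []

def pose_estimation(frames):
    """Thread 1: crop each frame to an area of interest (toy background
    elimination: drop zero pixels) and publish it to shared memory."""
    for frame in frames:
        roi = [px for px in frame if px > 0]
        shared.put(roi)
    shared.put(None)     # sentinel: no more frames

def gesture_recognition():
    """Thread 2: classify each area of interest as it becomes available."""
    while True:
        roi = shared.get()
        if roi is None:
            break
        recognized.append("wave" if sum(roi) > 10 else "idle")

t1 = threading.Thread(target=pose_estimation, args=([[0, 5, 9], [0, 1, 0]],))
t2 = threading.Thread(target=gesture_recognition)
t1.start(); t2.start()
t1.join(); t2.join()
```

Because the queue decouples the threads, frame capture for the second time interval can proceed while recognition on the first interval is still running, which is the overlap the claims describe.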

More
18-01-2018 publication date

WEARABLE DEVICES SUCH AS EYEWEAR CUSTOMIZED TO INDIVIDUAL WEARER PARAMETERS

Number: US20180017815A1
Assignee:

Features are disclosed relating to an article such as eyewear customized to individual wearer parameters (e.g., measurements, preferences, etc.), and to systems and methods for customizing eyewear to individual wearer parameters. The system includes an input for receiving data representative of a three dimensional configuration of a portion of a wearer's face and an input for receiving data representative of a desired position where the wearer would like an eyewear frame to reside on the wearer's face. One system also includes a processor for determining a change in configuration of an eyewear component blank to allow the eyewear frame to reside in the desired position, and an eyewear component modifier for modifying the eyewear component blank so that the frame will reside in the desired position. 1.-23. (canceled) 24. A three dimensional orientationally corrected eyeglass comprising:a frame;at least one lens;a left earstem;a right earstem; and a nonadjustable nosepiece, wherein the nosepiece comprises bilateral asymmetry configured to complement a bilateral asymmetry of a wearer's face, to position the eyeglass in a preselected orientation with respect to the wearer's face. 25. A three dimensional orientationally corrected eyeglass as in claim 24, wherein at least a portion of the eyeglass conforms to a surface of a wearer model representative of a three-dimensional configuration of at least a portion of the wearer's head. 26.
A three dimensional orientationally corrected eyeglass as in claim 25, wherein the wearer model defines a calculated straight-ahead line of sight crossing the center of a pupil and extending in an anterior-posterior direction along a horizontal plane parallel to the wearer model's central transverse plane, and wherein at least a portion of an eyeglass model representative of a three-dimensional configuration of the eyeglass is deflected, with respect to a default configuration, when the eyeglass model is placed in an as-worn configuration on the ...

18-01-2018 publication date

METHOD OF BUILDING A THREE-DIMENSIONAL NETWORK SITE, NETWORK SITE OBTAINED BY THIS METHOD, AND METHOD OF NAVIGATING WITHIN OR FROM SUCH A NETWORK SITE

Number: US20180018075A1
Author: HEULLY Herve
Assignee:

The invention relates to a method for producing network sites, in particular websites, offering real immersion in the sites (in the manner of video games) with intuitive and fluid navigation that does not require a means for directing the avatar, allowing selective referencing by a search engine of objects contained on the site, as well as providing improved access security. A simple mechanical control means (arrow keys on a keyboard, mouse without click buttons, joystick formed by a handle on a base with push buttons) or virtual control means (arrow-based computer representation, system for the detection of a movement of the hand, eye, etc., accelerometer remote control, etc.) can be used to direct the avatar, and the method of the invention allows the movements of the avatar to be interpreted, such as a simple walk through the site or a command to navigate to another space on the site (same URL) or to another site (different URL). 1-38. (canceled) 39. Method of building a so-called "three-dimensional" network site (A-B), such as an Internet site, consultable via an interface linked to the network, characterized in that it comprises the following steps:
A) with a 3D modelling software package:
a1) generating a three-dimensional project comprising at least one three-dimensional space (A1-A6; B1-B3);
a2) creating at least one two- or three-dimensional so-called "navigation" object (N1-N26) in the said project and placing it in the or one of the spaces (A1-A6; B1-B3) in a defined spatial position (P11), called the "position of the navigation object";
a3) creating a two- or three-dimensional object (1) which is mobile in the said space (A1-A6; B1-B3) and controllable by a user by virtue of a control interface or a peripheral, such as a mouse, keyboard keys, a joystick or a motion sensor, linked to the interface;
b1) assigning to the or to each ...

19-01-2017 publication date

MULTI-STAGE METHOD OF GENERATING 3D CIVIL SITE SURVEYS

Number: US20170018113A1
Author: KHALOO Ali, LATTANZI DAVID
Assignee: George Mason University

A method of creating a three-dimensional model, based on two-dimensional (hereinafter “2D”) images is provided. The method includes acquiring a number of images of a number of physical locations, wherein each image is associated with one image group of a number of hierarchical image groups, the number of hierarchical image groups including a base image group, converting images within a group to a number of 3D models, wherein each 3D model is associated with one model group of a number of hierarchical model groups, the number of hierarchical model groups including a base model group, merging a number of the 3D models from the base model group and a number of 3D models from another 3D model group to create a multi-scale 3D model, and utilizing the multi-scale 3D model. 1. A method of creating a three dimensional model based on two dimensional images comprising: acquiring a number of images of a number of physical locations, wherein each said image is associated with one image group of a number of hierarchical image groups, said number of hierarchical image groups including a base image group and a top image group; converting images within a group to a number of 3D models, wherein each said 3D model is associated with one model group of a number of hierarchical model groups, said number of hierarchical model groups including a base model group and a top model group; merging a number of said 3D models from said base model group and a number of 3D models from another 3D model group to create a multi-scale 3D model; and utilizing said multi-scale 3D model. 2. The method of claim 1, wherein acquiring a number of images includes acquiring a number of pictures of a number of physical locations, wherein each said picture is associated with one picture group of a number of hierarchical picture groups, said number of hierarchical picture groups including a base picture group, a number of intermediate hierarchical picture groups, and a top picture group. 3.
...

21-01-2016 publication date

RETAIL SPACE PLANNING SYSTEM

Number: US20160019717A1
Author: PILON Charles, YOPP John
Assignee:

A three dimensional virtual retail space representing a physical space for designing a retail store space layout is provided. A three dimensional virtual object representing at least one physical object for the retail space is provided. Input can be received from a virtual reality input interface for interacting with the virtual object in the virtual retail space. Based on the input, the virtual object can be placed in the virtual retail space. An updated video signal can be sent to a head mounted display that provides a three dimensional representation of the virtual object in the virtual space.

03-02-2022 publication date

ANIMATION PRODUCTION SYSTEM

Number: US20220036624A1
Assignee:

To reduce the burden of animation production, a system for producing an animation in a virtual space, the system comprising: an asset management unit that places a first object and a second object in the virtual space; and a control unit that adjusts size of the second object in accordance with size of the first object. 1. A system for producing an animation in a virtual space, the system comprising: an asset management unit that places a first object, a second object, and a virtual camera in the virtual space; and a control unit that adjusts size of the second object in a coordinate system of the virtual space in accordance with sizes of the first object and a field angle of the virtual camera. 2. The animation production system according to claim 1, wherein the control unit adjusts the height of the second object in accordance with the height of the first object. 3. (canceled) 4. The animation production system according to claim 3, wherein the controller adjusts the size of the second object in the coordinate system of the virtual space to a size equal to or greater than the size of shooting range of the virtual camera. 5. The animation production system according to claim 1, further comprising a storage unit for storing size data of the first object and the second object, wherein the control unit adjusts the size of the second object using a ratio obtained from the size data of the first object stored in the storage unit and the size of the first object in the virtual space. 6. The animation production system of claim 5, wherein the size data is height data. 7. A method for producing animations in a virtual space, wherein a computer executes: a step of placing a first object, a second object, and a virtual camera in the virtual space; and a step of adjusting size of the second object in a coordinate system of the virtual space to size of the first object and size of shooting range of the virtual camera in the coordinate system of the virtual space. 8.
A ...

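The ratio-based size adjustment of claim 5 — scale the second object by the ratio between the first object's in-scene size and its stored size data — reduces to one line of arithmetic. A toy sketch; the names and the metres/scene-units interpretation are illustrative assumptions:

```python
def adjust_size(first_stored_height, first_scene_height, second_stored_height):
    """Scale the second object so both objects keep their stored proportion.

    The ratio of the first object's in-scene height to its stored (reference)
    height is applied to the second object's stored height.
    """
    ratio = first_scene_height / first_stored_height
    return second_stored_height * ratio

# A 1.7 m character placed 3.4 units tall implies a 2x scene scale,
# so a 0.5 m prop should be placed 1.0 unit tall.
print(adjust_size(1.7, 3.4, 0.5))  # -> 1.0
```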
03-02-2022 publication date

THREE-DIMENSIONAL EXPRESSION BASE GENERATION METHOD AND APPARATUS, SPEECH INTERACTION METHOD AND APPARATUS, AND MEDIUM

Number: US20220036636A1
Author: BAO Linchao, LIN Xiangkai
Assignee:

This application provides a three-dimensional (3D) expression base generation method performed by a computer device. The method includes: obtaining image pairs of a target object in n types of head postures, each image pair including a color feature image and a depth image in a head posture; constructing a 3D human face model of the target object according to the n image pairs; and generating a set of expression bases of the target object according to the 3D human face model of the target object. According to this application, based on a reconstructed 3D human face model, a set of expression bases of a target object is further generated, so that more diversified product functions may be expanded based on the set of expression bases. 1. A computer-implemented method performed by a computer device, the method comprising: obtaining n sets of image pairs of a target object in n types of head postures, the n sets of image pairs comprising color feature images and depth images in the n types of head postures, an i-th head posture corresponding to an i-th set of image pairs, n being a positive integer, 0 < i ≤ n ...

18-01-2018 publication date

METHODS AND SYSTEMS FOR DISPLAYING DIGITAL SMART OBJECTS IN A THREE DIMENSIONAL ENVIRONMENT

Number: US20180018696A1
Assignee:

Using various embodiments, methods and systems for displaying digital smart objects in 3D environments are described. In one embodiment, a system receives a request to present the 3D digital smart object in a game development environment of a game engine. The system can be configured to retrieve 3D digital smart object data from an asset repository, transmit the 3D digital smart object data to the game development environment of the game engine, receive a position location for the 3D digital smart object in the game, receive scaling information related to the 3D digital smart object, and store, into the asset repository, the position location and scaling information related to the 3D digital smart object displayed in the game. Thereafter, the 3D digital smart object can be displayed at the position location when a player is interacting with the game at the game scene. 1. A method of displaying three-dimensional (3D) objects in a game development environment of a game engine during development of a 3D interactive environment, comprising: transmitting, by the game development environment of the game engine, a request to receive a 3D digital smart object, wherein the 3D digital smart object is enclosed in a 3D placeholder, and wherein the 3D smart object is further associated with event triggers configured to transmit user interaction or viewability information to a computing device by: determining a proportion of the 3D smart object on a graphical user interface of a 3D environment, and obtaining a percentage of the screen of the graphical user interface that the 3D smart object is covering, wherein the obtaining is performed using a screen bounding function of a 3D engine of the 3D environment; receiving, by the game development environment of the game engine, the 3D digital smart object and its associated 3D digital smart object data from the computing device, wherein the 3D digital smart object data is associated with an asset category and asset type, the 3D digital smart
...

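The "screen bounding function" used here to obtain the percentage of screen covered by a smart object can be approximated by taking the projected corners of the object's bounds, clipping their axis-aligned rectangle to the viewport, and comparing areas. A rough sketch, assuming the corners have already been projected to pixel coordinates (the projection step itself is engine-specific and omitted):

```python
def screen_coverage(corners_px, screen_w, screen_h):
    """Approximate percentage of the screen covered by an object.

    corners_px: projected screen-space (x, y) corners of the object's
    bounding volume; their axis-aligned rectangle is clipped to the
    viewport and compared against the viewport area.
    """
    xs = [x for x, _ in corners_px]
    ys = [y for _, y in corners_px]
    x0, x1 = max(min(xs), 0), min(max(xs), screen_w)
    y0, y1 = max(min(ys), 0), min(max(ys), screen_h)
    if x1 <= x0 or y1 <= y0:
        return 0.0  # entirely off-screen
    return 100.0 * (x1 - x0) * (y1 - y0) / (screen_w * screen_h)

# A 480x270 rectangle on a 1920x1080 screen covers 1/16 of it.
print(screen_coverage([(100, 100), (580, 370)], 1920, 1080))  # -> 6.25
```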
18-01-2018 publication date

METHODS AND SYSTEMS FOR DETERMINING USER INTERACTION BASED DATA IN A VIRTUAL ENVIRONMENT TRANSMITTED BY THREE DIMENSIONAL ASSETS

Number: US20180018698A1
Assignee:

In one embodiment, a plurality of 3D digital assets can be associated with scripts that transmit user interaction when displayed within a 3D environment on a client machine. The system includes a 3D digital asset processing system configured to receive user interaction data related to the 3D digital asset from the client machine and generate metrics related to user interaction with the 3D digital asset. In one embodiment, the metrics are generated by determining whether the 3D digital asset, comprising a collidable mesh, is drawn on a culling mask of a camera, and further using ray casting, drawing a line between the camera and the 3D digital asset. When the line collides with the collidable mesh of the 3D digital asset, using a screen bounding function of a 3D engine of the virtual environment, a proportion of the 3D digital asset on a user interface is determined to obtain a percentage of the user interface that is covered by the 3D digital asset. Thereafter, data related to user interaction with the 3D digital asset in the 3D environment is determined using the percentage. 1.
A system to determine and record user interaction in a virtual three dimensional (3D) environment , comprising:an asset repository comprising a plurality of 3D digital assets, wherein each of the plurality of 3D digital assets can be displayed within the virtual 3D environment;a 3D digital asset processing system, coupled to the asset repository, configured to:receive data related to user interaction with a 3D digital asset out of the plurality of 3D digital assets in the virtual 3D environment, andgenerate metrics related to user interaction, including tapping, touching, moving, time spent, viewing, requesting detailed description related to the 3D digital asset in the virtual 3D environment; anda client computer, coupled to the 3D digital asset processing system, configured to:display the 3D digital asset in the virtual 3D environment via a graphical user interface,determine whether ...

18-01-2018 publication date

METHODS AND SYSTEMS FOR GENERATING DIGITAL SMART OBJECTS FOR USE IN A THREE DIMENSIONAL ENVIRONMENT

Number: US20180018828A1
Assignee:

Using various embodiments, methods and systems for generating three-dimensional (3D) digital smart objects for use in various 3D environments to collect data related to user interaction or viewability with the 3D digital smart object are described. In one embodiment, a system can generate a 3D digital smart object by presenting a 3D placeholder to a publisher or developer of the 3D environment, receive an asset, receive asset data including asset category and asset type associated to the asset and a minimum and maximum asset polygon count value. The system can also receive standard asset size information of the asset including an X-axis, Y-axis, and Z-axis dimension of the asset, and receive an asset positioning information, including an anchor location and an asset orientation of the asset. The data received can then be stored to generate the 3D digital smart object that can be placed within a 3D environment. 1. A method to generate a three-dimensional (3D) digital smart object in a development platform of a first 3D environment , comprising:receiving, by the development platform, a 3D placeholder, the development platform configured to receive the 3D placeholder via at least one of an Application Programming Interface (API) or a Software Development Kit (SDK);providing, by the development platform, an asset into the 3D placeholder;receiving, by the development platform, asset data associated with the asset, the asset data including asset category and asset type, and wherein the asset data further includes a minimum and maximum asset polygon count value;determining, by the development platform, standard asset size information of the asset, the standard asset size information including an X-axis, Y-axis, and Z-axis dimension of the asset;receiving, by the development platform, an asset positioning information, the asset positioning information including an anchor location and an asset orientation, wherein the anchor location being at least one of the X-axis, 
Y-axis, ...

17-01-2019 publication date

Holographic multi avatar training system interface and sonification associative training

Number: US20190019321A1
Assignee: Visyn Inc

A system or method for training may display a student avatar and an expert avatar. A method may include capturing movement of a user attempting a technique, and generating a student avatar animation from the captured movement. The method may include retrieving a 3D expert avatar animation corresponding to the technique. The method may include displaying the 3D student avatar animation and the 3D expert avatar animation. For example, the animations may be displayed concurrently.

17-01-2019 publication date

Predictive Information for Free Space Gesture Control and Communication

Number: US20190019332A1
Assignee: Leap Motion, Inc.

Free space machine interface and control can be facilitated by predictive entities useful in interpreting a control object's position and/or motion (including objects having one or more articulating members, i.e., humans and/or animals and/or machines). Predictive entities can be driven using motion information captured using image information or the equivalents. Predictive information can be improved applying techniques for correlating with information from observations. 1-29. (canceled) 30. A method of capturing gestural motion of a control object in a three-dimensional (3D) sensory space, the method including: determining observation information characterizing a surface of a control object from at least one image of a gestural motion of the control object in a three-dimensional (3D) sensory space; constructing a 3D model to represent the control object by fitting one or more 3D subcomponents to the surface characterized; and improving representation of the gestural motion by the 3D model, including: determining an error indication between a point on the surface characterized and a corresponding point on at least one of the 3D subcomponents; and responsive to the error indication adjusting the 3D model. 31. The method of claim 30, wherein determining the error indication further includes determining whether the point on the surface and the corresponding point on the at least one of the 3D subcomponents are within a threshold distance. 32. The method of claim 30, wherein determining the error indication further includes: pairing points on the surface with points on axes of the 3D subcomponents, wherein surface points lie on vectors that are normal to the axes; and determining a reduced root mean squared deviation (RMSD) of distances between paired points. 33. The method of claim 30, wherein determining the error indication further includes: pairing points on the surface with points on the 3D subcomponents, wherein normal vectors to the points are parallel to each ...

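The error indication of claim 32 — a root mean squared deviation (RMSD) of distances between paired surface and model points — could be computed along these lines. The pairing itself (surface points projected onto subcomponent axes) is assumed already done; the points below are illustrative:

```python
import math

def rmsd(paired_points):
    """Root mean squared deviation of distances between paired 3D points,
    e.g. surface points paired with their projections onto model axes."""
    sq = [(ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
          for (ax, ay, az), (bx, by, bz) in paired_points]
    return math.sqrt(sum(sq) / len(sq))

pairs = [((0, 0, 0), (1, 0, 0)),
         ((0, 0, 0), (0, 2, 0)),
         ((0, 0, 0), (0, 0, 2))]
# Squared distances are 1, 4, 4, so RMSD = sqrt(9/3) = sqrt(3).
print(round(rmsd(pairs), 4))  # -> 1.7321
```

A small RMSD indicates the fitted subcomponents track the observed surface well; a large one triggers the model adjustment described in the claims.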
21-01-2021 publication date

Predictive Information for Free Space Gesture Control and Communication

Number: US20210019938A1
Assignee: Ultrahaptics IP Two Limited

Free space machine interface and control can be facilitated by predictive entities useful in interpreting a control object's position and/or motion (including objects having one or more articulating members, i.e., humans and/or animals and/or machines). Predictive entities can be driven using motion information captured using image information or the equivalents. Predictive information can be improved applying techniques for correlating with information from observations. 1. A method of capturing gestural motion of a control object in a three-dimensional (3D) sensory space, the method including: capturing observation information characterizing a surface of a control object from at least one image of a gestural motion of the control object in a three-dimensional (3D) sensory space; obtaining a 3D model to represent the control object, the 3D model constructed by fitting one or more 3D subcomponents to the surface characterized, wherein fitting includes: fitting, to the surface, 3D subcomponents with least colliding subcomponents, wherein colliding subcomponents are detected based at least on identifying a subcomponent attribute incompatible with an attribute of another subcomponent; and adjusting the 3D model responsive to an error indication between a point on the surface characterized and a corresponding point on at least one of the 3D subcomponents; and using the 3D model to facilitate control, communication and/or interaction with machines. 2. The method of claim 1, further including determining the error indication, including determining whether the point on the surface and the corresponding point on the at least one of the 3D subcomponents are within a threshold distance. 3. The method of claim 1, further including determining the error indication, including: pairing points on the surface with points on axes of the 3D subcomponents, wherein surface points lie on vectors that are normal to the axes; and determining a reduced root mean squared ...

21-01-2021 publication date

METHOD FOR AUTOMATICALLY GENERATING HIERARCHICAL EXPLODED VIEWS BASED ON ASSEMBLY CONSTRAINTS AND COLLISION DETECTION

Number: US20210019956A1
Assignee:

A method for automatically generating hierarchical exploded views based on assembly constraints and collision detection, in which parts to be exploded are layered in explosion sequence according to a design result of the 3D assembly process planning, and the parts to be exploded in each layer are grouped based on the type and the disassembly direction; a feasible explosion direction of the parts in each layer is determined according to assembly constraints and collision detection; the explosion sequence and explosion direction of the parts in each layer are determined; and then the layered explosion is performed at a certain distance. Ball markers and a part-list are generated after all the parts are exploded. 1. A method for automatically generating hierarchical exploded views based on assembly constraints and collision detection, comprising: (1) layering a product or components to be exploded, and determining parts to be exploded in each layer; (2) grouping the parts to be exploded in each layer, wherein parts to be exploded in each group are the same or of the same type, and have the same disassembly direction; (3) performing a trial explosion on the grouped parts to be exploded in each layer, to determine a feasible trial explosion direction of the parts to be exploded in each layer, thereby determining an explosion sequence and an explosion direction of the parts to be exploded in each layer, and performing a hierarchical explosion at a certain distance; wherein the trial explosion is performed through steps of: constructing an assembly constraint feature library; determining a trial explosion direction of a part to be exploded based on an assembly constraint feature thereof, moving the part a distance along the trial explosion direction thereof, and checking whether the moved part interferes with other parts; if no interference occurs, it indicates that the part is able to be exploded in the trial explosion direction thereof in a current state, and
...

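The trial-explosion step above (move a part a short distance along a candidate direction, then check interference with the remaining parts) can be sketched with axis-aligned bounding boxes standing in for real collision geometry. The trial distance and the box encoding are illustrative assumptions, not the patent's collision method:

```python
def aabb_overlap(a, b):
    """Boxes as ((xmin, ymin, zmin), (xmax, ymax, zmax)); strict overlap test."""
    return all(a[0][i] < b[1][i] and b[0][i] < a[1][i] for i in range(3))

def move(box, direction, dist):
    """Translate a box along a unit direction by dist."""
    return tuple(tuple(c + d * dist for c, d in zip(corner, direction))
                 for corner in box)

def feasible_direction(part, others, directions, dist=0.5):
    """Return the first candidate direction along which the trial-moved part
    interferes with no other part, or None if every direction is blocked."""
    for d in directions:
        trial = move(part, d, dist)
        if not any(aabb_overlap(trial, o) for o in others):
            return d
    return None

part   = ((0, 0, 0), (1, 1, 1))
others = [((0, 0, 1), (1, 1, 2))]   # a part sitting directly on top
dirs   = [(0, 0, 1), (1, 0, 0)]     # +Z is blocked, +X is free
print(feasible_direction(part, others, dirs))  # -> (1, 0, 0)
```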
25-01-2018 publication date

EDITING CUTS IN VIRTUAL REALITY

Number: US20180024630A1
Assignee:

A computer-implemented method is described for configuring interaction zones for a virtual reality environment. The method may include defining a plurality of scenes, each scene including a plurality of selectable scene cuts and defining a first interaction zone and a second interaction zone. The method may also include automatically selecting a first scene cut to display the first scene, in response to detecting that a gaze direction, associated with a user accessing the virtual reality environment, is directed toward the first interaction zone. The method may also include automatically selecting a second scene cut to display the second scene, in response to detecting that the gaze direction, associated with the user accessing the virtual reality environment, is directed toward the second interaction zone. The first or second scenes may be triggered for display in a head mounted display device. 1. A computer-implemented method of configuring interaction zones for a virtual reality environment , the method comprising:defining a plurality of scenes, each scene from the plurality of scenes including a plurality of selectable scene cuts;defining a first interaction zone and a second interaction zone, the first interaction zone being associated with a first scene in the plurality of scenes and the second interaction zone being associated with a second scene in the plurality of scenes;automatically selecting a first scene cut from the plurality of selectable scene cuts to display the first scene, in response to detecting that a gaze direction, associated with a user accessing the virtual reality environment, is directed toward the first interaction zone;automatically selecting a second scene cut from the plurality of selectable scene cuts to display the second scene, in response to detecting that the gaze direction, associated with the user accessing the virtual reality environment, is directed toward the second interaction zone; andtriggering for display in a head ...

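Selecting a scene cut when the gaze direction is directed toward an interaction zone can be modelled, in the simplest case, by comparing the gaze vector against each zone's center direction with a dot-product threshold. This is a hedged sketch treating zones as direction cones, not the patent's actual zone geometry:

```python
def select_scene(gaze, zones):
    """Pick the scene whose interaction zone the gaze direction points into.

    gaze: unit direction vector; zones: {scene: (center_dir, cos_threshold)}.
    A zone is hit when the angle between gaze and its center direction is
    within the threshold (compared via dot product of unit vectors).
    """
    for scene, (center, cos_thresh) in zones.items():
        dot = sum(g * c for g, c in zip(gaze, center))
        if dot >= cos_thresh:
            return scene
    return None

zones = {
    "scene_1": ((0.0, 0.0, -1.0), 0.9),  # zone straight ahead
    "scene_2": ((1.0, 0.0, 0.0), 0.9),   # zone to the right
}
print(select_scene((0.0, 0.0, -1.0), zones))  # prints: scene_1
print(select_scene((1.0, 0.0, 0.0), zones))   # prints: scene_2
```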
28-01-2016 publication date

IMAGE REGISTRATION SYSTEM WITH NON-RIGID REGISTRATION AND METHOD OF OPERATION THEREOF

Number: US20160027178A1
Assignee:

An image registration system, and a method of operation of an image registration system thereof, including: an imaging unit for obtaining a pre-operation non-invasive imaging volume and for obtaining an intra-operation non-invasive imaging volume; and a processing unit including: a rigid registration module for generating a rigid registered volume based on the pre-operation non-invasive imaging volume, a region of interest module for isolating a region of interest from the intra-operation non-invasive imaging volume, a point generation module, coupled to the region of interest module, for determining feature points of the region of interest, an optimization module, coupled to the point generation module, for matching the feature points with corresponding points of the rigid registered volume for generating a matched point cloud, and an interpolation module, coupled to the optimization module, for generating a non-rigid registered volume based on the matched point cloud for display on a display interface. 1. A method of operation of an image registration system comprising:obtaining a pre-operation non-invasive imaging volume with an imaging unit;generating a rigid registered volume based on the pre-operation non-invasive imaging volume;obtaining an intra-operation non-invasive imaging volume with the imaging unit;isolating a region of interest from the intra-operation non-invasive imaging volume;determining feature points of the region of interest;matching the feature points with corresponding points of the rigid registered volume for generating a matched point cloud; andgenerating a non-rigid registered volume based on the matched point cloud for display on a display interface.2. The method as claimed in further comprising:isolating anatomical structures from the pre-operation non-invasive imaging volume; andgenerating the rigid registered volume based on the anatomical structures.3. 
The method as claimed in further comprising:isolating a structure of interest from ...

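Matching the feature points of the region of interest with corresponding points of the rigid registered volume to form a matched point cloud is, at its simplest, a nearest-neighbour pairing. A brute-force sketch with illustrative points (a real system would use a k-d tree and richer feature descriptors):

```python
def match_points(feature_pts, reference_pts):
    """Pair each feature point from the region of interest with its nearest
    point in the rigid-registered volume, producing a matched point cloud."""
    def d2(a, b):
        # squared Euclidean distance, avoiding an unnecessary sqrt
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [(f, min(reference_pts, key=lambda r: d2(f, r)))
            for f in feature_pts]

features  = [(0.1, 0.0, 0.0), (2.9, 1.0, 0.0)]
reference = [(0.0, 0.0, 0.0), (3.0, 1.0, 0.0), (9.0, 9.0, 9.0)]
print(match_points(features, reference))
# -> [((0.1, 0.0, 0.0), (0.0, 0.0, 0.0)), ((2.9, 1.0, 0.0), (3.0, 1.0, 0.0))]
```

The resulting pairs are exactly the input the interpolation stage needs to produce the non-rigid registered volume.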
25-01-2018 publication date

ELECTRONIC SYSTEM FOR CREATING AN IMAGE AND A METHOD OF CREATING AN IMAGE

Number: US20180025532A1
Assignee:

An electronic system and a method for creating an image includes a display arranged to display a plurality of two-dimensional representations within a three-dimensional space, wherein the plurality of two-dimensional representations are arranged to individually represent a portion of a three-dimensional object within the three-dimensional space; and an imager arranged to capture the plurality of two-dimensional representations being displayed within the three-dimensional space; wherein the plurality of two-dimensional representations in a plurality of predefined positions are combined to form an image representative of the three-dimensional object within the three-dimensional space. 1. A method for creating an image comprising the steps of: displaying a plurality of two-dimensional representations within a three-dimensional space, wherein the plurality of two-dimensional representations are arranged to individually represent a portion of a three-dimensional object within the three-dimensional space; recording the plurality of two-dimensional representations being displayed within the three-dimensional space; and combining the plurality of two-dimensional representations in a plurality of predefined positions to form an image representative of the three-dimensional object within the three-dimensional space. 2. The method according to claim 1, wherein the plurality of two-dimensional representations include a plurality of cross-sectional images, each representing the portion of the three-dimensional object at each of the plurality of predefined positions within the three-dimensional space. 3. The method according to claim 2, wherein the plurality of two-dimensional representations further include at least one of a plurality of outline representations, filled representations, a point cloud of the plurality of cross-sectional images of the three-dimensional object, and a plurality of cross-sectional images obtained from tomography. 4.
The method according ...

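Combining per-position cross-sectional representations into a volume representative of the 3D object can be illustrated by stacking binary slice masks along one axis; the 0/1 mask encoding and the slice contents below are illustrative assumptions:

```python
def stack_sections(sections):
    """Combine 2D cross-section masks, each taken at a predefined position
    along one axis, into a 3D occupancy volume (a list of z-slices)."""
    # Each section is a 2D list of 0/1 values; slices are ordered by position.
    return [[row[:] for row in s] for s in sections]

# Two 2x2 slices of a shrinking column: the combined volume is 2x2x2.
slice0 = [[1, 1], [1, 1]]
slice1 = [[1, 0], [0, 0]]
volume = stack_sections([slice0, slice1])
print(sum(v for s in volume for row in s for v in row))  # -> 5 occupied voxels
```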
25-01-2018 publication date

2D GRAPHICAL CODING TO CREATE A 3D IMAGE

Number: US20180025547A1
Assignee:

There is disclosed a method of creating a three-dimensional image comprising: establishing a mapping between a two-dimensional template and the three-dimensional image; applying a graphic to the two-dimensional template; receiving the two-dimensional template with the graphic applied; and creating the three-dimensional image based on the mapping and the applied graphic. 1. A method of creating a three-dimensional image comprising:establishing a mapping between a two-dimensional template and the three-dimensional image;applying a graphic to the two-dimensional template;receiving the two-dimensional template with the graphic applied; andcreating the three-dimensional image based on the mapping and the applied graphic.2. The method of further comprising:mapping a plurality of points of the two-dimensional template to a respective plurality of points of the three-dimensional image; andmapping a plurality of points of the graphic applied to the two-dimensional template to the three-dimensional image based on said mapping.3. The method of or further comprising:mapping a plurality of predetermined points of the two-dimensional template to a respective plurality of predetermined points on the three-dimensional image; andbased on said mapping, mapping a plurality of points of the graphic applied to the two-dimensional template to the three-dimensional image.4. The method of any one of to further comprising:defining a plurality of reference points on the two-dimensional template, wherein the plurality of reference points uniquely identify the template.5. The method of wherein the plurality of reference points additionally define a reference grid for mapping positions on the two-dimensional template to positions on the three-dimensional image.6. The method of any one of to further comprising:transposing the two-dimensional template to the three-dimensional image with the graphic applied thereto.7. 
The method of any one of to further comprising:applying the graphic to the two- ...

10-02-2022 publication date

Method and system for optimizing roof truss designs

Number: US20220043943A1
Author: Maharaj Jalla
Assignee: Consulting Engineers Corp

The present invention is a computer implemented method of designing a roof, the method comprising: mapping a roof layout of a structure; identifying a set of features of the roof layout, wherein the set of features identifies the slope and intersection of the surfaces of the roof layout; applying a plurality of trusses over the roof layout in a predetermined orientation; generating a profile of each of the plurality of trusses, wherein the profile is generated through the combination of the identified set of features of the roof layout and the orientation of the trusses; calculating a weight of the roof layout based on the total weight of the trusses; and calculating a difficulty rating of the roof layout.

10-02-2022 publication date

Textured mesh building

Number: US20220044479A1
Assignee: Snap Inc

Systems and methods are provided for receiving a two-dimensional (2D) image comprising a 2D object; identifying a contour of the 2D object; generating a three-dimensional (3D) mesh based on the contour of the 2D object; and applying a texture of the 2D object to the 3D mesh to output a 3D object representing the 2D object.
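The contour-to-mesh-to-texture pipeline above can be sketched in miniature: triangulate the contour, then derive UV coordinates that pull the 2D object's texture onto the mesh. Fan triangulation assumes a convex contour; everything here is an illustrative assumption, not the implementation claimed in the patent.

```python
# Minimal sketch of the contour -> mesh -> texture steps.

def fan_triangulate(contour):
    """Return triangles (index triples) fanning from vertex 0 (convex only)."""
    return [(0, i, i + 1) for i in range(1, len(contour) - 1)]

def uv_coords(contour):
    """Normalise contour points into [0,1]^2 texture coordinates."""
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return [((x - min(xs)) / w, (y - min(ys)) / h) for x, y in contour]

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(fan_triangulate(square))  # [(0, 1, 2), (0, 2, 3)]
print(uv_coords(square))        # [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

Real contours are concave and need ear clipping or constrained Delaunay triangulation, but the UV transfer works the same way.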

10-02-2022 publication date

Virtual reality presentation of layers of clothing on avatars

Number: US20220044490A1
Assignee: Linden Research Inc

A computing system and method to generate an avatar wearing multiple layers of clothing. For each clothing model acquired for the avatar, the system generates a customized clothing model, produced by transforming the original clothing model to fit the avatar through deformation and physical simulation, and a reduced clothing model, produced by collapsing the customized clothing model onto the body of the avatar such that applying the reduced clothing model is simplified to painting its texture onto the avatar model. Wearing the inner layers of the clothing by the avatar is computed by applying the texture of the corresponding reduced clothing model on the body of the avatar in a sequence from inside layers to outside layers. The customized clothing model of the outermost layer is combined with the avatar wearing the inner layers to generate the avatar wearing the multiple layers of clothing.

10-02-2022 publication date

THREE-DIMENSIONAL FACE MODEL GENERATION METHOD AND APPARATUS, DEVICE, AND MEDIUM

Number: US20220044491A1
Author: BAO Linchao, LIN Xiangkai
Assignee:

A three-dimensional face model generation method is provided. The method includes: obtaining an inputted three-dimensional face mesh of a target object; aligning the three-dimensional face mesh with a first three-dimensional face model of a standard object according to face keypoints; performing fitting on the three-dimensional face mesh and a local area of the first three-dimensional face model, to obtain a second three-dimensional face model after local fitting; and performing fitting on the three-dimensional face mesh and a global area of the second three-dimensional face model, to obtain a three-dimensional face model of the target object after global fitting. 1. A three-dimensional face model generation method , applied to a computing device , the method comprising:obtaining a three-dimensional face mesh of a target object;aligning the three-dimensional face mesh with a first three-dimensional face model of a standard object according to face keypoints;performing fitting on the three-dimensional face mesh and a local area of the first three-dimensional face model, to obtain a second three-dimensional face model after local fitting; andperforming fitting on the three-dimensional face mesh and a global area of the second three-dimensional face model, to obtain a three-dimensional face model of the target object after global fitting.2. The method according to claim 1 , wherein performing fitting on the three-dimensional face mesh and the local area of the first three-dimensional face model comprises:performing registration on the three-dimensional face mesh and the local area of the first three-dimensional face model, to obtain a first correspondence between the local area and the three-dimensional face mesh; andperforming fitting on the three-dimensional face mesh and the local area of the first three-dimensional face model according to the first correspondence, to obtain the second three-dimensional face model after local fitting.3. 
The method according to claim ...
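The keypoint-alignment step — bringing the input face mesh into the frame of the standard model — can be sketched as a centroid-and-scale match over corresponding keypoints. A full implementation would also solve for rotation (e.g. a Kabsch/SVD step); that is omitted here for brevity, and all names and coordinates are illustrative assumptions.

```python
import math

# Hedged sketch: translate and uniformly scale a mesh so its keypoints match
# a reference model's keypoints in centroid and spread (rotation omitted).

def centroid(pts):
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(3))

def rms_spread(pts, c):
    return math.sqrt(sum(math.dist(p, c) ** 2 for p in pts) / len(pts))

def align(mesh_kp, ref_kp):
    """Return a function mapping mesh-space points into reference space."""
    cm, cr = centroid(mesh_kp), centroid(ref_kp)
    s = rms_spread(ref_kp, cr) / rms_spread(mesh_kp, cm)
    return lambda p: tuple(cr[i] + s * (p[i] - cm[i]) for i in range(3))

mesh = [(0, 0, 0), (2, 0, 0), (0, 2, 0)]          # input keypoints
ref = [(10, 10, 0), (14, 10, 0), (10, 14, 0)]     # standard-model keypoints
f = align(mesh, ref)
print([f(p) for p in mesh])  # maps onto the reference keypoints
```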

24-01-2019 publication date

TRAVERSAL SELECTION OF COMPONENTS FOR A GEOMETRIC MODEL

Number: US20190026941A1
Assignee: DreamWorks Animation LLC

Systems and methods for traversal selection of components of a geometric model are disclosed. An embodiment includes displaying a plurality of components corresponding to a geometric model, selecting a first component, receiving a first input indicating a first direction for selecting a next component, wherein the next component is connected to the first component by an edge, identifying one or more candidate edges connected to the first component for selecting the next component, determining an angle between an indicated direction vector corresponding to the indicated first direction and each of the one or more candidate edges, and selecting a second component as the next component, wherein the second component is connected to the first component via a particular candidate edge forming a smallest angle with the indicated direction vector. 1. A method for selecting components of a geometric model , the method comprising:displaying a plurality of components corresponding to a geometric model;selecting a first component of the displayed plurality of components;receiving a first input of a first next direction for selecting a next component of the plurality of components;identifying one or more candidate edges associated with the first component for selecting the next component;determining an angle between a next direction vector corresponding to the first next direction and each of the remaining one or more candidate edges;selecting a second component as the next component wherein the second component is associated with a particular candidate edge forming a smallest angle with the next direction vector among the remaining one or more candidate edges; anddisplaying an indication of the selection of the second component as the next component.2. 
The method of claim 1 , further comprising storing information of the second component in a memory claim 1 , wherein the stored information comprises an identifier of the second component claim 1 , the particular candidate edge ...
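The direction-based pick described above — among edges incident to the current component, choose the one forming the smallest angle with the user's input direction — can be sketched directly. The data layout (a positions dict keyed by component name) is an illustrative assumption.

```python
import math

# Sketch: pick the neighbour whose connecting edge best matches the
# indicated direction vector (smallest angle wins).

def angle_between(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def pick_next(current, neighbours, positions, direction):
    def edge_angle(n):
        edge = [positions[n][i] - positions[current][i] for i in range(2)]
        return angle_between(edge, direction)
    return min(neighbours, key=edge_angle)

positions = {'a': (0, 0), 'b': (1, 0), 'c': (0, 1), 'd': (-1, 0)}
print(pick_next('a', ['b', 'c', 'd'], positions, (0.9, 0.1)))  # 'b'
```

Pressing "right" (a direction of roughly (1, 0)) from component `a` selects `b`, the neighbour whose edge is most nearly rightward.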

25-01-2018 publication date

EMOTIONAL REACTION SHARING

Number: US20180027307A1
Assignee:

One or more computing devices, systems, and/or methods for emotional reaction sharing are provided. For example, a client device captures video of a user viewing content, such as a live stream video. Landmark points, corresponding to facial features of the user, are identified and provided to a user reaction distribution service that evaluates the landmark points to identify a facial expression of the user, such as a crying facial expression. The facial expression, such as landmark points that can be applied to a three-dimensional model of an avatar to recreate the facial expression, are provided to client devices of users viewing the content, such as a second client device. The second client device applies the landmark points of the facial expression to a bone structure mapping and a muscle movement mapping to create an expressive avatar having the facial expression for display to a second user. 1. A method of emotional reaction sharing , the method involving a computing device comprising a processor , and the method comprising: responsive to determining that a user is viewing content through a client device, initializing a camera of the client device to capture one or more frames of video of the user;', 'evaluating a first frame of the video to identify a set of facial features of the user;', 'generating a set of landmark points, within the first frame, representing the set of facial features; and', 'sending the set of landmark points to a user reaction distribution service for identifying a facial expression of the user, based upon the set of landmark points, for display through a second client device to a second user., 'executing, on the processor, instructions that cause the computing device to perform operations, the operations comprising2. The method of claim 1 , wherein the set of landmark points comprise coordinates of between about 4 landmark points to about 240 landmark points claim 1 , a landmark point specifying a location of a facial feature.3. The ...

28-01-2021 publication date

Spatial difference measurement

Number: US20210026323A1
Assignee: Hamilton Sundstrand Corp

A spatial difference measurement method, can include generating first key features of a first skeleton of a nominal 3D model of an object and extrapolating the first key features onto the nominal 3D model. The method can include creating an actual 3D model of the object during or after a construction process (real or simulated). The method can include generating second key features of a second skeleton of the actual 3D model of the object and extrapolating the second key features onto the actual 3D model of the object. The method can include comparing the first key features extrapolated on the nominal 3D model to the second key features extrapolated on the actual 3D model to determine one or more distances between the first and second key features to measure a spatial difference between the nominal 3D model and the object during or after construction.
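The comparison step of this method — pairing each nominal key feature with its as-built counterpart and measuring the distance between them — can be sketched as below. The pairing-by-index and the tolerance value are assumptions for illustration.

```python
import math

# Sketch: per-feature nominal-vs-actual distances, plus a tolerance check
# on the worst deviation. Uses exact 3-4-5 / 5-12-13 style distances.

def spatial_differences(nominal, actual):
    return [math.dist(a, b) for a, b in zip(nominal, actual)]

def within_tolerance(nominal, actual, tol=0.5):
    return max(spatial_differences(nominal, actual)) <= tol

nominal = [(0, 0, 0), (10, 0, 0)]
actual = [(3, 4, 0), (10, 0, 12)]
print(spatial_differences(nominal, actual))      # [5.0, 12.0]
print(within_tolerance(nominal, actual, tol=15))  # True
```

A real pipeline would first establish feature correspondence (e.g. nearest skeleton key feature), rather than assume matching indices.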

02-02-2017 publication date

CONTROL OF INTERACTIONS WITHIN VIRTUAL ENVIRONMENTS

Number: US20170028302A1

A method for restricting the number of consequential interactions to further virtual objects having a relationship with a first virtual object, resulting from an interaction with said first virtual object. The method comprises: defining a maximum number of consequential interactions, counting consequential interactions, and stopping further interaction when the maximum number of consequential interactions is reached. 1. A dedicated control element for controlling the functionality of three-dimensional virtual objects belonging to a set of three-dimensional virtual objects within a distributed three-dimensional virtual environment , said dedicated control element being associated with said environment , and comprising:a set identifier configured to determine whether a three-dimensional virtual object within said distributed three-dimensional virtual environment is a member of said set, anda set controller configured to process events associated with said member of said set, thereby to provide relationship-based control for members of said set.2. The dedicated control element of claim 1 , wherein said set is defined by a physical relationship within said environment.3. The dedicated control element of claim 2 , wherein said physical relationship comprises a one-way physical relationship with a further virtual object.4. The dedicated control element of claim 2 , wherein said physical relationship comprises a two-way physical relationship with a further virtual object.5. The dedicated control element of claim 2 , wherein said physical relationship comprises a one-way physical relationship with another member of said set claim 2 , or wherein said physical relationship comprises a two-way physical relationship with another member of said set.6. The dedicated control element of claim 2 , wherein said physical relationship comprises a mutual movement relationship claim 2 , such that said set is defined as the group of all virtual objects that move when a further virtual ...
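The cap on consequential interactions described in the method — define a maximum, count interactions as they propagate, stop at the limit — can be sketched as a bounded breadth-first propagation. The relationship graph and object names are illustrative assumptions.

```python
from collections import deque

# Sketch: propagate an interaction through related objects breadth-first,
# but stop once the defined maximum number of consequential interactions
# is reached.

def propagate(start, related, max_consequences):
    triggered, queue, count = [], deque(related.get(start, [])), 0
    while queue and count < max_consequences:
        obj = queue.popleft()
        triggered.append(obj)          # one consequential interaction
        count += 1
        queue.extend(related.get(obj, []))
    return triggered

related = {'door': ['hinge', 'frame'], 'hinge': ['pin'], 'frame': ['wall']}
print(propagate('door', related, 3))  # ['hinge', 'frame', 'pin']
```

With a cap of 3, the interaction with `door` reaches `hinge`, `frame`, and `pin`, and the `wall` consequence is never triggered.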

23-01-2020 publication date

Systems and Methods for Visualizing Garment Fit

Number: US20200027155A1
Assignee:

Systems and methods for visualizing garment fit are provided. In one embodiment, the method can include obtaining garment data descriptive of a garment and body data descriptive of a body. The method can further include simulating a garment deformation of the garment due to contact from the body, and determining a simulating a body deformation of the body due to contact from the garment. The method can further include providing a visualization of the garment on the body for display to a user, the visualization visualizing the garment deformation and the body deformation. 1. A computer-implemented method for visualizing garment fit , the method comprising:obtaining, by one or more computing devices, garment data descriptive of a garment and body data descriptive of a body;simulating, by the one or more computing devices, a garment deformation of the garment due to contact from the body;simulating, by the one or more computing devices, a body deformation of the body due to contact from the garment; andproviding, by the one or more computing devices, a visualization of the garment on the body for display to a user, the visualization visualizing the garment deformation and the body deformation.2. The computer-implemented method of claim 1 , further comprising:preparing, by the one or more computing devices, a garment model of the garment by stitching one or more garment panels along one or more stitch lines or curves; andpreparing, by the one or more computing devices, a body model of the body by discretizing a representation of the body.3. The computer-implemented method of claim 2 , wherein simulating claim 2 , by the one or more computing devices claim 2 , the garment deformation of the garment comprises:expanding, by the one or more computing devices, a spatially compressed representation of the body model to its original size inside the garment model; andsimulating, by the one or more computing devices, a deformation of the garment model using a modified finite ...

28-01-2021 publication date

TECHNIQUES FOR LABELING CUBOIDS IN POINT CLOUD DATA

Number: US20210027546A1
Assignee:

Techniques are disclosed for facilitating the labeling of cuboid annotations in point cloud data. User-drawn annotations of cuboids in point cloud data can be automatically adjusted to remove outlier points, add relevant points, and fit the cuboids to points representative of an object. Interpolation and object tracking techniques are also disclosed for propagating cuboids from frames designated as key frames to other frames. In addition, techniques are disclosed for, in response to user adjustment of the size of a cuboid in one frame, automatically adjusting the sizes of cuboids in other frames while anchoring a set of non-occluded faces of the cuboids. The non-occluded faces may be determined as the faces that are closest to a LIDAR (light detection and ranging) sensor in the other frames. 1. A computer-implemented method for facilitating data labeling , the method comprising:receiving a user-specified cuboid annotation associated with a first point cloud in a first frame, wherein the user-specified cuboid annotation comprises a three-dimension cuboid annotation; and identifying, based on the user-specified cuboid annotation, one or more points in the first point cloud representing an object included in the first frame, wherein the one or more points (i) include at least one point of the first point cloud that is outside the user-specified cuboid annotation, and/or (ii) do not include at least one point of the first point cloud that is within the user-specified cuboid annotation, and', 'determining a first adjusted cuboid annotation based on a fit of the first adjusted cuboid annotation to the one or more points representing the object., 'subsequent to receiving the user-specified cuboid annotation, automatically2. The computer-implemented method of claim 1 , wherein determining the first adjusted cuboid annotation includes optimizing a loss function.3. 
The computer-implemented method of claim 2 , wherein optimizing the loss function includes determining distances ...
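The automatic adjustment described above — discard outlier points, keep relevant ones, and fit the cuboid to the points representing the object — can be sketched with a crude median-based outlier filter followed by a shrink-fit bounding box. The filter threshold is an assumption; the patent's loss-function optimization is not reproduced here.

```python
import statistics

# Sketch: drop points far from the per-axis median (MAD-style filter),
# then shrink-fit an axis-aligned cuboid around the remaining inliers.

def fit_cuboid(points, k=3.0):
    med = [statistics.median(p[i] for p in points) for i in range(3)]
    mad = [statistics.median(abs(p[i] - med[i]) for p in points) or 1.0
           for i in range(3)]
    inliers = [p for p in points
               if all(abs(p[i] - med[i]) <= k * mad[i] for i in range(3))]
    lo = tuple(min(p[i] for p in inliers) for i in range(3))
    hi = tuple(max(p[i] for p in inliers) for i in range(3))
    return lo, hi

pts = [(0, 0, 0), (1, 1, 1), (2, 2, 2), (50, 50, 50)]  # last point is noise
print(fit_cuboid(pts))  # ((0, 0, 0), (2, 2, 2))
```

The stray point at (50, 50, 50) is rejected, so the fitted cuboid hugs the object's points rather than the user's loose annotation.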

17-02-2022 publication date

CHARACTERISATION OF CARDIAC DYSSYNCHRONY AND DYSSYNERGY

Number: US20220047868A1
Author: ODLAND Hans Henrik
Assignee:

A method for identifying reversible cardiac dyssynchrony (RCD) of a patient and treating the RCD measures an event relating to a rapid increase in the rate of pressure increase within the left ventricle. The method calculates a first time delay between the event and a first reference time. If the first time delay is longer than a set fraction of electrical activation of the heart, then the presence of cardiac dyssynchrony in the patient is identified. Pacing is applied to the heart, and a second time delay between the event following pacing and a second reference time following pacing is calculated. If the second time delay is shorter than the first time delay, the method identifies a shortening of a delay to onset of myocardial synergy, OoS, thereby identifying the presence of RCD in the patient. Treatment of the RCD is performed. 1.-22. (canceled) 23. A method of treating reversible cardiac dyssynchrony in a heart of a patient, the method comprising:determining a first onset of synergy by measuring a first time delay between a first reference time and a rapid increase in a pressure within a left ventricle of the heart;measuring a duration of electrical activation in the heart;determining the duration of electrical activation is longer than the first time delay;applying a pacing to the heart;determining a second onset of synergy after the pacing of the heart by measuring a second time delay between a second reference time and a rapid increase in the pressure within the left ventricle of the heart;determining the second time delay is shorter than the first time delay; andapplying a treatment pacing to the heart according to a pacing rhythm based on the second onset of synergy, to treat a cardiac dyssynchrony in the patient.24.
The method of claim 23 , wherein measuring the first time delay comprises:receiving data from one or more sensors indicative of an event relating to the rapid increase in a rate of pressure increase within the left ventricle identified in each ...
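The timing logic in these claims reduces to two comparisons: dyssynchrony is flagged when the delay to onset of synergy exceeds a set fraction of the electrical-activation duration, and reversibility when pacing shortens that delay. The 0.5 fraction and all millisecond values below are assumptions, not figures from the patent.

```python
# Toy illustration of the two timing comparisons (assumed fraction = 0.5).

def has_dyssynchrony(delay_ms, activation_ms, fraction=0.5):
    """Delay to onset of synergy vs. a set fraction of electrical activation."""
    return delay_ms > fraction * activation_ms

def is_reversible(delay_before_ms, delay_after_pacing_ms):
    """Pacing shortened the delay -> reversible cardiac dyssynchrony."""
    return delay_after_pacing_ms < delay_before_ms

print(has_dyssynchrony(90, 160))  # True  (90 ms > 0.5 * 160 ms)
print(is_reversible(90, 60))      # True  (pacing shortened the delay)
```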

05-02-2015 publication date

3D MODELING MOTION PARAMETERS

Number: US20150035820A1

According to an example, 3D modeling motion parameters may be simultaneously determined for video frames according to different first and second motion estimation techniques. In response to detecting a failure of the first motion estimation technique, the 3D modeling motion parameters determined according to the second motion estimation technique may be used to re-determine the 3D modeling motion parameters according to the first motion estimation technique. 1. A motion estimation computer to determine 3D modeling motion parameters , the motion estimation computer comprising:at least one processor to:simultaneously determine 3D modeling motion parameters for video frames according to different first and second motion estimation techniques, wherein in response to detecting a failure of the first motion estimation technique for a current frame of the frames, the at least one processor is to re-determine, according to the first motion estimation technique, the 3D modeling motion parameters for the current frame from the motion parameters determined according to the second motion estimation technique for a previous frame of the frames or the current frame; anda data storage to store the determined 3D modeling motion parameters.2. The motion estimation computer of claim 1 , wherein to determine the motion parameters according to the first motion estimation technique claim 1 , the at least one processor is to:determine depth measurements for the current frame of an environment;determine a point cloud from the depth measurements for the current frame;determine a reference point cloud from a 3D model of the environment generated from a previous frame;align the point cloud with the reference point cloud according to an iterative closest point function; anddetermine the 3D modeling motion parameters for the current frame according to the first motion estimation technique according to the alignment.3. The motion estimation computer of claim 2 , wherein the at least one ...
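The failover described in this abstract — run two estimators simultaneously, and when the primary (e.g. ICP-style) technique fails on a frame, re-determine its parameters from the secondary technique's result — can be sketched as below. The estimator functions are stand-ins for the two motion estimation techniques, not the actual implementations.

```python
# Sketch: per-frame motion estimation with reseed-from-secondary on failure.

def estimate_motion(frames, primary, secondary):
    params = []
    for frame in frames:
        p = primary(frame)
        if p is None:                      # primary failed on this frame
            p = primary(frame, seed=secondary(frame))
        params.append(p)
    return params

def primary(frame, seed=None):
    if frame.get('degenerate') and seed is None:
        return None                        # cannot track without a seed
    return seed if seed is not None else frame['motion']

def secondary(frame):
    return frame.get('coarse_motion', (0, 0, 0))

frames = [{'motion': (1, 0, 0)},
          {'degenerate': True, 'coarse_motion': (1, 1, 0)}]
print(estimate_motion(frames, primary, secondary))  # [(1, 0, 0), (1, 1, 0)]
```

The second frame is degenerate for the primary estimator, so its parameters come from reseeding with the secondary estimate, as in the abstract.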

05-02-2015 publication date

Medical Image Display Control Apparatus, Medical Image Display Control Method, and Medical Image Display Control Program

Number: US20150035829A1
Author: Miyamoto Masaki
Assignee:

A functional image generating section that generates a functional image that represents the functions of a subject, based on 3D medical image data obtained by imaging the subject; a projected 3D image generating section that generates a projected 3D image that represents the appearance of the subject, based on the 3D medical image data; a display control section that displays the functional image and the projected 3D image; and a specified position data obtaining section that obtains position data regarding a position specified within the functional image, are provided. The projected 3D image generating section generates the projected 3D image which is projected in a projection direction such that a position within the projected 3D image corresponding to the specified position faces forward, based on the position data. The display control section displays the projected 3D image having the projection direction. 1. A medical image display control apparatus , comprising:a functional image generating section that generates a functional image that represents the functions of a subject, based on three dimensional medical image data obtained by imaging the subject;a projected three dimensional image generating section that generates a projected three dimensional image that represents the appearance of the subject, based on three dimensional medical image data obtained by imaging the subject;a display control section that displays the functional image and the projected three dimensional image on a display section; anda specified position data obtaining section that obtains data regarding a predetermined position which is specified within the functional image which is being displayed on the display section;the projected three dimensional image generating section generating the projected three dimensional image which is projected in a projection direction such that a position within the projected three dimensional image corresponding to the specified predetermined position 
...

05-02-2015 publication date

CAMERA SYSTEMS AND METHODS FOR USE IN ONE OR MORE AREAS IN A MEDICAL FACILITY

Number: US20150035942A1
Assignee:

A method of monitoring an object during a medical process includes: using one or more cameras to obtain information regarding an actual three dimensional configuration of an object involved in a medical process; obtaining a three-dimensional model of the object representing a geometry of the object; obtaining a movement model of the object; and processing the information, the three-dimensional model, and the movement model to monitor the object during the medical process, wherein the act of processing is performed using a processing unit. 1. A method of monitoring an object during a medical process , comprising:using one or more cameras to obtain information regarding an actual three dimensional configuration of an object involved in a medical process;obtaining a three-dimensional model of the object representing a geometry of the object;obtaining a movement model of the object; andprocessing the information, the three-dimensional model, and the movement model to monitor the object during the medical process, wherein the act of processing is performed using a processing unit.2. The method of claim 1 , wherein the one or more cameras comprise a depth sensing camera.3. The method of claim 1 , wherein the one or more cameras comprise a plurality of cameras claim 1 , and the information comprises a three-dimensional rendering of the object obtained using images from the cameras.4. The method of claim 1 , wherein the act of processing comprises creating an expected three-dimensional configuration of the object for a given time using the three-dimensional model and the movement model of the object.5. The method of claim 4 , wherein the act of processing further comprises comparing the expected three-dimensional configuration of the object with a three-dimensional rendering of the object obtained using images from the one or more cameras.6. 
The method of claim 1 , wherein the movement model indicates a degree of freedom claim 1 , a trajectory claim 1 , or both claim 1 , ...

01-02-2018 publication date

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM

Number: US20180033195A1
Assignee:

[Object] To provide an information processing apparatus, information processing method, and program capable of providing a more intuitive operational environment of 3DCG application. 1. An information processing apparatus comprising:a generation unit configured to generate display control information for displaying a virtual space on a basis of first operation information detected about a first real object corresponding to a virtual object in the virtual space and second operation information detected about a second real object corresponding to a virtual tool in the virtual space.2. The information processing apparatus according to claim 1 ,wherein the first operation information includes information indicating position and posture of the first real object, and the second operation information includes information indicating position and posture of the second real object.3. The information processing apparatus according to claim 2 ,wherein the generation unit generates the display control information for projection mapping to the first real object.4. The information processing apparatus according to claim 3 ,wherein the generation unit generates the display control information for controlling an image projected from a projector on a basis of a recognition result of a three-dimensional shape of the first real object.5. The information processing apparatus according to claim 3 ,wherein the generation unit generates the display control information for performing display in accordance with a texture projection-mapped to the first real object.6. The information processing apparatus according to claim 2 ,wherein the generation unit generates the display control information reflecting a parameter related to a working of the virtual tool.7. The information processing apparatus according to claim 6 ,wherein the generation unit generates the display control information for displaying the parameter at a place related to the second real object.8. The information processing ...

01-02-2018 publication date

METHOD OF GRAPHICAL MANAGEMENT OF THE SYMBOLOGY IN A THREE-DIMENSIONAL SYNTHETIC VIEW OF THE EXTERIOR LANDSCAPE IN AN ON-BOARD VIEWING SYSTEM FOR AN AIRCRAFT

Number: US20180033207A1
Assignee:

A method for managing a symbology in an on-board viewing system for an aircraft, the graphical representation comprising the piloting and navigation symbology overlaid on a representation of the exterior landscape, the symbology comprises a first angular attitude scale comprising a first symbol called an aircraft mockup, a second speed scale, a third altitude scale and a second symbol called the speed vector. When the angular lateral distance between the position of the first symbol and the position of the second symbol is such that the second symbol is not overlaid on the lateral scales, the various scales are represented in the nominal position; when the angular lateral distance between the position of the first symbol and the position of the second symbol is such that the second symbol is overlaid on one of the scales, the various scales move and/or their size decreases. 1. A method of graphical management of a symbology in a three-dimensional synthetic view of the exterior landscape displayed in an on-board viewing system for an aircraft , the said viewing system comprising a graphical calculator ensuring the graphical management of the symbols and a viewing screen , the graphical representation displayed on the said viewing screen and comprising the symbology representative of the information items for piloting and for navigating the said aircraft which are overlaid on a three-dimensional synthetic representation of the exterior landscape , the said symbology essentially comprising a first angular attitude scale comprising a first symbol called an aircraft mockup represented in conformal position , a second vertical speed scale , a third vertical altitude scale , a second symbol called the speed vector represented in conformal position , wherein:when the angular lateral distance between the position of the first symbol and the position of the second symbol is such that the second symbol is overlaid neither on the second scale nor on the third scale, the first 
...

17-02-2022 publication date

SENSOR CALIBRATION VIA EXTRINSIC SCANNING

Number: US20220050934A1
Assignee:

The subject technology provides solutions for performing extrinsic sensor calibration for vehicle sensors, such as environmental sensors deployed in an autonomous vehicle (AV) context. In some aspects, the disclosed technology relates to a sensor localization system that is configured to: perform a scan, using a 3D scanner, to collect surface data associated with an autonomous vehicle (AV), analyze the surface data to identify a coordinate origin of the AV, and calculate a position of at least one or more AV sensors based on the surface data. Methods and computer-readable media are also provided. 1. A sensor localization system, comprising:one or more processors;a three-dimensional (3D) scanner coupled to the one or more processors; anda computer-readable medium coupled to the processors, the computer-readable medium comprising instructions stored therein, which when executed by the processors, cause the processors to perform operations comprising:performing a scan, using the 3D scanner, to collect surface data associated with an autonomous vehicle (AV);analyzing the surface data to identify a coordinate origin of the AV; andcalculating a position of at least one or more AV sensors based on the surface data by determining a coordinate location of the at least one of the one or more AV sensors based on the coordinate origin of the AV.2. (canceled)3. The sensor localization system of claim 1, wherein analyzing the surface data to identify the coordinate origin for the AV further comprises:automatically identifying one or more known vehicle geometries.4. The sensor localization system of claim 1, wherein performing the scan further comprises:receiving a plurality of images of the AV surface; andstitching the plurality of images to generate a three-dimensional image of the AV surface.5.
The sensor localization system of claim 1 , wherein the surface data further includes computer aided design (CAD) modeling information for the one or more ...
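The final calibration step — once the scan fixes the AV's coordinate origin in the scanner frame, each sensor's scanner-frame position is re-expressed relative to that origin — can be sketched as a translation. A real system would also solve for rotation between frames; all coordinates below are illustrative assumptions.

```python
# Sketch: re-express sensor positions (scanner frame) relative to the
# AV coordinate origin located in the scan. Translation-only for brevity.

def to_vehicle_frame(sensor_scanner_xyz, origin_scanner_xyz):
    return tuple(s - o for s, o in zip(sensor_scanner_xyz, origin_scanner_xyz))

origin = (5.0, 2.0, 0.5)   # AV coordinate origin located in the scan
lidar = (5.0, 2.0, 2.5)    # roof lidar, scanner frame
camera = (7.0, 2.0, 1.5)   # front camera, scanner frame

print(to_vehicle_frame(lidar, origin))   # (0.0, 0.0, 2.0)
print(to_vehicle_frame(camera, origin))  # (2.0, 0.0, 1.0)
```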

17-02-2022 publication date

SYSTEM AND METHOD FOR LOCATION DETERMINATION USING A MIXED REALITY DEVICE AND A 3D SPATIAL MAPPING CAMERA

Number: US20220051483A1
Assignee:

A system and method for determining a location for a surgical jig in a surgical procedure includes providing a mixed reality headset, a 3D spatial mapping camera, and a computer system configured to transfer data to and from the mixed reality headset and the 3D spatial mapping camera. The system and method also include attaching a jig to a bone, mapping the bone and jig using the 3D spatial mapping camera, and then identifying a location for the surgical procedure using the computer system. Then the system and method use the mixed reality headset to provide a visualization of the location for the surgical procedure. 1. A method for determining a location for a surgical jig in a surgical procedure, comprising:providing a mixed reality headset;providing a 3D spatial mapping camera;providing a computer system configured to transfer data to and from the mixed reality headset and the 3D spatial mapping camera;attaching a jig to a bone;mapping the bone and jig using the 3D spatial mapping camera;identifying a location for the surgical procedure using the computer system; andusing the mixed reality headset to provide a visualization of the location for the surgical procedure.2. The method of claim 1, further comprising:using the mixed reality headset to provide a virtual jig and a virtual bone, which are representations of the jig and bone; andmanipulating the location of the virtual jig as viewed in the mixed reality headset, with respect to the virtual bone.3. The method of claim 2, further comprising:fixing the location of the virtual jig as viewed in the mixed reality headset, with respect to the virtual bone; andmanipulating the location of the jig with respect to the bone, to substantially match the location of the virtual jig with respect to the virtual bone.4. The method of claim 1, wherein the jig includes a visual label, wherein the visual label is configured to be scanned by the mixed reality headset to provide data relating to the jig.5.
The method ...
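Claim 3's final step, bringing the physical jig to the fixed virtual jig location, implies a pose-comparison check between the tracked jig and the virtual jig. A minimal sketch, assuming hypothetical `pose_matches` helper and tolerance values that are not part of the patent:

```python
import numpy as np

def pose_matches(virtual_pose, tracked_pose, pos_tol=0.002, ang_tol_deg=1.0):
    """Hypothetical check that the physical jig (as tracked by the 3D
    spatial mapping camera) has reached the fixed virtual jig location.
    Each pose is (R, t): a 3x3 rotation matrix and a 3-vector position."""
    R_v, t_v = virtual_pose
    R_t, t_t = tracked_pose
    pos_err = np.linalg.norm(t_v - t_t)             # metres
    # Angle of the relative rotation R_v^T R_t, recovered from its trace
    cos_a = (np.trace(R_v.T @ R_t) - 1.0) / 2.0
    ang_err = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return bool(pos_err <= pos_tol and ang_err <= ang_tol_deg)
```

The headset could run such a test each frame and change the visualization's colour once the match is within tolerance.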

Read more
17-02-2022 publication date

METHOD AND SYSTEM OF ESTABLISHING DENTURE DENTITION MODEL

Number: US20220051486A1
Author: LIN Wei-Po
Assignee:

A method for establishing a denture dentition model includes: obtaining face information and teeth alignment information of a user, to establish a three-dimensional mouth-opening face model and an original teeth model, and superimposing the original teeth model to the three-dimensional mouth-opening face model; establishing a reference line corresponding to each of the teeth of the original teeth model; generating a plurality of grids on the three-dimensional mouth-opening face model according to the reference lines, and adjusting each of the grids to correspond to the edge of each tooth, to obtain an actual size of each of the teeth; providing an upper edge curve and a lower edge curve on the three-dimensional mouth-opening face model as a smile curve; aligning each grid with the smile curve, and placing a denture model in each grid; and adjusting a denture contour of each denture model, to generate the denture dentition model. 1. A method for establishing a denture dentition model , comprising:obtaining face information of a user, to establish a three-dimensional mouth-opening face model;obtaining teeth alignment information of the user, to establish an original teeth model, and superimposing the original teeth model to the three-dimensional mouth-opening face model;establishing a reference line corresponding to each of a plurality of teeth of the original teeth model;generating a plurality of grids on the three-dimensional mouth-opening face model according to the reference lines, and adjusting each of the grids to correspond to the edge of each tooth, so as to obtain an actual size of each of the teeth;providing an upper edge curve and a lower edge curve on the three-dimensional mouth-opening face model as a smile curve;aligning each of the grids with the smile curve, and placing a denture model in each of the grids; andadjusting a denture contour of each of the denture models, to generate the denture dentition model.2. The method for establishing a denture ...
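The upper and lower edge curves that make up the smile curve can be sketched as least-squares fits through sampled lip-edge points on the face model. A hypothetical illustration with NumPy; the polynomial form and the sample coordinates are assumptions, not the patent's method:

```python
import numpy as np

def fit_edge_curve(edge_points, degree=2):
    """Least-squares polynomial through sampled lip-edge points (x, y);
    an illustrative stand-in for the patent's upper/lower edge curves."""
    xs, ys = edge_points[:, 0], edge_points[:, 1]
    return np.poly1d(np.polyfit(xs, ys, degree))

# Hypothetical points sampled along the upper lip edge:
upper = fit_edge_curve(np.array([[-2.0, 1.2], [0.0, 1.0], [2.0, 1.2]]))
```

Each grid's vertical extent could then be aligned so that its top and bottom follow `upper(x)` and the corresponding lower curve at the grid's x position.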

Read more
04-02-2021 publication date

METHODS AND SYSTEMS FOR IDENTIFYING MATERIAL COMPOSITION OF MOVING OBJECTS

Number: US20210033533A1
Author: Buchter Scott
Assignee: Outsight

A method for identifying a composition material of an object located in an environment surrounding at least one device, the object moving relative to the device, in which at least one sensor is mounted on the device and communicates with at least one central processing unit. 1. A method for identifying a composition material of an object located in an environment surrounding at least one device, the object moving relative to the device, in which at least one sensor is mounted on the device and communicates with at least one central processing unit, wherein: /A/ the sensor generates a point cloud frame of a continuous stream by emitting a physical signal at a first wavelength, wherein the point cloud frame comprises a set of data points, at least one data point comprising coordinates of the object in a local volume surrounding the sensor at time t_j in a local coordinate system of the sensor, said data point also comprising an intensity value of a reflected physical signal corresponding to the emitted physical signal once reflected on the object; /B/ the central processing unit receives the point cloud frame and determines the coordinates of each data point of the point cloud frame in a global coordinate system of the environment surrounding the device, the intensity value being associated with the coordinates of each data point in the global coordinate system; /C/ the central processing unit determines the coordinates of the object at time t_j in a global coordinate system; /D/ the central processing unit stores in a memory the coordinates of the object in the global coordinate system at time t_j; steps /A/ to /D/ are repeated with the sensor or another sensor generating another point cloud frame by emitting another physical signal at another wavelength, at time t_j+1, so that at least two intensity values are associated to coordinates of the object in the global coordinate system at two different times; /E/ the ...
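Step /B/'s local-to-global conversion and the two-wavelength comparison can be illustrated as follows; the pose values, data-point layout, and intensity figures are invented for the sketch and are not from the patent:

```python
import numpy as np

def to_global(p_local, R, t):
    """Sensor-frame point -> global frame, given the device pose (R, t)
    at the frame's timestamp (step /B/ sketch)."""
    return R @ p_local + t

# Hypothetical device pose and one data point per emitted wavelength:
R = np.eye(3)                      # device orientation in the global frame
t = np.array([10.0, 0.0, 0.0])     # device position in the global frame
p_w1 = {"xyz": np.array([1.0, 2.0, 0.5]), "intensity": 0.62}  # wavelength 1
p_w2 = {"xyz": np.array([1.0, 2.0, 0.5]), "intensity": 0.31}  # wavelength 2

g = to_global(p_w1["xyz"], R, t)
# The ratio of reflected intensities at the two wavelengths forms a simple
# spectral signature that could be matched against known materials.
signature = p_w1["intensity"] / p_w2["intensity"]
```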

Read more
31-01-2019 publication date

METHOD AND SYSTEM FOR SURGICAL PLANNING IN A MIXED REALITY ENVIRONMENT

Number: US20190035156A1
Assignee:

The present teaching relates to a method and system for aligning a virtual anatomic model. The method generates a virtual model of an organ of a patient, wherein the virtual model includes at least three virtual markers. A number of virtual spheres equal to the number of virtual markers are generated, wherein the virtual spheres are disposed on the virtual model of the organ of the patient and associated with the virtual markers. A first position of the virtual spheres and the virtual markers is recorded. The virtual spheres are placed to coincide with physical markers disposed on the patient and a second position of the virtual spheres is recorded. A transformation of the virtual spheres and the virtual markers based on the first and second positions is computed and the virtual model of the organ is aligned with the patient based on the computed transformation.
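The transformation computed from the first and second marker positions is in effect a rigid-body fit between two point sets; a common way to compute such a fit is the Kabsch algorithm, sketched below. This is an illustrative stand-in with invented marker coordinates, not necessarily the patent's own computation:

```python
import numpy as np

def rigid_transform(virtual_pts, physical_pts):
    """Kabsch fit: rotation R and translation t mapping virtual marker
    positions onto physical marker positions (N >= 3 correspondences,
    both inputs (N, 3) arrays)."""
    cv = virtual_pts.mean(axis=0)                   # centroid, virtual set
    cp = physical_pts.mean(axis=0)                  # centroid, physical set
    H = (virtual_pts - cv).T @ (physical_pts - cp)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cp - R @ cv
    return R, t

# Hypothetical first (virtual) and second (physical) recorded positions:
virtual = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90 deg about z
physical = virtual @ R_true.T + np.array([2., 3., 0.])
R, t = rigid_transform(virtual, physical)
aligned = virtual @ R.T + t   # virtual model aligned with the patient
```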

Read more
04-02-2021 publication date

SIX DEGREE OF FREEDOM TRACKING WITH SCALE RECOVERY AND OBSTACLE AVOIDANCE

Number: US20210034144A1
Assignee:

A virtual reality or mixed reality system configured to perform object detection using a monocular camera. The system is configured to make the user aware of the detected objects by showing edges or lines of the object within a virtual scene. Thus, the user is able to avoid injury or collision while immersed in the virtual scene. In some cases, the system may also detect and correct for drift in the six degree of freedom pose of the user using corrections based on the current motion of the user. 1. A method comprising:detecting an edge of a physical object within a physical environment from one or more images captured by a virtual reality or mixed reality system;determining a six degree of freedom pose associated with a user;determining the user is within a threshold distance from the physical object; anddisplaying the edge of the physical object within a virtual scene being presented to the user.2. The method as recited in claim 1 , wherein detecting the edge of the physical object includes:detecting a first line segment based on a first color gradient within the one or more images;detecting a second line segment based on a second color gradient within the one or more images;merging the first line segment with the second line segment into the edge based at least in part on a similarity of the first color gradient to the second color gradient; andlocating the edge within a three-dimensional model of the physical environment.3. The method as recited in claim 2 , wherein detecting the edge of the physical object further comprises adjusting the location of the edge within the three-dimensional model.4. 
The method as recited in claim 1 , wherein detecting the edge of the physical object includes:detecting a first edgelet from the one or more images;detecting a second edgelet from the one or more images;detecting a continuous gradient between the first edgelet and the second edgelet;joining the first edgelet with the second edgelet into a joined edgelet based at least in ...
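The threshold-distance test of claim 1 can be sketched as a nearest-point check between the user's tracked position and points sampled along a detected edge. The helper name, sampling, and threshold value are assumptions for illustration:

```python
import numpy as np

def should_display_edge(user_pos, edge_pts, threshold=0.75):
    """True when the user's tracked position is within `threshold` metres
    of any point sampled along a detected physical edge (hypothetical
    helper; the patent does not fix a sampling scheme or threshold)."""
    return bool(np.linalg.norm(edge_pts - user_pos, axis=1).min() <= threshold)

# An edge sampled as two points half a metre in front of the user:
edge = np.array([[0.5, 0.0, 0.0], [1.5, 0.0, 0.0]])
```

When the check passes, the renderer would draw the edge's line segments into the virtual scene so the user can steer clear of the obstacle.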

Read more
30-01-2020 publication date

MARKER-BASED AUGMENTED REALITY AUTHORING TOOLS

Number: US20200035033A1
Assignee: NANT HOLDINGS IP, LLC

An augmented reality-based content authoring tool is presented. A content author arranges machine-recognizable markers in a physical environment. A computing device operating as the authoring tool recognizes the markers and their arrangement based on a captured digital representation of the physical environment. Once recognized, augmented reality primitives corresponding to the markers can be bound together via their primitive interfaces to give rise to a content set. The individual primitives and content set are instantiated based on the nature of the marker's arrangement. 1.-27. (canceled) 28. A content authoring tool comprising:at least one processor;at least one non-transitory computer readable memory for storing software instructions executable by said at least one processor to execute operations comprising:receiving a digital representation of a marker set comprising a plurality of markers in a physical environment;identifying at least one marker's identity in the marker set; andidentifying at least one inter-connectable object primitive for the at least one marker in the marker set from the at least one marker's identity;applying a content creation rule from available rules based on a device context;obtaining a set of content primitives comprising the at least one inter-connectable object primitive for the at least one marker in the marker set, each object primitive including a primitive interface;generating a content set by positionally coupling at least two primitives in the set of content primitives via their primitive interfaces according to the content creation rule and a physical environment of the at least one marker corresponding to the at least two primitives; andcausing a device to present the content set on a display.29. The tool of claim 28 , wherein markers in the marker set comprise physical cards.30. The tool of claim 28 , wherein markers in the marker set comprise three dimensional objects.31. 
The tool of claim 28 , wherein markers in the marker ...

Read more
30-01-2020 publication date

Virtual display method, device, electronic apparatus and computer readable storage medium

Number: US20200035037A1
Author: Ran Wang
Assignee: BOE Technology Group Co Ltd

A virtual display method, device, electronic apparatus, and computer readable storage medium are provided. The virtual display method includes: obtaining a first image including information on a first target which includes at least one of a shoe, a piece of clothing, and an accessory; extracting the information on the first target from the first image to generate a second image; and photographing a second target with the second image as a foreground to obtain and display a third image including the information on the first target and the information on the second target which includes a human body.
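The "second image as a foreground" step amounts to per-pixel alpha compositing of the extracted cutout over the newly photographed frame. A minimal NumPy sketch, assuming the cutout is stored as RGBA with transparency outside the first target:

```python
import numpy as np

def composite(foreground_rgba, background_rgb):
    """Per-pixel alpha blend of the extracted first-target cutout (RGBA)
    over a freshly photographed frame of the second target (RGB)."""
    alpha = foreground_rgba[..., 3:4].astype(float) / 255.0
    blended = alpha * foreground_rgba[..., :3] + (1.0 - alpha) * background_rgb
    return blended.astype(np.uint8)

# 1x1 example: an opaque red cutout pixel over a dark background pixel.
fg = np.zeros((1, 1, 4), dtype=np.uint8)
fg[..., 0] = 255   # red channel
fg[..., 3] = 255   # fully opaque
bg = np.full((1, 1, 3), 10, dtype=np.uint8)
out = composite(fg, bg)
```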

Read more
30-01-2020 publication date

An Interactive Implementation Method for Mobile Terminal Display of 3D Model

Number: US20200035038A1
Author: LI Tao, XIA Yuxiang
Assignee:

An interactive implementation method for mobile terminal display of 3D model, comprising the following steps of: 1. acquiring 3D model data from the data center, then optimizing the 3D model for reduction and putting the 3D model into the specified folder; 2. checking if the current environment is correct. If not, setting the environment; 3. importing into different layer areas according to different categories of 3D models; 4. creating a Point Light for each area, and naming the Point Light by which space to enter; 5. conducting material adaptation, generating a normal map for the 3D model and placing the normal map in the normal map channel; 6. determining the mapping specification and mapping the 3D model in the import file; 7. importing the model upload background page into U3D, selecting the 3D model to be exported, calling up the corresponding upload option, generating a Prefabs file from the 3D model and then uploading. The advantage of the method is that it realizes mobile terminal display of a 3D model with a simple process and a better display effect. 1. An interactive implementation method for mobile terminal display of 3D model , is characterized by the following steps:I. Acquiring 3D model data from the data center, then optimizing the 3D model for reduction and putting the model category into the designated folder according to the model type;II. Checking if the current environment is correct. If not correct, setting the environment;III. Importing into different layer areas according to different categories of 3D models;IV. Creating a Point Light for each area, and naming the Point Light by which space to enter;V. Conducting material adaptation, firstly creating a [material library] and loading the [material library] to the project file; the Material in the imported FBX file adapts the shader and related parameters used by the Material with the same name in the [material library]. A normal map for the 3D model is generated and put into the normal map channel;VI. 
...

Read more
31-01-2019 publication date

SYSTEMS AND METHODS FOR VIDEO ANALYSIS RULES BASED ON MAP DATA

Number: US20190037179A1
Assignee:

Systems, methods and computer-readable media for creating and using video analysis rules that are based on map data are disclosed. A sensor(s), such as a video camera, can track and monitor a geographic location, such as a road, pipeline, or other location or installation. A video analytics engine can receive video streams from the sensor, and identify a location of the imaged view in a geo-registered map space, such as a latitude-longitude defined map space. A user can operate a graphical user interface to draw, enter, select, and/or otherwise input on a map a set of rules for detection of events in the monitored scene, such as tripwires and areas of interest. When tripwires, areas of interest, and/or other features are approached or crossed, the engine can perform responsive actions, such as generating an alert and sending it to a user. 1. A computer implemented method to generate video analysis rules , comprising:displaying, on a map based user interface, a map of a geographic area, at least a portion of the geographic area covered by a video sensor;accepting, via the map based user interface, a rule-representing feature that is placed in a specific location on the map by a user;generating a video analysis rule based on the rule representing feature and the specific location on the map, wherein the video analysis rule is expressed in a geo-registered map space;receiving video from the video sensor; andapplying the video analysis rule to the video from the video sensor to detect an event.2. The method of claim 1 , further comprising generating a notification on detection of the event.3. The method of claim 1 , wherein the rule-representing feature is a tripwire.4. The method of claim 1 , wherein the rule-representing feature is an area of interest.5. The method of claim 1 , wherein the geo-registered map space is defined in coordinates that are latitude and longitude coordinates.6. The method of claim 1 , wherein the video provided by the sensor ...
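At detection time, a tripwire rule in a latitude-longitude map space reduces to a 2D segment-intersection test between a tracked object's displacement and the drawn tripwire. An illustrative sketch using a standard orientation test; this is not the patent's implementation:

```python
def crosses_tripwire(p, q, a, b):
    """True when the tracked object's move from p to q strictly crosses
    the tripwire segment a-b; all points are (lat, lon) pairs in the
    geo-registered map space."""
    def orient(o, u, v):
        # Sign of the cross product (u - o) x (v - o): which side v lies on.
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    return (orient(p, q, a) * orient(p, q, b) < 0 and
            orient(a, b, p) * orient(a, b, q) < 0)
```

A crossing detected this way would trigger the engine's responsive action, such as sending an alert to the user.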

Read more
04-02-2021 publication date

IMAGE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM

Number: US20210035362A1
Assignee:

An image processing method and apparatus, and a computer-readable storage medium are provided. The method includes: determining a first region matching a target object in a first image; determining a deformation parameter based on a preset deformation effect, the deformation parameter being used for determining a position deviation, generated based on the preset deformation effect, of each pixel point of the target object; and performing deformation processing on the target object in the first image based on the deformation parameter to obtain a second image. 1. An image processing method , comprising:determining a first region matching a target object in a first image;determining a deformation parameter based on a preset deformation effect, the deformation parameter being used for determining a position deviation, generated based on the preset deformation effect, for each pixel point of the target object; andperforming deformation processing on the target object in the first image based on the deformation parameter to obtain a second image.2. The method according to claim 1 , wherein determining the first region matching the target object in the first image comprises:forming a first mesh corresponding to the target object, the first mesh matching the first region.3. The method according to claim 1 , wherein the deformation parameter is a deformed pixel matrix claim 1 , and each parameter in the deformed pixel matrix is used for determining a position deviation claim 1 , generated based on the preset deformation effect claim 1 , for a corresponding pixel point of the target object.4. The method according to claim 1 , wherein determining the first region matching the target object in the first image comprises:determining positions of feature points of the target object in the first image; anddetermining the first region based on relative positions between the feature points.5. 
The method according to claim 1 , wherein determining the deformation parameter based on ...
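Applying a deformed pixel matrix can be sketched as a displacement-field warp: each output pixel samples the source image at its own position plus the per-pixel deviation. A minimal nearest-neighbour sketch; `dx` and `dy` are hypothetical displacement fields standing in for the patent's deformation parameter:

```python
import numpy as np

def apply_deformation(image, dx, dy):
    """Warp `image` with a deformed pixel matrix: dx and dy hold the
    per-pixel position deviation (nearest-neighbour sampling sketch)."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(xs + dx, 0, w - 1).astype(int)  # clamp to the image
    src_y = np.clip(ys + dy, 0, h - 1).astype(int)
    return image[src_y, src_x]

img = np.arange(12).reshape(3, 4)
zero = np.zeros((3, 4))
identity = apply_deformation(img, zero, zero)   # zero deviation: unchanged
```

In practice the deviation would be non-zero only inside the first region matching the target object, so pixels outside it stay fixed.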

Read more