Total found: 4272. Displayed: 100.
05-01-2012 publication date

Method for generating shadows in an image

Number: US20120001911A1
Assignee: Thomson Licensing SAS

To generate shadows in an image, the method comprises the steps of: computing a depth map that comprises an array of pixels, wherein each pixel in the depth map is associated with a single depth value that indicates the depth from a light source to the portion of the nearest occluding object visible through the pixel; projecting a point visible through a pixel of said image into a light space, the result of said projection being a pixel of said depth map; calculating the distance between said visible point and the light source; fetching the depth value associated with said pixel of the depth map; computing, for said pixel of said image, an adaptive bias as a function of a predetermined base bias and a relationship between the normal of the surface on which said visible point is located and the incident light direction at said visible point; comparing, for said pixel in the image, the distance between said visible point and the light source with the sum of the corresponding depth map value and said adaptive bias; and labelling said point visible through said pixel as lit or shadowed according to said comparison.
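
Read literally, the comparison step lends itself to a short sketch. This is a minimal Python rendering of the lit/shadowed test, assuming one particular adaptive-bias function (the base bias scaled by the inverse of the normal/light cosine), which the abstract does not spell out:

```python
import numpy as np

def is_lit(point, light_pos, light_dir_to_point, surface_normal,
           depth_map_value, base_bias):
    """Shadow-map test with an adaptive bias, as sketched in the abstract.

    The exact bias function is an assumption: the base bias is scaled by
    1 / max(eps, n.l), so grazing light angles get a larger bias.
    """
    distance_to_light = np.linalg.norm(point - light_pos)
    n_dot_l = np.dot(surface_normal, -light_dir_to_point)  # cosine to the light
    adaptive_bias = base_bias / max(1e-4, n_dot_l)
    # Lit if the point is not farther from the light than the nearest occluder
    # recorded in the depth map, within the bias tolerance.
    return distance_to_light <= depth_map_value + adaptive_bias
```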

03-05-2012 publication date

Image Viewing Application And Method For Orientationally Sensitive Display Devices

Number: US20120105436A1
Author: Dorian Averbuch
Assignee: SuperDimension Ltd

A system and method for presenting three-dimensional image volume data utilizing an orientationally-sensitive display device whereby the image volume is navigable simply by tilting, raising and lowering the display device. Doing so presents an image on the screen that relates to the angle and position of the display device such that the user gets the impression that the device itself is useable as a window into the image volume, especially when the device is placed on or near the source of the image data, such as a patient.

04-10-2012 publication date

System for the rendering of shared digital interfaces relative to each user's point of view

Number: US20120249591A1
Assignee: Qualcomm Inc

A head mounted device provides an immersive virtual or augmented reality experience for viewing data and enabling collaboration among multiple users. Rendering images in a virtual or augmented reality system may include capturing an image and spatial data with a body mounted camera and sensor array, receiving input indicating a first anchor surface, calculating parameters with respect to the body mounted camera and displaying a virtual object such that the virtual object appears anchored to the selected first anchor surface. Further rendering operations may include receiving a second input indicating a second anchor surface within the captured image that is different from the first anchor surface, calculating parameters with respect to the second anchor surface and displaying the virtual object such that the virtual object appears anchored to the selected second anchor surface and moved from the first anchor surface.

03-01-2013 publication date

Ray tracing system architectures and methods

Number: US20130002672A1

Aspects comprise systems implementing 3-D graphics processing functionality in a multiprocessing system. Control flow structures are used in scheduling instances of computation in the multiprocessing system, where different points in the control flow structure serve as points where deferral of some instances of computation can be performed in favor of scheduling other instances of computation. In some examples, the control flow structure identifies particular tasks, such as intersection testing of a particular portion of an acceleration structure, and a particular element of shading code. In some examples, the aspects are used in 3-D graphics processing systems that can perform ray tracing based rendering.

15-08-2013 publication date

Routing virtual area based communications

Number: US20130212228A1
Assignee: Social Communications Co

In association with a virtual area, a first network connection is established with a first network node present in the virtual area and a second network connection is established with a second network node present in the virtual area. Based on stream routing instructions, a stream router is created between the first network node and the second network node. The stream router includes a directed graph of processing elements operable to receive network data, process the received network data, and output the processed network data. On the first network connection, an input data stream derived from output data generated by the first network node is received in association with the virtual area. The input data stream is processed through the stream router to produce an output data stream. On the second network connection, the output data stream is sent to the second network node.
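As a rough illustration of the stream-router idea, the sketch below models the simplest directed graph of processing elements, a linear chain, through which received network data is processed and re-emitted; all names are illustrative, not from the patent:

```python
from typing import Callable, Iterable

# A processing element transforms a chunk of network data.
ProcessingElement = Callable[[bytes], bytes]

class StreamRouter:
    """Directed pipeline of processing elements between two network nodes."""

    def __init__(self, elements: Iterable[ProcessingElement]):
        self.elements = list(elements)

    def route(self, input_stream: Iterable[bytes]) -> Iterable[bytes]:
        # Receive network data, process it, and output the processed data.
        for chunk in input_stream:
            for element in self.elements:
                chunk = element(chunk)
            yield chunk

# Example: a router that uppercases text chunks before forwarding them.
router = StreamRouter([lambda b: b.upper()])
print(list(router.route([b"hello", b"virtual area"])))
```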

24-10-2013 publication date

Techniques for enhancing multiple view performance in a three dimensional pipeline

Number: US20130278599A1
Author: Lili Gong, Xianchao Xu
Assignee: Intel Corp

Techniques may be directed to enhancing multiple view performance in a three dimensional pipeline. A plurality of view transformations associated with an image may be received. The vertex data associated with the image may be received. Operation data may be determined by performing the view transformations on the compiled vertex data. A plurality of display lists may be determined through a single run of a vertex pipeline. A display list may be based on the operation data. Other embodiments are described and claimed.
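A plausible reading of the single-pass, multi-view idea in plain Python (the real pipeline runs on GPU vertex hardware; the matrices and "display lists" here are stand-ins):

```python
import numpy as np

def build_display_lists(vertices: np.ndarray, view_matrices: list) -> list:
    """Transform one vertex buffer by several view matrices in a single pass.

    vertices: (N, 4) homogeneous vertex positions.
    view_matrices: list of (4, 4) view transformations for the image.
    Returns one 'display list' (here just the transformed vertices) per view.
    This is an illustrative reading of the abstract, not the patented pipeline.
    """
    # One run over the vertex data produces the operation data for every view.
    return [vertices @ m.T for m in view_matrices]

views = [np.eye(4), np.diag([2.0, 2.0, 2.0, 1.0])]  # identity and a scaling view
verts = np.array([[1.0, 0.0, 0.0, 1.0], [0.0, 1.0, 0.0, 1.0]])
lists = build_display_lists(verts, views)
print(len(lists), lists[1][0])
```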

28-11-2013 publication date

Method for creating a naked-eye 3d effect

Number: US20130314406A1
Author: Ching-Fuh Lin
Assignee: National Taiwan University NTU

The present invention relates to a method for creating a naked-eye 3D effect, and particularly to a method for creating a naked-eye 3D effect without requiring a display hologram, special optical film, or 3D glasses. The method includes the following steps: (1) detecting the rotating angle or moving position of a portable device by a detecting unit; (2) creating a new image of an object shown in a display according to the rotating angle or moving position of the portable device by an image processing unit; and (3) displaying the new image of the object in the display instead of the original image of the object. By this method, a different image of the same object at a different visual angle is displayed at different times, which leads the viewer's brain to perceive the image of the object as three-dimensional. Therefore, a naked-eye 3D effect can be created.
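
A toy sketch of step (2), assuming a fixed set of pre-rendered views spanning an arbitrary range of visual angles (the range and view count are illustrative, not from the patent):

```python
import numpy as np

def select_view(tilt_angle_deg: float, views: list) -> int:
    """Map a device tilt angle to one of several pre-rendered views of an
    object. Assumes views are evenly spaced across -30..+30 degrees."""
    t = np.clip((tilt_angle_deg + 30.0) / 60.0, 0.0, 1.0)
    return int(round(t * (len(views) - 1)))

views = [f"view_{i}" for i in range(7)]  # stand-ins for rendered images
for angle in (-30, 0, 15, 30):
    # Display the view matching the current tilt instead of the original image.
    print(angle, views[select_view(angle, views)])
```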

05-12-2013 publication date

Sensor-enhanced localization in virtual and physical environments

Number: US20130321391A1
Assignee: Boeing Co

In one embodiment, a computer-based system comprises a measurement device, a display, a processor, and logic instructions stored in a tangible computer-readable medium coupled to the processor which, when executed by the processor, configure the processor to determine a position and orientation in a real three-dimensional space of the measurement device relative to at least one real object in the three-dimensional space and render on the display a perspective view of a virtual image of a virtual object corresponding to the real object in a virtual three-dimensional space, wherein the perspective view of the virtual object corresponds to the perspective view of the real object from the position of the measurement device.

02-01-2014 publication date

Saving augmented realities

Number: US20140002490A1
Author: Hugh TEEGAN
Assignee: Individual

Saving augmented realities includes collecting, with an augmented reality device, observation information of a physical space including an object, and obtaining, with the augmented reality device, an augmentation associated with the object. An augmented view of the physical space including a visual representation of the augmentation is visually presented with the augmented reality device, and the augmented view is saved for subsequent playback.

02-01-2014 publication date

Portable proprioceptive peripatetic polylinear video player

Number: US20140002581A1
Assignee: MONKEYmedia Inc

Departing from one-way linear cinema played on a single rectangular screen, this multi-channel virtual environment involves a cinematic paradigm that undoes habitual ways of framing things, employing architectural concepts in a polylinear video/sound construction to create a type of experience that allows the world to reveal itself and permits discovery on the part of participants. Techniques are disclosed for peripatetic navigation through virtual space with a handheld computing device, leveraging human spatial memory to form a proprioceptive sense of location, allowing a participant to easily navigate amongst a plurality of simultaneously playing videos and to center in front of individual video panes in said space, making it comfortable for a participant to rest in a fixed posture and orientation while selectively viewing any one of the video streams, and providing spatialized 3D audio cues that invite awareness of other content unfolding simultaneously in the virtual environment.

01-01-2015 publication date

SPACE CARVING BASED ON HUMAN PHYSICAL DATA

Number: US20150002507A1
Assignee:

Technology is described for three-dimensional (3D) space carving of a user environment based on movement through the user environment of one or more users wearing a near-eye display (NED) system. One or more sensors on the near-eye display (NED) system provide sensor data from which a distance and direction of movement can be determined. Spatial dimensions for a navigable path can be represented based on user height data and user width data of the one or more users who have traversed the path. Space carving data identifying carved out space can be stored in a 3D space carving model of the user environment. The navigable paths can also be related to position data in another kind of 3D mapping, like a 3D surface reconstruction mesh model of the user environment generated from depth images.

1. A method for three-dimensional (3D) space carving of a user environment based on movement through the user environment of one or more users wearing a near-eye display (NED) system, comprising: identifying, by one or more processors, one or more navigable paths traversed by one or more users wearing the NED system in a user environment based on sensor data from one or more sensors on the near-eye display (NED) system; merging overlapping portions of the one or more navigable paths traversed by the one or more users; and storing position and spatial dimensions for the one or more navigable paths as carved out space in human space carving data in a 3D space carving model of the user environment.
2. The method of claim 1, further comprising retrieving a stored 3D mapping of the user environment and relating positions of the carved out space to the retrieved 3D mapping.
3. The method of claim 1, further comprising generating a 3D space carved mapping of the user environment.
4. The method of claim 3, wherein generating a 3D space carved mapping of the user environment further comprises: detecting one or more object boundaries by the one or more processors by distinguishing carved out space and uncarved space based on the human space ...
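
A minimal voxel-grid sketch of the carving step, assuming a rectangular user footprint and unit-sized grid cells (both simplifications not specified by the claims):

```python
import numpy as np

def carve_path(grid: np.ndarray, waypoints, user_height: int, user_width: int):
    """Mark voxels along a user's walked path as carved-out (navigable) space.

    grid: 3D occupancy array indexed [x, y, z]; 1 = carved out, 0 = unknown.
    waypoints: (x, y) floor positions visited by the NED wearer.
    """
    half_w = user_width // 2
    for (x, y) in waypoints:
        # Carve a user-sized column of space at each visited floor cell.
        grid[max(0, x - half_w):x + half_w + 1,
             max(0, y - half_w):y + half_w + 1,
             0:user_height] = 1
    return grid

model = np.zeros((20, 20, 10), dtype=np.uint8)  # 3D space carving model
carve_path(model, [(5, 5), (5, 6), (6, 6)], user_height=9, user_width=3)
print(int(model.sum()), "voxels carved out")
```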

13-01-2022 publication date

SYSTEMS AND METHODS FOR RAILWAY ASSET MANAGEMENT

Number: US20220009535A1
Author: Myers Brad A., WEINER Evan
Assignee:

Systems and methods for railway asset management. The methods comprise: using a virtual reality device to recognize and collect real world information about railway assets located in a railyard; and using the real world information to (i) associate a railway asset to a data collection unit, (ii) provide an individual with an augmented reality experience associated with the railyard and/or (iii) facilitate automated railyard management tasks.

1. A method for railway asset management, comprising: capturing an image of the railway asset using a mobile communication device; converting the image into an electronic editable image for the railway asset; communicating the electronic editable image from the mobile communication device to a data collection unit which is installed on the railway asset; communicating first information from the data collection unit to a remote computing device via a first network communication, the first information comprising at least the electronic editable image; comparing the first information to second information to determine whether a match exists therebetween by a given amount; and validating that the data collection unit was installed on the railway asset when a match is determined to exist between the first and second information by the given amount.
2. The method according to claim 1, further comprising communicating the second information from the mobile communication device to the remote computing device with a second network communication, the second information comprising at least the image.
3. The method according to claim 1, wherein the second information is pre-stored information retrieved from a datastore of a railway asset management system or a datastore of another system.
4. The method according to claim 1, further comprising providing an electronic notification to a user of a computing device that the install was completed successfully, when a match is determined to exist between the first and second ...

02-01-2020 publication date

Method of simulating the rigging of a space

Number: US20200004222A1
Author: Jérôme Stubler
Assignee: Vinci Construction SAS

A method for simulating the planning of a space using ornamental elements, in particular elements cut and/or machined from a sheet material (P1, Pn) and/or produced using said sheet or sheets (P1, Pn), in particular a material having random or special patterns. The method includes the step of allowing a user, using a simulation tool, to simulate an installation configuration in which the ornamental elements are projected to scale onto a digital mockup (in particular a 2D or 3D mockup) of the space to be planned, at a desired position and with a desired orientation, the ornamental elements being displayed to scale with the mockup during this simulation, with their true appearance as resulting from a prior digital acquisition of the ornamental elements or of said sheet or sheets (P1, Pn) by an acquisition means.

05-01-2017 publication date

MIXED THREE DIMENSIONAL SCENE RECONSTRUCTION FROM PLURAL SURFACE MODELS

Number: US20170004649A1
Assignee:

A three-dimensional (3D) scene is computationally reconstructed using a combination of plural modeling techniques. Point clouds representing an object in the 3D scene are generated by different modeling techniques and each point is encoded with a confidence value which reflects a degree of accuracy in describing the surface of the object in the 3D scene based on strengths and weaknesses of each modeling technique. The point clouds are merged, in which a point for each location on the object is selected according to the modeling technique that provides the highest confidence.

1. A method of modeling a three-dimensional object from plural image data sources, the method comprising: providing a first point cloud including a first plurality of points defined in space, the first plurality of points being derived from a first one or more images of the object, the first one or more images being of a first image type, each point in the first plurality representing a location on a surface of the three-dimensional object, and each point in the first plurality having a first confidence value on a first confidence scale; providing a second point cloud including a second plurality of points defined in space, the second plurality of points being derived from a second one or more images of the object, the second one or more images being of a second image type, each point in the second plurality representing a location on the surface of the three-dimensional object, and each point in the second plurality having a second confidence value on a second confidence scale; normalizing one of the first or second confidence scales with the respective second or first confidence scale; and for each location of the object for which a corresponding point exists in both the first point cloud and the second point cloud, selecting the point for inclusion in the merged point cloud from either the first point cloud or the second point cloud having a greater first or second normalized confidence value ...
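
The selection rule of the merging step can be sketched compactly; keying points by a shared location identifier is an illustrative simplification of establishing point correspondence:

```python
import numpy as np

def merge_point_clouds(cloud_a, cloud_b):
    """Merge two point clouds keyed by object location, keeping the point
    whose (already normalized) confidence is higher.

    cloud_a, cloud_b: dict mapping a location key -> (point_xyz, confidence).
    """
    merged = {}
    for key in set(cloud_a) | set(cloud_b):
        candidates = [c[key] for c in (cloud_a, cloud_b) if key in c]
        merged[key] = max(candidates, key=lambda pc: pc[1])  # higher confidence wins
    return merged

a = {"loc0": (np.array([0.0, 0.0, 1.0]), 0.9)}
b = {"loc0": (np.array([0.0, 0.0, 1.1]), 0.4),
     "loc1": (np.array([1.0, 0.0, 1.0]), 0.7)}
print(merge_point_clouds(a, b))  # loc0 comes from cloud a, loc1 from cloud b
```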

05-01-2017 publication date

Augmented Reality for Wireless Mobile Devices

Number: US20170004658A1
Author: Hammond John B.
Assignee:

A model includes model layers on which a wireframe representation of objects located at geographic coordinates is stored in a memory such that surface detail of the objects increases from a base model layer to an uppermost model layer. Digital data layers stored in the memory encompass digital coordinates corresponding with the geographic coordinates. Digital content for augmenting scenes is stored on pre-selected digital data layers at pre-selected digital coordinates on those layers. One or more of the digital data layers are logically linked with one or more of the model layers. When the location and spatial orientation of a mobile device in which a scene is viewed is received, the digital content on the digital data layer logically linked to one of the model layers is transmitted over a wireless communication channel to the mobile device.

1. A method for reality augmentation comprising: storing, in a memory device: a model representing objects located at corresponding geographic coordinates of a geographic coordinate system, the model comprising a data structure of model layers in which a wireframe representation of the objects is represented in a base model layer and in which surface detail of the objects increases from the base model layer to an uppermost model layer; a data structure having a plurality of digital data layers encompassing digital coordinates of a digital coordinate system corresponding with respective geographic coordinates of the geographic coordinate system; and digital content on pre-selected one or more of the digital data layers at pre-selected digital coordinates thereon, the digital content comprising data for augmenting scenes containing the objects at the respective locations; logically linking one or more of the digital data layers with one or more of the model layers; receiving the location and spatial orientation of a mobile device in which a scene is viewed by the mobile device; and transmitting, over a wireless ...

05-01-2017 publication date

METHOD AND APPARATUS FOR FREEFORM CUTTING OF DIGITAL THREE DIMENSIONAL STRUCTURES

Number: US20170004659A1
Assignee:

A method of editing a digital three-dimensional structure associated with one or more two-dimensional textures in real time is disclosed, wherein the structure and the one or more textures are processed and output in a user interface, and user input is read in the user interface and processed into a cut shape of the three-dimensional structure. A simplified structure is generated based on the three-dimensional structure, and points of the cut shape are associated with the simplified structure to generate a curve. Points of the curve corresponding to edges of the curve on the simplified structure are determined, and geometrical characteristics and texture coordinates of the new points are calculated. A new three-dimensional structure is generated along the curve and layers of the structure are joined, for the cut and layered structure to be rendered in the user interface. An apparatus embodying the method is also disclosed.

1. An apparatus for editing a digital three-dimensional structure associated with one or more two-dimensional textures in real time, comprising: a. data storage means adapted to store the digital three-dimensional structure and the one or more two-dimensional textures; b. data processing means adapted to: process the stored digital three-dimensional structure and the one or more two-dimensional textures and output same in a user interface; read user input in the user interface and process user input data into a cut shape of the three-dimensional structure; generate a simplified structure based on the three-dimensional structure, associate points of the cut shape with the simplified structure to generate a curve; determine new points of the curve corresponding to edges of the curve on the simplified structure, calculate geometrical characteristics and texture coordinates of the new points; and generate a new three-dimensional structure along the curve and join layers of the structure; and c. display means for displaying the user interface.
2. ...

07-01-2016 publication date

METHOD AND DEVICE FOR ENRICHING THE CONTENT OF A DEPTH MAP

Number: US20160005213A1
Assignee:

A method and device for enriching the content associated with a first element of a depth map, the depth map being associated with a scene according to a point of view. Thereafter, at least a first information representative of a variation of depth in the first element in the space of the depth map is stored into the depth map.

1-15. (canceled)
16. A method for generating a depth map associated with a scene, wherein depth information is associated with each first element of a plurality of first elements of the depth map, the method comprising storing at least a first information in the depth map in addition to the depth information, said at least a first information being associated with said each first element and representative of a variation of depth in said each first element in the space of the depth map.
17. The method according to claim 16, wherein the at least a first information is established from a single surface element of the scene.
18. The method according to claim 17, wherein the at least a first information is established from said depth information associated with said each first element and from depth information associated with at least a second element, said each first element and the at least a second element belonging to said single surface element of the scene projected into the depth map.
19. The method according to claim 18, wherein said each first element and the at least a second element are adjacent.
20. The method according to claim 18, wherein the at least a first information is established by computing the ratio of the difference between the depth information associated with said each first element and the depth information associated with the at least a second element to the distance between said each first element and the at least a second element.
21. The method according to claim 17, wherein the at least a first information is established from an equation of said single surface element of the scene projected into the ...
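
Claim 20's "first information" reduces to a simple ratio, shown here as a worked example:

```python
def depth_variation(depth_first: float, depth_second: float,
                    dist_between_elements: float) -> float:
    """Per claim 20: the ratio of the depth difference between two adjacent
    depth-map elements to the distance between them (a depth slope)."""
    return (depth_first - depth_second) / dist_between_elements

# Two adjacent depth-map elements one unit apart, depths 5.2 and 5.0: storing
# the 0.2 slope alongside the depth lets a consumer reconstruct how depth
# varies within the element.
print(depth_variation(5.2, 5.0, 1.0))  # 0.2 per element
```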

07-01-2016 publication date

Three-Dimensional Layered Map

Number: US20160005223A1
Author: Gaiter Felix R.
Assignee:

A map having surfaces that are depicted at different levels that are not related to topography, with boundaries between the surfaces, where the boundaries are disposed at travel ways. The travel ways form cliff faces in the map between the surfaces, with information items disposed on the cliff faces at positions corresponding to items of interest at locations along the travel ways where the information items are disposed.

1. A map of a geographical area, the map having surfaces that are depicted at different levels that are not related to topography of the geographical area, with boundaries between the surfaces, where the boundaries are disposed at travel ways, the travel ways forming cliff faces in the map between the surfaces, with information items disposed on the cliff faces at positions corresponding to items of interest at locations along the travel ways where the information items are disposed.
2. The map of claim 1, wherein the map is a two-dimensional representation of a three-dimensional structure.
3. The map of claim 1, wherein the map is formed as a three-dimensional structure.
4. The map of claim 1, wherein the travel ways include at least one of a road, path, trail, waterway, walkway, bus route, and railway.
5. The map of claim 1, wherein the items of interest include at least one of a business, traffic condition, travel way condition, weather condition, construction, transit schedule, toll, fare, and scenic information at the corresponding position along the travel ways.
6. The map of claim 1, wherein the information items include at least one of text, audio, video, animation, fixed graphic, and icon.
7. The map of claim 1, wherein the cliff face is divided into rows and columns of information items.
8. The map of claim 1, further comprising a valence disposed along the cliff ...

02-01-2020 publication date

IDENTIFYING TEMPORAL CHANGES OF INDUSTRIAL OBJECTS BY MATCHING IMAGES

Number: US20200005077A1
Assignee:

Technology for matching images (for example, video images, still images) of an identical infrastructure object (for example, a tower component of a tower supporting power lines) for purposes of comparing the infrastructure object to itself at different points in time to detect a potential anomaly and the potential need for maintenance of the infrastructure object. In some embodiments, this matching of images is done using creation of a three-dimensional (3D) computer model of the infrastructure object and by tagging captured images with location on the 3D model across multiple videos taken at different points in time.

1. A computer-implemented method comprising: receiving a plurality of initial version infrastructure object images, with each initial version infrastructure object image of the plurality showing the same infrastructure object, and with all initial version infrastructure object images being characterized by, at least approximately, parallel viewing vectors; adjusting, by machine logic, at least one initial version infrastructure object image to obtain a plurality of adjusted infrastructure image objects respectively corresponding to the plurality of initial version infrastructure object images, with the plurality of adjusted infrastructure object images showing the same infrastructure object aligned with itself across the plurality of adjusted infrastructure object images; comparing, by machine logic, the adjusted infrastructure object images with each other to determine a difference data set corresponding to a set of differences between at least two of the plurality of adjusted infrastructure object images; and analyzing, by machine logic, the difference data set to determine that a potential maintenance condition exists regarding the infrastructure object shown in all of the plurality of initial version infrastructure images.
2. The method of claim 1, further comprising: responsive to the determination of the existence of the potential maintenance condition, sending ...

04-01-2018 publication date

ESTIMATION OF 3D POINT CANDIDATES FROM A LOCATION IN A SINGLE IMAGE

Number: US20180005399A1
Assignee: Intel Corporation

An apparatus for an electronic measurement using a single image is described herein. The apparatus includes a surface fitting mechanism that is to estimate the analytical model of a surface on which lies the point of the single image, and a ray casting unit that is to cast a virtual ray at the selected point that intersects the surface. The apparatus also includes a computing unit to compute at least one three-dimensional location for the selected point based on the intersection of the virtual ray and the plane.

1. An apparatus for estimation of 3D point candidates from a single image, comprising: a surface fitting mechanism that is to estimate the analytical model of a surface on which lies the point of the single image; a ray casting unit that is to cast a virtual ray at the selected point that intersects the surface; and a computing unit to compute at least one three-dimensional location for the selected point based on the intersection of the virtual ray and the plane.
2. The apparatus of claim 1, wherein the surface fitting mechanism is a plane fitting mechanism that is to calculate one or more planes for a selected point of the single image.
3. The apparatus of claim 1, wherein the surface is computed using the selected point and a plurality of points in a neighborhood of the selected point.
4. The apparatus of claim 1, wherein the surface fitting mechanism is to calculate the surface via Sequential RANSAC plane fitting.
5. The apparatus of claim 1, wherein no three-dimensional location exists for the selected point.
6. The apparatus of claim 1, comprising an image capture mechanism, wherein the image capture mechanism is an RGB-D camera.
7. The apparatus of claim 1, comprising an image capture mechanism, wherein the image capture mechanism is a time of flight (ToF) camera, ranging camera, flash LIDAR, or any combination thereof.
8. The apparatus of claim 1, wherein the surface fitting mechanism is to calculate a ...
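
The core geometric step, intersecting the cast ray with the fitted plane, is standard; a sketch assuming the ray originates at the camera center:

```python
import numpy as np

def ray_plane_intersection(origin, direction, plane_normal, plane_d):
    """Intersect a virtual ray with a fitted plane n.x + d = 0.

    Returns the 3D point candidate, or None if no valid intersection exists.
    The plane parameters would come from the fitting step (e.g. RANSAC).
    """
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None  # ray parallel to plane: no 3D location exists
    t = -(np.dot(plane_normal, origin) + plane_d) / denom
    if t < 0:
        return None  # intersection behind the camera
    return origin + t * direction

# Camera at the origin casting a ray through a pixel toward a plane z = 2.
point = ray_plane_intersection(np.zeros(3), np.array([0.1, 0.0, 1.0]),
                               np.array([0.0, 0.0, 1.0]), -2.0)
print(point)  # [0.2, 0.0, 2.0]
```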

04-01-2018 publication date

Dynamic Entering and Leaving of Virtual-Reality Environments Navigated by Different HMD Users

Number: US20180005429A1
Assignee:

Systems and methods for processing operations for head mounted display (HMD) users to join virtual reality (VR) scenes are provided. A computer-implemented method includes providing a first perspective of a VR scene to a first HMD of a first user and receiving an indication that a second user is requesting to join the VR scene provided to the first HMD. The method further includes obtaining real-world position and orientation data of the second HMD relative to the first HMD and then providing, based on said data, a second perspective of the VR scene. The method also provides that the first and second perspectives are each controlled by respective position and orientation changes while viewing the VR scene.

1. A computer-implemented method for processing operations for head mounted display (HMD) users to join virtual reality (VR) scenes, comprising: providing a first perspective of a VR scene to a first HMD of a first user; receiving an indication that a second user of a second HMD is requesting to join the VR scene provided to the first HMD; obtaining real-world position and orientation data of the second HMD relative to the first HMD; and providing, based on the real-world position and orientation data, a second perspective of the VR scene in response to the request to join the VR scene by the second user; wherein the first perspective and the second perspective of the VR scene are each controlled by respective position and orientation changes while viewing the VR scene.
2. The computer-implemented method as recited in claim 1, wherein the first user of the first HMD and the second user of the second HMD are co-located in a real-world location.
3. The computer-implemented method as recited in claim 1, wherein the first user of the first HMD controls a progression of the VR scene, the progression of the VR scene depending upon changes in real-world position and orientation of the first user while viewing the VR scene.
4. The computer-implemented method as ...

04-01-2018 publication date

DISPLAY CONTROL METHOD AND SYSTEM FOR EXECUTING THE DISPLAY CONTROL METHOD

Number: US20180005431A1
Assignee:

A display control method for execution by a system including a head-mounted device. The display control method includes generating virtual space data for defining a three-dimensional virtual space. The display control method further includes displaying a visual-field image on the head-mounted device based on a visual field of the virtual space data. The display control method further includes updating the visual-field image in response to a detected movement of the head-mounted device exceeding a threshold. Updating the visual-field image includes changing a scale of an object in the virtual space by adjusting an angular range of the visual-field image.

1-9. (canceled)
10. A method, comprising: defining a three-dimensional virtual space including a virtual camera; displaying a visual-field image on a head-mounted device based on a visual field of the virtual camera; updating the visual-field image in response to a detected movement of the head-mounted device; changing a scale of the virtual camera in response to a predetermined condition being satisfied; changing a rendering range of the virtual camera in response to changing a scale of the virtual camera; and updating the visual-field image in response to changing a rendering range of the virtual camera.
11. The method according to claim 10, further comprising: defining a center position of the virtual camera in the three-dimensional virtual space; defining a position of an image acquisition unit of the virtual camera for defining the rendering range in the three-dimensional virtual space; changing a position of the image acquisition unit in response to changing a scale of the virtual camera; and correcting the center position of the virtual camera for keeping the position of the image acquisition unit in a visual-axis direction of the virtual camera constant during the updating of the visual-field image.
12. The method according to claim 10, further comprising: defining a center position of the virtual camera in ...

04-01-2018 publication date

System and Methods for Interactive Hybrid-Dimension Map Visualization

Number: US20180005434A1
Author: Ren Liu, Yang Lei
Assignee:

A navigational system includes a hybrid-dimensional visualization scheme with a multi-modal interaction flow to serve for digital mapping applications, such as in car infotainment systems and online map services. The hybrid-dimensional visualization uses an importance-driven or focus-and-context visualization approach to combine the display of different map elements, including 2D map, 2D building footprint, 3D map, weather visualization, realistic day-night lighting, and POI information, into a single map view. The combination of these elements is guided by intuitive user interactions using multiple modalities simultaneously, such that the map information is filtered to best respond to the user's request, and presented in a way that presents both the focus and the context in a map in an aesthetic manner. The system facilitates several use cases that are common to the users, including destination preview, destination search, and virtual map exploration.

1. A method for generating map graphics including two-dimensional and three-dimensional elements, comprising: generating with a processor a graphical display of a two-dimensional base map with a display device; receiving with a multi-modal input device a user selection of a region of interest in the two-dimensional base map, the region of interest corresponding to only a portion of the two-dimensional base map; identifying with the processor a first footprint region in the two-dimensional base map, the first footprint region including the region of interest and a first portion of the two-dimensional base map outside of the region of interest; generating a three-dimensional graphical display of at least one building, terrain feature, or landmark located within the region of interest with the graphical display device; and generating a first two-dimensional graphical display in the first footprint region including at least one graphical element not present in the two-dimensional base map with the graphical display device.
2. ...

04-01-2018 publication date

ACCURATE POSITIONING OF AUGMENTED REALITY CONTENT

Number: US20180005450A1
Assignee: Bent Image Lab, LLC

A system for accurately positioning augmented reality (AR) content within a coordinate system such as the World Geodetic System (WGS) may include AR content tethered to trackable physical features. As the system is used by mobile computing devices, each mobile device may calculate and compare relative positioning data between the trackable features. The system may connect and group the trackable features hierarchically, as measurements are obtained. As additional measurements are made of the trackable features in a group, the relative position data may be improved, e.g., using statistical methods.

1. A computer-implemented method for accurately locating augmented reality (AR) content, the method comprising: measuring, using one or more sensors of a computing device, a first six-degree-of-freedom (DOF) vector between a first vantage point of the computing device and a first trackable feature in a first sensor range of the computing device; measuring, using the one or more sensors of the computing device, a second six-DOF vector between a second vantage point of the computing device and a second trackable feature in a second sensor range of the computing device; measuring, using the one or more sensors of the computing device, a third six-DOF vector between the first vantage point and the second vantage point; estimating, based on the first vector, the second vector, and the third vector, using a processor of the computing device, a fourth six-DOF vector between the first trackable feature and the second trackable feature; storing the estimated fourth vector in a data store in association with the first trackable feature and the second trackable feature, such that the estimated fourth vector represents a spatial relationship between the first trackable feature and the second trackable feature; determining, within a common spatial coordinate system, a first set of spatial coordinates for the first trackable feature and a second set of spatial coordinates for the second ...
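
For the translation component, the estimated fourth vector follows from composing the three measured ones; a full six-DOF version would also chain rotations, which this sketch omits:

```python
import numpy as np

def estimate_feature_to_feature(v1, v2, v3):
    """Estimate the vector between two trackable features (translation only).

    v1: vantage point 1 -> feature 1
    v2: vantage point 2 -> feature 2
    v3: vantage point 1 -> vantage point 2
    Returns feature 1 -> feature 2 = -v1 + v3 + v2.
    """
    return -np.asarray(v1) + np.asarray(v3) + np.asarray(v2)

v4 = estimate_feature_to_feature([1.0, 0.0, 0.0],   # first measurement
                                 [0.5, 0.0, 0.0],   # second measurement
                                 [2.0, 1.0, 0.0])   # between vantage points
print(v4)  # [1.5, 1.0, 0.0]
```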

04-01-2018 publication date

AUGMENTED REALITY CONTENT RENDERING VIA ALBEDO MODELS, SYSTEMS AND METHODS

Number: US20180005453A1
Assignee: NANT HOLDINGS IP, LLC

Methods for rendering augmented reality (AR) content are presented. An a priori defined 3D albedo model of an object is leveraged to adjust AR content so that it appears as a natural part of a scene. Disclosed devices recognize a known object having a corresponding albedo model. The devices compare the observed object to the known albedo model to determine a content transformation referred to as an estimated shading (environmental shading) model. The transformation is then applied to the AR content to generate adjusted content, which is then rendered and presented for consumption by a user.

1-35. (canceled)
36. A method of rendering augmented reality content, comprising: obtaining, by a rendering device, an albedo model related to a patient in a medical environment, the albedo model comprising portions corresponding to portions of the patient, wherein each portion of the albedo model includes lighting rules selected based on a reflective nature of a corresponding portion of the patient; obtaining, by the rendering device, augmented reality (AR) content related to the patient; deriving, by the rendering device, a pose of the patient or of one or more portions of the patient from a digital representation of the patient; aligning, by the rendering device, the albedo model with the pose; deriving, by the rendering device, observed shading data from the digital representation and the albedo model; deriving an estimated object shading model using the albedo model and the observed shading data; generating, by the rendering device, environmentally adjusted AR content by applying the estimated object shading model to the AR content; and rendering, by the rendering device, the environmentally adjusted AR content.
37. The method of claim 36, wherein the portions of the patient comprise at least one of skin, a face, lips, eyes, and a tissue.
38. The method of claim 36, wherein the AR content comprises at least one of a medical image, ...
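
One common way to realize the estimated shading model is to divide observed color by known albedo and re-multiply onto the AR content; the claims do not commit to this formula, so treat it as an assumption:

```python
import numpy as np

def environmentally_adjust(ar_albedo, observed_rgb, model_albedo):
    """Estimate environmental shading and apply it to AR content.

    Since observed ~= albedo * shading, observed_rgb / model_albedo
    approximates the scene's shading at a pixel; multiplying the AR
    content's own albedo by that shading matches it to the scene lighting.
    """
    shading = observed_rgb / np.clip(model_albedo, 1e-3, None)
    return np.clip(ar_albedo * shading, 0.0, 1.0)

observed = np.array([0.2, 0.1, 0.1])  # darkly lit patch of the known object
albedo = np.array([0.8, 0.4, 0.4])    # its albedo-model value
ar_pixel = np.array([0.5, 0.5, 0.5])  # AR content albedo
print(environmentally_adjust(ar_pixel, observed, albedo))  # dimmed AR pixel
```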

02-01-2020 publication date

Producing rendering outputs from a 3-d scene using volume element light transport data

Number: US20200005519A1
Assignee: Imagination Technologies Ltd

Rendering system combines point sampling and volume sampling operations to produce rendering outputs. For example, to determine color information for a surface location in a 3-D scene, one or more point sampling operations are conducted in a volume around the surface location, and one or more sampling operations of volumetric light transport data are performed farther from the surface location. A transition zone between point sampling and volume sampling can be provided, in which both point and volume sampling operations are conducted. Data obtained from point and volume sampling operations can be blended in determining color information for the surface location. For example, point samples are obtained by tracing a ray for each point sample, to identify an intersection between another surface and the ray, to be shaded, and volume samples are obtained from a nested 3-D grids of volume elements expressing light transport data at different levels of granularity.

02-01-2020 publication date

METHOD AND APPARATUS FOR CONSTRUCTING LIGHTING ENVIRONMENT REPRESENTATIONS OF 3D SCENES

Number: US20200005527A1
Assignee:

A synthesis lighting environment representation of a 3D scene is constructed by receiving data representative of at least one first image of the scene taken from at least one location outside the scene; receiving data representative of at least one second image of the scene containing at least one light source illuminating the scene and taken from at least one filming position inside the scene; and merging a first lighting environment representation derived from the data representative of the first image(s) and a second lighting environment representation derived from the data representative of the second image(s) into the synthesis lighting environment representation (Rep). Applications to augmented and mixed reality.

1-15. (canceled)
16. A method for constructing a synthesis lighting environment representation of a 3D scene, the method comprising: obtaining, from at least one first device, data representative of at least one first image of the 3D scene taken from at least one location outside the 3D scene; obtaining, from at least one second device, data representative of at least one second image of the 3D scene taken from at least one filming position inside the 3D scene; and merging a first lighting environment representation derived from the data representative of the at least one first image and a second lighting environment representation derived from the data representative of the at least one second image into the synthesis lighting environment representation.
17. The method of claim 16, wherein obtaining from at least one first device and obtaining from at least one second device comprises obtaining, from a same device, data representative of at least one first image of the 3D scene taken from at least one location outside the 3D scene and data representative of at least one second image of the 3D scene taken from at least one filming position inside the 3D scene.
18. The method of claim 16, further comprising: obtaining the at ...

02-01-2020 publication date

APPARATUS AND METHOD FOR CONSTRUCTING A VIRTUAL 3D MODEL FROM A 2D ULTRASOUND VIDEO

Number: US20200005528A1
Assignee:

A method for creating a three-dimensional image of an object from a two-dimensional ultrasound video is provided. The method includes acquiring a plurality of two-dimensional ultrasound images of the object and recording a plurality of videos based on the acquired two-dimensional ultrasound images. Each of the videos includes a plurality of frames. The method further includes separating each of the plurality of frames, cropping each of the plurality of frames to isolate structures intended to be reconstructed, selecting a frame near a center of the object and rotating the image to create a main horizontal landmark, and aligning each frame to the main horizontal landmark. The method also includes removing inter-frame jitter by aligning each of the plurality of frames relative to a previous frame of the plurality of frames, reducing the noise of each of the frames, and stacking each of the frames into a three-dimensional volume.

1. A method for creating a three-dimensional image of an object from a two-dimensional ultrasound video, the method comprising: acquiring a plurality of two-dimensional ultrasound images of the object; recording a plurality of videos based on the acquired two-dimensional ultrasound images, each of the plurality of videos comprising a plurality of frames; separating each of the plurality of frames; cropping each of the plurality of frames to isolate structures intended to be reconstructed; selecting a frame near a center of the object and rotating the image to create a main horizontal landmark; aligning each of the plurality of frames to the main horizontal landmark; and stacking each of the aligned plurality of frames into a three-dimensional volume.
2. The method according to claim 1, further comprising: removing inter-frame jitter by aligning each of the plurality of frames relative to a previous frame of the plurality of frames.
3. The method according to claim 1, further comprising: reducing a noise of each of the plurality of frames.
4. The method ...
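
A compact sketch of the jitter-removal and stacking steps, using 1D row-profile correlation as a crude stand-in for the patent's frame alignment:

```python
import numpy as np

def remove_jitter(frames):
    """Align each frame to the previous one by the integer row shift that
    maximizes row-profile correlation."""
    aligned = [frames[0]]
    for frame in frames[1:]:
        prev_profile = aligned[-1].mean(axis=1)
        cur_profile = frame.mean(axis=1)
        corr = np.correlate(prev_profile, cur_profile, mode="full")
        shift = int(np.argmax(corr)) - (len(cur_profile) - 1)
        aligned.append(np.roll(frame, shift, axis=0))
    return aligned

def stack_to_volume(frames):
    """Stack the aligned 2D frames into a 3D volume along a new axis."""
    return np.stack(remove_jitter(frames), axis=0)

frames = [np.random.rand(64, 64) for _ in range(10)]  # stand-in ultrasound frames
print(stack_to_volume(frames).shape)  # (10, 64, 64)
```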

03-01-2019 publication date

Provision of Virtual Reality Content

Number: US20190005728A1
Assignee:

A method is disclosed, including providing data indicative of dimensions of a real-world space within which a virtual world is to be consumed. The method may also include identifying one or more objects within said real-world space, and determining one or more available areas within the real-world space for rendering three-dimensional virtual content, based at least partly on the dimensions of the real-world space. The method may also include identifying one or more of the objects as being movable, identifying, from a set of three-dimensional virtual content items, one or more candidate items unable to be rendered within the available area(s) and which can be rendered if one or more of the movable objects is moved, and providing an indication to a virtual reality user device of the candidate virtual item(s) and of the movable object(s) required to be moved.

2. The method of claim 1, wherein identifying the one or more movable objects comprises assigning a mobility score to each object indicative of whether or not it is movable, an object being identified as movable if its mobility score is above a predetermined threshold.
3. The method of claim 2, wherein the mobility score is based on characteristics of the objects and/or their respective position(s) within the real-world space.
4. The method of claim 3, wherein the mobility score is based on identifying a change in one or more objects' position over time.
5. The method of claim 2, wherein the mobility score is based on one or more of: the size and/or weight of the object; identifying and classifying a real-world object against a set of real-world objects having pre-assigned mobility scores; and determining whether the same object has previously been identified as movable.
6. The method of claim 2, wherein the mobility score is determined by determining for each identified object a plurality of probability coefficients for the objects based on their respective characteristics and/or positions, the ...
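
A toy mobility score in the spirit of claims 2 through 6; the features, weights, and threshold are invented for illustration:

```python
def mobility_score(size_m3: float, moved_before: bool, position_changed: bool) -> float:
    """Combine illustrative probability coefficients into a mobility score.

    The claims only say the score derives from object characteristics
    and/or positions; these particular coefficients are assumptions.
    """
    score = 0.0
    score += 0.5 if size_m3 < 0.2 else 0.1    # small objects are easier to move
    score += 0.3 if moved_before else 0.0      # previously identified as movable
    score += 0.2 if position_changed else 0.0  # observed change in position over time
    return score

MOVABLE_THRESHOLD = 0.6
chair = mobility_score(size_m3=0.15, moved_before=True, position_changed=False)
print(chair, chair > MOVABLE_THRESHOLD)  # 0.8 True: identified as movable
```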

12-01-2017 publication date

DEVICE AND METHOD FOR SUBGINGIVAL MEASUREMENT

Number: US20170007377A1
Assignee: Dentlytec G.P.L. LTD.

A method for measuring regions of a tooth in a mouth including: measuring at least one surface point on a surface of the tooth with respect to an element mechanically coupled to said surface point; determining a location of at least one visible reference mechanically coupled to said surface point with respect to said element; and estimating a location of said surface point with respect to said visible reference. A device used for such measuring may include a main body comprising a final optical element of an imager which defines an optical field of view directed in a first direction; and a measurement element coupled to said main body extending generally in said first direction; where a tip of said measurement element is sized and shaped to be inserted between a tooth and adjacent gingiva; where said optical field of view is sized to image at least part of a tooth.

1-60. (canceled)
61. An intra-oral adaptor for an intra-oral scanner (IOS) having at least one image sensor with a field of view (FOV) of at least a portion of a tooth, comprising: at least one connector configured to couple to an intra-oral scanner (IOS); and at least one probe coupled to said connector, wherein upon coupling of said connector with said IOS, at least a portion of said probe is viewable within said FOV.
62. The adaptor according to claim 61, wherein said connector comprises an inner geometry configured to mount with at least a portion of said IOS.
63. The adaptor according to claim 61, wherein said adaptor further comprises at least one sealed transparent window aligned with the FOV of said IOS sensor.
64. The adaptor according to claim 61, wherein said connector rigidly couples said adaptor to said IOS.
65. The adaptor according to claim 61, wherein once coupled, said adaptor and said probe are configured with freedom of movement with respect to said IOS.
66. The adaptor according to claim 61, wherein said adaptor further comprises at least one force sensor coupled to said probe.
67. ...

07-01-2021 publication date

METHODS AND APPARATUS FOR RECEIVING AND/OR USING REDUCED RESOLUTION IMAGES

Number: US20210006770A1
Assignee:

Methods and apparatus for using selective resolution reduction on images to be transmitted and/or used by a playback device are described. Prior to transmission one or more images of an environment are captured. Based on image content, motion detection and/or user input a resolution reduction operation is selected and performed. The reduced resolution image is communicated to a playback device along with information indicating a UV map corresponding to the selected resolution allocation that should be used by the playback device for rendering the communicated image. By changing the resolution allocation used and which UV map is used by the playback device different resolution allocations can be made with respect to different portions of the environment while allowing the number of pixels in transmitted images to remain constant. The playback device renders the individual images with the UV map corresponding to the resolution allocation used to generate the individual images.

1. An image capture and content streaming method, comprising: capturing a first image of a first portion of an environment and a second image of a second portion of the environment; identifying a region of interest in the environment; determining whether the second portion of the environment includes the region of interest; performing a resolution reduction operation on the second image, based on a determination that the second portion of the environment does not include the region of interest, to form a reduced resolution second image; encoding the first image and the reduced resolution second image; and outputting the encoded first image and the encoded reduced resolution second image for transmittal to a playback device.
2. The method of claim 1, wherein the region of interest in the environment is identified based on motion in the environment.
3. The method of claim 1, wherein the region of interest in the environment is identified based on motion detected in one or more video streams of the ...
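
A simplified sketch of the selective reduction plus UV-map signaling; note the patent keeps the transmitted pixel count constant by reallocating resolution via UV maps, whereas this sketch just downsamples, and the UV-map identifiers are invented:

```python
import numpy as np

def reduce_if_outside_roi(image: np.ndarray, contains_roi: bool, factor: int = 2):
    """Downsample an image unless it contains the region of interest.

    Returns the (possibly reduced) image plus an identifier of the UV map
    the playback device should use when rendering it.
    """
    if contains_roi:
        return image, "uv_map_full_resolution"
    # Simple box downsampling as the resolution reduction operation.
    h, w = image.shape[:2]
    reduced = image[:h - h % factor, :w - w % factor]
    reduced = reduced.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return reduced, f"uv_map_reduced_x{factor}"

frame = np.random.rand(128, 256)  # stand-in for a captured environment image
out, uv_map = reduce_if_outside_roi(frame, contains_roi=False)
print(out.shape, uv_map)  # (64, 128) uv_map_reduced_x2
```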

20-01-2022 publication date

THREE-DIMENSIONAL LAYERED MAP

Number: US20220020209A1
Author: GAITER Felix Ross
Assignee:

A map having surfaces that are depicted at different levels that are not related to topography, with boundaries between the surfaces, where the boundaries are disposed at travel ways. The travel ways form cliff faces in the map between the surfaces, with information items disposed on the cliff faces at positions corresponding to items of interest at locations along the travel ways where the information items are disposed.

1. A non-transitory computer-readable medium containing program instructions stored on a memory thereon that, when executed by at least one processor, cause the at least one processor to perform operations comprising: displaying a map of a geographical area; dividing the geographical area into a first surface and a second surface, wherein the first surface is depicted at a first non-topographical level of the geographical area different from a second non-topographical level of the geographical area depicting the second surface; forming a cliff face in the geographical area between the first surface and the second surface, wherein the cliff face is displayed as a textured material indicative of a surface that is different than the first surface and the second surface; and disposing an information item relating to the geographical area on the textured material.
2. The non-transitory computer-readable medium of claim 1, wherein the operations further comprise: adjusting a shape of the textured material in response to a received interactive input.
3. The non-transitory computer-readable medium of claim 2, wherein the adjusting operation further comprises: expanding the cliff face in response to the interactive input to display a greater amount of information relating to a location associated with the interactive input.
4. The non-transitory computer-readable medium of claim 1, wherein the textured material includes a transparent texture.
5. The non-transitory computer-readable medium of claim 4, wherein the operations further comprise: displaying a portion of ...

14-01-2021 publication date

Virtual Puppeteering Using a Portable Device

Number: US20210008461A1

A virtual puppeteering system includes a portable device including a camera, a display, a hardware processor, and a system memory storing an object animation software code. The hardware processor is configured to execute the object animation software code to, using the camera, generate an image in response to receiving an activation input, using the display, display the image, and receive a selection input selecting an object shown in the image. The hardware processor is further configured to execute the object animation software code to determine a distance separating the selected object from the portable device, receive an animation input, identify, based on the selected object and the received animation input, a movement for animating the selected object, generate an animation of the selected object using the determined distance and the identified movement, and render the animation of the selected object.

10-01-2019 publication date

Racing simulation

Number: US20190009175A1
Assignee: Buxton Global Enterprises Inc

A method for displaying a virtual vehicle includes identifying a position of a physical vehicle at a racecourse, identifying a position of a point of view at the racecourse, providing a portion of the virtual vehicle visible from a virtual position of the point of view. The method operates by calculating the virtual position within a virtual world based on the position of the point of view. A system for displaying virtual vehicles includes a first sensor detecting a position of a physical vehicle at a racecourse, a second sensor detecting a position of a point of view at the racecourse, and a simulation system providing a portion of the virtual vehicle visible from a virtual position of the point of view. The simulation system is configured to calculate the virtual position of the point of view within a virtual world based on the position of the point of view.
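
The virtual-position calculation might be as simple as an origin shift and scale; the abstract leaves the mapping unspecified, so this transform is an assumption:

```python
import numpy as np

def to_virtual(position_world: np.ndarray, course_origin: np.ndarray,
               meters_per_unit: float = 1.0) -> np.ndarray:
    """Map a tracked real-world racecourse position into virtual-world
    coordinates via a translation and uniform scale."""
    return (position_world - course_origin) / meters_per_unit

car_pos = np.array([120.0, 4.0, 0.0])     # physical vehicle, first sensor
viewer_pos = np.array([100.0, 0.0, 0.0])  # spectator point of view, second sensor
origin = np.array([100.0, 0.0, 0.0])
# Render the virtual vehicle as seen from the viewer's virtual position.
print(to_virtual(car_pos, origin), to_virtual(viewer_pos, origin))
```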

More details
Publication date: 27-01-2022

METHOD FOR FORMING WALLS TO ALIGN 3D OBJECTS IN 2D ENVIRONMENT

Number: US20220027524A1
Author: Jovanovic Milos
Assignee:

Example systems and methods for virtual visualization of a three-dimensional (3D) model of an object in a two-dimensional (2D) environment. The method may include capturing the 2D environment and adding scale and perspective to the 2D environment. Further, a user may select intersection points on a ground plane of the 2D environment to form walls, thereby converting the 2D environment into a 3D space. The user may further add 3D models of objects on the wall plane such that the objects may remain flush with the wall plane.
1. A method for visualizing a three-dimensional model of an object in a two-dimensional environment, the method comprising: receiving, with a processor via a user interface, from a user, a ground plane input comprising a plurality of ground plane points selected by the user to define a ground plane corresponding to a horizontal plane of the two-dimensional environment; automatically generating, with the processor, and displaying, via a display unit, a three-dimensional environment for the two-dimensional environment based on the ground plane input; automatically generating, with the processor, and displaying, via the display unit, a wall plane, representing a vertical plane of the two-dimensional environment orthogonal to the horizontal plane, in the three-dimensional environment positioned at at least two wall-floor intersection points selected by the user; and superimposing, with the processor, and displaying, via the display unit, the three-dimensional model of the object on the three-dimensional environment for the two-dimensional environment based on the ground plane input and the wall-floor intersection points.
2. The method of claim 1, further comprising: receiving, with the processor via the user interface, from the user, input comprising a selection of a wall-hidden surface intersection point on the two-dimensional environment, the wall-hidden surface intersection point indicating a second plane behind the wall plane; automatically generating, ...
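
The wall-forming step above amounts to erecting a vertical plane through two user-selected wall-floor intersection points. A minimal sketch under that assumption (function names are illustrative):

    import numpy as np

    def wall_plane(p1, p2, up=(0.0, 0.0, 1.0)):
        # Vertical plane through two wall-floor intersection points;
        # returns (n, d) with the plane defined by n . x + d = 0.
        p1, p2, up = (np.asarray(v, float) for v in (p1, p2, up))
        normal = np.cross(p2 - p1, up)          # horizontal normal of the wall
        normal /= np.linalg.norm(normal)
        return normal, -normal.dot(p1)

    def snap_flush(point, normal, d):
        # Project an object's anchor point onto the wall plane so the
        # object stays flush with the wall.
        point = np.asarray(point, float)
        return point - (normal.dot(point) + d) * normal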

More details
Publication date: 12-01-2017

GARMENT CAPTURE FROM A PHOTOGRAPH

Number: US20170011551A1
Assignee:

Provided is a new method which creates a virtual garment from a single photograph of a real garment put on a mannequin. The method uses the pattern drafting theory of the clothing field. The drafting process is abstracted into a computer module, which takes the garment type and primary body sizes and produces the draft as the output. The problem is thereby reduced to finding the garment type and primary body sizes. That information is found by analyzing the silhouette of the garment with respect to the mannequin. The method works robustly and produces practically usable virtual clothes that can be used for graphical coordination.
1. A method for garment capturing from a photograph of a garment, the method comprising steps for: inputting a photograph of the garment; extracting a silhouette of the garment from the photograph; identifying a garment type and a plurality of primary body sizes (PBSs) and creating a plurality of sized drafts; generating a plurality of panels using the garment type and the plurality of PBSs; and draping the plurality of panels on a mannequin.
2. The method of claim 1, prior to the step for inputting, further comprising steps for: providing a camera and the mannequin, wherein the positions of the camera and the mannequin are fixed, so that photographs taken with and without the garment have pixel-to-pixel correspondence; and pre-processing the mannequin to obtain and store three-dimensional geometry of the mannequin and primary body sizes (PBSs).
3. The method of claim 2, wherein the step for pre-processing the mannequin comprises steps for: scanning the mannequin; modeling the scanned data graphically; and storing the graphically modeled data in a computer file, wherein a relationship between real-world distance and pixel distance of a plurality of points of the mannequin and an environment in which the camera and the mannequin are disposed is established by a computer using the graphically modeled data.
4. The method of claim 2, wherein ...
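
Because the camera and mannequin are fixed so that the with-garment and without-garment photographs correspond pixel to pixel (claim 2), the silhouette-extraction step can be sketched as simple image differencing. An illustrative sketch, not the patent's actual algorithm:

    import numpy as np

    def garment_silhouette(with_garment, without_garment, threshold=30):
        # Both inputs are aligned HxWx3 uint8 photographs taken from the
        # fixed camera; output is a binary mask of garment pixels.
        diff = np.abs(with_garment.astype(np.int16) - without_garment.astype(np.int16))
        return (diff.max(axis=-1) > threshold).astype(np.uint8)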

More details
Publication date: 14-01-2016

THREE-DIMENSIONAL IMAGE OUTPUT DEVICE AND BACKGROUND IMAGE GENERATION DEVICE

Number: US20160012627A1
Assignee:

A projection (projected image) is drawn by perspective projection of a three-dimensional model with a background image having improved reality. When a sightline of the perspective projection looks down from above, the projected image is drawn into an object drawing area which is a lower part of an image picture. A background layer representing the stratosphere is separately generated by two-dimensionally drawing a background image, in which the stratosphere (hatched area) is opaque, while the remaining area is transparent. The boundary between the opaque portion and the transparent portion forms a curved line that is convex upward to express a curved horizon. The background layer is superimposed in front of the projected image, not behind the projected image, thereby covering an upper edge portion of the projected image including a straight-lined upper edge, so as to provide a curved boundary realizing a curved pseudo horizon in the image picture.
1. A three-dimensional image output device that outputs a three-dimensional image in which an object is drawn three-dimensionally, the three-dimensional image output device comprising: a three-dimensional model storage that stores a three-dimensional model representing a three-dimensional shape of the object; a projecting section that uses the three-dimensional model and generates a three-dimensional object image that expresses the object three-dimensionally; a background layer generating section that generates a background layer, in which a background image of the three-dimensional image is drawn to have a transparent portion and an opaque portion; and an image output controller that superimposes the background layer on a front surface of the three-dimensional object image to generate the three-dimensional image and outputs the three-dimensional image, wherein at least one of a generating condition of the three-dimensional object image and a generating condition of the background layer is adjusted to cause the opaque portion ...
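
The distinctive detail is that the background layer is composited in front of the projected image, with its opaque part bounded by a curve so that it covers the straight upper edge of the projection. A minimal sketch of that idea (the parabolic boundary and the alpha-compositing formulation are illustrative choices):

    import numpy as np

    def curved_horizon_alpha(h, w, horizon_row, sag=40):
        # Opaque (255) above a boundary that is highest at the image centre,
        # producing a pseudo horizon that bulges upward in the middle.
        cols = np.arange(w)
        boundary = horizon_row + sag * ((cols - w / 2) / (w / 2)) ** 2
        return np.where(np.arange(h)[:, None] < boundary[None, :], 255, 0).astype(np.uint8)

    def composite_front(projected_rgb, background_rgb, alpha):
        # Superimpose the background layer in front of the projected image:
        # opaque pixels hide the projection, transparent ones let it through.
        a = (alpha / 255.0)[..., None]
        return (a * background_rgb + (1 - a) * projected_rgb).astype(np.uint8)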

More details
Publication date: 11-01-2018

ELECTRONIC APPARATUS AND DISPLAYING METHOD THEREOF

Number: US20180012379A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

A display apparatus and a displaying method thereof. The electronic apparatus includes a display, a bezel configured to house the display and including a groove having a designated size and depth, an image sensor configured to acquire an image of a shadow that is generated in the groove by light incident from outside, and a processor configured to control the display to display a graphic effect based on the shadow image acquired by the image sensor.
1. An electronic apparatus, comprising: a display; a bezel configured to house the display and comprising a groove; an image sensor configured to acquire an image of a shadow which is generated in the groove by light from outside; and a processor configured to control the display to display a graphic effect based on the shadow image acquired by the image sensor.
2. The apparatus as claimed in claim 1, wherein the groove comprises a bottom surface and a plurality of side surfaces, wherein the bottom surface has a semi-transparent film, and wherein, in response to the light being incident on at least one of the plurality of side surfaces, a shadow of the at least one side is generated on the semi-transparent film of the bottom surface.
3. The apparatus as claimed in claim 2, wherein the image sensor acquires the shadow of the at least one side surface which is generated on the semi-transparent film of the bottom surface as the shadow image.
4. The apparatus as claimed in claim 1, wherein the processor determines at least one of a direction of the light, an intensity of the light, a direction of the shadow, and a length of the shadow, based on the shadow image acquired by the image sensor.
5. The apparatus as claimed in claim 4, wherein the processor determines a contrast of the light by determining a grayscale average value of pixels around right, left, upper and lower boundaries of the image based on the shadow image acquired by the image sensor.
6. The apparatus as ...
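
Claim 5 hints at how the light can be characterised from the groove image: compare grey-level averages near the image boundaries. A toy sketch along those lines (the margin size and the opposite-edge heuristic are illustrative assumptions):

    import numpy as np

    def light_from_shadow(gray, margin=8):
        # Mean grey level near each edge of the groove's shadow image; the
        # darkest edge is taken as the shadow side, so the light is assumed
        # to arrive from the opposite edge.
        means = {
            "left":  gray[:, :margin].mean(),
            "right": gray[:, -margin:].mean(),
            "upper": gray[:margin, :].mean(),
            "lower": gray[-margin:, :].mean(),
        }
        opposite = {"left": "right", "right": "left", "upper": "lower", "lower": "upper"}
        return opposite[min(means, key=means.get)], means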

More details
Publication date: 11-01-2018

METHOD FOR DEPICTING AN OBJECT

Number: US20180012394A1

The invention relates to technologies for visualizing a three-dimensional (3D) image. According to the claimed method, a 3D model is generated, images of an object are produced, the 3D model is visualized, the 3D model together with a reference pattern and also coordinates of texturing portions corresponding to polygons of the 3D model are stored in a depiction device, at least one frame of the image of the object is produced, the object in the frame is identified on the basis of the reference pattern, a matrix of conversion of photo-image coordinates into dedicated coordinates is generated, and elements of the 3D model are coloured in the colours of the corresponding elements of the image by generating a texture of the image sensing area using the coordinate conversion matrix and data interpolation, with subsequent designation of the texture of the 3D model.
1-16. (canceled)
17. A method of displaying a virtual object on a computing device comprising a memory, a camera, and a display, the memory being adapted to store at least one reference image and at least one 3D model, wherein each reference image is associated with one 3D model, the method comprising: acquiring an image from the camera; recognizing the virtual object on the acquired image based upon a reference image; forming a 3D model associated with the reference image; forming a transformation matrix for juxtaposing coordinates of the acquired image with coordinates of the 3D model; juxtaposing coordinates of texturized sections of the acquired image to corresponding sections of the 3D model; painting the sections of the 3D model using colors and textures of the corresponding sections of the acquired image; and displaying the 3D model over a video stream using augmented reality tools and/or computer vision algorithms.
18. The method of claim 17, wherein the 3D model is represented by polygons.
19. The method of claim 18, wherein the transformation matrix is adapted to juxtapose coordinates of the texturized ...
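
The heart of the method is the coordinate-conversion matrix that juxtaposes acquired-image coordinates with sections of the 3D model. A minimal sketch of applying such a matrix as a planar homography and sampling colours for the model's texturized sections (nearest-neighbour sampling stands in for the patent's data interpolation):

    import numpy as np

    def apply_transform(H, pts):
        # Map (N,2) image points through a 3x3 homogeneous transformation matrix.
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])
        mapped = pts_h @ H.T
        return mapped[:, :2] / mapped[:, 2:3]

    def sample_colors(photo, uv):
        # Fetch photo colours at the mapped coordinates (nearest neighbour).
        h, w = photo.shape[:2]
        xy = np.clip(np.rint(uv).astype(int), 0, [w - 1, h - 1])
        return photo[xy[:, 1], xy[:, 0]]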

More details
Publication date: 11-01-2018

IMMERSIVE CONTENT FRAMING

Number: US20180012397A1
Author: Carothers Trevor
Assignee:

A virtual view of a scene may be generated through the use of various systems and methods. In one exemplary method, image data may be received from a tiled array of cameras. The image data may depict a capture volume comprising a scene volume in which a scene is located. A viewing volume may be defined. A virtual occluder may be positioned at least partially within the capture volume such that a virtual window of the virtual occluder is between the viewing volume and the scene. A virtual viewpoint within the viewing volume may be selected. A virtual view may be generated to depict the scene from the virtual viewpoint.
1. A method for generating a virtual view of a scene, the method comprising: from a tiled array of cameras, receiving image data depicting a capture volume comprising a scene volume having a scene; at a processor, defining a scene volume within the capture volume, the scene volume having a scene; at the processor, defining a viewing volume; at the processor, positioning a virtual occluder at least partially within the capture volume such that a virtual window of the virtual occluder is between the viewing volume and the scene; at an input device, receiving input selecting a virtual viewpoint within the viewing volume; and at the processor, generating a virtual view depicting the scene from the virtual viewpoint.
2. The method of claim 1, wherein the tiled array of cameras comprises a plurality of cameras arranged in a planar array.
3. The method of claim 1, wherein the tiled array of cameras comprises a plurality of cameras arranged in a semispherical array, with each of the cameras oriented toward a center of the semispherical array.
4. The method of claim 1, wherein the tiled array of cameras comprises a plurality of cameras arranged in a semispherical array, with each of the cameras oriented away from a center of the semispherical array.
5. The method of claim 1, wherein the virtual window is positioned after selection of the virtual ...
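
Generating the virtual view hinges on which scene points are visible through the virtual window of the occluder. A small geometric sketch, assuming a rectangular window given by a corner and two edge vectors (illustrative, not the patent's implementation):

    import numpy as np

    def sees_through_window(eye, point, corner, edge_u, edge_v):
        # True if the segment eye->point crosses the window rectangle
        # spanned by edge_u and edge_v at the given corner.
        eye, point, c0 = (np.asarray(v, float) for v in (eye, point, corner))
        u, v = np.asarray(edge_u, float), np.asarray(edge_v, float)
        n = np.cross(u, v)                      # window plane normal
        d = point - eye
        denom = n.dot(d)
        if abs(denom) < 1e-12:
            return False                        # segment parallel to the plane
        t = n.dot(c0 - eye) / denom
        if not 0.0 <= t <= 1.0:
            return False                        # plane not crossed between the two
        p = eye + t * d
        a, b = u.dot(p - c0) / u.dot(u), v.dot(p - c0) / v.dot(v)
        return 0.0 <= a <= 1.0 and 0.0 <= b <= 1.0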

More details
Publication date: 11-01-2018

METHODS AND SYSTEMS OF GENERATING A PARAMETRIC EYE MODEL

Number: US20180012401A1
Assignee:

Systems and techniques for generating a parametric eye model of one or more eyes are provided. The systems and techniques may include obtaining eye data from an eye model database. The eye data includes eyeball data and iris data corresponding to a plurality of eyes. The systems and techniques may further include generating an eyeball model using the eyeball data. Generating the eyeball model includes establishing correspondences among the plurality of eyes. The systems and techniques may further include generating an iris model using the iris data. Generating the iris model includes sampling one or more patches of one or more of the plurality of eyes using an iris control map and merging the one or more patches into a synthesized texture. The systems and techniques may further include generating the parametric eye model that includes the eyeball model and the iris model.
1. A computer-implemented method of generating a parametric eye model of one or more eyes, comprising: obtaining eye data from an eye model database, the eye data including eyeball data and iris data corresponding to a plurality of eyes; generating an eyeball model using the eyeball data, wherein generating the eyeball model includes establishing correspondences among the plurality of eyes; generating an iris model using the iris data, wherein generating the iris model includes sampling one or more patches of one or more of the plurality of eyes using an iris control map and merging the one or more patches into a synthesized texture; and generating the parametric eye model, the parametric eye model including the eyeball model and the iris model.
2. The method of claim 1, further comprising: generating a vein model including a vein network, wherein veins in the network are grown from seed points in directions and by amounts controlled by one or more vein recipes, and wherein the parametric eye model includes the vein model.
3. The method of claim 1, wherein the eyeball model includes a principal ...

More details
Publication date: 11-01-2018

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT

Number: US20180012529A1
Author: CHIBA Taketo
Assignee: RICOH COMPANY, LTD.

An information processing apparatus configured to paste a full-spherical panoramic image along an inner wall of a virtual three-dimensional sphere; calculate an arrangement position for arranging a planar image closer to a center point of the virtual three-dimensional sphere than the inner wall, in such an orientation that a line-of-sight direction from the center point to the inner wall and a perpendicular line of the planar image are parallel to each other, the planar image being obtained by pasting an embedding image to be embedded in the full-spherical panoramic image on a two-dimensional plane; and display a display image on a display unit. The display image is a two-dimensional image viewed from the center point in the line-of-sight direction in a state in which the full-spherical panoramic image is pasted along the inner wall of the virtual three-dimensional sphere and the planar image is arranged at the arrangement position.
1. An information processing apparatus comprising: a pasting unit configured to paste a full-spherical panoramic image obtained by imaging an omnidirectional range, along an inner wall of a virtual three-dimensional sphere arranged in a virtual three-dimensional space; an acquiring unit configured to acquire an embedding image to be embedded in the full-spherical panoramic image; a generating unit configured to generate a planar image obtained by pasting the embedding image on a two-dimensional plane; a calculating unit configured to calculate an arrangement position for arranging the planar image closer to a center point of the virtual three-dimensional sphere than the inner wall, in such an orientation that a line-of-sight direction from the center point to the inner wall and a perpendicular line of the planar image are parallel to each other; and a display control unit configured to display a display image on a display unit, the display image being obtained by converting a state in which the full-spherical panoramic image is pasted along ...
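
The calculating unit's job reduces to placing the planar image on the viewing ray, nearer the centre than the sphere wall, with its perpendicular parallel to the line of sight. A minimal sketch of that computation (the fractional radius is an illustrative parameter):

    import numpy as np

    def plan_placement(center, gaze_dir, sphere_radius, fraction=0.8):
        # Position the planar image at fraction*radius along the gaze and
        # face it back toward the viewer, so the plane's perpendicular is
        # parallel to the line-of-sight direction.
        gaze = np.asarray(gaze_dir, float)
        gaze /= np.linalg.norm(gaze)
        position = np.asarray(center, float) + fraction * sphere_radius * gaze
        return position, -gaze                  # (arrangement position, plane normal)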

More details
Publication date: 10-01-2019

SYSTEMS, METHODS, AND MEDIA FOR SIMULATING DEFORMATIONS OF NONLINEAR ELASTIC BODIES

Number: US20190012831A1
Assignee:

In accordance with some embodiments, systems, methods and media for simulating deformation of an elastic body are provided. In some embodiments, a method comprises: determining, for each macroblock, a stiffness matrix K of a portion of a model of a non-linear elastic solid partitioned into cells; converting K into block form to include a submatrix of K for nodes between internal cells of a first macroblock; determining at least a portion of K; receiving input corresponding to force applied to cells of the model; determining displacements of exterior nodes of the first macroblock using the input and the portion of K; determining displacements of interior nodes of the first macroblock using the input and the displacements of exterior nodes; determining updated positions of the cells based on the displacements of the exterior nodes; and causing the model to be presented using the updated positions.
1. A method for simulating deformation of an elastic body, the method comprising: determining, using a hardware processor, for each of a plurality of macroblocks, including a first macroblock, a stiffness matrix K corresponding to at least a portion of a model of a non-linear elastic solid that is partitioned into a plurality of cells, wherein entries in the stiffness matrix K correspond to nodes associated with cells of the macroblock; ...
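
The solve order in the abstract (exterior displacements first, interior recovered afterwards) matches classical static condensation of a macroblock's stiffness matrix. A minimal dense sketch using a Schur complement, assuming a linearised stiffness matrix K and force vector f (the index lists and names are illustrative):

    import numpy as np

    def condense_and_solve(K, f, interior, exterior):
        # Eliminate interior nodes of a macroblock via the Schur complement,
        # solve for exterior displacements, then recover interior ones.
        Kii = K[np.ix_(interior, interior)]
        Kie = K[np.ix_(interior, exterior)]
        Kei = K[np.ix_(exterior, interior)]
        Kee = K[np.ix_(exterior, exterior)]
        fi, fe = f[interior], f[exterior]
        Kii_inv = np.linalg.inv(Kii)                      # reusable per macroblock
        S = Kee - Kei @ Kii_inv @ Kie                     # Schur complement
        ue = np.linalg.solve(S, fe - Kei @ Kii_inv @ fi)  # exterior displacements
        ui = Kii_inv @ (fi - Kie @ ue)                    # interior displacements
        return ue, ui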

More details
Publication date: 14-01-2021

SYSTEMS AND ASSOCIATED METHODS FOR CREATING A VIEWING EXPERIENCE

Number: US20210012557A1
Author: Rowley Marc
Assignee:

Systems and processes generate a viewing experience by determining location data and movement data of (a) at least one object and (b) at least one participant within an event area. A three-dimensional model of the event area, the participant and the object is determined based upon the location data and the movement data. A viewpoint of a spectator defines an origin, relative to the three-dimensional model, and a direction of the viewing experience. The viewing experience is generated for the viewpoint at least in part from the three-dimensional model to include one or more of augmented reality, mixed reality, extended reality, and virtual reality.
1. A process for generating a viewing experience, comprising: determining, in real-time, location data and movement data of both a participant and an object within an event area; generating, in real-time, a three-dimensional model of the event area, the participant, and the object based, at least in part, upon the location data and the movement data; receiving, from a spectator, a viewpoint defining an origin and a direction of the viewing experience relative to the three-dimensional model, the viewpoint being any origin and any direction; determining when an obstruction is located between the viewpoint and one of the object and the participant; and generating the viewing experience from the three-dimensional model for the viewpoint and with at least part of the obstruction removed from the viewing experience.
2. The process of claim 1, the step of generating comprising implementing bokeh by blurring parts of the viewing experience that are less important to reduce latency of generating the viewing experience.
3. The process of claim 1, further comprising enhancing the three-dimensional model using light-field data captured relative to the object and the participant.
4. The process of claim 1, further comprising enhancing the three-dimensional model using light-field data captured relative to the viewpoint.
5. The process of ...

More details
Publication date: 14-01-2021

SYSTEMS AND METHODS FOR GENERATING AND INTELLIGENTLY DISTRIBUTING FORMS OF EXTENDED REALITY CONTENT

Number: US20210012578A1
Assignee:

A system for facilitating cross-platform extended reality content sharing is configurable to receive a model file that includes a model, create a plurality of extended reality (XR) model or scene files based on the model file, create a universal link, and send the universal link to a developer system. Each of the plurality of XR model or scene files is formatted for rendering a representation of the model on a different, particular XR rendering platform included in a list of XR rendering platforms. The universal link is operable to configure an end user device to send a request for an XR model or scene file. The universal link points to an endpoint that comprises logic operable to determine, based on the request, a particular XR model or scene file that is formatted for rendering a representation of the model using an XR rendering platform associated with the end user device.
1. A system for facilitating cross-platform extended reality content sharing, comprising: one or more processors; and memory containing instructions executable by the one or more processors whereby the system is operable to: receive, from a developer system, a model file comprising a model; create, based on the model file, a plurality of extended reality (XR) model or scene files, each of the plurality of XR model or scene files being formatted for rendering a representation of the model on a different, particular XR rendering platform included in a list of XR rendering platforms; create a universal link that is operable, when selected at an end user device, to configure the end user device to send a request for an XR model or scene file, the request comprising information for identifying an XR rendering platform associated with the end user device, the universal link pointing to an endpoint that comprises logic operable to determine, based on the information for identifying the XR rendering platform associated with the end user device, a particular XR model or scene file of the plurality of XR model or scene files, the particular XR model or scene file being formatted for rendering a ...
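
The endpoint behind the universal link only has to map the requesting device's XR platform to the matching pre-created file. A toy sketch of that dispatch logic (platform detection via User-Agent and the file formats are illustrative assumptions, not the patent's scheme):

    XR_FILES = {
        "arkit":  "model.usdz",   # e.g. an iOS-oriented format
        "arcore": "model.glb",    # e.g. an Android-oriented format
        "webxr":  "model.gltf",   # generic browser fallback
    }

    def resolve_model_file(headers):
        # Pick the XR model/scene file formatted for the requesting device.
        ua = headers.get("User-Agent", "").lower()
        if "iphone" in ua or "ipad" in ua:
            platform = "arkit"
        elif "android" in ua:
            platform = "arcore"
        else:
            platform = "webxr"
        return XR_FILES[platform]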

More details
Publication date: 14-01-2021

Systems and methods for generating and intelligently distributing forms of virtual reality content

Number: US20210012579A1
Assignee: Seek XR Inc

A method of providing virtual reality (VR) content can include the acts of, at a server: obtaining a 3D image file, creating a plurality of VR models or scene files from the 3D image for each VR rendering platform included in a list of VR rendering platforms, storing each VR model or scene file within a data store, receiving a request for a VR model or scene file as a result of a universal link being selected at an end user device, wherein the universal link points to an endpoint at the server that comprises logic to determine which of the plurality of stored VR models or scene files to provide to an entity accessing the universal link, determining a VR rendering platform associated with the end user device, and determining a particular VR model or scene file matching the VR rendering platform.

More details
Publication date: 09-01-2020

AUTOMATED VIRTUAL ARTIFACT GENERATION THROUGH NATURAL LANGUAGE PROCESSING

Number: US20200013211A1
Assignee:

Embodiments of the present invention provide a method, system and computer program product for automated virtual artifact generation through natural language processing. In an embodiment of the invention, a method for automated virtual artifact generation includes loading electronic documentation for a real world object into memory of a computer, parsing by a processor of the computer the electronic documentation into different words and storing the different words. The method further includes natural language processing the different words to determine different physical and functional attributes of the real world object, generating a virtual artifact in the memory of the computer based upon a mapping of the physical attributes of the real world object to structural attributes of the virtual artifact and a mapping of the functional attributes of the real world object to functional attributes of the virtual artifact, and rendering the virtual artifact in the virtual reality environment.
1. A method for automated virtual artifact generation through natural language processing comprising: loading electronic documentation for a real world object into memory of a computer; parsing by a processor of the computer the electronic documentation into different words and storing the different words in the memory; natural language processing the different words in the memory to determine different physical and functional attributes of the real world object; generating a virtual artifact in the memory of the computer based upon a mapping of the physical attributes of the real world object to structural attributes of the virtual artifact and a mapping of the functional attributes of the real world object to functional attributes of the virtual artifact; and rendering the virtual artifact in a virtual reality environment.
2. The method of claim 1, wherein the natural language processing classifies the different functional attributes of the real world object based upon a library of pre ...

More details
Publication date: 09-01-2020

AUGMENTING REAL-TIME VIEWS OF A PATIENT WITH THREE-DIMENSIONAL DATA

Number: US20200013224A1
Assignee:

Augmenting real-time views of a patient with three-dimensional (3D) data. In one embodiment, a method may include identifying 3D data for a patient with the 3D data including an outer layer and multiple inner layers, determining virtual morphometric measurements of the outer layer from the 3D data, registering a real-time position of the outer layer of the patient in a 3D space, determining real-time morphometric measurements of the outer layer of the patient, automatically registering the position of the outer layer from the 3D data to align with the registered real-time position of the outer layer of the patient in the 3D space using the virtual morphometric measurements and using the real-time morphometric measurements, and displaying, in an augmented reality (AR) headset, one of the inner layers from the 3D data projected onto real-time views of the outer layer of the patient.
1. A method for augmenting real-time, non-image actual views of a patient with three-dimensional (3D) data, the method comprising: identifying 3D data for the patient, the 3D data including an outer layer of the patient and multiple inner layers of the patient; and displaying, in an augmented reality (AR) headset, one of the inner layers of the patient from the 3D data projected onto real-time, non-image actual views of the outer layer of the patient, the projected inner layer of the patient from the 3D data being confined within a volume of a virtual 3D shape.
2. The method as recited in claim 1, wherein: the virtual 3D shape is a virtual box; and the virtual box includes a top side, a bottom side, a left side, a right side, a front side, and a back side.
3. The method of claim 1, wherein: the virtual 3D shape is configured to be controlled to toggle between displaying and hiding lines of the virtual 3D shape; and the virtual 3D shape is configured to be controlled to reposition two-dimensional (2D) slices and/or 3D slices of the projected inner layer of the patient from the 3D data.
4. The ...
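
The automatic registration step aligns the outer layer of the 3D data with the patient's measured outer layer. One standard way to realise such an alignment, assuming corresponding landmark sets derived from the morphometric measurements, is a Kabsch-style rigid fit; this sketch is illustrative, not the patent's algorithm:

    import numpy as np

    def rigid_register(virtual_pts, realtime_pts):
        # Rotation R and translation t aligning (N,3) virtual landmarks
        # with their real-time counterparts: p -> R @ p + t.
        a = np.asarray(virtual_pts, float)
        b = np.asarray(realtime_pts, float)
        ca, cb = a.mean(axis=0), b.mean(axis=0)
        U, _, Vt = np.linalg.svd((a - ca).T @ (b - cb))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
        R = Vt.T @ D @ U.T
        return R, cb - R @ ca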

More details
Publication date: 19-01-2017

METHOD FOR A MOBILE DIMENSIONING DEVICE TO USE A DYNAMIC ACCURACY COMPATIBLE WITH NIST STANDARD

Number: US20170016714A1
Assignee:

A mobile dimensioning device, i.e. a mobile dimensioner, is described that uses a dynamic accuracy while still being compatible with the NIST standard. Even if the accuracy division is dynamic and not predetermined, a mobile dimensioning device of the present invention reports the actual dimensioning prior to measurement capture and can therefore be certified and used in commercial transactions.
1. A mobile dimensioning device, comprising: a display; non-volatile storage; one or more sensors; an input subsystem; one or more processors; and memory containing instructions executable by the one or more processors whereby the device is operable to: derive one or more accuracy parameters based on information received from the one or more sensors for a measurement environment of an object being measured; compute an accuracy level based on the one or more accuracy parameters; determine if the accuracy level corresponds to a sufficient measurement environment; if the accuracy level corresponds to a sufficient measurement environment, display, on the display, an indication that the measurement environment is sufficient and a capture icon to enable the measurement capture; in response to an input received at the capture icon, capture the measurement; display, on the display, the dimensions of the object; and record the dimensions of the object.
2. The device of claim 1, wherein the accuracy level is the accuracy division as defined by the National Institute of Standards and Technology (NIST) standard.
3. The device of claim 1, wherein the accuracy parameters comprise at least one of the group consisting of: distance to the object, viewing angle relative to the object, temperature, ambient light, and quality of data from the one or more sensors.
4. The device of claim 1, wherein the one or more sensors comprise at least one of the group consisting of: optical sensors and measurement sensors.
5. The device of claim 4, wherein ...
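
The claims describe deriving accuracy parameters, computing an accuracy level, and enabling capture only when the environment suffices. A toy model of that gating (every threshold and factor here is invented for illustration; a certified device would use calibrated values):

    def accuracy_division_cm(distance_m, view_angle_deg, ambient_lux, sensor_quality):
        # Start from the finest division and coarsen it for each
        # unfavourable condition of the measurement environment.
        division = 0.5
        if distance_m > 3.0:       division *= 2   # object far away
        if view_angle_deg > 45.0:  division *= 2   # oblique viewing angle
        if ambient_lux < 100.0:    division *= 2   # dim lighting
        if sensor_quality < 0.5:   division *= 2   # noisy sensor data
        return division

    def capture_enabled(division_cm, required_cm=2.0):
        # Show the capture icon only if the achievable accuracy division
        # is at least as fine as the transaction requires.
        return division_cm <= required_cm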

More details
Publication date: 18-01-2018

SHANGHAI UNIVERSITY OF ENGINEERING SCIENCE

Number: US20180017375A1
Assignee:

The present invention relates to a parallel image measurement method oriented to the insulating layer thickness of a radial symmetrical cable section. The method conducts the non-contact high-accuracy measurement based on the machine vision and the image analysis, adopts a GPU multi-core parallel platform for the high-speed measurement, extracts the useful information from the section image of the radial symmetrical cable, and then measures the insulating layer thickness. Compared with the prior art, the method can lower the time consumed for the accurate measurement, fill in the blank of high-accuracy parallel image measurement of the insulating layer thickness of the radial symmetrical cable section in the domestic cable industry, break down the monopoly and technology blockade by related foreign manufacturers, improve the technology level of on-line testing of product quality in China, and expedite the production automation progress of domestic manufacturers.
1. A parallel image measurement method oriented to the insulating layer thickness of a radial symmetrical cable section, characterized in that said method conducts the non-contact high-accuracy measurement based on the machine vision and the image analysis, adopts a GPU multi-core parallel platform for the high-speed measurement, extracts the useful information from an image of said radial symmetrical cable section and then measures said insulating layer thickness.
2. The parallel image measurement method oriented to the insulating layer thickness of a radial symmetrical cable section according to claim 1, characterized in that said method comprises the following steps: 1) reading an image shot, calibrated by an industrial CCD camera; 2) extracting an inner and an outer contour of said radial symmetrical cable section from said image and calculating a mass center of said cable section; 3) subjecting the pixels of said inner contour to the sub-pixel pinpointing, connecting said mass center and said ...
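
After the contours and mass centre are extracted, the thickness measurement itself can be sketched as sampling radial rays from the mass centre and differencing the radii of the outer and inner contours. An illustrative serial sketch under those assumptions; the patent runs the per-ray work on a GPU multi-core platform:

    import numpy as np

    def insulation_thickness(inner, outer, center, n_rays=360):
        # Per-angle thickness: radius of the outer contour minus radius of
        # the inner contour along rays from the mass centre (pixel units).
        def polar(contour):
            d = np.asarray(contour, float) - center
            return np.arctan2(d[:, 1], d[:, 0]), np.hypot(d[:, 0], d[:, 1])
        ai, ri = polar(inner)
        ao, ro = polar(outer)
        out = []
        for a in np.linspace(-np.pi, np.pi, n_rays, endpoint=False):
            # nearest contour sample to each ray, by wrapped angular distance
            r_in = ri[np.argmin(np.abs(np.angle(np.exp(1j * (ai - a)))))]
            r_out = ro[np.argmin(np.abs(np.angle(np.exp(1j * (ao - a)))))]
            out.append(r_out - r_in)
        return np.asarray(out)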

More details
Publication date: 21-01-2016

MAINTENANCE ASSISTANCE FOR AN AIRCRAFT BY AUGMENTED REALITY

Number: US20160019212A1
Author: Soldani Siegfried
Assignee:

A method for supporting aircraft maintenance, performed in a system comprising a display selection device and a portable device with a camera and an augmented reality display. The method comprises the steps of: acquiring images of an item of equipment of the aircraft with the camera and sending them to the display selection device; identifying the equipment present in these images with the display selection device and determining the identifier thereof, referred to as the useful identifier; on the basis of the useful identifier, sending maintenance assistance data from the display selection device to the augmented reality display; and, in response, displaying, in augmented reality, images corresponding to the data with the augmented reality display device. The method also comprises steps for displaying guidance data guiding towards one item of equipment in particular. A device for implementing such a method is also disclosed.

More details
Publication date: 19-01-2017

Video imaging to assess specularity

Number: US20170018114A1
Assignee: Microsoft Technology Licensing LLC

A method for virtual, three-dimensional modeling of a subject using a depth-imaging camera operatively coupled to a modeling computer. A brightness image and a coordinate depth image of the subject acquired from each of a plurality of inequivalent vantage points are received from the depth-imaging camera. An angle-dependent reflectance is determined based on the brightness and coordinate depth images acquired from each of the vantage points.
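
With brightness and depth images from several vantage points, specularity shows up as a strong dependence of brightness on the angle between surface normal and view direction. A minimal sketch that bins brightness by that angle, assuming normals and view directions have already been derived from the coordinate depth images (the binning scheme is illustrative):

    import numpy as np

    def reflectance_by_angle(brightness, normals, view_dirs, n_bins=18):
        # Mean observed brightness per angular bin between surface normal
        # and view direction; a sharp peak near 0 suggests a specular surface.
        n = normals / np.linalg.norm(normals, axis=-1, keepdims=True)
        v = view_dirs / np.linalg.norm(view_dirs, axis=-1, keepdims=True)
        ang = np.degrees(np.arccos(np.clip((n * v).sum(axis=-1), -1.0, 1.0)))
        bins = np.minimum((ang * n_bins / 90.0).astype(int), n_bins - 1)
        return np.array([brightness[bins == b].mean() if (bins == b).any() else np.nan
                         for b in range(n_bins)])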

More details
Publication date: 19-01-2017

VISUALISATION OF WORK STATUS FOR A MINE WORKSITE

Number: US20170018115A1
Assignee: Caterpillar of Australia Pty

Described herein is a computer-implemented method for illustrating work status for an area of interest of a mine worksite. The method comprises determining a dataset comprising recorded data representing an elevation map of a surface of the mine worksite for at least the area of interest. The elevation map is based on measured data for the surface. The dataset also comprises reference data representing a reference elevation topography for at least the area of interest. The method further comprises generating model data, based on the determined dataset, defining a 3-dimensional model for illustrating, in an image portraying a 3-dimensional view of the model, divergence between the elevation map and the reference elevation topography.
1. A computer-implemented method for illustrating work status for an area of interest of a mine worksite, wherein the method comprises: determining a dataset comprising recorded data representing an elevation map of a surface of the mine worksite for at least the area of interest, the elevation map being based on measured data for the surface, and reference data representing a reference elevation topography for at least the area of interest; and generating model data, based on the determined dataset, defining a 3-dimensional model for illustrating, in an image portraying a 3-dimensional view of the model, divergence between the elevation map and the reference elevation topography.
2. The method according to claim 1, wherein the reference elevation topography is a designed elevation topography that is intended for at least the area of interest.
3. The method according to claim 1, wherein the divergence represents differences in elevation between the elevation map and the reference elevation topography at respective positions in the area of interest.
4. The method according to claim 3, wherein differences in elevation are represented in the image as bars, each bar having a length that is indicative of a magnitude of ...

More details
Publication date: 21-01-2016

METHOD AND APPARATUS FOR DISPLAYING POINT OF INTEREST

Number: US20160019704A1
Assignee:

A method and apparatus for displaying a point of interest. In the embodiments of the present invention, by means of acquiring a location of a target object and then determining a visible point of interest at the location as a target point of interest, the target point of interest can be displayed. Since an invisible point of interest which cannot be seen at the location of the target object is no longer displayed, but a visible point of interest which can be seen at the location of the target object is displayed, the displayed point of interest can essentially satisfy the true locating intention of a user. Therefore, the problem in the prior art of the increase in data interaction between an application and a query engine caused by the user repeatedly querying via the application can be avoided, thereby reducing the processing burden of the query engine.
1-10. (canceled)
11. A method for displaying a point of interest, comprising: acquiring a location of a target object; determining a visible point of interest at the location as a target point of interest; and displaying the target point of interest.
12. The method of claim 11, further comprising: acquiring a street view image to be processed; performing recognition processing on the street view image to acquire a recognition result including text information about the street view image; and selecting a candidate point of interest matching the text information as a visible point of interest.
13. The method of claim 12, wherein said acquiring the street view image, said performing the recognition processing and said selecting the candidate point of interest each occur prior to said determining the visible point of interest.
14. The method of claim 12, wherein said selecting the candidate point of interest includes: acquiring a degree of similarity between the text information and candidate points of interest to be matched; and selecting at least one of the candidate points of interest as visible points of interest ...

More details
Publication date: 21-01-2016

CONTOUR COMPLETION FOR AUGMENTING SURFACE RECONSTRUCTIONS

Number: US20160019711A1
Assignee:

Surface reconstruction contour completion embodiments are described which provide dense reconstruction of a scene from images captured from one or more viewpoints. Both a room layout and the full extent of partially occluded objects in a room can be inferred using a Contour Completion Random Field model to augment a reconstruction volume. The augmented reconstruction volume can then be used by any surface reconstruction pipeline to show previously occluded objects and surfaces.

More details
Publication date: 21-01-2016

RETAIL SPACE PLANNING SYSTEM

Number: US20160019717A1
Author: PILON Charles, YOPP John
Assignee:

A three dimensional virtual retail space representing a physical space for designing a retail store space layout is provided. A three dimensional virtual object representing at least one physical object for the retail space is provided. Input can be received from a virtual reality input interface for interacting with the virtual object in the virtual retail space. Based on the input, the virtual object can be placed in the virtual retail space. An updated video signal can be sent to a head mounted display that provides a three dimensional representation of the virtual object in the virtual space.

More details
Publication date: 03-02-2022

System and Method for Simulating an Immersive Three-Dimensional Virtual Reality Experience

Number: US20220036659A1
Assignee: Individual

The present invention brings concerts directly to the people by streaming, preferably, 360° videos played back on a virtual reality headset and, thus, creating an immersive experience, allowing users to enjoy a performance of their favorite band at home while sitting in the living room. In some cases, 360° video material may not be available for a specific concert and the system has to fall back to traditional two-dimensional (2D) video material. For such cases, the present invention takes the limited space of a conventional video screen and expands it to a much wider canvas, by expanding color patterns of the video into the surrounding space. The invention may further provide seamless blending of the 2D medium into a 3D space and additionally enhancing the space with computer-generated effects and virtual objects that directly respond to the user's biometric data and/or visual and acoustic stimuli extracted from the played video.

More details
Publication date: 03-02-2022

Augmented reality content rendering via albedo models, systems and methods

Number: US20220036661A1
Assignee: NANT HOLDINGS IP LLC

Methods for rendering augmented reality (AR) content are presented. An a priori defined 3D albedo model of an object is leveraged to adjust AR content so that it appears as a natural part of a scene. Disclosed devices recognize a known object having a corresponding albedo model. The devices compare the observed object to the known albedo model to determine a content transformation referred to as an estimated shading (environmental shading) model. The transformation is then applied to the AR content to generate adjusted content, which is then rendered and presented for consumption by a user.
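
In the simplest, purely multiplicative reading of this approach, the estimated shading model is the per-pixel ratio of the observed object to its a priori albedo, and the same shading is then applied to the AR content. A sketch under that assumption (the patent's actual transformation may be richer):

    import numpy as np

    def estimated_shading(observed, albedo, eps=1e-6):
        # Environmental shading as the ratio of observed colour to known albedo.
        return observed.astype(float) / (albedo.astype(float) + eps)

    def shade_ar_content(ar_albedo, shading):
        # Light the AR content with the same environmental shading so it
        # blends into the scene before rendering.
        return np.clip(ar_albedo.astype(float) * shading, 0, 255).astype(np.uint8)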

More details
Publication date: 18-01-2018

Techniques for Built Environment Representations

Number: US20180018502A1
Assignee:

Described are techniques for indoor mapping and navigation. A reference mobile device includes sensors to capture range, depth and position data and processes such data. The reference mobile device further includes a processor that is configured to process the captured data to generate a 2D or 3D mapping of localization information of the device that is rendered on a display unit, execute object recognition to identify types of installed devices of interest in a part of the 2D or 3D device mapping, and integrate the 3D device mapping in the built environment to objects in the environment through capturing point cloud data along with 2D image or video frame data of the built environment.
1. A system for indoor mapping and navigation comprises: a reference mobile device including sensors to capture range, depth and position data, with the mobile device including a depth perception unit, a position estimator, a heading estimator, and an inertial measurement unit to process data received by the sensors from an environment, the reference mobile device further including a processor configured to: process the captured data to generate a 2D or 3D mapping of localization information of the device that is rendered on a display unit; execute object recognition to identify types of installed devices of interest in a part of the 2D or 3D device mapping; and integrate the 3D device mapping in the built environment to objects in the environment through capturing point cloud data along with 2D image or video frame data of the built environment.
2. The system of claim 1, wherein the 2D or 3D object recognition technique is part of the 3D mapping process.
3. The system of claim 1, wherein reference device models are images or 3D data models or Building Information Modelling (BIM) data.
4. The system of wherein the processor is further configured to: load RGB/RGB-D (three color + one depth) image/point cloud data set of a scene; choose interest points; compute scene ...

More details
Publication date: 18-01-2018

System and Method for Generating Enhanced Stereographic Videos of Aircraft Build Processes

Number: US20180018764A1
Assignee: The Boeing Company

Provided is a system and method for generating enhanced stereographic videos of aircraft build processes. Specifically, the system comprises a stereoscopic recording device configured to capture a plurality of stages of an aircraft build process. The system further comprises one or more processors, memory, and one or more programs stored in the memory that comprise instructions for execution by the system to build a stereographic library including repositories of 3D video corresponding to the plurality of stages of the aircraft build process. The system then generates an enhanced walkthrough video of the aircraft build process. The enhanced walkthrough video may include a parallax grid overlay and/or a thermal scan overlay integrated into the video. The system may then analyze the enhanced walkthrough video using post-processing analytics to identify anomalies and irregularities that occurred during the aircraft build process.
1. A system, comprising: a stereoscopic recording device configured to capture a plurality of stages of an aircraft build process; one or more processors; memory; and one or more programs stored in the memory, the one or more programs comprising instructions for: building a stereographic library including repositories of 3D video organized by tail number, the repositories of 3D video corresponding to the plurality of stages of the aircraft build process; generating an enhanced walkthrough video of the aircraft build process, the enhanced walkthrough video including one or more of the following: a parallax grid overlay integrated into the video, and a thermal scan overlay integrated into the video; and analyzing the enhanced walkthrough video using post-processing analytics to identify anomalies and irregularities that occurred during the aircraft build process.
2. The system of claim 1, wherein the post-processing analytics includes analyzing patterns and shapes to detect foreign object damage.
3. The system of claim 1, wherein the post- ...

More details
Publication date: 18-01-2018

INTEGRATED METHOD FOR THREE-DIMENSIONAL VISUALIZATION RECONSTRUCTION OF THE FASCICULAR STRUCTURE INSIDE HUMAN PERIPHERAL NERVES

Number: US20180018816A1
Assignee:

The present invention relates to fields of clinical application of nerve defect repair and the medical three-dimensional (3D) printing technology, and provides an integrated visualization method for three-dimensional (3D) reconstruction of the internal structure of human peripheral nerves. The method comprises the following steps: obtaining human peripheral nerves and preparing nerve specimens ex vivo by staining with an iodine preparation in combination with a freeze-drying method; scanning the pretreated peripheral nerves using Micro CT to acquire lossless two-dimensional images, performing binarization processing of the two-dimensional images, and then conducting image segmentation based on textural features to acquire images of nerve fascicles; finally, reconstructing the segmented images into a visualization model by using a supercomputer.
1. A constructing method for visualization models of human peripheral nerve fascicles, comprising the steps of: obtaining human peripheral nerves, and treating the peripheral nerves by staining with an iodine preparation in combination with a freeze-drying method; scanning the pretreated peripheral nerves by using Micro CT to acquire lossless two-dimensional images, and performing binarization processing of the two-dimensional images, then conducting image segmentation based on textural features to acquire images of nerve fascicles; reconstructing the images of nerve fascicles into visualization models.
2. The constructing method according to claim 1, wherein the peripheral nerves are fixed with a fixing agent before staining with the iodine preparation, and the fixing agent is 3.5%-4.5% paraformaldehyde solution, or 9%-11% glutaraldehyde solution.
3. The constructing method according to claim 1, wherein the iodine preparation is 40%-50% iodine solution.
4. The constructing method according to claim 1, wherein the peripheral nerves are wrapped with tinfoil and placed in liquid nitrogen for freezing before the freeze- ...

More details
Publication date: 17-01-2019

Continuous time warp and binocular time warp for virtual and augmented reality display systems and methods

Number: US20190019328A1
Assignee: Magic Leap, Inc.

Embodiments of the present disclosure relate to continuous and/or binocular time warping methods to account for head movement of the user without having to re-render a displayed image. Continuous time warping allows for transformation of an image from a first perspective to a second perspective of the viewer without having to re-render the image from the second perspective. Binocular time warp refers to the late-frame time warp used in connection with a display device including a left display unit for the left eye and a right display unit for the right eye where the late-frame time warp is performed separately for the left display unit and the right display unit. Warped images are sent to the left and the right display units where photons are generated and emitted toward respective eyes of the viewer, thereby displaying an image on the left and the right display units at the same time.
1. A method for transforming an image frame based on an updated position of a viewer, the method comprising: rendering, by a graphics processing unit at a first time, an image frame for a binocular near-eye display device, wherein the image frame corresponds to a first view perspective associated with a first position of the viewer, and wherein the image frame comprises a first image frame and a second image frame; receiving, by the graphics processing unit at a second time later than the first time, data associated with a second position of the viewer; transforming, by the graphics processing unit, at least a portion of the first image frame using the data associated with the second position of the viewer to generate an updated first image frame for a first display of the binocular near-eye display device; transforming, by the graphics processing unit, at least a portion of the second image frame to generate an updated second image frame for a second display of the binocular near-eye display device; transmitting, by the graphics processing unit, the updated first image frame to first ...
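
For a pure head rotation, the late-frame warp needs no re-rendering: the rendered frame is re-projected with a homography built from the pose delta. A minimal sketch of that rotation-only case, assuming camera intrinsics K and rotation matrices for the render-time and display-time head poses (the binocular variant applies this per eye):

    import numpy as np

    def timewarp_homography(K, R_render, R_display):
        # H = K (R_display R_render^T) K^-1 maps pixels of the rendered
        # frame to where they should appear for the updated head orientation.
        return K @ (R_display @ R_render.T) @ np.linalg.inv(K)

    def warp_pixel(H, x, y):
        # Apply the warp to one pixel of the rendered image.
        p = H @ np.array([x, y, 1.0])
        return p[:2] / p[2]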

More details
Publication date: 17-01-2019

Vector Graphics Rendering Techniques

Number: US20190019333A1
Author: Kumar Harish, Sud Anmol
Assignee: ADOBE SYSTEMS INCORPORATED

Vector graphics rendering techniques are described. Graphics processing units (GPUs) can render vector graphics images according to graphic trees having graphic leafs, each representing a graphics object (e.g., a shape) depicted in a vector graphics image. The described techniques involve generating groups of graphics objects depicted in an image such that graphics objects of a group have a same object type, e.g., shape. Transformations are determined that describe how to transform a first graphics object of a group to obtain other graphics objects of the group. The first graphics object is tessellated and a metadata buffer generated for the group having information indicative of the transformations. The metadata buffer is attached to a graphic leaf representing the first graphics object and graphic leafs representing the other graphics objects are removed from the graphic tree. The GPU renders objects by group based on the tessellated object and the metadata buffer's information.
1. In a digital medium environment to render an image on a graphics processing unit (GPU), a method implemented by a computing device, the method comprising: generating, by the computing device, groups of graphics objects depicted in the image such that the graphics objects of a group have a same object type; determining, by the computing device, transformations for the graphics objects of the group, each of the transformations describing how to transform a first graphics object of the group to obtain another graphics object of the group; tessellating, by the computing device, the first graphics object of the group; generating, by the computing device, a metadata buffer for the group, the generated metadata buffer including information indicative of the determined transformations; and rendering, by the computing device, the image on the GPU, in part, by providing the tessellated object and the generated metadata buffer to the GPU for rendering.
2. A method as described in claim 1, wherein the ...

More details
Publication date: 22-01-2015

SYSTEMS AND METHODS FOR IMAGE PROCESSING

Number: US20150022550A1
Assignee: TRUPIK, INC.

Embodiments of the present disclosure can be used to generate an image replica of a person wearing various outfits to help the person visualize how clothes and accessories will look without actually having to try them on. Images can be generated from various angles to provide the person an experience as close as possible to actually wearing the clothes, accessories and looking at themselves in the mirror. Among other things, embodiments of the present disclosure can help remove much of the current uncertainty involved in buying clothing and accessories online.
1. A computer-implemented method comprising: receiving, by a computer system over a network, a first image of a human subject from an image creation device, the first image including a portion of the subject's body; determining, based on the first image, dimensions of the subject's body; receiving, by the computer system over the network, a second image of the subject from the image creation device, the second image including the subject's head; and generating a third image by the computer system, the third image including: an image of the subject's body based on the determined dimensions; and an image of the subject's head based on the second image.
2. The method of claim 1, wherein generating the third image includes merging the image of the subject's body and the image of the subject's head using a uniform morphing technique between a lower portion of the subject's head and an upper portion of the subject's body.
3. The method of claim 2, wherein the lower portion of the subject's head includes the subject's chin, and wherein the upper portion of the subject's body includes the subject's chest.
4. The method of claim 1, wherein the subject is wearing clothes in one or more of the first image and the second image.
5. The method of claim 1, wherein the image creation device comprises a three-dimensional camera.
6. The method of claim 1, wherein determining the dimensions of the subject's body is ...

More details
Publication date: 16-01-2020

PARAMETERIZING 3D SCENES FOR VOLUMETRIC VIEWING

Number: US20200020151A1

A target view to a 3D scene depicted by a multiview image is determined. The multiview image comprises sampled views at sampled view positions distributed throughout a viewing volume. Each sampled view in the sampled views comprises a wide-field-of-view (WFOV) image and a WFOV depth map as seen from a respective sampled view position in the sampled view positions. The target view is used to select, from the sampled views, a set of sampled views. A display image is caused to be rendered on a display of a wearable device. The display image is generated based on a WFOV image and a WFOV depth map for each sampled view in the set of sampled views.
1. A method for selecting sampled views, comprising: determining a target view depicted by a multiview image, the multiview image comprising a plurality of sampled views at a plurality of sampled view positions distributed throughout a volume, each sampled view in the plurality of sampled views of the multiview image comprising a first image and a depth map corresponding to the first image, each sampled view of the multiview image in the plurality of sampled views of the multiview image corresponding to a respective sampled view position in the plurality of sampled view positions; selecting, from the plurality of sampled views of the multiview image, a set of sampled views, each sampled view in the plurality of sampled views corresponding to a respective viewpoint to the 3D scene; rendering a display image on a display of a wearable device, the display image being generated based on one or more portions of the first image and one or more portions of the depth map for each such sampled view in the set of sampled views.
2. The method of claim 1, wherein the multiview image is a part of a sequence of multiview images indexed by a sequence of time instants.
3. The method of claim 1, wherein the target view is determined based on a spatial position and a spatial direction of a wearable device operating in conjunction with the ...
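
Selecting the set of sampled views can be as simple as ranking sampled view positions by distance to the target view, optionally preferring views ahead of the gaze. An illustrative heuristic sketch (the scoring rule is an assumption, not the patent's):

    import numpy as np

    def select_sampled_views(target_pos, target_dir, view_positions, k=4):
        # Rank sampled view positions by distance to the target view, with a
        # small bias toward positions ahead of the gaze direction.
        pos = np.asarray(view_positions, float)
        tp = np.asarray(target_pos, float)
        d = np.linalg.norm(pos - tp, axis=1)
        ahead = (pos - tp) @ (np.asarray(target_dir, float) /
                              np.linalg.norm(target_dir))
        return np.argsort(d - 1e-3 * ahead)[:k]   # indices of chosen views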

23-01-2020 publication date

MULTI-CAMERA DRIVER ASSISTANCE SYSTEM

Number: US20200023773A1
Author: ABHAU Jochen
Assignee:

Disclosed herein is a multi-camera driver assistance system. The system includes a plurality of cameras disposed at different positions of a vehicle to capture images of the vicinity of the vehicle; an image processing unit which generates a virtual view with respect to a predetermined projection surface based on the images; and a display device which displays the virtual view, wherein the predetermined projection surface includes slanted projection surfaces located at lateral sides of the vehicle.

1-7. (canceled)
8. A vision system for a vehicle, the vision system comprising: a camera operable to be disposed at the vehicle so as to have a field of view exterior of the vehicle, the camera configured to capture image data; and an image processor configured to process the image data captured by the camera, wherein the image processor, responsive at least in part to image processing of image data, is configured to output a first virtual view including a first flat projection surface extending from the vehicle by a first distance and a first slanted projection surface positioned at the first distance from the vehicle, and in response to a user input, the image processor is configured to output a second virtual view including a second flat projection surface extending from the vehicle by a second distance and a second slanted projection surface positioned at the second distance from the vehicle.
9. The vision system of claim 8, wherein, in response to a first user input, the image processor is configured to output the second flat projection surface extending from the vehicle by the second distance, which is greater than the first distance, and the second slanted projection surface positioned at the second distance.
10. The vision system of claim 8, wherein, in response to a second user input, the image processor is configured to output the second flat projection surface extending from the vehicle by the second distance, which is less ...
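In cross-section, the flat-plus-slanted projection surface of claims 8-10 reduces to a simple piecewise profile. A minimal sketch, assuming height zero on the flat part and a constant slope beyond the user-selectable flat extent (projection_height, flat_extent and slope are illustrative names, not the patent's terms):

```python
def projection_height(d: float, flat_extent: float, slope: float) -> float:
    """Height of the projection surface at lateral distance d from the vehicle.

    The surface is flat (height 0) out to flat_extent and then rises as a
    slanted plane; changing flat_extent on user input mirrors the switch
    between the first and second virtual views.
    """
    return 0.0 if d <= flat_extent else (d - flat_extent) * slope
```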

26-01-2017 publication date

REAL-TIME HIGH-QUALITY FACIAL PERFORMANCE CAPTURE

Number: US20170024921A1
Assignee:

A method of transferring a facial expression from a subject to a computer-generated character, and a system and non-transitory computer-readable medium for the same. The method can include receiving an input image depicting a face of a subject; matching a first facial model to the input image; and generating a displacement map representing finer-scale details not present in the first facial model using a regression function that estimates the shape of the finer-scale details. The displacement map can be combined with the first facial model to create a second facial model that includes the finer-scale details, and the second facial model can be rendered, if desired, to create a computer-generated image of the face of the subject that includes the finer-scale details.

1. A method of transferring a facial expression from a subject to a computer-generated character, the method comprising: receiving an input image depicting a face of a subject; matching a first facial model to the input image; and generating a displacement map representing the finer-scale details not present in the first facial model using a regression function that estimates the shape of the finer-scale details.
2. The method set forth in claim 1, further comprising combining the displacement map with the first facial model to create a second facial model that includes the finer-scale details.
3. The method set forth in claim 2, wherein the finer-scale details comprise one or more wrinkles.
4. The method set forth in claim 3, wherein, prior to generating the displacement map, the regression function is trained from data representing a plurality of expressions from a plurality of different subjects.
5. The method set forth in claim 4, wherein training the regression function comprises: for each expression in the plurality of expressions, generating an image texture of the expression and an expression displacement map that encodes wrinkle information for the expression; extracting a plurality of wrinkle patches at a plurality of different ...
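Combining the regressed displacement map with the coarse facial model (claim 2) is, conceptually, a per-vertex offset along the surface normal. A hedged sketch of that combination step, assuming per-vertex UV coordinates, a scalar map and nearest-texel sampling; apply_displacement is an illustrative name, not the patent's:

```python
import numpy as np

def apply_displacement(vertices, normals, uvs, disp_map, scale=1.0):
    """Offset each vertex of the coarse facial model along its normal by the
    displacement sampled from the regressed map (nearest-texel lookup).

    vertices, normals: (N, 3) arrays; uvs: (N, 2) in [0, 1]; disp_map: (H, W).
    """
    h, w = disp_map.shape
    u = np.clip((uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    v = np.clip((uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
    d = disp_map[v, u] * scale
    return vertices + normals * d[:, None]
```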

26-01-2017 publication date

THREE-DIMENSIONAL SURFACE TEXTURING

Number: US20170024925A1
Assignee:

A scanned texture can be applied to a three-dimensional model using a scanner. A user can scan a surface texture with a three-dimensional scanner and then use the same scanner as a three-dimensional input device to apply the texture to a three-dimensional model displayed in a virtual modeling environment. To accomplish this, the surface texture may first be isolated and extracted from a scanned surface. The surface texture can then be applied to a three-dimensional model in a virtual workspace by using the scanner as a navigational and control input. Thus, in a similar manner and motion in which a real-world object is scanned, the surface texture can be applied to the digital model displayed in the virtual modeling environment. The scanner therefore provides a user with a simple and intuitive way in which to capture physical surface textures and apply them to digital objects.

1. A method comprising: capturing a three-dimensional scan of a surface with a handheld scanner; isolating a surface texture of the surface comprising a three-dimensional texture independent of an aggregate shape of the surface; displaying a digital model of an object within a virtual modeling environment; receiving spatial input from the handheld scanner to navigate to a pose within the virtual modeling environment; and in response to user input, applying the surface texture to the digital model according to the pose.
2. The method of claim 1, further comprising fabricating the digital model with the surface texture using a three-dimensional printer.
3. The method of claim 1, wherein the handheld scanner is a laser line scanner.
4. The method of claim 1, wherein the handheld scanner acquires three-dimensional data using one or more of structured light, shape from motion, and range finding.
5. The method of claim 1, wherein the user input includes pushing a button on the handheld scanner.
6. The method of claim 1, wherein isolating the surface texture includes: low-pass filtering the surface to provide a filtered surface; warping ...
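Claim 6 hints at the isolation step: low-pass filter the scanned surface and keep the residual as the texture. The sketch below uses a separable box blur as a stand-in for whatever low-pass filter the method actually uses (isolate_texture and radius are assumptions):

```python
import numpy as np

def isolate_texture(height: np.ndarray, radius: int = 8):
    """Split a scanned height field into aggregate shape and fine texture.

    A box blur approximates the low-pass filter; the residual (original
    minus smoothed) is the surface texture to be applied to a model.
    """
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    padded = np.pad(height, radius, mode='edge')
    # Separable blur: filter rows, then columns, back to the original size.
    smooth = np.apply_along_axis(lambda r: np.convolve(r, kernel, 'valid'), 1, padded)
    smooth = np.apply_along_axis(lambda c: np.convolve(c, kernel, 'valid'), 0, smooth)
    return smooth, height - smooth
```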

28-01-2016 publication date

Methods for Capturing Images of a Control Object and Tracking to Control Interfacing with Video Game Objects

Number: US20160027188A1
Author: Marks Richard L.
Assignee:

Methods for real-time motion capture for controlling an object in a video game are provided. One method includes defining a model of a control object and identifying a marker on the control object. The method also includes capturing movement associated with the control object with a video capture device. The movement associated with the control object is then interpreted to change a position of the model based on data captured through the video capture device, wherein the captured data includes the marker. The method further includes moving the video game object presented on a display screen in substantial real-time according to the change of position of the model.

1. A method for real-time motion capture for control of a video game object during game play, comprising: defining a model of a control object; identifying a marker on the control object; capturing movement associated with the control object with a video capture device; interpreting the movement associated with the control object to change a position of the model based on data captured through the video capture device, the data captured including the marker; and moving the video game object presented on the display screen in substantial real-time according to the change of position of the model.
2. The method of claim 1, wherein the control object is a hand of a person, the hand being tracked over time to capture changes, such that changes in the hand being tracked enable manipulations of the video game object, wherein the manipulations include the moving.
3. The method of claim 1, wherein the method operation of capturing movement associated with the control object includes capturing movement associated with an object being controlled by the control object.
4. The method of claim 1, further comprising: continuing to capture movement associated with the control object, interpret the movement associated with the control object to change a position of the model and control ...

28-01-2016 publication date

METHOD AND DEVICE FOR ADJUSTING SKIN COLOR

Number: US20160027191A1
Assignee:

A method for a device to adjust a skin color of an image includes: identifying a skin color region of the image; performing a statistical calculation on original color data of pixels in the skin color region to obtain an original mean value and an original standard deviation of the original color data of the pixels in the skin color region; selecting a preset skin color model from one or more preset skin color models each representing one skin color type, according to the original mean value and a preset mean value of each of the one or more preset skin color models; determining target color data according to the original color data, the original mean value, the original standard deviation, and the preset mean value and a preset standard deviation of the selected skin color model; and adjusting the skin color region according to the target color data.

1. A method for a device to adjust a skin color of an image, comprising: identifying a skin color region of the image; performing a statistical calculation on original color data of pixels in the skin color region to obtain an original mean value and an original standard deviation of the original color data of the pixels in the skin color region; selecting a preset skin color model from one or more preset skin color models each representing one skin color type, according to the original mean value and a preset mean value of each of the one or more preset skin color models; determining target color data according to the original color data, the original mean value, the original standard deviation, and the preset mean value and a preset standard deviation of the selected skin color model; and adjusting the skin color region according to the target color data.
2. The method according to claim 1, wherein the selecting of the preset skin color model comprises: calculating a difference between the original mean value and the preset mean value of each of the one or more preset skin color models; and selecting the preset skin color ...
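The target color data of claim 1 matches a standard mean/standard-deviation color transfer, with the model chosen by the smallest mean difference (claim 2). A minimal per-channel sketch, where adjust_skin and presets are illustrative names:

```python
import numpy as np

def adjust_skin(pixels, presets):
    """Map skin-region pixels toward the closest preset skin-color model.

    pixels: (N, 3) float array of original color data for the skin region.
    presets: list of (mean, std) pairs, each a (3,) array, one per model.
    """
    mu, sigma = pixels.mean(axis=0), pixels.std(axis=0)
    # Select the preset whose mean is nearest the measured mean.
    target_mu, target_sigma = min(presets, key=lambda p: np.linalg.norm(mu - p[0]))
    # Standard mean/standard-deviation transfer to the selected model.
    return (pixels - mu) / np.maximum(sigma, 1e-6) * target_sigma + target_mu
```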

28-01-2016 publication date

COMPUTERIZED IMAGING OF SPORTING TROPHIES AND USES OF THE COMPUTERIZED IMAGES

Number: US20160027232A1
Author: Krien David
Assignee:

Methods are disclosed for providing replicas of a sporting trophy and for scoring the sporting trophy. The first method includes providing a sporting trophy to be scanned, scanning the sporting trophy to provide three-dimensional image data of the sporting trophy, and providing the three-dimensional image data of the sporting trophy to a replica generating system to provide a replica of the sporting trophy. The second method includes providing three-dimensional digital data of a sporting trophy having a volume and a surface area, providing at least one sporting-relevant measurement based on the three-dimensional data of the sporting trophy, and providing a score of the sporting trophy based on the at least one sporting-relevant measurement.

1. A method of providing a video game, comprising the steps of: providing at least one three-dimensional image data of a sporting trophy; providing video game software; incorporating the at least one three-dimensional image data of the sporting trophy into the video game software to provide a personalized video game software; and providing at least one video game compatible with at least one gaming system, the at least one video game incorporating the personalized video game software.
2. The method of claim 1, wherein providing the three-dimensional image data of a sporting trophy comprises the steps of: providing a sporting trophy to a computerized scanner; engaging the scanner to obtain three-dimensional image data indicative of the sporting trophy; and storing the three-dimensional image data.
3. The method of wherein the at least one three-dimensional image data of a sporting trophy includes information specific to the sporting trophy.
4. The method of claim 3, wherein the information relating to the sporting trophy is selected from the group consisting of type of sporting trophy, any volume, any surface area, any linear measurements, any weights, any sporting-relevant measurement, any internal length of a ...

25-01-2018 publication date

Automated scan planning for follow-up magnetic resonance imaging

Number: US20180025466A1
Assignee: Koninklijke Philips NV

The invention provides an MRI system (100) for acquiring magnetic resonance data (158, 168) from a subject. The execution of machine-executable instructions (180, 182, 184, 186, 188) causes a processor (144) to: receive (300) baseline medical image data (152) descriptive of one or more internal structures (126) of the subject; receive (302) a baseline scan geometry (154); acquire (304) survey magnetic resonance data (158) from the subject by controlling the magnetic resonance imaging system with survey pulse sequence data, wherein the survey pulse sequence data comprises instructions for controlling the magnetic resonance imaging system to acquire magnetic resonance data descriptive of a three-dimensional volume (124) of the subject; reconstruct (306) the survey magnetic resonance data into a three-dimensional survey image (160); calculate (308) location data by processing the three-dimensional survey image with an organ detection algorithm (182), wherein the location data is descriptive of a target region (128); assign (310) a predefined region of interest (130) to the three-dimensional survey image using the location data; and calculate (312) registration data (164) by registering the baseline medical image to the three-dimensional survey image.

25-01-2018 publication date

Displaying and interacting with scanned environment geometry in virtual reality

Number: US20180025534A1
Assignee: Google LLC

Techniques of displaying a virtual environment in an HMD involve generating a lighting scheme within the virtual environment that reveals a real object in a room in response to the distance between a user in the room and the real object decreasing while the user is immersed in the virtual environment. Such a lighting scheme protects the user from injury resulting from collision with real objects in the room while immersed in the virtual environment.

25-01-2018 publication date

Hölder Adaptive Image Synthesis

Number: US20180025535A1
Assignee:

Computer-implemented method for rendering an image of a three-dimensional scene on an image plane by encoding at least a luminosity in the image plane by a luminosity function. The value of the luminosity can be computed at substantially each point of the image plane by using a set of stored input data describing the scene. The method includes constructing the luminosity function as equivalent to a first linear combination involving the functions of a first set of functions, and computing at least the value of the coefficients of the first linear combination by solving a first linear system, obtained by using at least the functions of the first linear combination, at least a subset of the first subset of the image plane, and the luminosity at the points of said subset. The method further includes storing the value of the coefficients of the first linear combination and at least the information needed to associate each coefficient to the function multiplying said coefficient in the first linear combination. The first set of functions comprises each function of a second set of functions satisfying a selection condition, which depends at least on the set of stored input data. Moreover, the points of the first subset are distributed according to a first distribution criterion, which depends on the location of the support of at least a function of the first set of functions.

1. A computer-implemented method for rendering at least partially an image of a three-dimensional scene on an image plane (I) by encoding at least a luminosity in the image plane (I) by a luminosity function (FL) defined on the image plane (I), wherein the value of the luminosity can be computed at substantially each point of the image plane (I) by using a set of stored input data describing the scene at least in part, said method comprising: a) constructing the luminosity function (FL) in at least a region of the image plane (I) as equivalent to a first linear combination involving the ...
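Concretely, the first linear system pins down coefficients c_j such that FL(x, y) ≈ Σ_j c_j f_j(x, y) at the points of the first subset. A least-squares sketch of that solve, assuming the basis functions are vectorized over point arrays (fit_luminosity is an illustrative name):

```python
import numpy as np

def fit_luminosity(basis_fns, sample_points, luminosity):
    """Solve for the coefficients of the first linear combination.

    basis_fns: list of callables f(x, y) from the first set of functions,
    each accepting arrays of coordinates.
    sample_points: (M, 2) points of the first subset of the image plane.
    luminosity: (M,) luminosity measured at those points.
    Returns the coefficient vector c with sum_j c[j] * f_j ≈ FL.
    """
    A = np.column_stack([f(sample_points[:, 0], sample_points[:, 1])
                         for f in basis_fns])
    c, *_ = np.linalg.lstsq(A, luminosity, rcond=None)
    return c
```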

25-01-2018 publication date

Portable Globe Creation for a Geographical Information System

Number: US20180025537A1
Assignee:

Portable globes may be provided for viewing regions of interest in a Geographical Information System (GIS). A method for providing a portable globe for a GIS may include determining one or more selected regions corresponding to a geographical region of a master globe. The method may further include organizing geospatial data from the master globe based on the selected region and creating the portable globe based on the geospatial data. The portable globe may be smaller in data size than the master globe. The method may include transmitting the portable globe to a local device that may render the selected region at a higher resolution than the remainder of the portable globe in the GIS. A system for providing a portable globe may include a selection module, a fusion module and a transmitter. A system for updating a portable globe may include a packet bundler and a globe cutter.

1-20. (canceled)
21. A method for providing geospatial data on a local device, the geospatial data obtained from a remote device located remotely from the local device, the method comprising: querying, by one or more processors, a database for one or more geographical coordinates associated with a region; accessing, by the one or more processors, geospatial data in response to the query, the geospatial data obtained from a remote device such that geospatial data associated with the region accessible by the local device is associated with a higher resolution than geospatial data associated with a geographic area located outside the region; and rendering, by the one or more processors, the geospatial data at the local device during a period of reduced network connectivity between the local device and the remote device; wherein the geospatial data associated with the region is capable of being rendered at a higher resolution at the local device than geospatial data associated with the geographic area outside the region.
22. The method of claim 21, wherein the geospatial data is obtained over a network ...

25-01-2018 publication date

METHODS AND SYSTEMS FOR 3D CONTOUR RECOGNITION AND 3D MESH GENERATION

Number: US20180025540A1
Author: Fei Yue, MA Gengyu, Wang Yuan
Assignee:

A system for computer vision is disclosed. The system may comprise a processor and a non-transitory computer-readable storage medium coupled to the processor. The non-transitory computer-readable storage medium may store instructions that, when executed by the processor, cause the system to perform a method. The method may comprise obtaining a first and a second image of at least a portion of an object, extracting a first and a second 2D contour of the portion of the object respectively from the first and second images, matching one or more first points on the first 2D contour with one or more second points on the second 2D contour to obtain a plurality of matched contour points and a plurality of mismatched contour points, and reconstructing a shape of the portion of the object based at least in part on at least a portion of the matched points and at least a portion of the mismatched contour points.

1. A system for computer vision, comprising: a processor; and a non-transitory computer-readable storage medium coupled to the processor and storing instructions that, when executed by the processor, cause the system to perform: obtaining a first and a second image of at least a portion of an object; extracting a first and a second 2D contour of the portion of the object respectively from the first and second images; matching one or more first points on the first 2D contour with one or more second points on the second 2D contour to obtain a plurality of matched contour points and a plurality of mismatched contour points; and reconstructing a shape of the portion of the object based at least in part on at least a portion of the matched points and at least a portion of the mismatched contour points.
2. The system of claim 1, further comprising at least two cameras coupled to the processor, wherein: obtaining the first and the second images comprises the at least two cameras respectively capturing the first and second images; and the first and second ...
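The matching step can be sketched as a thresholded nearest-neighbour pairing between the two 2D contours; points with no close partner become the mismatched set. The actual criterion in the patent is not specified here, so match_contours and max_dist below are assumptions:

```python
import numpy as np

def match_contours(c1, c2, max_dist=3.0):
    """Pair points of two 2D contours; flag the rest as mismatched.

    c1, c2: (N, 2) and (M, 2) arrays of contour points. A point of c1
    matches its nearest neighbour in c2 if that neighbour lies within
    max_dist pixels. Returns (matched index pairs, mismatched c1 indices).
    """
    matched, mismatched = [], []
    for i, p in enumerate(c1):
        d = np.linalg.norm(c2 - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            matched.append((i, j))
        else:
            mismatched.append(i)
    return matched, mismatched
```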

25-01-2018 publication date

METHOD FOR AUTOMATIC MODELING OF COMPLEX BUILDINGS WITH HIGH ACCURACY

Number: US20180025541A1
Assignee:

The present invention relates to a high-accuracy automatic 3D modeling method for complex buildings, comprising the steps of: first transforming the complex building into a complex polygon by using the topological structure of polygons, then transforming the complex polygon into a set of seamlessly spliced triangles by a programmed algorithm, thereby accomplishing high-accuracy automatic 3D modeling of buildings.

1. A high-accuracy automatic 3D modeling method for complex buildings, comprising the steps of: reading and preprocessing data of a building; extracting information for modeling from the preprocessed data; and modeling a main body and a roof of the building by using the information for modeling acquired in the extracting step; wherein the main body of the building is modeled based on plane coordinates and elevation data of a boundary of the building acquired from the information for modeling; wherein an order in which nodes of the boundary of the building are plotted in the information for modeling is rearranged and then input to a 3D engine for modeling; characterized in that the roof is modeled by determining the shape of the roof based on the plane coordinates of the boundary of the roof acquired from the information for modeling, processing the boundary of the roof based on the determination, and modeling the roof based on the shape of the roof and the elevation data without addition of nodes of the boundary.
2. The method according to claim 1, characterized in that in the reading and preprocessing step, the data of the building are acquired by reading a file containing the data of the building with specific software, and the preprocessing involves checking topology, merging adjacent points, normalizing object attributes and assigning elevation values over the data of the building.
4. The method according to claim 1, characterized in that in the extracting step, the information for modeling includes information of 3D ...
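Modeling the main body from boundary plane coordinates and an elevation value amounts to extruding the footprint into wall triangles. A minimal sketch, assuming the boundary nodes are already rearranged into a consistent plotting order (extrude_footprint is an illustrative name; roof handling is omitted):

```python
def extrude_footprint(footprint, height):
    """Turn a building footprint into wall triangles for the main body.

    footprint: list of (x, y) boundary nodes in plotting order; height:
    the elevation value from the modeling data. Each wall quad between two
    consecutive boundary nodes is split into two seamlessly spliced triangles.
    """
    tris = []
    n = len(footprint)
    for i in range(n):
        (x0, y0), (x1, y1) = footprint[i], footprint[(i + 1) % n]
        a, b = (x0, y0, 0.0), (x1, y1, 0.0)          # bottom edge
        c, d = (x1, y1, height), (x0, y0, height)    # top edge
        tris += [(a, b, c), (a, c, d)]
    return tris
```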

28-01-2016 publication date

ASSISTED TEXT INPUT FOR COMPUTING DEVICES

Number: US20160028945A1
Assignee:

Various approaches provide for detecting and recognizing text to enable a user to perform various functions or tasks. For example, a user could point a camera at an object with text, in order to capture an image of that object. The camera can be integrated with a portable computing device that is capable of taking the image and processing the image (or providing the image for processing) to recognize, identify, and/or isolate the text in order to send the image of the object as well as recognized text to an application, function, or system, such as an electronic marketplace.

1. A computing device, comprising: a first camera; a second camera separated a distance from the first camera; a display element; at least one processor; and a memory device including instructions that, when executed by the at least one processor, cause the computing device to: capture first image data of an object by the first camera, the object including text displayed on a surface of the object; capture second image data of the object by the second camera; process the first image data to recognize the text displayed on the surface of the object; determine a set of words from the recognized text; generate, using the first image data and the second image data, a three-dimensional representation of the object; display an interface that includes a selectable list of a subset of the set of words; and enable selection of a word from the selectable list to be displayed with the three-dimensional representation of the object.
2. The computing device of claim 1, wherein the instructions, when executed to process the first image data, further cause the computing device to: determine a foreground region of the first image data and a background region of the first image data; identify a representation of the object; determine that the representation of the object is in the foreground region; and crop the background region from the first image data.
3. The computing device of ...

29-01-2015 publication date

DESIGN OF A PATH CONNECTING A FIRST POINT TO A SECOND POINT IN A THREE-DIMENSIONAL SCENE

Number: US20150029181A1
Author: Lerey Guillaume
Assignee:

A computer-implemented method, system and program product are proposed for designing a path connecting a first point to a second point in a three-dimensional scene. The method comprises:

1. Computer-implemented method for designing a path connecting a first point to a second point in a three-dimensional scene, comprising: providing the first point coupled with a first vector; providing the second point coupled with a second vector; and providing a set of paths by following at the most three portions of a parallelepiped, the parallelepiped comprising the provided first point on a first vertex and the provided second point on a second vertex, a portion of the parallelepiped being an edge, a diagonal of a face, or a space diagonal.
2. The computer-implemented method of claim 1, wherein the first and second vertices are on opposite faces of the parallelepiped, the set of paths comprises: one path comprising one portion made of the space diagonal connecting the first and second points; six paths comprising two portions having a common vertex of the parallelepiped, one portion being a diagonal of a face of the parallelepiped, the diagonal comprising the first or the second points, and one portion being an edge; and six paths comprising three consecutive portions, each portion being an edge.
3. The computer-implemented method of claim 1, wherein the first and second vertices belong to a same face of the parallelepiped, the set of paths comprises: one path comprising one portion being a diagonal of the face the first and second vertices belong to; and two paths comprising two consecutive portions, each portion being an edge.
4. The computer-implemented method of claim 1, further comprising orienting the parallelepiped in the three-dimensional scene about one of the following orientations: one of the directions of the global orientation of the three-dimensional scene; a direction provided by the first vector; and a direction provided by the second vector.
5. The computer- ...

24-01-2019 publication date

SYSTEM AND METHOD FOR INTERACTIVE VIRTUAL LIGHTING OF A VIRTUAL SAMPLE REPRESENTATIVE OF A REAL-LIFE MANUFACTURED OBJECT

Number: US20190026937A1
Assignee:

A system for interactive virtual lighting of a virtual sample representative of a real-life manufactured object, based on data relative to the real-life manufactured object. A lighting calibration module generates user lighting condition data representative of current lighting conditions and adjusts parameters of a virtual light source according thereto. A user interaction module captures displacement inputs from an electronic graphical communication device and generates user interaction data therefrom, used by a real time rendering engine to move the virtual sample. The real time rendering engine processes the light interaction data to simulate light interaction from the virtual light source with the virtual sample. A computer implemented method for interactive virtual lighting of a virtual sample representative of a real-life manufactured object is also provided.

1. A system for interactive virtual lighting of a virtual sample representative of a real-life manufactured object, on an electronic graphical communication device operatively connected to a display monitor, based on light interaction data and material color data relative to the real-life manufactured object, the system comprising: a lighting calibration module generating user lighting condition data representative of the current lighting conditions of an immediate environment of a user, the lighting calibration module being configured to adjust at least one parameter of a virtual light source lighting the virtual sample according to the lighting condition data; a user interaction module capturing real-time virtual object displacement inputs from the electronic graphical communication device and generating real-time user interaction data therefrom; and a real time rendering engine simulating light interaction from the virtual light source with the virtual sample, the real time rendering engine repeatedly moving the virtual ...

28-01-2021 publication date

Method and Device for Generating an Unmanned Aerial Vehicle Flight Trajectory, Computer Apparatus and Storage Medium

Number: US20210026377A1
Author: Huang Hui, ZHOU Xiaohui
Assignee:

A method for generating a UAV flight trajectory. The method includes: acquiring map data of a to-be-reconstructed area, the map data including a satellite map image and a plane map image; acquiring building silhouettes of each building in the to-be-reconstructed area from the plane map image; acquiring height data of each building from the satellite map image; generating an initial model according to the building silhouettes and height data of each building; determining a view set according to the initial model; and generating a flight trajectory from the view set. The initial model can be established without a pre-flight of the to-be-reconstructed area, and the UAV flight trajectory can be generated directly, so that the amount of data computation is reduced.

1. A method for generating an unmanned aerial vehicle (UAV) flight trajectory, comprising: acquiring map data of a to-be-reconstructed area, the map data comprising a satellite map image and a plane map image; acquiring building silhouettes corresponding to each of the buildings in the to-be-reconstructed area according to the plane map image; acquiring height data corresponding to each of the buildings according to the satellite map image; generating an initial model according to the building silhouettes and the height data corresponding to each of the buildings; determining a view set according to the initial model, the view set comprising a plurality of views when a UAV photographs the to-be-reconstructed area; and generating a flight trajectory according to the view set.
2. The method according to claim 1, wherein the determining the view set according to the initial model comprises: acquiring a viewing distance, a field of view, and a view overlapping rate, and generating a Poisson sampling diameter; taking the initial model and the Poisson sampling diameter as inputs, generating a sampling point set by a Poisson distribution sampling algorithm, the sampling point set comprising a plurality of sampling points, each of the sampling ...
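Claim 2's sampling step can be approximated by classic dart throwing: accept candidate points on the initial model only if they keep at least the Poisson sampling diameter from every accepted point. A hedged O(n²) sketch (poisson_sample is an illustrative name):

```python
import random

def poisson_sample(candidates, diameter):
    """Dart-throwing Poisson sampling over candidate surface points.

    candidates: list of (x, y, z) points on the initial model; diameter:
    the Poisson sampling diameter derived from viewing distance, field of
    view and view overlap rate. Accepted points keep pairwise distance
    >= diameter.
    """
    accepted = []
    d2 = diameter * diameter
    for p in random.sample(candidates, len(candidates)):  # random order
        if all(sum((a - b) ** 2 for a, b in zip(p, q)) >= d2 for q in accepted):
            accepted.append(p)
    return accepted
```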

23-01-2020 publication date

AUGMENTED REALITY SYSTEM AND COLOR COMPENSATION METHOD THEREOF

Number: US20200027201A1
Author: Chen Li-Jen
Assignee: WISTRON CORPORATION

An augmented reality system and a color compensation method thereof are proposed. The method is applicable to an augmented reality system having a display and an image sensor and includes the following steps. A preset object position of a virtual object with respect to an actual scene is set. An image of the actual scene is captured by using the image sensor, and the image of the actual scene is mapped to a field of view of the display to generate a background image with respect to the field of view of the display. Color compensation is performed on the virtual object according to a background overlapping region corresponding to the preset object position in the background image to generate an adjusted virtual object, and the adjusted virtual object is displayed on the display according to the preset object position.

1. A color compensation method, applicable to an augmented reality system having a display being light-transmissive and an image sensor, comprising: setting a preset object position of a virtual object with respect to an actual scene; capturing an image of the actual scene by using the image sensor, and mapping the image of the actual scene to a field of view (FOV) of the display to generate a background image with respect to the FOV of the display; performing color compensation on the virtual object according to a background overlapping region corresponding to the preset object position in the background image to generate an adjusted virtual object; and displaying the adjusted virtual object on the display according to the preset object position.
2. The method according to claim 1, wherein the augmented reality system further comprises a depth sensor, and wherein before the step of setting the preset object position of the virtual object with respect to the actual scene, the method further comprises: computing a depth map of the actual scene by using at least the depth sensor; and generating absolute coordinates of the actual scene by ...
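On a light-transmissive display the background adds to whatever is rendered, so one simple compensation is to subtract the average color of the background overlapping region from the virtual object and clamp at zero. The patent does not spell out its formula; the sketch below is an assumption (compensate is an illustrative name):

```python
import numpy as np

def compensate(virtual_rgb, background_rgb):
    """Compensate a virtual object's color for an additive see-through display.

    virtual_rgb: (..., 3) intended colors in [0, 1]; background_rgb: pixels
    of the background overlapping region. The displayed light is reduced by
    the region's mean color so that virtual + background ≈ intended.
    """
    bg = background_rgb.reshape(-1, 3).mean(axis=0)
    return np.clip(virtual_rgb - bg, 0.0, 1.0)
```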

23-01-2020 publication date

Method and Apparatus for Representing a Virtual Object in a Real Environment

Number: US20200027276A1
Author: Holzer Stefan, Meier Peter
Assignee:

The invention relates to a method for representing a virtual object in a real environment, having the following steps: generating a two-dimensional image of a real environment by means of a recording device, ascertaining a position of the recording device relative to at least one component of the real environment, segmenting at least one area of the real environment unmarked in reality in the two-dimensional image for identifying at least one segment of the real environment in distinction to a remaining part of the real environment while supplying corresponding segmentation data, and merging the virtual object with the two-dimensional image of the real environment with consideration of the segmentation data such that at least one part of the segment of the real environment is removed from the image of the real environment. The invention permits collisions of virtual objects with real objects that occur upon merging with a real environment to be represented in a way largely close to reality.

1. A method for representing a virtual object in a real environment, comprising: capturing, by a recorder, an image of a real environment; determining position information for the recorder relative to at least one component of the real environment; obtaining three-dimensional depth information relating to the real environment based on the position information; presenting a virtual object in the real environment such that at least one part of the real environment is removed from the image of the real environment; selecting a texture source using the three-dimensional depth information for an area of the real environment adjacent to the removed part of the real environment; identifying texture information from the selected texture source; and concealing the removed part of the real environment using the identified texture information.
2. The method of claim 1, further comprising: segmenting the area of the captured real environment; wherein the virtual object is ...

24-01-2019 publication date

Service indicator display method and device

Number: US20190028358A1
Assignee: Huawei Technologies Co Ltd

The present disclosure provides a service indicator display method and device. The method includes: obtaining measurement values of service indicators in a building and a three-dimensional grid model of the building, where an outer surface of the model includes multiple polygons; determining, according to the measurement values of the service indicators, the measurement value of a service indicator corresponding to each vertex location of each polygon; performing gradient rendering on each polygon according to a legend and the measurement values, to obtain the spatial distribution of the service indicators; and displaying the spatial distribution in the building. The surface of the building model is divided more finely by using polygons, and the spatial distribution of the service indicators is reflected more faithfully by means of gradient rendering, so as to improve network optimization efficiency.
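Gradient rendering over a polygon reduces, per triangle, to interpolating the vertex measurement values with barycentric weights and mapping the result through the legend. A minimal sketch (shade_point and legend are illustrative names):

```python
def shade_point(p, tri, values, legend):
    """Color a point inside a triangle by interpolating vertex measurements.

    tri: three (x, y) vertices; values: the service-indicator measurement
    at each vertex; legend: callable mapping a measurement to an RGB color.
    """
    (x1, y1), (x2, y2), (x3, y3) = tri
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    w2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    w3 = 1.0 - w1 - w2
    return legend(w1 * values[0] + w2 * values[1] + w3 * values[2])
```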

28-01-2021 publication date

METHOD AND SYSTEM FOR CREATING ANIMAL TYPE AVATAR USING HUMAN FACE

Number: US20210027514A1
Assignee: LINE Plus Corporation

Disclosed are methods, systems and apparatuses for creating an animal-shaped avatar using a human face. An avatar creation method according to example embodiments includes analyzing an image including a human face and automatically creating an animal-shaped avatar corresponding to the human face.

1. An avatar creation method comprising: creating a plurality of first measurement value sets respectively corresponding to a plurality of animal images, each of which includes a corresponding animal face, by quantifying the corresponding animal face in each of the plurality of animal images; storing, in a database, each of the plurality of first measurement value sets in association with a corresponding animal classification from among a plurality of animal classifications; creating a plurality of basic models respectively corresponding to the plurality of animal classifications; determining an animal classification, from among the plurality of animal classifications, which corresponds to a human face by receiving a second measurement value set created by quantifying the human face and by comparing the second measurement value set and the plurality of first measurement value sets stored in the database; identifying a basic model from among the plurality of basic models which corresponds to the determined animal classification; and processing the identified basic model based on the second measurement value set to provide an animal-shaped avatar corresponding to the human face.
2. The avatar creation method of claim 1, wherein the creating of the plurality of first measurement value sets comprises extracting measurement values with respect to facial components of the corresponding animal face in each of the plurality of animal images, and wherein the second measurement value set is created by extracting measurement values with respect to facial components of the human face.
3. The avatar creation method of claim 1, wherein the creating of the plurality of first measurement ...
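Determining the animal classification from the two kinds of measurement value sets can be sketched as a nearest-neighbour comparison; the patent's actual comparison rule is not given, so Euclidean distance below is an assumption (classify_face and animal_db are illustrative names):

```python
import math

def classify_face(face_measurements, animal_db):
    """Pick the animal classification whose stored measurement set is closest.

    face_measurements: sequence of numbers quantifying the human face.
    animal_db: dict mapping classification -> first measurement value set.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(animal_db, key=lambda k: dist(animal_db[k], face_measurements))
```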

28-01-2021 publication date

ALLOCATION OF PRIMITIVES TO PRIMITIVE BLOCKS

Number: US20210027519A1
Assignee:

An application sends primitives to a graphics processing system so that an image of a 3D scene can be rendered. The primitives are placed into primitive blocks for storage and retrieval from a parameter memory. Rather than simply placing the first primitives into a primitive block until the primitive block is full and then placing further primitives into the next primitive block, multiple primitive blocks can be "open" such that a primitive block allocation module can allocate primitives to one of the open primitive blocks to thereby sort the primitives into primitive blocks according to their spatial positions. By grouping primitives together into primitive blocks in accordance with their spatial positions, the performance of a rasterization module can be improved. For example, in a tile-based rendering system this may mean that fewer primitive blocks need to be fetched by a hidden surface removal module in order to process a tile.

1. A method of processing primitives in a computer graphics processing system in which primitives are allocated to primitive blocks at a primitive block allocation module of a computer graphics processing system, which includes a data store for storing a set of primitive blocks to which primitives can be allocated, wherein a primitive block is configured to store primitive data, the method comprising: for each of a plurality of received primitives: (i) comparing an indication of a spatial position of the received primitive with at least one indication of a spatial position of at least one primitive block that is stored in the data store, and (ii) allocating the received primitive to a primitive block based on a result of the comparison, such that the received primitive is allocated to a primitive block in accordance with its spatial position; and processing primitive blocks including allocated primitives in the computer graphics processing system.
2. The method of claim 1, wherein said processing primitive blocks comprises ...
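The allocation policy can be sketched as: send each incoming primitive to the nearest open, non-full block, otherwise open a new block. This is only one way to realize "allocate according to spatial position"; allocate, capacity and max_open below are assumptions, not the patent's design:

```python
def allocate(prim_pos, open_blocks, capacity=32, max_open=4):
    """Assign a primitive to the open primitive block nearest to it.

    prim_pos: (x, y) spatial position of the received primitive.
    open_blocks: list of dicts with 'center' and 'prims'. A new block opens
    when every open block is full, so nearby primitives end up grouped.
    """
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    candidates = [b for b in open_blocks if len(b['prims']) < capacity]
    if candidates:
        best = min(candidates, key=lambda b: d2(b['center'], prim_pos))
        best['prims'].append(prim_pos)
        return best
    block = {'center': prim_pos, 'prims': [prim_pos]}
    if len(open_blocks) >= max_open:
        open_blocks.pop(0)  # evict the oldest block (a real system would flush it to memory)
    open_blocks.append(block)
    return block
```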

01-02-2018 publication date

Remote controlled vehicle with augmented reality overlay

Number: US20180028931A1
Assignee: MONKEYmedia Inc

In some embodiments, extemporaneous control of remote objects can be made more natural using the invention, enabling a participant to pivot, tip and aim a head-mounted display apparatus to control a remote-controlled toy or full-sized vehicle, for example, hands-free. If the vehicle is outfitted with a camera, then the participant may see the remote location from a first-person proprioceptive perspective.

02-02-2017 publication date

PORTABLE PROPRIOCEPTIVE PERIPATETIC POLYLINEAR VIDEO PLAYER

Number: US20170031391A1
Assignee:

Departing from one-way linear cinema played on a single rectangular screen, this multi-channel virtual environment involves a cinematic paradigm that undoes habitual ways of framing things, employing architectural concepts in a polylinear video/sound construction to create a type of experience that allows the world to reveal itself and permits discovery on the part of participants. Techniques are disclosed for peripatetic navigation through virtual space with a handheld computing device, leveraging human spatial memory to form a proprioceptive sense of location, allowing a participant to easily navigate amongst a plurality of simultaneously playing videos and to center in front of individual video panes in said space, making it comfortable for a participant to rest in a fixed posture and orientation while selectively viewing any one of the video streams, and providing spatialized 3D audio cues that invite awareness of other content unfolding simultaneously in the virtual environment.

1. One or more computer readable media comprising instructions that when executed by a computer are capable of causing the computer to: a. generate a virtual environment; b. establish a location of a virtual camera in the virtual environment; c. establish an orientation of the virtual camera in the virtual environment; d. update the location of the virtual camera in the virtual environment using x-axisometer data and x-axisometer sensor reference data; and e. update the orientation of the virtual camera in the virtual environment using v-axisometer data and v-axisometer sensor reference data.
2. The computer readable media of claim 1, wherein the virtual environment comprises a plurality of video panes and a plurality of virtual speakers, wherein: a. a plurality of videos play simultaneously in distinct locations in the virtual environment; and b. a plurality of sounds are produced for display as if coming from distinct locations in the virtual environment.
3. The computer ...

02-02-2017 publication date

SYSTEM FOR COMPOSITING EDUCATIONAL VIDEO WITH INTERACTIVE, DYNAMICALLY RENDERED VISUAL AIDS

Number: US20170032562A1
Assignee: GOOGLE INC.

A framework includes a scene display section configured to display a scene that includes a background layer, a video layer, and a three dimensional graphics layer on top of the video layer; and a rendering module configured as a gatekeeper that adds and removes objects to be included for rendering in the three dimensional graphics layer. The framework includes a video module configured to track playback timing of the video; and a moment module, for creating a data model for a moment having a start time, end time, identifier, and a state, configured to update the state of the moment based on the video playback timing, identified by the identifier and in accordance with the start time and the end time. Objects that are added to be included in rendering check the state of an associated moment, and when the state of the moment is enabled, update their display state.

1. A framework for compositing video with three dimensional graphics in a Web browser, comprising: one or more data processing devices; at least one graphics processing unit; a display device; said one or more data processing devices configured to perform playback of the video, and the Web browser configured to send instructions to the graphics processing unit; a memory device storing a program to be executed in the Web browser; said program including: a scene display section configured to display a scene that includes a background layer, a video layer, and a three dimensional graphics layer on top of the video layer; and a rendering module configured as a gatekeeper that adds and removes objects to be included for rendering in the three dimensional graphics layer.
2. The framework of claim 1, further comprising: a video module configured to track playback timing of the video; and a moment module, for creating a data model for a moment having a start time, end time, identifier, and a state, configured to update the state of the moment based on the video playback timing, identified by the identifier and in ...

04-02-2016 publication date

2D IMAGE-BASED 3D GLASSES VIRTUAL TRY-ON SYSTEM

Number: US20160035133A1
Assignee: ULSee Inc.

A method to create a try-on experience of wearing virtual 3D eyeglasses is provided, using 2D image data of the eyeglasses. The virtual 3D eyeglasses are constructed using a set of 2D images for the eyeglasses. The virtual 3D eyeglasses are configured onto a 3D face or head model and simulated as being fittingly worn by the wearer. Each set of 2D images for the eyeglasses includes a pair of 2D lens images, a frontal frame image, and at least one side frame image. Upon detection of a movement of the face and head of the wearer in real time, the 3D face or head model and the configuration and alignment of the virtual 3D eyeglasses are modified or adjusted accordingly. Features such as trimming off a portion of the glasses frame, shadow creation and environment mapping are provided to the virtual 3D eyeglasses in response to translation, scaling, and posture changes made to the head and face of the wearer in real time.

1. A method to create a real-time try-on experience of wearing virtual 3D eyeglasses by a wearer, comprising: obtaining a plurality of 2D images for a plurality of pairs of eyeglasses, the pairs of eyeglasses being organized into a group of 2D images, each pair of eyeglasses having a set of 2D images; when one designed pair of eyeglasses from the group of 2D images is selected by the wearer, constructing a pair of virtual 3D eyeglasses using the set of 2D images for the designed pair of eyeglasses; constructing a 3D face or head model of the wearer based upon one or more facial or head images of the wearer; and fitting the pair of virtual 3D eyeglasses onto the 3D face or head model of the wearer, with the pair of virtual 3D eyeglasses being simulated as being worn by the wearer; wherein each set of the 2D images for each pair of eyeglasses comprises a frontal frame image and at least one side frame image.
2. The method as claimed in claim 1, wherein the step of fitting the pair of virtual 3D eyeglasses onto the 3D face or head model of the wearer comprises: rotating the virtual 3D eyeglasses ...

05-02-2015 publication date

Method for real-time and realistic rendering of complex scenes on internet

Number: US20150035830A1
Assignee: Shenyang Institute of Automation of CAS

A method for realistic and real-time rendering of complex scenes in an internet environment, comprising: generating sequences of scene-object-multi-resolution models, a scene configuration file, textures and material files, and a scene data list file; compressing the sequences of scene-object-multi-resolution models, the scene configuration file, the textures and material files, and the scene data list file and uploading the compressed files to a server; downloading, at a client terminal, the scene-object-multi-resolution models, the scene configuration file, the texture and material file, and the scene data list file in ascending order of resolution while simultaneously rendering the scene; dividing, in rendering the scene, a frustum in parallel into a plurality of partitions, generating a shadow map for each frustum partition, and filtering the shadow maps to obtain an anti-aliasing shadowing effect; and updating the shadow map closest to a viewpoint on a frame-by-frame basis, with the updating frequency decreasing for the shadow maps distant from the viewpoint, wherein the shadow map closest to the viewpoint has the largest size, and the size of the shadow map decreases for the shadow maps distant from the viewpoint.
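The distance-dependent update schedule can be sketched by halving the refresh rate per cascade: the map nearest the viewpoint refreshes every frame, the next every second frame, and so on. Halving is an assumption; the text only says the frequency decreases (cascades_to_update is an illustrative name):

```python
def cascades_to_update(frame, num_cascades):
    """Decide which parallel-split shadow maps to regenerate this frame.

    Cascade 0 (closest to the viewpoint) is updated every frame; each more
    distant cascade is updated half as often, matching the decreasing
    update frequency described above.
    """
    return [i for i in range(num_cascades) if frame % (1 << i) == 0]
```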

05-02-2015 publication date

Virtual light in augmented reality

Number: US20150035832A1
Assignee: Microsoft Technology Licensing LLC

A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment.

05-02-2015 publication date

Mobile display unit for showing graphic information which represents an arrangement of physical components

Number: US20150035865A1
Assignee: ELOS FIXTURLASER AB

The invention relates to a mobile display unit (13) for displaying graphic information (16) representing an arrangement (100) of physical components (1, 2), wherein the display unit (13) comprises a control unit (21) adapted for transmitting information regarding at least the position of said arrangement. The invention is characterized in that the display unit (13) comprises a gyro unit (25) for registration of the orientation of the display unit (13) relative to said arrangement (100), where said gyro unit (25) is connected to said control unit (21), and where the control unit (21) is adapted to adjust said display of said graphic information (16) in dependence on the orientation of the display unit (13).

01-02-2018 publication date

COMPUTER PROGRAM PRODUCT, INFORMATION PROCESSING APPARATUS, AND DATA PROCESSING METHOD

Number: US20180033197A1
Author: ADACHI Mikio
Assignee: KABUSHIKI KAISHA TOSHIBA

A computer program product including programmed instructions that cause a computer to perform acquiring, changing, first generating, second generating, and synthesizing. The acquiring includes acquiring first point cloud data including a position on a first three-dimensional surface shape. The changing includes changing, using a three-dimensional element shape, the first three-dimensional surface shape represented by the first point cloud data to a second three-dimensional surface shape. The first generating includes generating second point cloud data including a surface position on the second three-dimensional surface shape. The second generating includes generating, from the second point cloud data, second shape data representing the second three-dimensional shape. The synthesizing includes synthesizing element shape data of the surface model or the solid model and the second shape data to generate first shape data representing the surface model or the solid model of the first three-dimensional shape.

1. A computer program product including programmed instructions embodied in and stored on a non-transitory computer readable medium, wherein the instructions, when executed by a computer, cause the computer to perform: acquiring first point cloud data including three-dimensional coordinates of a position on a first three-dimensional surface shape; changing, using a three-dimensional element shape, the first three-dimensional surface shape represented by the first point cloud data to a second three-dimensional surface shape, and first generating second point cloud data including three-dimensional coordinates of a surface position on the second three-dimensional surface shape; second generating second shape data from the second point cloud data, the second shape data representing the second three-dimensional shape by a surface model or a solid model, the surface model representing the three-dimensional surface shape by a curved surface, the solid model representing a ...

17-02-2022 publication date

METHODS AND APPARATUS FOR VENUE BASED AUGMENTED REALITY

Number: US20220051022A1
Assignee:

In one general aspect, a method can include capturing first features associated with a real-world physical area as a model and associating an AR object with a fixed location within the model. The method can include capturing second features associated with a real-world location corresponding with a portion of the real-world physical area. The method can include associating the captured second features with a location in the model, corresponding with the real-world location, as an AR anchor where the AR object is associated with the AR anchor.

1. A method, comprising: capturing first features associated with a real-world physical area as a model, the model of the real-world physical area having a coordinate space; defining an augmented reality (AR) object within a coordinate space of the AR object, the AR object having a fixed location within the real-world physical area; capturing second features associated with a real-world location corresponding with a portion of the real-world physical area; and associating the captured second features with a location in the model, corresponding with the real-world location, as an AR anchor, the AR anchor being associated with a coordinate space of the AR anchor, the coordinate space of the AR anchor being independent of the coordinate space of the model of the real-world physical area.
2. The method of claim 1, wherein the model is a 1:1 scale representation of the real-world physical area.
3. The method of claim 1, wherein the AR object has an orientation within the model.
4. The method of claim 1, wherein the captured second features of the AR anchor include a panorama.
5. The method of claim 1, wherein the fixed location of the AR object is with respect to an origin of the coordinate space of the model of the real-world physical area.
6. The method of claim 1, wherein the location in the model of the AR anchor is fixed with respect to an origin of the coordinate space of the model of the real-world physical area.
7. The method of ...

17-02-2022 publication date

Augmented reality systems and methods incorporating wearable pin badges

Number: US20220051023A1
Author: Caleb John Paullus
Assignee: Pinfinity LLC

Systems and methods disclosed in this application are directed to augmented reality for use with pin badges. Pin badges can be worn, held, or set within view of an AR device having a camera. The AR device sends images or video from its camera to a platform server that determines whether a pin badge exists in view of the camera. If a pin badge exists, it is identified and augmented reality imagery related to the pin badge is transmitted back to the AR device so that the AR device can incorporate that augmented reality imagery into a video stream from its camera as shown on its display.

Publication date: 17-02-2022

MEDICAL IMAGE PROCESSING APPARATUS AND MEDICAL IMAGE PROCESSING METHOD WHICH ARE FOR MEDICAL NAVIGATION DEVICE

Number: US20220051786A1
Assignee:

The present invention relates to a medical image processing apparatus and a medical image processing method for a medical navigation device, and more particularly, to an apparatus and method for processing an image provided when using the medical navigation device. To this end, the present invention provides a medical image processing apparatus for a medical navigation device, including: a position tracking unit configured to obtain position information of the medical navigation device within an object; a memory configured to store medical image data generated based on a medical image of the object; and a processor configured to set a region of interest (ROI) based on position information of the medical navigation device in reference to the medical image data, and generate partial medical image data corresponding to the ROI, and a medical image processing method using the same.

1.-20. (canceled)
21. A medical image processing apparatus for a medical navigation device, comprising: a position tracker configured to obtain position information of the medical navigation device within an object; a memory configured to store medical image data generated based on a medical image of the object; and a processor configured to set a region of interest (ROI) based on position information of the medical navigation device in reference to the medical image data, and generate partial medical image data corresponding to the ROI, wherein the ROI is set to a three-dimensional region including a region within a preset distance from a reference plane based on a position of the medical navigation device, and wherein the reference plane is set based on at least one of a horizontal plane, a sagittal plane, and a coronal plane of the medical image data.
22. The apparatus of claim 21, wherein the preset distance in reference to each of the horizontal plane, the sagittal plane, and the coronal plane is determined by a user input.
23. The apparatus of claim 21, wherein the partial ...
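A minimal sketch of the ROI construction in claims 21 and 22, assuming the stored medical image data is a numpy volume whose axes correspond to the horizontal, coronal, and sagittal planes and whose voxel spacing is uniform; both assumptions are illustrative only.

import numpy as np

def partial_volume(volume, device_pos, plane="sagittal", distance=10):
    # Keep the three-dimensional region within `distance` voxels of the
    # reference plane through the navigation device position (z, y, x).
    axis = {"horizontal": 0, "coronal": 1, "sagittal": 2}[plane]
    center = device_pos[axis]
    lo = max(center - distance, 0)
    hi = min(center + distance + 1, volume.shape[axis])
    slicer = [slice(None)] * 3
    slicer[axis] = slice(lo, hi)
    return volume[tuple(slicer)]

volume = np.zeros((256, 256, 256), dtype=np.int16)  # stored image data
roi = partial_volume(volume, device_pos=(120, 90, 140), plane="sagittal")
print(roi.shape)  # (256, 256, 21): partial medical image data around the plane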

Publication date: 04-02-2016

IMAGE CAPTURING APPARATUS AND CONTROL METHOD THEREFOR

Number: US20160037046A1
Author: Nashizawa Hiroaki
Assignee:

An image capturing apparatus comprises: an image capturing unit; an image capture control unit configured to control the image capturing unit to repeat image capture of frame images under different exposures; a development unit configured to apply development processing to each of the captured frame images; and a composition unit configured to generate a composite image by compositing temporally consecutive images that have been developed by the development unit, wherein the development unit generates, from one of the captured frame images, a first image and a second image that are associated with different development parameters, and the composition unit composites images generated using the same development parameter among images generated from the captured frame images that are temporally consecutive.

1. An image capturing apparatus comprising: an image capturing unit; an image capture control unit configured to control the image capturing unit to repeat image capture of frame images under different exposures; a development unit configured to apply development processing to each of the captured frame images; and a composition unit configured to generate a composite image by compositing temporally consecutive images that have been developed by the development unit, wherein the development unit generates, from one of the captured frame images, a first image and a second image that are associated with different development parameters, and the composition unit composites images generated using the same development parameter among images generated from the captured frame images that are temporally consecutive.
2. The apparatus according to claim 1, wherein the development unit applies the development processing to the captured frame images that are temporally consecutive using a development parameter whose pattern changes with time.
3. The apparatus according to claim 1, wherein the development parameters are white balance coefficients used in controlling white balance of ...
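A toy Python sketch of the claimed flow, assuming frames arrive as float RGB arrays and that the two development parameters are white balance gain vectors (as in claim 3); compositing is reduced to a per-stream mean, which is a stand-in for whatever HDR composition the apparatus actually performs.

import numpy as np

def develop(raw, wb_gains):
    # Development processing: apply per-channel white balance gains.
    return np.clip(raw * wb_gains, 0.0, 1.0)

rng = np.random.default_rng(0)
# Repeated capture of frame images under alternating exposures (toy data).
frames = [rng.random((4, 4, 3)) * exposure for exposure in (0.5, 1.0, 0.5, 1.0)]

wb_a = np.array([1.0, 1.0, 1.2])  # first development parameter
wb_b = np.array([1.3, 1.0, 0.9])  # second development parameter

# From each captured frame, generate a first and a second image that are
# associated with different development parameters.
stream_a = [develop(f, wb_a) for f in frames]
stream_b = [develop(f, wb_b) for f in frames]

# Composite temporally consecutive images developed with the SAME parameter.
composite_a = np.mean(stream_a, axis=0)
composite_b = np.mean(stream_b, axis=0)
print(composite_a.shape, composite_b.shape)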

Publication date: 04-02-2016

NETWORK PLANNING TOOL SUPPORT FOR 3D DATA

Number: US20160037356A1
Assignee:

A telecommunication network planning method, system, and computer readable medium support accessing point cloud data and a corresponding image of a location. The point cloud data indicates positions of physical objects visible in the image. A network planning function may be performed. The network planning function may include modifying an outside plant asset object visible in the image, obtaining a metric of an outside plant asset object visible in the image, and adding a virtual outside plant asset to a location. The point cloud data may be associated with the image within an interface that depicts the image to facilitate visualization of the outside plant assets in the surrounding environment.

1. A network planning method, comprising: accessing an image of a location and point cloud data associated with the image, wherein the point cloud data includes a plurality of n-tuples, each n-tuple associated with a corresponding point in the image and each n-tuple indicating a three dimensional position of an object located at the point; and performing a network planning function selected from the group consisting of: identifying existing outside plant assets at the location; modifying an existing outside plant asset object visible in the image; obtaining a metric of an outside plant asset object visible in the image; and adding a virtual outside plant asset to the location.
2. The method of claim 1, wherein accessing the image includes: selecting a streetview icon from a two dimensional user interface of a network planning tool; and generating a streetview user interface including an image of the location and point cloud data associated with the location.
3. The method of claim 2, wherein the two dimensional user interface depicts a map of an area that includes the location, the method further comprising: indicating, on the user interface, existing outside plant assets.
4. The method of claim 1, further comprising: assigning one or more attributes to one ...
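A small sketch of the "obtaining a metric" planning function, assuming the point cloud is stored as one (x, y, z) tuple per image pixel so that it is aligned with the streetview image; a measurement then reduces to the distance between the 3-D points under two clicked pixels.

import numpy as np

# Point cloud as n-tuples: one (x, y, z) position per image pixel (toy data).
height, width = 480, 640
cloud = np.random.default_rng(0).random((height, width, 3)) * 20.0

def measure(pixel_a, pixel_b):
    # Metric of an outside plant asset: Euclidean distance between the
    # 3-D positions of the objects visible at two image pixels.
    (ra, ca), (rb, cb) = pixel_a, pixel_b
    return float(np.linalg.norm(cloud[ra, ca] - cloud[rb, cb]))

# E.g. clicking the top and the base of a utility pole in the image:
print(f"pole height ~ {measure((100, 320), (400, 322)):.2f} m")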

Publication date: 30-01-2020

Method for monitoring an automation system

Number: US20200033843A1
Assignee: EISENMANN SE

A method for monitoring an automation system comprising a plurality of components. The method comprises rendering, by a processor, an image from a three-dimensional scene representing at least part of the automation system on the basis of position data and viewing direction data, and displaying the image on a display unit, so that the automation system can be monitored on a comparatively small display device while all relevant information is displayed. To that end, the components are distributed over a plurality of floors of the automation system, and each component is arranged in the three-dimensional scene on one of said floors. An input element associated with the floors is provided, and a vertical distance between two adjacent floors in the three-dimensional scene is changed depending on an input at the input element. Material paths between components of the automation system are represented as lines in the scene, at least one line representing a material path between said adjacent floors; when the vertical distance between said two adjacent floors is changed, the length of that line is changed accordingly.
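A minimal sketch of the floor-spacing behavior, assuming each component stores its floor index plus a local height offset and that a material path is a line between two component positions; under those assumptions, changing the inter-floor distance rescales the scene heights, so cross-floor lines stretch or shrink automatically.

from dataclasses import dataclass

@dataclass
class Component:
    name: str
    floor: int      # which floor of the automation system it sits on
    local_z: float  # height above its own floor

def scene_z(comp: Component, floor_distance: float) -> float:
    # Vertical position in the 3-D scene for a given inter-floor distance.
    return comp.floor * floor_distance + comp.local_z

press = Component("press", floor=0, local_z=1.0)
oven = Component("oven", floor=1, local_z=0.5)

for d in (4.0, 1.0):  # the input element shrinks the spacing from 4 m to 1 m
    length = abs(scene_z(oven, d) - scene_z(press, d))
    print(f"floor distance {d} m -> cross-floor line length {length} m")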
