
Total found: 2864. Showing 100.
Publication date: 07-06-2012

Layer combination in a surface composition system

Number: US20120139918A1
Author: Alan Liu, Ashraf Michail
Assignee: Microsoft Corp

A system and method for processing and rendering multiple layers of a two-dimensional scene. A system provides a mechanism to determine a number of scene surfaces and a mapping between scene layers and scene surfaces. The mechanisms may include combining and aggregating areas of layers to create one opaque surface, aggregating non-overlapping semi-transparent opaque areas of layers, or creating surfaces from overlapping semi-transparent surfaces. Moving objects are accommodated, so that layers below a moving object may be rendered properly in frames where the moving object is above the layer and frames where the moving object is not above the layer, for each pixel.
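The layer-to-surface mapping described in this abstract can be illustrated with a small heuristic. The sketch below is an assumption-laden Python model, not Microsoft's algorithm: the `Layer` type, the axis-aligned overlap test, and the exact grouping rules are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Layer:
    name: str
    rect: tuple   # (x, y, w, h) in scene coordinates -- illustrative assumption
    opaque: bool

def overlaps(a, b):
    """Axis-aligned rectangle intersection test."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def map_layers_to_surfaces(layers):
    """Assign each layer to a surface index.

    Heuristic mirroring the mechanisms above: opaque layers are combined
    into one opaque surface (index 0); non-overlapping semi-transparent
    layers are aggregated onto a shared surface; each semi-transparent
    layer that overlaps another gets a surface of its own.
    """
    surfaces = {}
    shared = None          # surface shared by non-overlapping transparent layers
    next_id = 1
    transparent = [l for l in layers if not l.opaque]
    for l in layers:
        if l.opaque:
            surfaces[l.name] = 0
        elif any(o is not l and overlaps(l.rect, o.rect) for o in transparent):
            surfaces[l.name] = next_id   # overlapping: its own surface
            next_id += 1
        else:
            if shared is None:
                shared = next_id
                next_id += 1
            surfaces[l.name] = shared
    return surfaces
```

Fewer surfaces means fewer render-target switches, which is the point of aggregating layers in the first place.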

Publication date: 22-11-2012

Graphics processing systems

Number: US20120293545A1
Assignee: ARM LTD

In a tile-based graphics processing system, when an overlay image is to be rendered onto an existing image, the existing tile data for the existing image from the frame buffer in the main memory is pre-loaded into the local colour buffer of the graphics processor (step 41). The overlay content is then rendered and used to modify the tile data stored in the colour buffer (step 44). When the data for a given sampling position stored in the tile buffer is modified as a result of the overlay image, a corresponding dirty bit for the tile region that the sampling position falls within is set (step 45). Then, when all the rendering for the tile has been completed, the dirty bits are examined to determine which regions of the tile have been modified (step 46). The modified tile regions are written back to the output image in the frame buffer in the main memory (step 47), but any regions whose dirty bits have not been set are not written back to the frame buffer in the main memory.
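The dirty-bit write-back scheme can be modelled in a few lines. This is a toy sketch under assumptions (a dict-based frame buffer, square dirty regions of side `region`), not ARM's implementation; the step numbers in the comments refer to the steps quoted in the abstract.

```python
def render_overlay(frame, tile_origin, tile_size, region, overlay):
    """Pre-load a tile from `frame`, apply overlay writes, and write back
    only the regions whose dirty bit was set.

    frame:   dict (x, y) -> pixel value, the "main memory" frame buffer
    overlay: dict (x, y) -> new value, in tile-local coordinates
    region:  side length of a square dirty-bit region inside the tile
    Returns the number of pixels written back to main memory.
    """
    ox, oy = tile_origin
    # Step 41: pre-load existing tile data into the local colour buffer.
    colour_buffer = {(x, y): frame.get((ox + x, oy + y), 0)
                     for x in range(tile_size) for y in range(tile_size)}
    dirty = set()
    # Steps 44/45: render the overlay; set a dirty bit per modified region.
    for (x, y), value in overlay.items():
        if colour_buffer[(x, y)] != value:
            colour_buffer[(x, y)] = value
            dirty.add((x // region, y // region))
    # Steps 46/47: write back only the dirty regions.
    written = 0
    for (rx, ry) in dirty:
        for x in range(rx * region, (rx + 1) * region):
            for y in range(ry * region, (ry + 1) * region):
                frame[(ox + x, oy + y)] = colour_buffer[(x, y)]
                written += 1
    return written
```

The saving is the untouched regions: modifying one pixel costs one region's write-back, not the whole tile's.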

Publication date: 13-12-2012

System, method, and computer program product for optimizing stratified sampling associated with stochastic transparency

Number: US20120313961A1
Author: Samuli Laine, Tero Karras
Assignee: Nvidia Corp

A system, method, and computer program product are provided for optimizing stratified sampling associated with stochastic transparency. In use, surface data associated with one or more surfaces to be rendered is received. Additionally, the one or more surfaces are rendered, utilizing stochastic transparency, where stratified sampling associated with the stochastic transparency is optimized.
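One way stratified sampling reduces the noise of stochastic transparency: instead of an independent alpha test per sub-sample, each surface covers exactly `round(alpha * n)` of the pixel's sub-samples. The sketch below is illustrative only; the seeded-shuffle coverage mask and the resolve loop are assumptions, not Nvidia's method.

```python
import random

def covered_samples(alpha, n_samples, seed):
    """Stratified subset of sample indices a surface covers.

    Plain stochastic transparency flips an independent coin per sample;
    choosing exactly round(alpha * n) samples removes most of the variance
    in per-pixel coverage.
    """
    k = round(alpha * n_samples)
    rng = random.Random(seed)            # deterministic per-surface mask
    order = list(range(n_samples))
    rng.shuffle(order)
    return set(order[:k])

def resolve_pixel(surfaces, n_samples, seed):
    """Average the front-most surface colour over all sub-samples.

    surfaces: list of (depth, grey_colour, alpha); background is 0.0.
    """
    total = 0.0
    for s in range(n_samples):
        # Nearest surface whose stratified mask covers sample s.
        hits = [(d, c) for i, (d, c, a) in enumerate(surfaces)
                if s in covered_samples(a, n_samples, seed + i)]
        total += min(hits)[1] if hits else 0.0
    return total / n_samples
```

With a single alpha-0.5 surface the resolved value is exactly half its colour, with no sampling noise.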

Publication date: 18-04-2013

Method of rendering a user interface

Number: US20130097520A1
Assignee: Research in Motion Ltd

A user interface (UI) is presented in which a UI client engine is associated with an application and a UI rendering engine is associated with the client engine. The UI rendering engine receives a scene graph and data items associated with elements of the scene graph, and processes a rendering thread to render the UI in accordance with the scene graph and the data items, independently of further input from the UI client engine.

Publication date: 28-11-2013

Automatic flight control for uav based solid modeling

Number: US20130317667A1
Author: Ezekiel Kruglick
Assignee: EMPIRE TECHNOLOGY DEVELOPMENT LLC

Technologies are generally described for controlling a flight path of a UAV-based image capture system for solid modeling. Upon determining an initial movement path based on an estimate of a structure to be modeled, images of the structure may be captured and surface hypotheses formed for unobserved surfaces based on the captured images. A normal vector and a viewing cone may be computed for each hypothesized surface. A set of desired locations may be determined based on the viewing cones for the entire structure to be modeled, and a least-impact path for the UAV may be determined based on the desired locations and desired flight parameters.
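One concrete step from this pipeline, computing a normal vector for a hypothesised planar surface, can be sketched with a cross product. The formulation below is an illustration; the patent does not specify how the normal is computed.

```python
def surface_normal(p0, p1, p2):
    """Unit normal of the plane through p0, p1, p2 (right-hand rule).

    Points are (x, y, z) tuples; the viewing cone for the surface would
    then be constructed around this normal.
    """
    ux, uy, uz = (p1[i] - p0[i] for i in range(3))
    vx, vy, vz = (p2[i] - p0[i] for i in range(3))
    # Cross product u x v gives a vector perpendicular to the plane.
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    length = sum(c * c for c in n) ** 0.5
    return tuple(c / length for c in n)
```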

Publication date: 07-01-2016

AUTOMATED SEAMLINE CONSTRUCTION FOR HIGH-QUALITY HIGH-RESOLUTION ORTHOMOSAICS

Number: US20160005149A1
Author: Malitz Seth
Assignee:

A system for semi-automated feature extraction comprising an image analysis server that receives and initializes a plurality of raster images, a feature extraction server that identifies and extracts image features, a mosaic server that assembles mosaics from multiple images, and a rendering engine that provides visual representations of images for review by a human user, and a method for generating a cost raster utilizing the system of the invention.

1. A system for semi-automated feature extraction, comprising:
an image analysis server computer comprising program code stored in a memory and adapted to process raster images to determine image and pixel information and provide the images and derived information to other components of the system;
a feature extraction server computer comprising program code stored in a memory and adapted to identify image features based at least in part on derived information for a raster image, and to provide the image and identified features to other components of the system;
a mosaic imaging server computer comprising program code stored in a memory and adapted to receive a plurality of raster images and assemble the images to form mosaics and provide the mosaics to other components of the system; and
a rendering engine computer comprising program code stored in a memory and adapted to receive a plurality of data from other components of the system and create visual representations of the data for review by a human user.
2. The system of claim 1, further comprising a database computer comprising program code stored in a memory and adapted to store and provide information for other components of the system.
3. The system of claim 1, further comprising a viewer device adapted to receive information from other components of the system and present the information for review by a human user.
4. The system of claim 1, further comprising a plurality of input devices adapted to receive input from a human user and provide the results of the ...

Publication date: 07-01-2021

SYSTEM AND METHOD FOR DISPLAYING HIGH QUALITY IMAGES ON CONTROLS BASED ON DISPLAY SCALING

Number: US20210004931A1
Assignee:

A graphical user interface (GUI) includes an image list associated with a display component of a display device. The image list has an index of logical images, where each of the logical images has a fixed pixel size. The GUI further includes an image container connected to the image list, where the image container comprises a plurality of different size versions of at least some of the logical images. The GUI further includes one or more control objects, where each of the control objects is configured to draw a corresponding image from the index of logical images of the image list. The GUI is configured to update the index of logical images of the image list with the different size versions sourced from the image container in response to a scale change of the display component.

1. A graphical user interface (GUI) comprising:
an image list associated with a display component of a display device, the image list having an index of logical images, each of the logical images having a fixed pixel size;
an image container connected to the image list, the image container comprising a plurality of different size versions of at least some of the logical images; and
one or more control objects, each of the control objects configured to draw a corresponding image from the index of logical images of the image list,
wherein the GUI is configured to update the index of logical images of the image list with the different size versions sourced from the image container in response to a scale change of the display component.
2. The GUI of claim 1, wherein each of the control objects is configured to draw a corresponding different size version of the logical images in response to the update of the image list.
3. The GUI of claim 1, wherein each of the different size versions of the logical images have the same aspect ratio, and each different size version of a corresponding logical image has a different pixel size than those of other different size versions of the corresponding ...
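The scale-change update can be sketched as a nearest-size lookup. The class below is a hypothetical model of the described GUI objects; the names `ImageList` and `on_scale_change` and the nearest-size selection rule are assumptions, not the patented design.

```python
class ImageList:
    """Index of logical images backed by a container of pre-rendered sizes."""

    def __init__(self, container, base_size):
        # container: logical name -> {pixel_size: image data}
        self.container = container
        self.base_size = base_size
        # The index starts with the base-size version of every logical image.
        self.index = {name: sizes[base_size]
                      for name, sizes in container.items()}

    def on_scale_change(self, scale):
        """Rebuild the index with the stored size nearest base_size * scale,
        so controls redraw crisp images instead of stretching the old ones."""
        target = self.base_size * scale
        for name, sizes in self.container.items():
            best = min(sizes, key=lambda s: abs(s - target))
            self.index[name] = sizes[best]
```

A control object would simply draw `image_list.index[name]` and pick up the swapped version after a DPI change.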

Publication date: 02-01-2020

ANALYZING 2D MOVEMENT IN COMPARISON WITH 3D AVATAR

Number: US20200005544A1
Author: Kim Sang J.
Assignee:

A processing device receives a two dimensional (2D) video recording of a subject user performing a physical activity and provides a three dimensional (3D) visualization comprising a virtual avatar performing the physical activity. The processing device causes display of the 3D visualization comprising the virtual avatar at a first key point in performing the physical activity, receives first user input to advance the 2D video recording to a first position corresponding to the first key point, and receives second user input comprising a first synchronization command. In response, the processing device generates a first synchronization marker to indicate the first position in the 2D video recording corresponding to the first key point.

1. A method comprising:
receiving a two dimensional (2D) video recording of a subject user performing a physical activity;
providing a three dimensional (3D) visualization comprising a virtual avatar performing the physical activity;
causing display of the 3D visualization comprising the virtual avatar at a first key point in performing the physical activity;
receiving first user input to advance the 2D video recording to a first position corresponding to the first key point;
receiving second user input comprising a first synchronization command; and
generating, by a processing device, a first synchronization marker to indicate the first position in the 2D video recording corresponding to the first key point.
2. The method of claim 1, wherein the 3D visualization is based on 3D motion capture data corresponding to one or more target users performing the physical activity.
3. The method of claim 2, wherein the 3D motion capture data comprises one or more of positional data, rotational data, or acceleration data measured by a plurality of motion capture sensors.
4. The method of claim 2, wherein the one or more target users share one or more attributes with the subject user.
5. The method of claim 4, wherein the one or more ...

Publication date: 03-01-2019

PRESENTING MARKUP IN A SCENE USING DEPTH FADING

Number: US20190005665A1
Assignee:

Architecture that enables the drawing of markup in a scene that neither obscures the scene nor is undesirably obscured by the scene. When drawing markup such as text, lines, and other graphics into the scene, a determination is made as to the utility to the viewer of drawing the markup with greater prominence than an occluding scene object. The utility of the markup is based on the distance of the scene object and the markup from the camera. Thus, if an object appears small in the scene and is in front of the markup, the markup will be drawn more clearly, whereas if the same object appears large in the scene and is in front of the markup, the markup is rendered faint, if drawn at all.

1. A system, comprising:
a contribution component configured to compute a dominant contribution between markup of a scene and a scene object of the scene based on a markup contribution amount and a scene contribution amount, the markup contribution amount and the scene contribution amount computed to determine utility of the markup or the scene object to a viewer for a given view of the scene;
a fade component configured to apply a level of depth fading to the markup based on the utility to the viewer to perceive the markup relative to a location of the scene object of the given view; and
at least one microprocessor configured to execute computer-executable instructions in a memory associated with the contribution component and the fade component.
2. The system of claim 1, wherein the markup contribution amount and the scene contribution amount are computed on a per pixel basis and the depth fading is applied on a per pixel basis.
3. The system of claim 1, wherein the scene is a three-dimensional (3D) scene.
4. The system of claim 1, wherein the contribution component computes a ratio of distances of an obscuring pixel and a markup pixel to a virtual camera from which the scene is viewed.
5. The system of claim 1, wherein the contribution component computes an amount of scene ...
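The per-pixel distance-ratio idea (claim 4) can be sketched as a simple opacity function. The linear fade curve below is an assumption; the patent describes computing a ratio of distances, not a specific curve.

```python
def markup_opacity(markup_depth, occluder_depth):
    """Per-pixel markup opacity when a scene object occludes the markup.

    Both depths are distances from the virtual camera. An occluder much
    nearer than the markup (small ratio) tends to appear large and
    dominate the view, so the markup fades out; an occluder almost as far
    away as the markup (ratio near 1) tends to appear small, so the
    markup is drawn nearly opaque.
    """
    if occluder_depth >= markup_depth:
        return 1.0                      # nothing in front of the markup
    ratio = occluder_depth / markup_depth
    return max(0.0, min(1.0, ratio))
```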

Publication date: 27-01-2022

NON-BLOCKING TOKEN AUTHENTICATION CACHE

Number: US20220028160A1
Assignee:

Techniques are disclosed relating to a non-blocking token authentication cache. In various embodiments, a server computer system receives a request for service from a client device, with the request including an authentication token issued by an authentication service. The server computer system accesses a cache of previously received validation responses from the authentication service to determine whether one of the validation responses indicates that the authentication token has already been validated by the authentication service. In response to determining that the cache includes a validation response indicating that the authentication token has already been validated by the authentication service, the server computer system first provides a response to the request for service to the client device, and then contacts the authentication service to determine whether the authentication token is still valid.

1. A method, comprising:
receiving, by a server computer system from a client device, a request for service, wherein the request includes an authentication token issued by an authentication service;
accessing, at the server computer system, a cache of previously received validation responses from the authentication service to determine whether one of the validation responses indicates that the authentication token has already been validated by the authentication service; and
in response to determining that the cache includes a validation response indicating that the authentication token has already been validated by the authentication service, the server computer system:
providing, to the client device, a response to the request for service, wherein the response is provided based on the validation response and not on token validity information about the authentication token that is stored by the authentication service; and
contacting the authentication service to determine whether the authentication token is still valid and should be revalidated.
2. The method ...
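The non-blocking flow, answer from the cache first and revalidate afterwards, can be sketched as follows. This is a single-threaded toy model under assumptions (an in-memory dict cache, a deferred-work list standing in for a background task), not the patented system.

```python
class TokenCache:
    """Serve cached validations immediately; revalidate after responding."""

    def __init__(self, validate_remote):
        self._validate_remote = validate_remote  # slow auth-service call
        self._cache = {}                         # token -> last validation result
        self.deferred = []                       # revalidations queued for later

    def handle_request(self, token):
        """Return (served_ok, source).

        On a cache hit the client is answered at once and the slow
        revalidation is merely queued, so the request is never blocked
        on the authentication service.
        """
        if self._cache.get(token) is True:
            self.deferred.append(token)          # revalidate after responding
            return True, "cache"
        # Cache miss (or previously invalid): must block on the service once.
        ok = self._validate_remote(token)
        self._cache[token] = ok
        return ok, "service"

    def run_deferred(self):
        """Contact the authentication service for queued tokens (a background
        task in a real server)."""
        for token in self.deferred:
            self._cache[token] = self._validate_remote(token)
        self.deferred.clear()
```

The trade-off is a window in which a freshly revoked token is still honoured once, bounded by how promptly the deferred revalidation runs.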

Publication date: 14-01-2016

Depth Display Using Sonar Data

Number: US20160011310A1
Assignee:

Various implementations directed to a depth display using sonar data are provided. In one implementation, a marine electronics device may include a sonar signal processor and a memory having a plurality of program instructions which, when executed by the sonar signal processor, cause the processor to receive sonar data from a transducer array disposed on a vessel, where the sonar data corresponds to a marine environment proximate to the vessel. The memory may also have program instructions which, when executed by the sonar signal processor, cause the processor to generate point cloud data based on the received sonar data. The memory may further have program instructions which, when executed by the sonar signal processor, cause the processor to generate a depth display based on the point cloud data, where the depth display includes a depth line representing an underwater floor of the marine environment.

1. A marine electronics device, comprising:
a sonar signal processor; and
a memory comprising a plurality of program instructions which, when executed by the sonar signal processor, cause the processor to:
receive sonar data from a transducer array disposed on a vessel, wherein the sonar data corresponds to a marine environment proximate to the vessel;
generate point cloud data based on the received sonar data; and
generate a depth display based on the point cloud data, wherein the depth display includes a depth line representing an underwater floor of the marine environment.
2. The marine electronics device of claim 1, wherein the program instructions, when executed by the sonar signal processor, further cause the processor to:
analyze the received sonar data to determine one or more locations of one or more objects of the marine environment using interferometry; and
generate the point cloud data based on the one or more determined locations.
3. The marine electronics device of claim 2, wherein the program instructions, when ...

Publication date: 14-01-2016

AUTOMATIC SPATIAL CALIBRATION OF CAMERA NETWORK

Number: US20160012589A1
Assignee:

A method for automatic spatial calibration of a network of cameras along a road includes processing a frame that is obtained from each camera of the network to automatically identify an image of a pattern of road markings that have a known spatial relationship to one another. The identified images are used to calculate a position of each camera relative to the pattern of road markings that is imaged by that camera. Geographical information is applied to calculate an absolute position of a field of view of each camera. A global optimization is applied to adjust the absolute position of the field of view of each camera of the camera network relative to an absolute position of the fields of view of other cameras of the camera network.

1. A method for automatic spatial calibration of a network of cameras along a road, the method comprising:
processing a frame that is obtained from each camera of the network to automatically identify an image of a pattern of road markings that have a known spatial relationship to one another;
using the identified images to calculate a position of each camera relative to the pattern of road markings that is imaged by that camera;
applying geographical information to calculate an absolute position of a field of view of each camera; and
applying a global optimization to adjust the absolute position of the field of view of each camera of the camera network relative to the absolute positions of the fields of view of other cameras of the camera network.
2. The method of claim 1, wherein identifying the markings comprises applying a Hough transform, a scale invariant feature transform (SIFT), or a speeded up robust features (SURF) detection to the frame.
3. The method of claim 1, wherein the road markings comprise broken line lane separation markings.
4. The method of claim 1, wherein the pattern of road markings is two dimensional.
5. The method of claim 1, wherein using the identified images comprises using a measured position ...

Publication date: 14-01-2016

THREE-DIMENSIONAL MAP DISPLAY SYSTEM

Number: US20160012632A1
Assignee:

A map database stores three-dimensional polygons of features, as well as water system polygons such as sea and lake and ground surface polygons. The map database stores map data in multiple levels having different levels of details, such as levels LVa to LVc. A procedure of displaying a three-dimensional map offsets the water systems relative to the ground surfaces, draws a map of a distant view area distant away from the viewpoint using map data at a low level of details, subsequently clears a depth buffer and newly draws a map of a close view area close to the viewpoint using map data at a high level of details. The offset is set to increase in the distant view area and decrease in the close view area. Increasing the offset in the distant view area avoids the occurrence of Z-fighting in the distant area from the viewpoint.

1. A three-dimensional map display system that displays a three-dimensional map, comprising:
a map database that stores three-dimensional polygon data representing geography and a three-dimensional shape of each feature;
an offset setting section that performs an offset process in an overlapping area of a first polygon and a second polygon representing substantially horizontal planes to shift the first polygon and the second polygon relative to each other in a height direction, so as to make a height difference between the first polygon and the second polygon; and
a drawing controller that uses the three-dimensional polygon data and polygons processed by the offset setting section to draw the three-dimensional map by perspective projection viewed from a specified viewpoint position and in a specified gaze direction, wherein
the offset setting section shifts the first polygon and the second polygon to increase the height difference at a distant point from the viewpoint in the perspective projection than a close point to the viewpoint.
2. The three-dimensional map display system according to claim 1, wherein the map database stores the three- ...
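The distance-dependent offset can be sketched as a clamped interpolation. The constants below are invented for illustration; the patent specifies only that the offset increases with distance from the viewpoint, where depth-buffer precision is coarsest and Z-fighting between coincident water and ground polygons is worst.

```python
def water_offset(distance, near_offset=0.1, far_offset=2.0, far_distance=10000.0):
    """Height offset for a water polygon at `distance` from the viewpoint.

    Linearly interpolates from a small lift nearby (where a large one
    would be visible to the user) to a large lift far away (where the
    depth buffer can no longer separate coincident surfaces), clamped
    at far_distance. All constants are illustrative assumptions.
    """
    t = min(max(distance / far_distance, 0.0), 1.0)
    return near_offset + t * (far_offset - near_offset)
```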

Publication date: 14-01-2016

THREE-DIMENSIONAL MAP DISPLAY SYSTEM

Number: US20160012634A1
Assignee:

A map database stores map data in multiple levels having different levels of details. In displaying a three-dimensional map, the map data having a higher level of details is used for a close view area near the viewpoint to a predetermined distance, and the map data having a lower level of details is used for a distant view area farther from the predetermined distance. The distant view area is first drawn by a perspective projection, and then, after clearing a depth buffer that stores depth information, the close view area is drawn, such that an undesirable hidden line removal process based on the depth information is not performed between the projected image in the distant view area and that in the close view area, thereby avoiding an unnatural phenomenon in which part of the close view image is hidden by the distant view image.

1. A three-dimensional map display system that displays a three-dimensional map, comprising:
a map database that stores map data for displaying a three-dimensional map, at each of multiple levels having different levels of map details; and
a display controller that refers to the map database and displays a three-dimensional map viewed from a specified viewpoint position and in a specified gaze direction, wherein
the display controller concurrently uses map data in a plurality of different levels to draw a map, such that map data at a rougher level having a lower level of map details is used for a distant view area more distant away from the viewpoint position and map data at a finer level having a higher level of map details is used for a close view area closer to the viewpoint position,
the display controller sequentially draws the map from the distant view area toward the close view area, and
the display controller draws the close view area over a previously drawn map, irrespective of depth at each point on the previously drawn map.
2. The three-dimensional map display system according to claim 1, wherein the display controller ...
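The two-pass draw with a depth-buffer clear in between can be modelled with a toy rasteriser. Everything here (dict buffers, fragment tuples) is an illustrative assumption; it shows only why clearing the depth buffer keeps close-view geometry from being rejected against stale distant-view depth values.

```python
def draw(colour, depth, fragments):
    """Standard depth-tested draw: fragments are (pixel, z, value) tuples."""
    for pixel, z, value in fragments:
        if z < depth.get(pixel, float("inf")):
            depth[pixel] = z
            colour[pixel] = value

def render_two_pass(distant_fragments, close_fragments):
    colour, depth = {}, {}
    draw(colour, depth, distant_fragments)   # pass 1: low-detail distant view
    depth.clear()                            # discard distant depth information
    draw(colour, depth, close_fragments)     # pass 2: high-detail close view
    return colour
```

Without the `depth.clear()`, a close-view fragment whose depth value happens to be numerically larger than a distant-view fragment's would fail the depth test, producing exactly the "close view hidden by distant view" artifact the abstract describes.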

Publication date: 14-01-2016

THREE-DIMENSIONAL MAP DISPLAY SYSTEM

Number: US20160012635A1
Assignee:

A three-dimensional map is displayed in a bird's eye view with a stereoscopic effect of feature polygons by providing shading in an appropriate direction according to the gaze direction in a simulative manner. Shading wall polygons are set in addition to feature polygons in three-dimensional map data. The shading wall polygon is a virtual plate-like polygon provided vertically, for example, along a boundary of a feature polygon. When provided around the water system, the shading wall polygon is specified to be opaque on one surface viewed from the water system side and to be transparent on the opposite surface. The shading wall polygons are drawn along with the feature polygons in the process of displaying a map. The shading wall polygon is drawn in black, gray or the like only at a location where the surface specified to be opaque faces a gaze direction.

1. A three-dimensional map display system that displays a three-dimensional map, comprising:
a drawing map database that is used to draw the three-dimensional map; and
a display controller that refers to the drawing map database and displays the three-dimensional map as a bird's eye view from a viewpoint position looking down from a height and in a gaze direction, wherein
the drawing map database stores: feature polygon data used to draw feature polygons representing shapes of features to be drawn in the three-dimensional map; and shading wall polygon data used to display a shading wall polygon, which is a virtual plate-like polygon to express shading in the three-dimensional map, is set perpendicular to or inclined to a feature polygon for which the plate-like polygon is to be set, and is specified to be visible only from one surface of front and rear surfaces and to be transparent from the other surface, and
the display controller displays only the surface specified to be visible with respect to the shading wall polygon.
2. The three-dimensional map display system according to claim 1, wherein the shading wall ...

Publication date: 03-02-2022

METHOD OF GRAPHICALLY TAGGING AND RECALLING IDENTIFIED STRUCTURES UNDER VISUALIZATION FOR ROBOTIC SURGERY

Number: US20220031406A1
Assignee: Asensus Surgical US, Inc.

A system and method for augmenting an endoscopic display during a medical procedure, including capturing a real-time image of a working space within a body cavity during a medical procedure. A feature of interest in the image is identified to the system using a user input handle of a surgical robotic system, and a graphical tag is displayed on the image marking the feature.

1. A method of tagging regions of interest on displayed images during a medical procedure, comprising:
positioning an endoscope in a body cavity;
positioning a surgical instrument in the body cavity;
mounting the surgical instrument to a first robotic manipulator arm;
causing the first robotic manipulator arm to manipulate the surgical instrument within the body cavity in response to user manipulation of a user input device;
capturing images of a surgical site within the body cavity and displaying the images on a display, the displayed images including images of the surgical instrument;
entering a tag selection mode comprising:
displaying a graphical pointer as an overlay on the display;
in response to user manipulation of the user input, positioning the graphical pointer at a region of interest on images of the surgical site displayed on the display;
in response to user selection input, displaying a graphical tag at the location of the graphical pointer on the images displayed on the display.
2. The method of claim 1, wherein the method further comprises:
exiting the tag selection mode, wherein the graphical tag remains displayed at the region of interest on images of the surgical site displayed on the display.
3. The method of claim 1, wherein the method further includes removing the graphical tag in response to user input to remove the tag.
4. The method of claim 3, wherein the method further includes restoring the graphical tag at the region of interest in response to user input to restore the tag.
5. The method of claim 1, wherein the user input includes a user input handle, and ...
Publication date: 11-01-2018

METHOD FOR PREVENTING BURN-IN CONDITIONS ON A DISPLAY OF AN ELECTRONIC DEVICE

Number: US20180012332A1
Assignee:

A method for preventing burn-in conditions on a display of an electronic device is disclosed. The electronic device acquires a position of, for example, a task bar being displayed on an OLED screen, extracts a color of a pixel located adjacent to the task bar, and generates an overlay window of a color based on the extracted color. The color of the overlay window is translucent and continuously changes from the extracted color to black with an increase of the distance from the pixel located adjacent to the task bar. The task bar is displayed on the OLED screen with the overlay window overlaying the task bar.

1. An electronic device comprising:
a display for displaying images;
a position acquisition unit for acquiring a position of a fixed image to be displayed on said display;
a color extraction unit for extracting a color of a pixel located adjacent to said fixed image;
a mask generation unit for generating a mask having a color based on said color of said pixel located adjacent to said fixed image; and
an image display control unit for displaying said fixed image on said display with said mask overlaid on said fixed image.
2. The electronic device of claim 1, wherein said color of said mask is translucent.
3. The electronic device of claim 1, wherein said color of said mask continuously changes from said color of said pixel located adjacent to said fixed image to black with an increase of distance from said pixel located adjacent to said fixed image.
4. The electronic device of claim 1, wherein said color extraction unit uses an average value of colors of a plurality of adjacent pixels located adjacent to said fixed image.
5. The electronic device of claim 1, wherein said mask is overlaid on said fixed image when a cursor is not positioned on said fixed image, and said mask is not overlaid on said fixed image when said cursor is positioned on said fixed image.
6. The electronic device of claim 1, wherein said color of said mask is updated every time when said ...
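The translucent gradient mask can be sketched as a colour ramp over distance. The linear ramp and the `fade_length` constant are assumptions; the abstract specifies only a continuous change from the extracted colour to black.

```python
def mask_colour(extracted_rgb, distance, fade_length=50):
    """Colour of the mask `distance` pixels into the fixed image.

    At distance 0 the mask matches the pixel adjacent to the fixed image;
    by `fade_length` pixels it has blended fully to black, lowering the
    average drive of the display pixels under the static content.
    fade_length is an illustrative assumption.
    """
    t = min(distance / fade_length, 1.0)
    return tuple(round(c * (1.0 - t)) for c in extracted_rgb)
```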

Publication date: 11-01-2018

Visualization of Wellbore Cleaning Performance

Number: US20180012384A1
Assignee: Halliburton Energy Services, Inc.

A method for displaying performance of a wellbore drilling operation including wellbore cleaning includes defining drilling parameters for the drilling operation. The method includes defining a visualization tool including a boundary defined by the drilling parameters, where the boundary depicts an optimal rate of penetration (ROP). The method includes displaying the visualization tool with the optimal ROP, where the optimal ROP defines a maximum ROP for optimal wellbore cleaning based on the drilling parameters. The method includes displaying an actual rate of penetration (ROP) with respect to the optimal ROP on the visualization tool. The method further includes adjusting the actual ROP to match the optimal ROP. 1. A method for displaying performance of a wellbore drilling operation including wellbore cleaning , comprising:defining drilling parameters for the drilling operation;defining a visualization tool comprising a boundary defined by the drilling parameters, wherein the boundary depicts an optimal rate of penetration (ROP);displaying the visualization tool with the optimal ROP, wherein the optimal ROP defines a maximum ROP for optimal wellbore cleaning based on the drilling parameters;displaying an actual rate of penetration (ROP) with respect to the optimal ROP on the visualization tool; andadjusting the actual ROP to match the optimal ROP.2. The method of claim 1 , wherein the drilling parameters comprise at least one of a drilling fluid flow rate claim 1 , a fluid property claim 1 , and a rotational speed.3. The method of claim 1 , wherein the actual ROP defines a level of performance for actual wellbore cleaning.4. The method of claim 1 , wherein adjusting the actual ROP comprises reducing the actual ROP.5. The method of claim 1 , wherein adjusting the actual ROP comprises increasing the actual ROP.6. 
The method of claim 1 , wherein the visualization tool provides a graphical layout of the optimal ROP claim 1 , wherein the actual ROP is mapped onto the ...
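The comparison the method describes can be sketched in a few lines. The formula for the optimal ROP below is a hypothetical stand-in (the abstract does not disclose one); only the compare-and-adjust logic follows the method.

```python
def optimal_rop(flow_rate_lpm: float, fluid_viscosity_cp: float,
                rotary_speed_rpm: float, k: float = 0.001) -> float:
    """Maximum ROP (m/h) that still allows adequate hole cleaning.

    Assumed model: cleaning capacity rises with flow rate and rotation,
    falls with fluid viscosity. The constant k is illustrative.
    """
    return k * flow_rate_lpm * rotary_speed_rpm / fluid_viscosity_cp

def rop_adjustment(actual_rop: float, opt_rop: float, tol: float = 0.05) -> str:
    """Advice for steering the actual ROP toward the optimal boundary."""
    if actual_rop > opt_rop * (1 + tol):
        return "reduce"    # drilling faster than the hole can be cleaned
    if actual_rop < opt_rop * (1 - tol):
        return "increase"  # cleaning capacity is under-used
    return "hold"

opt = optimal_rop(flow_rate_lpm=2000, fluid_viscosity_cp=20, rotary_speed_rpm=120)
print(round(opt, 3), rop_adjustment(actual_rop=15.0, opt_rop=opt))
```

In the visualization tool itself, `opt` would draw the boundary and the actual ROP would be plotted against it; the returned string corresponds to the "adjusting the actual ROP to match the optimal ROP" step.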

Publication date: 09-01-2020

ADAPTIVE SMART GRID-CLIENT DEVICE COMPUTATION DISTRIBUTION WITH GRID GUIDE OPTIMIZATION

Number: US20200013138A1
Assignee:

Systems, apparatuses and methods may provide a way to monitor, by a process monitor, one or more processing factors of one or more client devices hosting one or more user sessions. More particularly, the systems, apparatuses and methods may provide a way to generate, responsively, a scene generation plan based on one or more of a digital representation of an N dimensional space or at least one of the one or more processing factors, and generate, by a global scene generator, a global scene common to the one or more client devices based on the digital representation of the space. The systems, apparatuses and methods may further provide for performing, by a local scene generator, at least a portion of the global illumination based on one or more of the scene generation plan, or application parameters. 1. (canceled)2. A graphics apparatus comprising:a memory comprising a digital representation of an N dimensional space; and monitor one or more processing factors of one or more client devices hosting one or more user sessions;', 'generate a scene generation plan based on one or more of the digital representation of the space or one or more of the one or more processing factors; and', 'generate a global scene common to the one or more client devices based on the digital representation of the space,, 'logic coupled to the memory, wherein the logic is implemented in one or more of configurable logic or fixed-functionality hardware logic, and the logic is towherein the graphics pipeline apparatus is to distribute generation of at least a portion of the global scene to one or more of the client devices based on at least a stability threshold period that is based on a threshold during which the processing factors are to be sampled.3. The apparatus of claim 2 , wherein the digital representation of the space includes scene elements claim 2 , and wherein one or more of the scene elements are illuminated in the global scene based on one or more of the scene generation plan or ...

Publication date: 15-01-2015

Method for Improving Speed and Visual Fidelity of Multi-Pose 3D Renderings

Number: US20150015581A1
Author: Lininger Scott
Assignee:

A method and system provides increased visual fidelity in a multi-pose three-dimensional rendering of an object by overlaying edge lines. A server sends a multiplicity of two-dimensional renderings of the object to a client device over a network. Each of the 2D renderings depicts the object in a different pose. As the 2D renderings are displayed sequentially, the object appears to move, for example, by pivoting on an axis. The server also sends a multiplicity of overlay renderings to the client device. Each of the overlay renderings corresponds to a respective one of the 2D renderings and depicts edge lines that would appear on the 2D rendering. The edge lines are rendered on a transparent background such that, when a user interface combines one of the 2D renderings with the corresponding overlay rendering, the edge lines are highlighted on the 2D rendering and provide additional visual cues to the viewer. 166-. (canceled)67. A method of depicting on a display a multi-pose three-dimensional (3D) rendering of an object , the method comprising:storing on a computer readable medium a multiplicity of two-dimensional (2D) renderings of the object, each of the multiplicity of 2D renderings depicting the object from a different apparent viewing angle;transmitting the multiplicity of 2D renderings via a network to a client device coupled to the display; (1) either (a) a shadow layer, rendered in a first color and corresponding to shadows on the object as rendered in the corresponding 2D rendering; or (b) edge lines, rendered in a first color and corresponding to the edges of the object as rendered in the corresponding 2D rendering; and', '(2) a transparent background;, 'storing on the computer readable medium a first multiplicity of overlay renderings, each of the first multiplicity of overlay renderings corresponding to a respective one of the multiplicity of 2D renderings and each overlay rendering comprisingtransmitting, separately from the multiplicity of 2D 
renderings, ...

Publication date: 14-01-2016

MOBILE TERMINAL AND CONTROLLING METHOD THEREOF

Number: US20160014340A1
Assignee: LG ELECTRONICS INC.

A mobile terminal and controlling method thereof are disclosed, by which a sharp and clear photo can be composed using a plurality of photos taken by burst shooting. The present invention includes a camera, a sensing unit configured to detect a surrounding brightness, a user input unit configured to receive a photographing command, and a controller, if the photographing command is received, taking a first number of photos by burst shooting, the controller outputting a shaking eliminated photo based on a second number of photo(s) selected from the first number of the taken photos, wherein the second number is determined based on the detected surrounding brightness. 1. A mobile terminal comprising:a camera;a sensing unit configured to detect a surrounding brightness;a user input unit configured to receive a photographing command; and when the photographing command is received, take a first number of photos by burst shooting, and', 'output a shaking eliminated photo based on a second number of photo(s) selected from the first number of the taken photos,, 'a controller configured towherein the second number is determined based on the detected surrounding brightness.2. The mobile terminal of claim 1 , apply a shaking elimination algorithm divided into a plurality of application levels, and', 'determine each of the application levels based on the detected surrounding brightness., 'wherein in obtaining the shaking eliminated photo based on the second number of the photos, the controller further configured to3. The mobile terminal of claim 1 , wherein the controller further configured to select the determined second number of the photo(s) from the first number of the taken photos based on sharpness.4. The mobile terminal of claim 3 , wherein the controller further configured to:calculates a sharpness value of each of the first number of the taken photos, andselect the determined second number of the photo(s) in higher order of the calculated sharpness value.5. 
The mobile ...
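The selection described in claims 3 and 4 can be sketched directly: score each burst frame by a sharpness value and keep the N sharpest, where N depends on the surrounding brightness. The gradient-energy sharpness measure and the linear brightness-to-N mapping are assumptions; the claims only require "a sharpness value" and a brightness-determined second number.

```python
def sharpness(img):
    """Gradient energy of a 2D list of gray values (higher = sharper)."""
    return sum((row[x + 1] - row[x]) ** 2
               for row in img for x in range(len(row) - 1))

def frames_needed(brightness: float, max_frames: int = 8) -> int:
    """Darker scenes (brightness near 0) need more frames averaged."""
    return max(1, round(max_frames * (1.0 - brightness)))

def select_sharpest(frames, brightness: float):
    """Keep the brightness-determined number of sharpest burst frames."""
    n = frames_needed(brightness)
    return sorted(frames, key=sharpness, reverse=True)[:n]

blurry = [[10, 11, 12], [10, 11, 12]]
sharp = [[0, 50, 0], [50, 0, 50]]
picked = select_sharpest([blurry, sharp], brightness=0.9)  # bright scene
print(len(picked), picked[0] is sharp)
```

The selected frames would then feed the shake-elimination algorithm, whose application level is likewise chosen from the detected brightness (claim 2).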

Publication date: 21-01-2016

MAINTENANCE ASSISTANCE FOR AN AIRCRAFT BY AUGMENTED REALITY

Number: US20160019212A1
Author: Soldani Siegfried
Assignee:

A method for supporting aircraft maintenance, performed in a system comprising a display selection device and a portable device with a camera and an augmented reality display. The method comprises the steps of acquiring images of an equipment of the aircraft with the camera, and sending them to the display selection device; identifying the equipment present in these images with the display selection device and determining the identifier thereof, referred to as the useful identifier; on the basis of the useful identifier, sending maintenance assistance data with the display selection device to the augmented reality display; in response, displaying, in augmented reality, images corresponding to the data with the augmented reality display device. The method also comprises steps for displaying guidance data guiding towards one equipment in particular. A device for implementing such a method is also disclosed.

Publication date: 21-01-2016

GRID DATA PROCESSING METHOD AND APPARATUS

Number: US20160019436A1
Assignee:

The present invention discloses a grid data record processing method. The method comprises: acquiring influence parameters of the lag time of an insulator on which flashover has occurred, the lag time being the interval from the insulator flashover to the tripping of the corresponding breaker in a substation; determining the lag time from the acquired influence parameters and a lag time evaluation model; and determining, according to the lag time, which trip-up records in the grid data records were caused by the insulator flashover. With the method and apparatus according to embodiments of the present invention, trip-up records caused by insulator flashover can be efficiently identified in grid data records.
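The final matching step can be sketched as follows: once the lag time has been estimated from the influence parameters, a breaker trip record is attributed to an insulator flashover when it occurs roughly that lag after the flashover. The timestamp representation and the matching tolerance are assumptions for illustration.

```python
def find_flashover_trips(trip_times, flashover_times, lag_s, tol_s=0.5):
    """Return trip timestamps explained by some flashover plus the lag time.

    trip_times / flashover_times: event timestamps in seconds.
    A trip matches if it occurs lag_s (within tol_s) after a flashover.
    """
    return [t for t in trip_times
            if any(abs((t - f) - lag_s) <= tol_s for f in flashover_times)]

trips = [100.2, 250.0, 400.9]       # breaker trip records from grid data
flashovers = [99.0, 400.0]          # detected insulator flashover times
print(find_flashover_trips(trips, flashovers, lag_s=1.0))
```

The trip at 250.0 s has no flashover about one second earlier, so it is left unattributed; the other two are flagged as flashover-caused trip-up records.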

Publication date: 21-01-2016

System and Method for Defining an Augmented Reality View in a Specific Location

Number: US20160019722A1
Assignee:

This invention is a system and method for defining a location-specific augmented reality capability for use in portable devices having a camera. The system and method uses recent photographs or digital drawings of a particular location to help the user of the system or method position the portable device in a specific place. Once aligned, a digital scene is displayed to the user transposed over (and combined with) the camera view of the current, real-world environment at that location, creating an augmented reality experience for the user.

Publication date: 18-01-2018

Techniques for Built Environment Representations

Number: US20180018502A1
Assignee:

Described are techniques for indoor mapping and navigation. A reference mobile device includes sensors to capture range, depth and position data and processes such data. The reference mobile device further includes a processor that is configured to process the captured data to generate a 2D or 3D mapping of localization information of the device that is rendered on a display unit, execute an object recognition to identify types of installed devices of interest in a part of the 2D or 3D device mapping, and integrate the 3D device mapping in the built environment to objects in the environment through capturing point cloud data along with 2D image or video frame data of the built environment. 1. A system for indoor mapping and navigation comprises: a reference mobile device including sensors to capture range, depth and position data, with the mobile device including a depth perception unit, a position estimator, a heading estimator, and an inertial measurement unit to process data received by the sensors from an environment, the reference mobile device further including a processor configured to: process the captured data to generate a 2D or 3D mapping of localization information of the device that is rendered on a display unit; execute an object recognition to identify types of installed devices of interest in a part of the 2D or 3D device mapping; and integrate the 3D device mapping in the built environment to objects in the environment through capturing point cloud data along with 2D image or video frame data of the built environment. 2. The system of claim 1, wherein the 2D or 3D object recognition technique is part of the 3D mapping process. 3. The system of claim 1, wherein reference device models are images or 3D data models or Building Information Modelling (BIM) data. 4. The system of wherein the processor is further configured to: load RGB/RGB-D (three color + one depth) image/point cloud data set of a scene; choose interest points; compute scene ...

Publication date: 19-01-2017

INCREASING SPATIAL RESOLUTION OF PANORAMIC VIDEO CAPTURED BY A CAMERA ARRAY

Number: US20170019594A1
Assignee:

The present disclosure involves systems, software, and computer implemented methods for increasing spatial resolution of panoramic video captured by a camera array. In one example, a method may include identifying a captured image from each camera in a camera array associated with a capture of a panoramic video. The captured images are stitched together to generate at least one combined image and image mode homographies are calculated between the plurality of cameras in the camera array based on the stitching results. A plurality of captured video frames from each camera in the camera array are identified and video mode homographies of the plurality of cameras are determined based on the calculated image mode homographies. The determined video mode homographies are applied to stitch the plurality of captured video frames. 1. A computerized method performed by at least one processor , the method comprising:identifying a captured image from each camera in a camera array associated with a capture of a panoramic video, the camera array comprising a plurality of cameras, each camera operable to capture both images and video in respective image capture and video capture modes;stitching the captured images together to generate at least one combined image;calculating image mode homographies between the plurality of cameras in the camera array based on the stitching results;identifying a plurality of captured video frames from each camera in the camera array;determining video mode homographies of the plurality of cameras in the camera array based on the calculated image mode homographies; andapplying the determined video mode homographies to stitch the plurality of captured video frames.2. The method of claim 1 , wherein the image capture mode of each camera provides a relatively larger field-of-view claim 1 , higher spatial resolution claim 1 , and higher image quality than the corresponding video mode of the camera.3. 
The method of claim 1 , wherein the captured image and ...
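One plausible reading of "determining video mode homographies based on the calculated image mode homographies" is a resolution change between the two capture modes: if a camera's video frame is its image frame downscaled by (sx, sy), each 3x3 image-mode homography can be conjugated by the scaling matrix. That pure-resolution-change assumption is ours; the abstract does not specify the mapping.

```python
def matmul3(a, b):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rescale_homography(h_img, sx, sy):
    """H_video = S * H_image * S^-1 with S = diag(sx, sy, 1).

    h_img maps image-mode pixel coordinates between two cameras; the
    result maps the corresponding video-mode pixel coordinates.
    """
    s = [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]
    s_inv = [[1 / sx, 0, 0], [0, 1 / sy, 0], [0, 0, 1]]
    return matmul3(matmul3(s, h_img), s_inv)

h_img = [[1, 0, 10], [0, 1, 4], [0, 0, 1]]   # translation in image pixels
h_vid = rescale_homography(h_img, sx=0.5, sy=0.5)
print(h_vid[0][2], h_vid[1][2])  # translation shrinks with the resolution
```

The rescaled homographies would then be applied to warp and stitch the captured video frames, reusing the alignment computed once from the higher-resolution still images.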

Publication date: 19-01-2017

IMMERSIVE TELECONFERENCING WITH TRANSLUCENT VIDEO STREAM

Number: US20170019627A1
Assignee:

An immersive video teleconferencing system may include a transparent display and at least one image sensor operably coupled to the transparent display. The at least one image sensor may be multiple cameras included on a rear side of the transparent display, or a depth camera operably coupled to the transparent display. Depth data may be extracted from the images collected by the at least one image sensor, and an image of a predetermined subject may be segmented from a background of the collected images based on the depth data. The image of the segmented predetermined subject may also be scaled based on the depth data. The image of the scaled segmented predetermined subject may be transmitted to a remote transparent display at a remote location, and displayed on the remote transparent display such that a background surrounding the displayed image of the remote location is visible through the transparent display, so that the predetermined subject appears to be physically located at the remote location. 1. 
A method , comprising:establishing a connection between a first video teleconferencing device at a first location to a second video teleconferencing device at a second location to initiate a video teleconferencing session, the second location being different from the first location;synchronizing operation of a first transparent display at the first location and at least one first image sensor at the first location, the at least one first image sensor being operably coupled to the first transparent display;capturing images at the first location using the at least one first image sensor;generating a scaled image of a subject at the first location based on the images captured at the first location by the at least one first image sensor; andtransmitting the generated scaled image of the subject at the first location to the second video teleconferencing device at the second location for display on a second transparent display of the second video teleconferencing system at ...

Publication date: 17-01-2019

STORAGE MEDIUM, INFORMATION PROCESSING APPARATUS AND CONTROL METHOD

Number: US20190019334A1
Author: MIZOGUCHI Hidemi
Assignee: SQUARE ENIX CO., LTD.

A medium, an information processing apparatus, and a method of controlling an information processing apparatus for rendering a screen in which objects are located in a scene are provided. A viewpoint position for rendering the scene is obtained. Whether a character object exists within a predetermined range from the obtained viewpoint position is determined. For a target character object determined to exist within the predetermined range from the viewpoint position, a transparency for the target character object overall is set. 1. A non-transitory computer-readable storage medium storing a program that causes a computer connected to a renderer for rendering a screen in which objects are located in a scene to execute:processing of obtaining a viewpoint position for rendering the scene;processing of determining whether a character object exists within a predetermined range from the obtained viewpoint position; andprocessing of setting, for a target character object determined to exist within the predetermined range from the viewpoint position by the processing that determines, a transparency for the target character object overall.2. The non-transitory computer-readable storage medium according to claim 1 , wherein the processing of determining performs the determination based on whether or not there is a collision between a collision detection volume of a character object and a spherical collision detection volume whose radius is made to be the predetermined range for the obtained viewpoint position.3. The non-transitory computer-readable storage medium according to claim 2 , wherein a collision detection volume of a character object is spherical claim 2 , andthe processing of setting sets the transparency of the target character object based on a distance between the viewpoint position and the target character object, and the radius of the collision detection volume of the target character object.4. 
The non-transitory computer-readable storage medium according to ...
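The transparency rule of claim 3 can be sketched as a small function: opacity depends on the distance between the viewpoint and the character, offset by the character's spherical collision radius. The linear ramp from fully transparent (at the camera) to fully opaque (at the edge of the predetermined range) is an assumed choice; the claim only says the transparency is set from that distance and radius.

```python
import math

def character_alpha(viewpoint, char_center, char_radius, near_range):
    """0.0 = fully transparent at the camera, 1.0 = opaque at the range edge.

    Characters whose collision sphere touches the viewpoint fade out so
    they do not block the rendered screen.
    """
    d = math.dist(viewpoint, char_center) - char_radius
    return min(1.0, max(0.0, d / near_range))

cam = (0.0, 1.7, 0.0)
# collision sphere touching the camera: fully transparent
print(character_alpha(cam, (0.0, 1.7, 0.5), char_radius=0.5, near_range=2.0))
# character well outside the predetermined range: fully opaque
print(character_alpha(cam, (0.0, 1.7, 5.0), char_radius=0.5, near_range=2.0))
```

The range check of claim 2 falls out for free: any character whose returned alpha is 1.0 lies outside the spherical detection volume and needs no transparency at all.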

Publication date: 21-01-2021

RENDERING VIRTUAL OBJECTS IN 3D ENVIRONMENTS

Number: US20210019948A1
Assignee:

Systems, methods, devices, and other techniques for placing and rendering virtual objects in three-dimensional environments. The techniques include providing, by a device, a view of an environment of a first user. A first computing system associated with the first user receives an instruction to display, within the view of the environment of the first user, a virtual marker at a specified position of the environment of the first user, the specified position derived from a second user's interaction with a three-dimensional (3D) model of at least a portion of the environment of the first user. The device displays, within the view of the environment of the first user, the virtual marker at the specified position of the environment of the first user. 1providing, by a device, a view of an environment of a first user;receiving, by a first computing system associated with the first user, an instruction to display within the view of the environment of the first user a virtual marker at a specified position of the environment of the first user, the specified position derived from a second user's interaction with a three-dimensional (3D) model of at least a portion of the environment of the first user; anddisplaying, by the device and within the view of the environment of the first user, the virtual marker at the specified position of the environment of the first user.. A computer-implemented method, comprising: This application is a continuation of U.S. application Ser. No. 16/200,245, filed Nov. 26, 2018, which is a continuation of U.S. application Ser. No. 15/422,407, filed Feb. 1, 2017, now U.S. Pat. No. 10,140,773, issued Nov. 27, 2018. The complete disclosures of all of the above patent applications are hereby incorporated by reference in their entirety for all purposes.This specification generally relates to computer-based techniques for placing and rendering virtual objects in three-dimensional (3D) environments.Various computing systems have been developed that ...

Publication date: 28-01-2016

System and Method for Probabilistic Object Tracking Over Time

Number: US20160026245A1
Assignee:

A system and method are provided for object tracking in a scene over time. The method comprises obtaining tracking data from a tracking device, the tracking data comprising information associated with at least one point of interest being tracked; obtaining position data from a scene information provider, the scene being associated with a plurality of targets, the position data corresponding to targets in the scene; applying a probabilistic graphical model to the tracking data and the target data to predict a target of interest associated with an entity being tracked; and performing at least one of: using the target of interest to determine a refined point of interest; and outputting at least one of the refined point of interest and the target of interest. 1. A method of object tracking in a scene over time , the method comprising:obtaining tracking data from a tracking device, the tracking data comprising at least one point of interest computed by the tracking device;obtaining target data from a scene information provider, the scene comprising a plurality of targets, the target data corresponding to targets in the scene and each target being represented by one or more points in the scene;applying a probabilistic graphical model to the tracking data and the target data to predict, for each point of interest being tracked, an associated target of interest; and using the associated target of interest to refine the at least one point of interest; and', 'outputting at least one of a refined point of interest and the associated target of interest., 'performing at least one of2. The method of claim 1 , further comprising utilizing the refined point of interest to enhance tracking accuracy.3. The method of claim 2 , wherein the utilizing comprises one or more of:using the refined point of interest as input to a system receiving tracked signal data; andsending the refined point of interest to the tracking device, to assist in determining a true tracked signal.4. 
The method ...
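A minimal sketch of the prediction step: model the noisy tracked point as an isotropic Gaussian around the true location and compute a posterior over the scene's candidate targets. The uniform prior and single-Gaussian likelihood are simplifying assumptions; the method's probabilistic graphical model is richer (it also evolves over time).

```python
import math

def predict_target(point, targets, sigma=1.0):
    """Return (index of most probable target, posterior over targets).

    point: tracked point of interest, e.g. a gaze estimate.
    targets: candidate target positions in the scene.
    """
    w = [math.exp(-math.dist(point, t) ** 2 / (2 * sigma ** 2))
         for t in targets]
    z = sum(w)
    post = [x / z for x in w]
    return max(range(len(targets)), key=post.__getitem__), post

targets = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
idx, post = predict_target((4.6, 0.3), targets, sigma=1.5)
refined = targets[idx]  # refine the point of interest toward the target
print(idx, refined)
```

The `refined` value corresponds to the claim's refined point of interest, which can be fed back to the tracking device to improve accuracy.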

Publication date: 28-01-2021

Surgical Navigation Inside A Body

Number: US20210022812A1
Assignee:

A virtual reality surgical navigation method includes the steps of preparing a multi dimension virtual model associated with an anatomy inside of patient; receiving data indicative of a surgeon's current head position, including direction of view and angle of view; rendering a first virtual three-dimensional image from the virtual model, the virtual three-dimensional image being representative of an anatomical view from a first perspective at a location inside the patient, wherein the perspective is determined by data indicative of the surgeon's current head position; communicating the first rendered virtual image to a virtual headset display; receiving data input indicative of the surgeon's head moving to a second position, wherein the head movement comprises at least one of a change in angle of view and a change in direction of view; and rendering a second virtual three-dimensional image from the virtual model, the second virtual three-dimensional image being representative of an anatomical view from a second perspective at a first location inside the patient. 1. 
An augmented reality surgical navigation system comprising:one or more processors;one or more computer-readable tangible storage devices;at least one sensor for detecting information about a user's position relative to a patient;at least one camera for receiving live images of anatomical features of the patient; and first program instructions for preparing a multi dimension virtual model associated with a patient;', "second program instructions for receiving tracking information indicative of a user's current view of the patient, including the user's position relative to the patient as detected by the sensor and the user's angle of view of the patient;", "third program instructions for identifying in the virtual model a virtual view based on the received tracking information, wherein the identified virtual view corresponds to the user's view of the patient;", 'fourth program instructions for rendering a ...

Publication date: 28-01-2016

AUGMENTED REALITY PRODUCT BROCHURE APPLICATION

Number: US20160026724A1
Assignee:

A method for viewing an augmented reality product brochure for a mattress product on a computing device is provided. The method includes capturing an image corresponding to the mattress product with a camera of the computing device and retrieving the augmented reality product brochure corresponding to the image from a memory of the computing device. The method also includes displaying the augmented reality product brochure on a user interface of the computing device, wherein the augmented reality product brochure includes a representation of the mattress product and modifying the representation of the mattress product based on receiving one or more instructions from the user. 1. A method for viewing an augmented reality product brochure for a mattress product on a computing device , the method comprising:capturing an image corresponding to the mattress product with a camera of the computing device;retrieving the augmented reality product brochure corresponding to the image from a memory of the computing device;displaying the augmented reality product brochure on a user interface of the computing device, wherein the augmented reality product brochure includes a representation of the mattress product; andmodifying the representation of the mattress product based on receiving one or more instructions from the user.2. The method of claim 1 , wherein modifying the representation of the mattress product includes removing one or more layers of the mattress product.3. The method of claim 1 , wherein modifying the representation of the mattress product includes performing one or more actions that would cause physical damage to the mattress product if performed in a real world environment.4. The method of claim 1 , wherein the image corresponding to the mattress product includes an actual image of the mattress product.5. 
The method of claim 1 , wherein the image corresponding to the mattress product includes a piece of marketing material associated with the mattress product.6 ...

Publication date: 28-01-2016

IMAGE ANALYSIS METHOD

Number: US20160027208A1
Assignee: KABUSHIKI KAISHA TOSHIBA

A method for analysing a point cloud, the method comprising: 1. A method for analysing a point cloud, the method comprising: receiving a point cloud comprising a plurality of points, each point representing a spatial point in an image; arranging the points into a hierarchical search tree, with a lowest level comprising a plurality of leaf nodes, where each leaf node corresponds to a point of the point cloud, the search tree comprising a plurality of hierarchical levels with tree nodes in each of the hierarchical levels, the nodes being vertically connected to each other through the hierarchy by branches, wherein at least one moment of a property of the descendant nodes is stored in each tree node; and determining geometric information of the points located within a region, by identifying the highest-level tree nodes where all of the descendant leaf nodes are contained within the region and selecting the leaf nodes for the points where no sub-tree is entirely contained within the region, such that the points falling within the region are represented by the smallest number of nodes, and performing statistical operations on the nodes representing the points in the region, the statistical operations being determined from the moments of the properties stored within the identified tree nodes. 2. A method according to claim 1, wherein the property is at least one selected from position, normal vector, colour, curvature, intensity or transparency. 3. A method according to claim 1, wherein the geometric information is at least one selected from: number of points; mean of positions; mean of colour; mean of normal vectors; mean of intensity; covariance of positions; covariance of normal vectors; covariance of colour; variance of curvature; and variance of intensity. 4. A method according to claim 1, wherein the moments are selected from 0th order, 1st order, 2nd order, or any higher order moments. 5.
A method according to ...
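The stored-moments idea can be sketched directly: each tree node keeps the 0th, 1st and 2nd moments of its descendants' positions, so the mean and covariance over a query region can be combined from a handful of covering nodes without revisiting individual points. The sketch below uses 3-D positions; other properties such as colour or normals work the same way.

```python
class TreeNode:
    """Node storing the moments of its descendant points' positions."""
    def __init__(self, points):
        self.m0 = len(points)                                    # 0th moment
        self.m1 = [sum(p[i] for p in points) for i in range(3)]  # 1st moment
        self.m2 = [[sum(p[i] * p[j] for p in points)             # 2nd moment
                    for j in range(3)] for i in range(3)]

def region_mean_cov(nodes):
    """Mean and covariance of all points under the covering nodes."""
    n = sum(nd.m0 for nd in nodes)
    s1 = [sum(nd.m1[i] for nd in nodes) for i in range(3)]
    mean = [x / n for x in s1]
    cov = [[sum(nd.m2[i][j] for nd in nodes) / n - mean[i] * mean[j]
            for j in range(3)] for i in range(3)]
    return mean, cov

# two sub-trees fully contained in a query region
a = TreeNode([(0, 0, 0), (2, 0, 0)])
b = TreeNode([(4, 0, 0)])
mean, cov = region_mean_cov([a, b])
print(mean, cov[0][0])
```

Because moments are additive, the statistics come from two node reads instead of three point reads; over a large tree the saving is the point of the method.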

Publication date: 28-01-2021

ALLOCATION OF PRIMITIVES TO PRIMITIVE BLOCKS

Number: US20210027519A1
Assignee:

An application sends primitives to a graphics processing system so that an image of a 3D scene can be rendered. The primitives are placed into primitive blocks for storage and retrieval from a parameter memory. Rather than simply placing the first primitives into a primitive block until the primitive block is full and then placing further primitives into the next primitive block, multiple primitive blocks can be “open” such that a primitive block allocation module can allocate primitives to one of the open primitive blocks to thereby sort the primitives into primitive blocks according to their spatial positions. By grouping primitives together into primitive blocks in accordance with their spatial positions, the performance of a rasterization module can be improved. For example, in a tile-based rendering system this may mean that fewer primitive blocks need to be fetched by a hidden surface removal module in order to process a tile. 1. A method of processing primitives in a computer graphics processing system in which primitives are allocated to primitive blocks at a primitive block allocation module of a computer graphics processing system , which includes a data store for storing a set of primitive blocks to which primitives can be allocated , wherein a primitive block is configured to store primitive data , the method comprising: (i) comparing an indication of a spatial position of the received primitive with at least one indication of a spatial position of at least one primitive block that is stored in the data store, and', '(ii) allocating the received primitive to a primitive block based on a result of the comparison, such that the received primitive is allocated to a primitive block in accordance with its spatial position; and, 'for each of a plurality of received primitivesprocessing primitive blocks including allocated primitives in the computer graphics processing system.2. The method of claim 1 , wherein said processing primitive blocks comprises ...

Publication date: 04-02-2016

Method for requesting images from earth-orbiting satellites

Number: US20160034743A1
Author: David Douglas Squires
Assignee: Individual

A method for consumer-direct requesting of images from an Earth-orbiting satellite system having online Earth and space maps with image-location selection tools. The consumer may select an image from the map, specify a desired image time, or use a computer algorithm to match Earth or space coordinates. The consumer may place an order, send the order to a computer database, and generate Earth- or space-pointing coordinates for the satellite(s) selected to record the image. The system may automatically select a ground station in order to command the satellite and record or transmit the selected image from it. The system may store and/or retrieve the selected image in the computer database, as well as deliver the selected image to the consumer.

Publication date: 05-02-2015

METHOD FOR IMPROVING THE VISUAL QUALITY OF AN IMAGE COVERED BY A SEMI-TRANSPARENT FUNCTIONAL SURFACE

Number: US20150035849A1
Author: Gilbert Joël
Assignee:

This invention describes a method for improving the visual rendering of an image when said image is placed behind a semi-transparent functional surface. The method consists of modifying certain features of the original image, in particular the brightness, contrast, gamma and colour saturation thereof, in such a way that the visual rendering of the image modified in this way and placed behind the semi-transparent functional surface is closer to the rendering of the original image when said image is seen alone, without the semi-transparent functional surface. 1. A method for improving the visual rendition of an original image intended to be placed behind a semi-transparent functional surface , this method being characterized by a processing of the original image , which consists in modifying at least one of the visual features of said original image so that the visual rendition of the processed image thus modified , when it is placed behind said semi-transparent functional surface , substantially approaches the visual rendition of the original image when the latter is observed directly without the semi-transparent functional surface.2. The method as claimed in claim 1 , wherein it includes steps consisting in:displaying an original image not covered by a functional surface and analyzing a set of visual features of said original image;covering the original image with a semi-transparent functional surface and displaying the original image thus covered to form an image to be processed;modifying display control parameters of the image to be processed and comparing said visual features of the original image and of the image to be processed;when the visual features of the original image and of the image to be processed are substantially equal, saving the values of the display control parameters and displaying the processed image with these parameter values.3. 
The method as claimed in wherein the original image is a printed image or an electronic image composed of backlit ...
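The core idea, pre-adjusting pixel values so that the veil of the semi-transparent surface cancels out, can be sketched with a simple linear veiling model. The transmittance and tint values below are illustrative assumptions, not taken from the patent:

```python
def compensate(pixel, transmittance=0.7, tint=(255, 255, 255)):
    """Pre-adjust an RGB pixel so that, viewed through a semi-transparent
    surface, it approximates the original.  Assumes a simple linear model:
    perceived = transmittance * shown + (1 - transmittance) * tint."""
    return tuple(
        min(255, max(0, round((c - (1 - transmittance) * t) / transmittance)))
        for c, t in zip(pixel, tint)
    )

# A mid-grey pixel is darkened to counteract the brightening white veil:
print(compensate((128, 128, 128)))  # -> (74, 74, 74)
```

Applying the veil model to the adjusted value checks out: 0.7 × 74 + 0.3 × 255 ≈ 128, the original grey.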

More
01-02-2018 publication date

GRAPHICS PROCESSING SYSTEMS

Number: US20180033191A1
Assignee: ARM LIMITED

In a graphics processing system, a bounding volume representative of the volume of all or part of a scene to be rendered is defined. Then, when rendering an at least partially transparent object that is within the bounding volume in the scene, a rendering pass for part or all of the object is performed in which the object is rendered as if it were an opaque object. In the rendering pass, for at least one sampling position on a surface of the object, the colour to be used to represent the part of the refracted scene that will be visible through the object at the sampling position is determined by using a view vector from a viewpoint position for the scene to determine a refracted view vector for the sampling position, determining the position on the bounding volume intersected by the refracted view vector, using the intersection position to determine a vector to be used to sample a graphics texture that represents the colour of the surface of the bounding volume in the scene, and using the determined vector to sample the graphics texture to determine a colour for the sampling position to be used to represent the part of the refracted scene that will be visible through the object at the sampling position and any other relevant information encoded in one or more channels of the texture. 1.
A method of operating a graphics processing system when rendering a scene for output, in which a bounding volume representative of the volume of all or part of the scene to be rendered is defined; the method comprising: when rendering an at least partially transparent object that is within the bounding volume in the scene: performing a rendering pass for some or all of the object in which the object is rendered as if it were an opaque object; and in the rendering pass: using a view vector from a viewpoint position for the scene to determine a refracted view vector for the sampling position; determining the position on ...
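The rendering pass described above can be approximated in a few lines: refract the view vector with Snell's law, intersect the refracted ray with the bounding volume (an axis-aligned box here, an assumed simplification), and use the exit point to build the texture lookup vector:

```python
import math

def refract(view, normal, eta):
    """Snell's law for unit vectors; eta = n1 / n2.
    Returns None on total internal reflection."""
    cos_i = -sum(v * n for v, n in zip(view, normal))
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None
    return tuple(eta * v + (eta * cos_i - math.sqrt(k)) * n
                 for v, n in zip(view, normal))

def box_exit(origin, direction, lo=(-1.0,) * 3, hi=(1.0,) * 3):
    """Exit point of a ray starting inside an axis-aligned bounding box;
    this intersection position drives the bounding-volume texture lookup."""
    t = min(((hi[i] if direction[i] > 0 else lo[i]) - origin[i]) / direction[i]
            for i in range(3) if direction[i] != 0)
    return tuple(o + t * d for o, d in zip(origin, direction))

# eta = 1 leaves the ray unchanged; it exits through the box floor:
v = refract((0.0, 0.0, -1.0), (0.0, 0.0, 1.0), 1.0)
print(box_exit((0.0, 0.0, 0.0), v))  # -> (0.0, 0.0, -1.0)
```

In a shader this would typically be the built-in `refract` plus a ray/AABB slab test; the sketch mirrors that structure on the CPU.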

More
04-02-2016 publication date

Deinterleaving interleaved high dynamic range image by using YUV interpolation

Number: US20160037044A1
Assignee: Nvidia Corp

Systems and methods for generating high dynamic range images from interleaved Bayer array data with high spatial resolution and reduced sampling artifacts. Bayer array data are demosaiced into components of the YUV color space before deinterleaving. The Y component and the UV components can be derived from the Bayer array data through demosaic convolution processes. A respective convolution is performed between a convolution kernel and a set of adjacent pixels of the Bayer array that are in the same color channel. A convolution kernel is selected based on the mosaic pattern of the Bayer array and the color channels of the set of adjacent pixels. The Y data and UV data are deinterleaved and interpolated into frames of short exposure and long exposure in the second color space. The short-exposure and long-exposure frames are then blended and converted back to an RGB frame representing a high dynamic range image.
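After deinterleaving, the per-sample exposure blend can be as simple as the heuristic below. The saturation threshold and the fallback weighting are assumptions; the abstract does not specify the exact blending function:

```python
def blend_hdr(short, long_, exposure_ratio=4.0, sat=250):
    """Blend co-sited luma samples from a short- and a long-exposure
    frame: trust the long exposure unless it has clipped, in which case
    scale the short exposure up to the long frame's radiometric range."""
    return [float(l) if l < sat else float(s) * exposure_ratio
            for s, l in zip(short, long_)]

# The second sample is clipped in the long exposure, so the scaled
# short-exposure sample is used instead, extending the dynamic range:
print(blend_hdr([20, 70], [80, 255]))  # -> [80.0, 280.0]
```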

More
04-02-2016 publication date

METHODS AND SYSTEMS OF SIMULATING TIME OF DAY AND ENVIRONMENT OF REMOTE LOCATIONS

Number: US20160037134A1
Author: Martin Scott, MEDEMA Todd
Assignee:

In one embodiment, a method includes the step of capturing, at a remote location with a set of digital cameras, a series of digital photographs of a landscape, wherein the series of digital photographs is taken over at least a twenty-four hour period, and wherein the capturing is managed by a remote computing device that comprises a computer processor, a memory and a computer networking system. The method includes the step of positioning the set of digital cameras to match an angle of a computer-display screen. The method includes the step of obtaining location data of the set of digital cameras. The method includes the step of communicating, with the computer networking system of the remote computing device, to a server computing system, wherein the server computing system comprises at least one computer processor that includes processes that manage the display of all or a portion of the series of digital photographs on one or more display systems. 1. A system for simulating time of day and environment of remote locations, comprising: at least one computer processor disposed in a server; and logic executable by the at least one computer processor, the logic configured to implement a method, the method comprising: obtaining, with a digital camera, a series of digital photographs of a landscape at a specified location and on a specified date, wherein the series of digital photographs is obtained for a specified period of time; positioning two or more digital cameras to match a plurality of angles of a set of screens on which the digital photographs are assigned to be displayed; and displaying the digital photographs on one or more computer screens using the sun position matching algorithm to align the pictured sun with the sun's position at the current time and location. 2. The system of claim 1, wherein the series of digital photographs is of a real landscape or a virtual landscape. 3. The system of claim 2, wherein weather information can be used to select the ...
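Choosing which photograph of the 24-hour series to display reduces to mapping the viewer's local time onto the capture timeline. The frame count and period below are illustrative assumptions:

```python
def frame_for_time(minutes_since_midnight, n_frames=288, period_minutes=1440):
    """Index of the photograph to display: the series covers a full 24 h
    capture period, so local time maps linearly onto the frame sequence
    (288 frames = one photograph every five minutes)."""
    return (minutes_since_midnight % period_minutes) * n_frames // period_minutes

print(frame_for_time(0))        # midnight -> frame 0
print(frame_for_time(12 * 60))  # noon -> frame 144, mid-sequence
```

A fuller implementation would refine this with the sun-position matching the claim mentions, correcting for season and longitude rather than using wall-clock time alone.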

More
31-01-2019 publication date

SYSTEMS AND METHODS TO ALTER PRESENTATION OF VIRTUAL RENDITION BASED ON REAL WORLD OBJECT

Number: US20190035124A1
Assignee:

In one aspect, a device includes at least one processor and storage accessible to the at least one processor. The storage bears instructions executable by the at least one processor to present virtual objects of a virtual rendition on a display accessible to the processor and alter presentation of the virtual rendition based on the existence of a real-world object identified by the device.

More
31-01-2019 publication date

Image Processing Method, Image Processing Device and Display Panel

Number: US20190035321A1
Author: XU Yuanjie
Assignee:

An image processing method for a transparent display screen, an image processing device and a display panel are provided. The image processing method comprises: collecting colors of image content displayed on the transparent display screen and colors of the background transmitted through the transparent display screen; determining whether the collected colors of the background and the collected colors of the content are similar colors; and modifying the colors of the background or the colors of the content if the collected colors of the background and the colors of the content are similar colors.
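A minimal sketch of the decision step, using Euclidean RGB distance as the (assumed) similarity metric and colour inversion as one possible modification; the patent does not fix either choice:

```python
def similar(c1, c2, threshold=60.0):
    """True when two RGB colours are closer than a Euclidean threshold."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5 < threshold

def adjust_content(content, background):
    """If the displayed content would blend into the background seen
    through the transparent screen, shift it to a contrasting colour
    (inversion is just one obvious choice of modification)."""
    return tuple(255 - c for c in content) if similar(content, background) else content

# Light-grey content over a light background gets inverted to dark grey:
print(adjust_content((200, 200, 200), (210, 205, 190)))  # -> (55, 55, 55)
```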

More
11-02-2016 publication date

Systems, Methods, and Apparatuses for Measuring Deformation of a Surface

Number: US20160040984A1
Author: Byrne Richard Baxter
Assignee:

The present invention regards a method for measuring displacement of a surface at a region of interest when the region of interest is exposed to a load. The method includes the steps of (1) evenly illuminating the surface; (2) by means of a camera capturing a first set of images comprising a first image of the surface, applying a load to the surface at the region of interest, and capturing a second image of the surface; and (3) transmitting the first and second image to a processing module of a computer, wherein the processing module: (a) includes data relating to the image capture, such as the spatial position and field of view of the camera relative to the surface when the images were captured; (b) generates a global perspective transform from selected regions out of the displacement area; (c) performs a global image registration between the two images using the perspective transform to align the images; (d) computes vertical pixel shift and horizontal pixel shift between the first image and the second image for the region of interest; and (e) computes displacement of the region of interest between the images, in length units. The images are captured by the camera at an image camera position relative to the surface. In some embodiments two cameras are used, each capturing a single image from the same image camera position; in some embodiments multiple sets of images are captured by multiple cameras, from different perspectives. 1. A method for measuring displacement of a surface at a region of interest when the region of interest is exposed to a load, the method comprising the steps of: a. evenly illuminating the surface; b. by means of a camera capturing a first set of images comprising a first image of the surface, applying a load to the surface at the region of interest, and capturing a second image of the surface; i.
comprises data relating to the image capture, including the spatial position and field of view of the camera relative to the surface when the images ...

More
11-02-2016 publication date

METHOD AND SYSTEM FOR FACILITATING EVALUATION OF VISUAL APPEAL OF TWO OR MORE OBJECTS

Number: US20160042233A1
Assignee: ProSent Mobile Corporation

Disclosed herein is a computer implemented method of facilitating evaluation of visual appeal of a combination of two or more objects. The method may include presenting a user-interface to enable a user to perform a first identification of one or more first objects and a second identification of one or more second objects. Further, the method may include retrieving one or more first images of the one or more first objects based on the first identification. Additionally, the method may include retrieving one or more second images of the one or more second objects based on the second identification. Furthermore, the method may include creating a combination image based on each of the one or more first images and the one or more second images. The combination image may represent a virtual combination of each of the one or more first objects and the one or more second objects. 1. A computer implemented method of facilitating evaluation of visual appeal of a combination of at least two objects , the method comprising:a. presenting a user-interface to enable a user to perform a first identification of at least one first object associated with at least one object source of a plurality of object sources;b. retrieving at least one first image of the at least one first object based on the first identification;c. presenting a user-interface to enable the user to perform a second identification of at least one second object associated with at least one object source;d. retrieving at least one second image of the at least one second object based on the second identification; ande. creating a combination image based on each of the at least one first image and the at least one second image, wherein the combination image represents a virtual combination of each of the at least one first object and the at least one second object.2. 
The computer implemented method of claim 1, wherein the plurality of object sources comprises at least one online store, wherein a first object ...

More
11-02-2016 publication date

METHOD AND APPARATUS FOR ENVIRONMENTAL PROFILE GENERATION

Number: US20160042520A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

A method for generating an environmental profile is provided. The method includes generating an image of an environment by capturing the environment with at least one recording device, detecting a change of an object in the environment based on the image, and generating an environmental profile based on the change of the object. 1. A method for generating an environmental profile, the method comprising: generating an image of an environment by capturing the environment with at least one recording device; detecting a change of an object in the environment based on the image; and generating an environmental profile of the environment based on the change of the object. 2. The method of claim 1, wherein the at least one recording device comprises at least one of an RGB camera, a thermal camera, a depth camera, and a point cloud camera. 3. The method of claim 1, further comprising: generating a recommendation for a user related to the environment based on the environmental profile. 4. The method of claim 1, wherein the generating comprises: generating the environmental profile based on a time when the change of the object is detected and a type of the change of the object. 5. The method of claim 1, further comprising: analyzing an audio signal of the environment using an audio sensor, wherein the detecting comprises detecting the change of the object based on the audio signal. 6. The method of claim 1, wherein the generating comprises generating the environmental profile based on relevance between the image and a user. 7. The method of claim 1, wherein the change of the object comprises at least one of addition, deletion, replacement, modification, or change of location with respect to the object. 8. The method of claim 1, further comprising: outputting at least one of a recommendation, notification and warning for a user based on the change of the object. 9. The method of claim 1, ...
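Detecting "a change of an object" between two captures can be grounded, at its simplest, in per-pixel frame differencing; the threshold below is an assumption, and real systems would difference at the object level after segmentation:

```python
def changed_mask(before, after, threshold=30):
    """Boolean mask of pixels whose intensity changed by more than a
    threshold between two grayscale captures of the environment."""
    return [[abs(a - b) > threshold for b, a in zip(row_b, row_a)]
            for row_b, row_a in zip(before, after)]

before = [[10, 10], [10, 10]]
after  = [[10, 200], [10, 10]]
# Only the pixel where an object appeared is flagged:
print(changed_mask(before, after))  # -> [[False, True], [False, False]]
```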

More
06-02-2020 publication date

Method, system and apparatus for surface rendering using medical imaging data

Number: US20200038118A1
Assignee: Synaptive Medical Barbados Inc

A method, system and apparatus for surface rendering using medical imaging data is provided. A display device is controlled to render a first model of imaging data showing depth positions corresponding to a given surface threshold value, and further controlled to replace the first model with a second model of the imaging data showing respective depth positions corresponding to the given surface threshold value, the second model being faster to compute than the first model. The given surface threshold value is changed to an updated surface threshold value, for example using a slider input. The display device updates rendering of the second model to show updated respective depth positions corresponding to the updated surface threshold value. When an acceptance is received, the display device is controlled to replace the second model with the first model showing updated depth positions corresponding to the updated surface threshold value.

More
04-02-2021 publication date

Virtual Camera for 3-D Modeling Applications

Number: US20210037175A1
Assignee:

A user interface to a virtual camera for a 3-D rendering application provides various features. A rendering engine can continuously refine the image being displayed through the virtual camera, and the user interface can contain an element for indicating capture of the image as currently displayed, which causes saving of the currently displayed image. Autofocus (AF) and autoexposure (AE) reticles can allow selection of objects in a 3-D scene, from which an image will be rendered, for each of AE and AF. A focal distance can be determined by identifying a 3-D object visible at a pixel overlapped by the AF reticle, and a current viewpoint. The AF reticle can be hidden in response to a depth of field selector being set to infinite depth of field. The AF and AE reticles can be linked and unlinked, allowing different 3-D objects for each of AF and AE. 1. A process for interfacing with an imaging device, comprising: presenting an interface comprising a display area for displaying an image, the interface comprising: a reticle presented on the display area, and a slider element comprising a first end, a second end, and a position indicator; associating the first end of the slider element with infinite depth of field, and the second end of the slider element with shallow depth of field; accepting inputs through the user interface, indicating movement of the position indicator of the slider element, and responsively: re-rendering the reticle to have a different prominence, setting one or more of an exposure level and a focal plane for rendering the image based on a position of the reticle, and executing rendering processes to produce the image according to the exposure level and focal plane. 2.
The process for interfacing with an imaging device of claim 1, further comprising accepting inputs through the user interface, indicating movement of the position indicator of the slider element from the first end towards the second end, and responsively re- ...
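The slider-to-reticle behaviour described in the claim can be sketched as a pure mapping. The aperture scale and the epsilon are illustrative assumptions; only the hide-at-infinite-depth-of-field rule comes from the text:

```python
def dof_state(slider_pos, eps=1e-6):
    """Map a normalised slider position (0.0 = first end, infinite depth
    of field; 1.0 = second end, shallow depth of field) to render
    settings and AF-reticle visibility.  The reticle is hidden at
    infinite depth of field, where a focal plane is meaningless."""
    infinite_dof = slider_pos <= eps
    return {
        "show_af_reticle": not infinite_dof,
        "aperture": slider_pos * 8.0,   # illustrative aperture scale
    }

print(dof_state(0.0))   # reticle hidden, pinhole-like aperture
print(dof_state(0.5))   # reticle shown, aperture opened half way
```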

More
24-02-2022 publication date

BILLBOARD LAYERS IN OBJECT-SPACE RENDERING

Number: US20220058860A1
Assignee:

The present disclosure relates to methods and apparatus for graphics processing. The apparatus may configure a plurality of billboards associated with a viewpoint of a first frame of a plurality of frames, the plurality of billboards being configured in one or more layers at least partially around the viewpoint, the configuration of the plurality of billboards being based on one or more volumetric elements between at least one of the plurality of billboards and the viewpoint. The apparatus may also render an image associated with each of the one or more volumetric elements between at least one billboard of the plurality of billboards and the viewpoint, the rendered image including a set of pixels. The apparatus may also store data in the at least one billboard based on the rendered image associated with each of the one or more volumetric elements, the data corresponding to the set of pixels. 1. An apparatus for graphics processing, comprising: a memory; and at least one processor coupled to the memory and configured to: configure a plurality of billboards associated with a viewpoint of a first frame of a plurality of frames, the plurality of billboards being configured in one or more layers at least partially around the viewpoint, the configuration of the plurality of billboards being based on one or more volumetric elements between at least one of the plurality of billboards and the viewpoint; render an image associated with each of the one or more volumetric elements between at least one billboard of the plurality of billboards and the viewpoint, the rendered image including a set of pixels; and store data in the at least one billboard of the plurality of billboards based on the rendered image associated with each of the one or more volumetric elements, the data corresponding to the set of pixels. 2.
The apparatus of claim 1, wherein the at least one processor is further configured to: map the data to the at least one billboard of the plurality of billboards ...

More
24-02-2022 publication date

IMAGE RECORDING DEVICE FOR A VEHICLE AND IMAGE RECORDING METHOD FOR A VEHICLE

Number: US20220060654A1
Author: Shirai Ryo
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA

An image recording device for a vehicle includes: an image acquisition section configured to acquire a surroundings image in which vehicle surroundings have been captured; an image display section configured to display an overlaid image in a display region inside a vehicle cabin in a case in which a predetermined assisted driving condition has been satisfied, the overlaid image being configured by an assistance image overlaid on the surroundings image acquired by the image acquisition section; and an image recording section configured to record the overlaid image at a recording section during an assisted driving state in which the overlaid image is being displayed in the display region, and, in a case in which the assisted driving state is not in effect, record at the recording section an image that has been subjected to image processing including processing to render a state in which the assistance image is not visible. 1. An image recording device for a vehicle, comprising: a processor configured to: acquire a surroundings image in which vehicle surroundings have been captured; display an overlaid image in a display region inside a vehicle cabin in a case in which a predetermined assisted driving condition has been satisfied, the overlaid image being configured by an assistance image overlaid on the acquired surroundings image; and record the overlaid image at a recording section during an assisted driving state in which the overlaid image is being displayed in the display region, and, in a case in which the assisted driving state is not in effect, record at the recording section an image that has been subjected to image processing including processing to render a state in which the assistance image is not visible. 2.
The image recording device for a vehicle of claim 1, wherein, in a case in which the assisted driving state is not in effect, the processor is configured to record at the recording section only the surroundings image, with the ...
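The recording rule reduces to a small branch: keep the overlaid frame only while assisted driving is active, and otherwise store a frame in which the assistance graphics are not visible. A toy sketch over per-pixel values, with None marking "no overlay here", might look like:

```python
def frame_to_record(surroundings, overlay, assisted_driving):
    """During assisted driving, record the surroundings with the
    assistance image overlaid; otherwise record an image processed so
    the assistance image is not visible (here: the raw surroundings)."""
    if assisted_driving:
        return [o if o is not None else s
                for s, o in zip(surroundings, overlay)]
    return list(surroundings)

raw = ["road", "road", "road"]
hud = [None, "lane-mark", None]
print(frame_to_record(raw, hud, assisted_driving=True))
print(frame_to_record(raw, hud, assisted_driving=False))
```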

More
18-02-2016 publication date

System and method for generating an interior design

Number: US20160048497A1
Author: Brinda Goswami
Assignee: Individual

A processor implemented method for generating one or more interior designs for a space using an interior design generation system is provided. The method includes the following steps: (i) a user specification is obtained from a user; (ii) the user specification is represented in a markup language to obtain a markup user specification; (iii) the markup user specification is parsed to obtain a markup space characteristic and a markup user characteristic; (iv) the (a) markup space characteristic and (b) markup user characteristic are compared with an interior design that is stored in a database to obtain a list of relevant combinations of the relevant interior design components; (vii) the list of relevant combinations of the relevant interior design components is arranged based on a set of rules to obtain the interior design representation in the markup language; and (viii) an interior design representation is rendered by a browser on a device.

More
18-02-2016 publication date

METHOD FOR PREVENTING SELECTED PIXELS IN A BACKGROUND IMAGE FROM SHOWING THROUGH CORRESPONDING PIXELS IN A TRANSPARENCY LAYER

Number: US20160048991A1
Author: Vlahos Paul E.
Assignee:

The present invention converts an image into a transparency, or “foreground image layer”, on which the readability of text and other detail is preserved after compositing with a background, while maintaining color information of broad areas of the image. In an embodiment, a matte is determined for the background image to reduce transparencies in the foreground layer, so as to prevent irrelevant parts of the background image from showing through. This is in distinction to only using the original foreground image data (prior to its transformation to a layer) to compute a matte (or mask, or alpha channel) to form a foreground layer. 1. A method for preventing pixels surrounding a presenter in a background image from showing through corresponding pixels in a transparency layer including a presentation material image to be composited over the background image comprising:a) generating a pixel transparency map using a first version of the background image with the presenter in the background image and a second version of the background image with the presenter absent from the background image;b) generating an edge map using the presentation material image;c) applying an inverse monotonic function to the edge map to form a second transparency map;d) subtracting the first transparency map from the second transparency map to form a third transparency map;e) generating a presentation layer by attaching the presentation image material to the third transparency map;f) compositing the presentation layer over the background image with the presenter present to form a composite image, showing both the presenter and the presentation material image.2. The method defined by wherein said generating said pixel transparency map is performed by calculating a difference between corresponding pixels in said first version and said second version.3. The method defined by wherein said generating said pixel transparency map is performed by generating a matte using said first version and said ...
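Steps (c) and (d) of the claim, deriving an edge-based alpha and subtracting the presenter's transparency map from it, come down to per-pixel arithmetic with clamping, along the lines of:

```python
def third_map(presenter_map, edge_alpha):
    """Third transparency map (claim step d): edge-derived alpha minus
    the presenter's transparency map, clamped to the valid [0, 1] range
    so the layer stays opaque wherever the presenter is present."""
    return [min(1.0, max(0.0, e - p))
            for p, e in zip(presenter_map, edge_alpha)]

# Where presenter alpha is high (middle sample) the layer's
# transparency is suppressed, so the presenter is not covered:
print(third_map([0.0, 0.75, 0.25], [0.5, 0.5, 1.0]))  # -> [0.5, 0.0, 0.75]
```

The presentation layer is then formed by attaching the material image to this map and compositing it over the background (steps e and f).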

More
18-02-2016 publication date

Method, System, and Computer-Readable Data Storage Device for Creating and Displaying Three-Dimensional Features on an Electronic Map Display

Number: US20160049002A1
Author: Lynch James D.
Assignee:

Methods, systems, and computer-readable data storage devices for generating and/or displaying a map with three-dimensional (3D) features are disclosed. For example, a method may comprise (i) defining a plurality of major three-dimensional regions ("major 3DRs") and associating each major 3DR with a respective geographical area defined for a map stored in a computer-readable map database, and (ii) displaying, via a display device, one or more of the major 3DRs upon the map. Each major 3DR comprises a top, a bottom, and multiple sides. Each top, bottom, and side of each major 3DR comprises at least one surface. At least one surface of each major 3DR being displayed is textured with an image captured via an imaging device. The image textured onto each surface comprises an image captured by the imaging device when capturing images in a direction of that surface. 1. A method for displaying a map with three-dimensional features, the method comprising: defining a plurality of major three-dimensional regions (major 3DRs) and associating each major 3DR with a respective geographical area defined for a map stored in a computer-readable map database, wherein each major 3DR comprises a top, a bottom, and multiple sides, and wherein each top, bottom, and side of each major 3DR comprises at least one surface; and displaying, via a display device, one or more of the major 3DRs upon the map, wherein at least one surface of each major 3DR being displayed is textured with an image captured via an imaging device, and wherein the image textured onto each surface comprises an image captured by the imaging device when capturing images in a direction of that surface. The present invention relates generally to maps, navigation, and/or data thereof, and more particularly, relates to creating and/or displaying three-dimensional features on an electronic map display. In the past, many people relied on paper maps when navigating to their destinations.
However, those same people and many others ...

More
16-02-2017 publication date

METHOD AND SYSTEM FOR PERSONALIZING IMAGES RENDERED IN SCENES FOR PERSONALIZED CUSTOMER EXPERIENCE

Number: US20170046864A1
Assignee: Cimpress Schweiz GmbH

Systems and methods are described for generating and using a flexible scene framework to render dynamically-generated content within contextual scenes to personalize a customer's web experience. 1. A system for generating a personalized scene, comprising: computer readable storage media which stores an electronic document implementing a personalized product design, a scene image comprising a placeholder element, and a scene description, the scene description comprising computer-readable scene rendering instructions, which when executed by a processing unit, implement warping and compositing functionality to specify how an injectable scene element is to be warped and composited with a scene to dynamically generate a composite image; a processing unit which processes the scene rendering instructions of the scene description using the scene image as the scene and the electronic document as the injectable scene element in the scene description, the processing comprising performing the warping and compositing functionality specifying how the injectable scene element is warped and composited with the scene to dynamically generate a personalized composite scene image. 2. The system of claim 1, the processing unit embedding the personalized composite scene image into a browser-renderable document. 3. The system of claim 2, comprising: a physical interface which sends the browser-renderable document to a browser which renders the browser-renderable document on an electronic display. 4. The system of claim 1, wherein the scene rendering instructions of the scene description specify warping and compositing the injectable scene element into the placeholder element of the scene image that is processed as the scene. 5.
The system of claim 1, wherein the scene description comprises: a warping specification which defines one or more geometric transformations that change the geometry of an image, and a compositing specification which specifies how to layer the scene and the injectable ...

More
16-02-2017 publication date

VIRTUAL AREA GENERATION AND MANIPULATION

Number: US20170046882A1
Assignee:

Techniques for virtual area generation and manipulation are described herein. The described techniques may be used, for example, for virtual areas in electronically presented content items, such as video games and other media items. In some examples, one or more interfaces may be provided that allow content developers to provide and specify a set of rules associated with the virtual area. The set of rules may include, for example, terrain rules, object rules, and other rules associated with other aspects of the virtual area. The terrain rules may include rules for generating, distributing, and/or manipulating different types of terrain, such as flat and/or buildable space, mountains, valleys, berms, rivers, lakes, oceans, deserts, forests, and many others. The object rules may include rules for generating, distributing, and/or manipulating different types of objects, such as trees, bushes, rocks, snow, grass, fish, birds, animals, people, vehicles, buildings, and others. 1. A computing system for generating a virtual area for an electronically presented content item comprising: one or more processors; receiving a plurality of rules associated with the virtual area, the plurality of rules comprising one or more terrain rules and one or more object rules; applying the one or more terrain rules to generate terrain data associated with the virtual area; receiving first information associated with at least one of time, season, weather, object navigation, or user input; applying the one or more object rules to generate first object data associated with the virtual area, wherein the one or more object rules are applied based, at least in part, on the terrain data and the first information; providing the first object data for performing a first rendering of at least part of the virtual area in association with the first object data; receiving second information associated with at least one change to at least one of time, season, weather, object ...

More
03-03-2022 publication date

View generation using one or more neural networks

Number: US20220067982A1
Assignee: Nvidia Corp

Apparatuses, systems, and techniques are presented to generate image or video content representing at least one point of view. In at least one embodiment, one or more neural networks are used to generate one or more images of one or more objects from a first point of view based at least in part upon one or more images of the one or more objects from a second point of view.

More
26-02-2015 publication date

AUGMENTED REALITY SYSTEM FOR IDENTIFYING FORCE CAPABILITY AND OCCLUDED TERRAIN

Number: US20150054826A1
Author: Varga Kenneth
Assignee: REAL TIME COMPANIES

An occlusion or unknown space volume confidence determination and planning system using databases, position, and shared real-time data to determine unknown regions, allowing planning and coordination of pathways through space to minimize risk, is disclosed. Data from a plurality of cameras, or other sensor devices, can be shared and routed between units of the system. Hidden surface determination, also known as hidden surface removal (HSR), occlusion culling (OC) or visible surface determination (VSD), can be achieved by identifying obstructions from multiple sensor measurements and incorporating relative position with depth between sensors to identify occlusion structures. Weapon ranges and orientations are sensed, calculated, shared, and can be displayed in real-time. Data confidence levels can be highlighted based on the time and frequency of data. The real-time data can be displayed stereographically and highlighted on a display. 1. A method for identifying an unknown object in a space comprising: receiving, by one or more computing devices, a plurality of data feeds from a plurality of sensors, the plurality of data feeds capturing data corresponding to at least one object obstructed from view in a first three-dimensional stereographic space displayed at an interface; selecting, by the one or more computing devices, respective data feeds from the plurality of data feeds; and generating, by the one or more computing devices, a second three-dimensional stereographic space for display at the interface, wherein the second three-dimensional stereographic space includes a rendering of the at least one object, portions of the rendering based on the respective data feeds. 2.
The method of claim 1 , wherein the three-dimensional stereographic space corresponds to a real-world environment claim 1 , the method further comprising:determining an orientation of the interface displaying the first three-dimensional stereographic space, the orientation defining a point-of- ...

Publication date: 22-02-2018

System and method for procedurally generated object distribution in regions of a three-dimensional virtual environment

Number: US20180053345A1
Author: Lincan Zou, LIU Ren
Assignee: ROBERT BOSCH GMBH

A system and method of procedural generation of graphics includes generating a bounding polygon corresponding to a size and shape of a region within a three-dimensional virtual environment that includes the plurality of objects, aligning, with the processor the bounding polygon with a two-dimensional arrangement of tiles that include predetermined locations corresponding to the objects, identifying object locations within the bounding polygon based on the data corresponding to the predetermined plurality of locations within the tiles, each object location corresponding to one predetermined location in one tile in the plurality of tiles that lies within the bounding polygon, and generating, with the processor and a display device, a graphical depiction of the three-dimensional virtual environment including graphical depictions of the plurality of objects positioned in the plurality of object locations within the bounding polygon in the region.
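The tile-and-bounding-polygon scheme described above can be sketched in a few lines of Python. This is an illustrative reconstruction under assumptions, not the patented implementation: `point_in_polygon`, `object_locations`, and the normalized per-tile offsets are hypothetical names, and the axis-aligned grid over the polygon's bounding box is an assumed tiling policy.

```python
# Illustrative sketch: tiles carry predetermined object offsets (normalized
# to tile size); only offsets that fall inside the bounding polygon become
# object locations. Names and the grid policy are assumptions.

def point_in_polygon(pt, poly):
    """Ray-casting point-in-polygon test for a simple 2D polygon."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def object_locations(polygon, tile_size, tile_offsets):
    """Tile the polygon's bounding box and keep per-tile offsets inside it."""
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    locations = []
    tx = min(xs)
    while tx < max(xs):
        ty = min(ys)
        while ty < max(ys):
            for ox, oy in tile_offsets:  # predetermined locations per tile
                pt = (tx + ox * tile_size, ty + oy * tile_size)
                if point_in_polygon(pt, polygon):
                    locations.append(pt)
            ty += tile_size
        tx += tile_size
    return locations
```

With a 10x10 square region and one centered offset per 5-unit tile, this yields the four tile centers; a renderer would then place an object instance at each returned location.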

Publication date: 23-02-2017

Method and apparatus for augmented-reality rendering on mirror display based on motion of augmented-reality target

Number: US20170053456A1

Method and apparatus for augmented-reality rendering on a mirror display based on motion of an augmented-reality target. The apparatus includes an image acquisition unit for acquiring a sensor image corresponding to at least one of a user and an augmented-reality target, a user viewpoint perception unit for acquiring coordinates of eyes of the user using the sensor image, an augmented-reality target recognition unit for recognizing an augmented-reality target, to which augmented reality is to be applied, a motion analysis unit for calculating a speed of motion corresponding to the augmented-reality target based on multiple frames, and a rendering unit for performing rendering by adjusting a transparency of virtual content to be applied to the augmented-reality target according to the speed of motion and by determining a position where the virtual content is to be rendered, based on the coordinates of the eyes.
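The speed-to-transparency policy described above reduces to a clamped mapping. A minimal sketch, assuming a linear fade between two speed thresholds; the thresholds, the linear shape, and the function name are illustrative, not taken from the patent:

```python
def content_alpha(speed, v_min=0.0, v_max=5.0):
    """Opacity for virtual content overlaid on a moving AR target:
    fully opaque when the target is still, fading out as motion speeds up.
    v_min/v_max are assumed thresholds (units: e.g. m/s)."""
    if v_max <= v_min:
        raise ValueError("v_max must exceed v_min")
    t = (speed - v_min) / (v_max - v_min)
    t = min(max(t, 0.0), 1.0)  # clamp to [0, 1]
    return 1.0 - t             # 1.0 = opaque, 0.0 = fully transparent
```

Fading fast-moving content avoids the overlay visibly lagging the target in the mirror; the exact falloff curve is a tuning choice.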

Publication date: 13-02-2020

GENERATING A NEW FRAME USING RENDERED CONTENT AND NON-RENDERED CONTENT FROM A PREVIOUS PERSPECTIVE

Number: US20200051322A1
Assignee: Magic Leap, Inc.

Disclosed is an approach for constructing a new frame using rendered content and non-rendered content from a previous perspective. Points of visible surfaces of a first set of objects from a first perspective are rendered. Both rendered content and non-rendered content from the first perspective are stored. The new frame from the second perspective is generated using the rendered content and the non-rendered content from the first perspective. 1. A method for constructing a new frame using rendered content and non-rendered content from a first perspective , the method comprising:rendering visible surfaces of a first set of objects from the first perspective;storing both the rendered content and the non-rendered content from the first perspective, the rendered content corresponding to the visible surfaces of the first set of objects from the first perspective, and the non-rendered content corresponding to non-visible portions of the first set of objects from the first perspective; andgenerating a new frame to be displayed, wherein the new frame is generated from a second perspective using the rendered content and the non-rendered content from the first perspective.2. The method of claim 1 , further comprising identifying the first perspective by capturing a first pose of a user.3. The method of claim 1 , wherein storing both the rendered content and the non-rendered content from the first perspective comprises storing both the rendered content and the non-rendered content in at least one of linked lists claim 1 , array structures claim 1 , true volumetric representations claim 1 , voxels claim 1 , surface definitions claim 1 , N-dimensional data structures claim 1 , and N-dimensional graph representations.4. The method of claim 1 , further comprising determining different granularities for both the rendered content and the non-rendered content for the one or more objects from the first perspective.5. The method of claim 1 , further comprising:rendering visible ...

Publication date: 25-02-2016

ELECTRONIC DEVICE, ELECTRONIC DEVICE SYSTEM, AND DEVICE CONTROL METHOD

Number: US20160057395A1
Assignee:

An electronic device including: a living thing state estimator that determines whether or not at least an animal other than a human is present in a space based on information on the space in which the electronic device is disposed, and estimates a state of the animal that is determined to be present; and a control detail determiner that determines a control detail for the electronic device, according to a result of the determination or the estimated state of the animal. 1. An electronic device comprising:a living thing state estimator that determines whether or not at least an animal other than a human is present in a space in which the electronic device is disposed, based on information on the space, and estimates a state of the animal that is determined to be present, the information being detected by a detector; anda control detail determiner that determines a control detail for the electronic device, according to a result of the determination or the estimated state of the animal.2. The electronic device according to claim 1 , further comprising:an outputter that performs a predetermined operation,wherein as long as the state of the animal is estimated to be an awake state by the living thing state estimator or when a change in the state of the animal from a sleeping state to the awake state is estimated by the living thing state estimator,the control detail determiner causes the outputter to perform an operation, as the predetermined operation, which is indicated by a control detail corresponding to the awake state.3. The electronic device according to claim 2 ,wherein as long as the state of the animal is estimated to be a sleeping state by the living thing state estimator,the control detail determiner causes the outputter to perform an operation, as the predetermined operation, which is indicated by a control detail corresponding to the sleeping state,orwhen a change in the state of the animal from the awake state to the sleeping state is estimated by the ...

Publication date: 05-03-2015

HEAD MOUNTED DISPLAY, METHOD OF CONTROLLING HEAD MOUNTED DISPLAY, COMPUTER PROGRAM, IMAGE DISPLAY SYSTEM, AND INFORMATION PROCESSING APPARATUS

Number: US20150062164A1
Assignee:

A head mounted display which allows a user to visually recognize a virtual image and external scenery, includes a generation unit that generates a list image including a first image which is a display image of an external apparatus connected to the head mounted display and a second image of the head mounted display, and an image display unit that forms the virtual image indicating the generated list image. 1. A head mounted display which allows a user to visually recognize a virtual image and external scenery , comprising:a generation unit that generates a list image including a first image which is a display image of an external apparatus connected to the head mounted display and a second image of the head mounted display; andan image display unit that forms the virtual image indicating the generated list image.2. The head mounted display according to claim 1 , further comprising:an acquisition unit that acquires the first image from the external apparatus,wherein the generation unit generates the list image in which the acquired first image is disposed in a first region, and the second image is disposed in a second region different from the first region.3. The head mounted display according to claim 1 ,wherein the generation unit uses an image which is currently displayed on the head mounted display as the second image.4. The head mounted display according to claim 1 ,wherein the generation unit generates the second image by changing an arrangement of icon images of the head mounted display.5. The head mounted display according to claim 4 ,wherein the generation unit further performs at least one of change of shapes, change of transmittance, change of colors, change of sizes, and addition of decorations, on the icon image when the second image is generated.6. The head mounted display according to claim 1 ,wherein the generation unit further changes a size of at least one of the first image and the second image, and generates the list image by using the changed ...

Publication date: 05-03-2015

Method of Presenting Data in a Graphical Overlay

Number: US20150062174A1
Assignee:

A method for displaying data in a graphical overlay includes overlaying a first lens, including a first dataset, on a first region of the graphical overlay while a second lens, including a second dataset, overlaps the first lens. The first and second datasets are simultaneously displayed, and a correlation between the datasets is determined. Separation of the first and second lenses preserves a circumscribed region within each lens. The first dataset corresponds to first orientation position of the first lens. Rotation of the first lens to a second orientation position reconfigures the first dataset to provide a modified first dataset which corresponds to the second position. A rotational position may also be associated with points in time, a first position corresponding to a first point in time and the first dataset representing the first dataset at the first point in time. 1. A computer implemented method of presenting data in a graphical overlay , the method comprising:displaying the graphical overlay on a spatial rendering of data;overlaying a first lens on a first region of the graphical overlay, wherein the first lens includes at least one first context that is accessible by manipulation of the first lens;activating the at least one first context within the first lens;responsive to activating the at least one first context within the first lens, obtaining a first dataset corresponding to the first context from at least one of a plurality of data sources; anddisplaying the first dataset within the first lens, wherein the displayed first dataset is a visual representation of a first context dataset obtained from the at least one of the plurality of data sources.2. The method of claim 1 , further comprising:activating at least one second context within the graphical overlay;in response to activating the at least one second context, obtaining a second dataset corresponding to the second context from the at least one of the plurality of data sources; ...

Publication date: 03-03-2016

DEVICE FOR INSPECTING SHAPE OF ROAD TRAVEL SURFACE

Number: US20160060824A1
Assignee:

The present invention is capable of inspecting with high accuracy the shape of a road travel surface when travelling at a low speed, and even when acceleration, deceleration, or stoppages occur frequently, and generates a highly reproducible road surface longitudinal profile. A photograph is taken along the longitudinal direction of a travel path by a photography means in a light section method via a travel surface photography means (). Corrected image information, in which a tilt in photographic image information has been corrected using inclination information, is generated on the basis of the photographic image information, the inclination information, and movement information via a road surface profile generation means (), and thereafter the corrected image information is arranged using the movement information. Vertical motion information pertaining to the travel surface photography means is specified from image contents of overlapped regions. One portion of the corrected image information is cut out, and extracted image information is generated. While the height of the corrected image is corrected using the vertical motion information from the corrected image information, the extracted image information is arranged sequentially, and connected, and the road surface profile is generated. 1. A device installed in a vehicle , for photographing road travel surface while the vehicle travels and inspecting a shape of the road travel surface based on photographic information obtained by the photographing , comprisingtravel surface illumination means for emitting a light beam to the road travel surface along a travel surface photography axis set parallel to a travel direction of the vehicle,travel surface photography means installed in the vehicle at a predetermined reference angle for acquiring information necessary for a light section method by sequentially photographing from an oblique direction, with a predetermined photography range set as a unit, the travel ...

Publication date: 10-03-2022

Image processing device, image processing method, and program

Number: US20220076499A1
Assignee: Sony Corp

There is provided an image processing device including: a data storage unit storing feature data indicating a feature of appearance of one or more physical objects; an environment map building unit for building an environment map based on an input image obtained by imaging a real space and the feature data, the environment map representing a position of a physical object present in the real space; a control unit for acquiring procedure data for a set of procedures of operation to be performed in the real space, the procedure data defining a correspondence between a direction for each procedure and position information designating a position at which the direction is to be displayed; and a superimposing unit for generating an output image by superimposing the direction for each procedure at a position in the input image determined based on the environment map and the position information, using the procedure data.
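One plausible layout for the procedure data described above, where each step's direction is paired with position information resolved against the environment map at display time. The dictionary-based map, the field names, and the anchors are all hypothetical, chosen only to illustrate the correspondence the abstract defines:

```python
# Hypothetical procedure data: each step pairs an instruction ("direction")
# with an anchor resolved against the environment map of the real space.
procedure_data = [
    {"step": 1, "direction": "Open the lid", "anchor": "printer"},
    {"step": 2, "direction": "Replace the cartridge", "anchor": "cartridge_slot"},
]

def overlay_positions(environment_map, steps):
    """Resolve each step's anchor to coordinates from the environment map;
    steps whose anchor is not (yet) in the map are skipped."""
    out = []
    for step in steps:
        pos = environment_map.get(step["anchor"])
        if pos is not None:
            out.append((step["direction"], pos))
    return out
```

A superimposing unit would then draw each direction string at the returned position in the output image.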

Publication date: 01-03-2018

MICRO DOPPLER PRESENTATIONS IN HEAD WORN COMPUTING

Number: US20180059421A1
Assignee:

Methods involve receiving a location indicative of a human presence at a location in an obscured environment, establishing a virtual location marker in a position that is visually available to a person wearing a head-worn computing system, wherein the virtual marker provides an indication of the location in the obscured environment, and generating a graphical indicia as a representation of the human presence and presenting the graphical indicia in a field of view of the head-worn computing system, wherein the presentation of the graphical indicia is positioned in the field of view by visually associating the graphical indicia with the location marker such that the graphical indicia appears to remain at the location marker location, as perceived by a person wearing the head-worn computing system, independent of the head-worn computing system's viewing angle. 1. A method , comprising:receiving a location indicative of a human presence at a location in an obscured environment;establishing a virtual location marker in a position that is visually available to a person wearing a head-worn computing system, wherein the virtual marker provides an indication of the location in the obscured environment; andgenerating a graphical indicia as a representation of the human presence and presenting the graphical indicia in a field of view of the head-worn computing system, wherein the presentation of the graphical indicia is positioned in the field of view by visually associating the graphical indicia with the location marker such that the graphical indicia appears to remain at the location marker location, as perceived by a person wearing the head-worn computing system, independent of the head-worn computing system's viewing angle.2. The method of claim 1 , further comprising: establishing an identity of the human presence and presenting an indication of the identity in the field of view.3. The method of claim 2 , wherein the indication is a color of the graphical indicia.4. 
The ...

Publication date: 04-03-2021

SYSTEM AND METHOD FOR RENDERING AN OBJECT

Number: US20210063186A1
Author: GUO Zhirui

A method for rendering an object is provided. The method may include obtaining tile information associated with a region of interest (ROI) from a database. The method may include extracting, from the tile information, one or more links along a center line of an overpass in the ROI. The method may include determining at least one intersection of the one or more links. The method may include performing a topology analysis on the one or more links and the at least one intersection to generate a link chain of the one or more links. The method may include constructing a model of the overpass based on the link chain of the one or more links. The method may further include rendering the model of the overpass. 1. A system , comprising:a storage device including a set of instructions for displaying a rendered overpass on a 2D digital map; and obtain tile information associated with a region of interest (ROI) from a database;', 'extract, from the tile information, one or more links along a center line of an overpass in the ROI;', 'determine at least one intersection of the one or more links;', 'perform a topology analysis on the one or more links and the at least one intersection to generate a link chain of the one or more links;', 'determine a sequence of fusion based on the link chain;', 'construct a model of the overpass by fusing one or more sub-models according to the sequence of fusion, wherein the one or more sub-models correspond to a link type of the one or more links that have a same group-ID; and', 'render the model of the overpass., 'at least one processor in communication with the storage device, wherein when executing the set of instructions, the at least one processor is configured to cause the system to2. (canceled)3. 
The system of claim 1 , wherein the one or more sub-models comprise a flat board model and at least one side wall model claim 1 , and to construct the flat board model and the at least one side wall model corresponding to the one or more links ...

Publication date: 03-03-2016

System, Method, and Apparatus for Flood Risk Analysis

Number: US20160063635A1
Assignee:

A method, system, apparatus and computer readable storage medium is provided for taking information related to a specific real property structure, including customer information; information regarding uses of real property (residential, industrial, commercial, living spaces, crawl spaces, etc.); elevation, slope, and grade information; base flood elevation data; flood depth; use of flood mitigation devices (e.g., breakaway walls, flood vents); survey information; the use of design professionals in mitigation of risk; and other matters; and using this information to identify means of reducing risk, including use of, or repositioning of, flood mitigation devices; reinforcement of structures, use of design professionals, and other matters, in order to assess and mitigate risk. 1. A computer-implemented method of presenting a visual representation of flood risk analysis for a structure , the method comprising:obtaining, by an electronics device, a set of indicators to determine structural characteristics of a real property structure;choosing, by an electronics interface, a photographic image depicting at least one structure at a location; movement of property at the at least one structure;', 'use of flood vents;', 'use of flood vents compliant with technical requirements of National Flood Insurance Program and relevant technical bulletins;', 'use of breakaway walls;', 'displacement of machine or devices within the at least one structure;', 'gradient changes;', 'barriers;', 'bringing in fill to raise elevation of all or part of a structure;', 'evaluating historical flood plain maps to identify flood insurance requirements based on the date a structure was built; and', 'evaluating the lowest adjacent grade against a base flood elevation for filing letters of map amendments., 'modifying, by the electronics device, a photographic image to reflect flood damage risks to the at least one structure commensurate with at least one of the following methods of reducing costs of ...

Publication date: 03-03-2016

LINE PARAMETRIC OBJECT ESTIMATION

Number: US20160063716A1
Assignee:

A method may include projecting, onto a first projection plane of a first projection volume, first points from a point cloud of a setting that are within the first projection volume. Further, the method may include matching a plurality of the projected first points with a cross-section template that corresponds to a line parametric object (LPO) of the setting to determine a plurality of first element points of a first primary projected element. Additionally, the method may include projecting, onto a second projection plane of a second projection volume, second points from the point cloud that are within the second projection volume and matching a plurality of the projected second points with the cross-section template to determine a plurality of second element points of a second primary projected element. Moreover, the method may include generating a parameter function based on the first element points and the second element points. 1. A method comprising:receiving a point cloud of three-dimensional (3D) data corresponding to a setting, wherein the 3D data includes a plurality of line parametric object (LPO) points that correspond to an LPO included in the setting;generating a first projection volume that corresponds to a three-dimensional extension of a first projection plane, wherein the first projection plane includes a first position along a dominant axis about which the LPO extends, the first projection plane is perpendicular to a direction axis at the first position, and the direction axis at least roughly corresponds to the dominant axis;projecting, onto the first projection plane, first points from the point cloud that are within the first projection volume;matching a plurality of the projected first points with a cross-section template that corresponds to the LPO to determine a plurality of first element points of a first primary projected element, wherein the first element points include a first plurality of LPO points;generating a second projection 
volume ...

Publication date: 04-03-2021

CONTENT GENERATION IN A VISUAL ENHANCEMENT DEVICE

Number: US20210065408A1
Author: Chen Yiming, ZHU Haichao
Assignee:

Aspects for content generation in a virtual reality (VR), an augmented reality (AR), or a mixed reality (MR) system (collectively “visual enhancement device”) are described herein. As an example, the aspects may include an image sensor configured to collect color information of an object and a color distance calculator configured to respectively calculate one or more color distances between a first color of a first area of the object and one or more second colors. The aspects may further include a color selector configured to select one of the one or more second colors based on a pre-determined color distance and a content generator configured to generate content based on the selected second color. 1. A visual enhancement device , comprising:an image sensor configured to collect color information of an object;a color distance calculator configured to respectively calculate one or more color distances between a first color of a first area of the object and one or more second colors;a color selector configured to select one of the one or more second colors based on a pre-determined color distance; anda content generator configured to generate content based on the selected second color.2. The visual enhancement device of claim 1 , further comprising a content rendering unit configured to determine a position of the generated content.3. The visual enhancement device of claim 2 , wherein the content rendering unit is further configured to superimpose the generated content on the first area of the object from the perspective of a user of the visual enhancement device.4. The visual enhancement device of claim 2 , wherein the content rendering unit is further configured to place the generated content such that at least a portion of the generated content overlaps with the first area of the object from a perspective of a user of the visual enhancement device.5. 
The visual enhancement device of claim 1 , wherein the color distance calculator is further configured to average ...
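A minimal sketch of the color-distance calculator and selector described above, assuming Euclidean distance in RGB space (the abstract does not fix a metric; a perceptual delta-E would also fit) and a selection rule that takes the first candidate at least a pre-determined distance away. Function names are illustrative:

```python
import math

def color_distance(c1, c2):
    """Euclidean distance between two RGB triples -- one common choice;
    the device could equally use a perceptual metric such as CIE delta-E."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def select_color(first_color, candidates, min_distance):
    """Return the first candidate at least min_distance away from the
    object's area color, keeping generated content legible against it."""
    for c in candidates:
        if color_distance(first_color, c) >= min_distance:
            return c
    return None
```

A content generator would then render text or graphics in the selected color over the sampled area.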

Publication date: 05-03-2015

ZOOMABLE PAGES FOR CONTINUOUS DIGITAL WRITING

Number: US20150067489A1
Assignee:

Facilitating digital writing on a tablet screen includes providing a viewing screen corresponding to the tablet screen, superimposing on the viewing screen an authoring screen for a user to provide digital writing, the authoring screen being a magnified portion of the viewing screen, showing a mapping portion on the viewing screen, where the mapping portion corresponds to the portion of the viewing screen being magnified into the authoring screen, and projecting on to the viewing screen writing entered by the user in the authoring screen. The authoring screen is semi-transparent and writing and the mapping portion on the viewing screen are viewable through the authoring screen. The authoring screen may be superimposed over an entirety of the viewing screen. Writing provided on the viewing screen may be presented as faded strokes. A user may pan the authoring screen to facilitate entering writing into different parts of the viewing screen. 1. A method of facilitating digital writing on a tablet screen , comprising:providing a viewing screen corresponding to the tablet screen;superimposing on the viewing screen an authoring screen for a user to provide digital writing, the authoring screen being a magnified portion of the viewing screen;showing a mapping portion on the viewing screen, wherein the mapping portion corresponds to the portion of the viewing screen being magnified into the authoring screen; andprojecting on to the viewing screen writing entered by the user in the authoring screen, wherein the authoring screen is semi-transparent and writing and the mapping portion on the viewing screen are viewable through the authoring screen.2. A method claim 1 , according to claim 1 , wherein the authoring screen is superimposed over an entirety of the viewing screen.3. A method claim 1 , according to claim 1 , wherein writing provided on the viewing screen is presented as faded strokes.4. A method claim 1 , according to claim 1 , wherein a user pans the authoring ...

Publication date: 29-05-2014

Image processing apparatus, image processing method, and storage medium

Number: US20140146083A1
Author: Hiroichi Yamaguchi
Assignee: Canon Inc

An image processing apparatus, which determines, for a combined image obtained by combining pixels of a given first image and pixels of an unknown second image either translucently or non-translucently using an unknown coefficient indicating a transparency, whether each of pixels included in the combined image is a translucently combined pixel, is provided. The image processing apparatus calculates, from pixel values of the combined image and the first image of respective pixels in a predetermined area including one pixel, pixel values of an image corresponding to the second image, calculates a total of differences between the calculated pixel values, identifies a coefficient used to obtain the combined image from the total of the difference, and determines that the one pixel is a translucently combined pixel when a value of the identified coefficient is larger than a predetermined value.
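The coefficient search described above can be sketched as: invert the blend for each candidate coefficient and keep the one whose recovered second image varies least over the neighborhood. The blend orientation (`C = alpha*F + (1 - alpha)*S`) and the 1-D neighborhood are assumptions for illustration; the patent works on a 2-D pixel area:

```python
def recover_second(combined, first, alpha):
    """Invert C = alpha*F + (1 - alpha)*S for the unknown second image S
    over a neighborhood of pixel values (alpha != 1 assumed)."""
    return [(c - alpha * f) / (1.0 - alpha) for c, f in zip(combined, first)]

def identify_alpha(combined, first, candidates):
    """Try each candidate coefficient and keep the one whose recovered
    second image is smoothest -- a proxy for the abstract's
    'total of differences' criterion."""
    def total_difference(s):
        return sum(abs(s[i + 1] - s[i]) for i in range(len(s) - 1))
    return min(candidates,
               key=lambda a: total_difference(recover_second(combined, first, a)))
```

With the correct coefficient the recovered second image is locally smooth, so its total difference is minimal; a pixel would then be flagged as translucently combined when the identified coefficient exceeds a threshold.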

Publication date: 08-03-2018

Heads-up display windshield

Number: US20180067307A1
Assignee: DURA OPERATING LLC

A motor vehicle heads-up display system includes an organic light emitting diode (OLED) screen positioned in contact with a motor vehicle windshield. The OLED screen includes a screen portion used to display augmented reality projection data. Multiple transparent display portions individually display vehicle related data and infotainment related data including at least a first transparent display portion, a second transparent display portion, and a third transparent display portion. A transparency level of each of the multiple transparent display portions can be varied. An instrument cluster display is presented on the first transparent display. A camera presentation of a motor vehicle left-hand side view is presented on the second transparent display. A camera presentation of a motor vehicle right-hand side view is presented on the third transparent display. One of the multiple transparent display portions is positioned in direct line-of-sight view of a passenger of the motor vehicle.

Publication date: 10-03-2016

METHODS AND SYSTEMS FOR COMPUTING AN ALPHA CHANNEL VALUE

Number: US20160071240A1
Author: Liu Yu
Assignee:

Methods and systems for computing an alpha channel value are provided. In one embodiment, a set of parameters is obtained based on an effect (transformation) to be applied to a source image. A value is also obtained that defines a uniform width of an area that borders at least one boundary of the transformed source image in the target image. For a target image pixel coordinate in the area, a corresponding source pixel coordinate is computed that is within another non-uniform area bordering the source image. An alpha channel value defining semi-transparency of a pixel associated with the target image pixel coordinate is computed as a function of a location of the corresponding source pixel coordinate in the another area bordering the source image. 1. A method performed by a computational device as part of transforming a source digital image into a digital object in a target digital image , the source digital image and the target digital image each comprising a plurality of pixels; the method comprising:receiving and storing the source digital image in memory;receiving at a user interface an indication of an effect to be applied to said source digital image;obtaining a set of parameters based on the indication of the effect;obtaining a value defining a width of an area that borders at least one boundary of the digital object in the target digital image; the width of the area uniform along the at least one boundary of the digital object;{'sub': t', 't', 't', 't', 's', 's, 'for a pixel coordinate (x,y) of the target digital image that is within the area that borders the at least one boundary of the digital object, computing, using the pixel coordinate (x,y) of the target digital image and at least some of the parameters, a corresponding source pixel coordinate (x,y) that is within another area bordering the source digital image, said another area having a width that is non-uniform along a boundary of the source digital image;'}{'sub': t', 't', 's', 's, 'computing an ...
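The alpha computation described above, where semi-transparency depends on where the back-mapped source coordinate lands relative to the source image, can be sketched as a falloff function. The linear ramp and the Chebyshev distance are assumptions; the claim requires only that alpha be a function of the source coordinate's location in the border area:

```python
def edge_alpha(xs, ys, width, height, border):
    """Alpha for a back-mapped source coordinate (xs, ys): 1.0 inside the
    source image, falling linearly to 0.0 across a border of the given
    width outside each boundary (illustrative falloff)."""
    dx = max(0.0, -xs, xs - (width - 1))   # distance past left/right edge
    dy = max(0.0, -ys, ys - (height - 1))  # distance past top/bottom edge
    d = max(dx, dy)                        # Chebyshev distance outside image
    if d >= border:
        return 0.0
    return 1.0 - d / border
```

Evaluating this per target pixel yields the soft, anti-aliased edge around the transformed object that the uniform-width border in the target image describes.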

Publication date: 28-02-2019

SPECULAR HIGHLIGHTS ON PHOTOS OF OBJECTS

Number: US20190066350A1
Assignee:

Systems and methods are presented for recording and viewing images of objects with specular highlights. In some embodiments, a computer-implemented method may include accessing a first plurality of images, each of the images in the first plurality of images including an object recorded from a first position, and a reflection of light on the object from a light source located at a different location than in each of the other images in the first plurality of images. The method may also include generating a first composite image of the object, the first composite image comprising a superposition of the first plurality of images, and wherein each of the images in the first plurality of images is configured to change in a degree of transparency within the first composite image and in accordance with a first input based on a degree of tilt. 1. A computing device comprising a display screen, the computing device being configured to display on the screen a first composite image of an object based on a first plurality of images, each of the images in the first plurality of images comprising the object where the object is illuminated such that a reflection of light on the object is different in each of the images in the first plurality of images, the first composite image comprising a superposition of the first plurality of images, where each of the images in the first plurality of images is configured to change in a degree of transparency within the first composite image based on a user input. 2. The computing device of claim 1, wherein display of the first composite image comprises adjusting the degree of transparency for each of the first plurality of images in response to the user input to present an interactive perspective of light reflections from the object. 3. The computing device of claim 1, wherein the degree of transparency for each of the first plurality of images is associated with a corresponding value of the user input. 4. The computing device of claim 1, wherein the change in ...
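The tilt-driven transparency described above can be sketched as a weight schedule over the image stack: a normalized tilt value selects a position along the sequence of light directions, and the two nearest images are cross-faded while all others stay fully transparent. The function names and the linear cross-fade are illustrative assumptions, not the patent's method.

```python
def tilt_weights(n_images, tilt):
    """Map a normalized tilt value in [0, 1] to per-image opacity weights.

    The tilt picks a fractional position in the stack; the two bracketing
    images are cross-faded, all other images get zero opacity.
    """
    if not 0.0 <= tilt <= 1.0:
        raise ValueError("tilt must be in [0, 1]")
    pos = tilt * (n_images - 1)          # fractional index into the stack
    lo = int(pos)
    hi = min(lo + 1, n_images - 1)
    frac = pos - lo
    weights = [0.0] * n_images
    weights[lo] += 1.0 - frac
    weights[hi] += frac
    return weights

def composite(pixels_per_image, weights):
    """Superpose one pixel taken from each image using the given weights."""
    return sum(p * w for p, w in zip(pixels_per_image, weights))
```

Because the weights always sum to one, the composite stays in the range of the input pixels as the user tilts the device.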

More details
28-02-2019 publication date

METHODS FOR DYNAMIC IMAGE COLOR REMAPPING USING ALPHA BLENDING

Number: US20190066367A1
Author: ANTUNEZ Emilio
Assignee:

Systems, methods, and computer-readable storage media can be used to perform alpha-projection. One method may include receiving an image from a system storing one or more images. The method may further include alpha-projecting the received image to assign alpha channel values to the received image by projecting one or more pixels of the received image from an original color to a second color and setting alpha channel values for the one or more pixels by determining the alpha channel value that causes each second color alpha blended with a projection origin color to be the original color. The method may further include displaying the alpha-projected image as a foreground image over a background image. 1. A method comprising: receiving an image from a system storing one or more images; alpha-projecting the received image to assign alpha channel values to the received image by: projecting one or more pixels of the received image from an original color to a second color; and setting alpha channel values for the one or more pixels by determining the alpha channel value that causes each second color alpha blended with a projection origin color to be the original color; and displaying the alpha-projected image as a foreground image over a background image. 2. The method of claim 1, wherein projecting the one or more pixels from the original color to the second color is performed based on a projection solid; and wherein the projection solid is a set of connected red, green, and blue (RGB) points wherein a straight line segment can be drawn between any point within the projection solid to any point within a background color set, wherein the line segment comprises only one or more of the set of connected RGB points, wherein the background color set comprises the projection origin color and is a subset of the connected RGB points of the projection solid, wherein the set of connected RGB points of the projection solid are a superset comprising the background color set. 3. The ...
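The core relation in the abstract is solvable in closed form: if the original color O must equal alpha·P + (1 − alpha)·G for the projection origin G and a projected color P, then extending the ray from G through O fixes both P and alpha. The sketch below projects onto the full RGB cube boundary; the patent's "projection solid" is more general, so the cube is a simplifying assumption, as are the function names.

```python
def alpha_project(original, origin=(255.0, 255.0, 255.0)):
    """Alpha-project one RGB pixel (a sketch of the idea in the abstract).

    The ray from the projection-origin color through the original color is
    extended to the RGB cube boundary; the boundary point becomes the
    'second' (projected) color, and alpha solves
        original = alpha * projected + (1 - alpha) * origin.
    Colors are (r, g, b) tuples of floats in [0, 255].
    """
    deltas = [o - g for o, g in zip(original, origin)]
    if all(d == 0 for d in deltas):
        # Pixel equals the projection origin: fully transparent.
        return tuple(origin), 0.0
    # Largest t such that origin + t * delta stays inside [0, 255]^3.
    t = min(
        (255.0 - g) / d if d > 0 else -g / d
        for g, d in zip(origin, deltas)
        if d != 0
    )
    projected = tuple(g + t * d for g, d in zip(origin, deltas))
    return projected, 1.0 / t  # original = origin + 1 * delta, so alpha = 1/t
```

Blending the returned color over the origin color with the returned alpha reproduces the original pixel exactly, which is the property the claim states.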

More details
27-02-2020 publication date

Computational blur for varifocal displays

Number: US20200065941A1
Assignee:

Methods are disclosed herein to blur an image to be displayed on a stereo display (such as virtual or augmented reality displays) based on the focus and convergence of the user. The methods approximate the complex effect of chromatic aberration on focus, utilizing three (R/G/B) simple Gaussian blurs. For transparency, the methods utilize buffers for levels of blur rather than depth. The methods enable real-time chromatic-based blurring effects for VR/AR displays. 1. A system comprising: a display to render a plurality of virtual objects; and computational blurring logic to compute a red focal distance and a blue focal distance to each virtual object based on a green focal distance to the virtual object, and to blur the virtual object on the display based on a blur radius for each of the red focal distance, blue focal distance, and green focal distance. 2. The system of claim 1, wherein the computational blurring logic utilizes a Monte Carlo ray tracing algorithm that sets a ray casting direction utilizing a point along a line through both of a center of a thin lens model and a convergence point for green rays from the virtual object. 3. The system of claim 2, wherein the Monte Carlo ray tracing algorithm further utilizes a point behind the thin lens model at the red focal distance to set the ray casting direction. 4. The system of claim 2, wherein the thin lens model comprises a lens diameter A and a viewpoint z=0, and for a green focal distance z=d, a diameter C of the blur circle in world-space for a virtual object at distance z is: C = A|z−d|/d. 5. The system of claim 2, wherein the computational blurring logic is adapted to: scan N levels of a scene comprising the virtual objects, each of the N levels associated with a degree of blurriness to apply to virtual objects in the level, wherein the degree varies from level to level; apply the Monte Carlo ray tracing algorithm for each virtual object at each of the N levels; compute a blur radius for each virtual object at each ...
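The thin-lens relation in claim 4 is directly computable. The sketch below evaluates C = A·|z − d|/d, and then applies it per color channel by shifting the green focal distance; the fixed fractional chromatic shift is an illustrative assumption, not the patent's derivation of the red/blue focal distances.

```python
def blur_diameter(aperture, focus_dist, obj_dist):
    """World-space blur-circle diameter from the claimed thin-lens relation
    C = A * |z - d| / d, with A the lens diameter, d the in-focus (green)
    distance, and z the virtual object's distance."""
    if focus_dist <= 0:
        raise ValueError("focus distance must be positive")
    return aperture * abs(obj_dist - focus_dist) / focus_dist

def channel_blurs(aperture, green_focus, obj_dist, chroma_shift=0.05):
    """Per-channel blur diameters. The red/blue focal distances are modeled
    here as a fixed fractional shift of the green focus (illustrative only)."""
    foci = {"r": green_focus * (1.0 + chroma_shift),
            "g": green_focus,
            "b": green_focus * (1.0 - chroma_shift)}
    return {ch: blur_diameter(aperture, f, obj_dist) for ch, f in foci.items()}
```

An object exactly at the green focal distance then gets a zero green blur but nonzero red and blue blurs, which is the chromatic cue the method exploits.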

More details
27-02-2020 publication date

METHOD AND APPARATUS FOR REALIZING COLOR TWEEN ANIMATION

Number: US20200066005A1
Author: ZHANG Nana
Assignee: ALIBABA GROUP HOLDING LIMITED

A start fill scheme in a first layer that is initially non-transparent is displayed on a display of a computing device. An end fill scheme in a different second layer that overlaps the first layer and is initially at least partially transparent is displayed on the display. A first transparency value of the first layer is gradually changed to a value corresponding to transparency. A second transparency value of the second layer is gradually changed to a value corresponding to non-transparency. Both the first transparency value and the second transparency value are gradually changed during a particular time period. Gradually changing the first transparency value and the second transparency value includes changing the values by a plurality of increments over the particular time period. 1. A computer-implemented method, comprising: displaying, on a display of a computing device, a start fill scheme in a first layer that is initially non-transparent; displaying, on the display, an end fill scheme in a different second layer that overlaps the first layer and is initially at least partially transparent; and gradually changing a first transparency value of the first layer to a value corresponding to transparency, and gradually changing a second transparency value of the second layer to a value corresponding to non-transparency, wherein both the first transparency value and the second transparency value are gradually changed during a particular time period, and wherein gradually changing the first transparency value and the second transparency value includes changing the values by a plurality of increments over the particular time period. 2. The computer-implemented method of claim 1, wherein gradually changing the first and second transparency values to display a color tween animation comprises: changing, on the display area, from a first fill scheme to a second fill scheme, including sequentially changing to an Nth fill scheme, wherein N is an integer greater than 1; setting the ...
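The two-layer crossfade above reduces to an opacity schedule: over a fixed number of increments, the start layer's alpha falls from 1 to 0 while the end layer's alpha rises from 0 to 1. A minimal sketch, with the linear ramp and function name as assumptions:

```python
def tween_opacities(n_steps):
    """Opacity (alpha) schedule for a two-layer color tween: the start layer
    fades 1 -> 0 while the overlapping end layer fades 0 -> 1 over n_steps
    equal increments. Returns (start_alpha, end_alpha) per step."""
    if n_steps < 1:
        raise ValueError("need at least one increment")
    steps = []
    for i in range(n_steps + 1):
        t = i / n_steps
        steps.append((1.0 - t, t))
    return steps
```

Because the two alphas always sum to one, the perceived color at each increment is a plain linear blend of the start and end fill schemes, which is what makes the tween look continuous.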

More details
07-03-2019 publication date

TECHNIQUES FOR BUILT ENVIRONMENT REPRESENTATIONS

Number: US20190073518A1
Assignee: Tyco Fire & Security GmbH

Described are techniques for indoor mapping and navigation. A reference mobile device including sensors to capture range, depth and position data and processes such data. The reference mobile device further includes a processor that is configured to process the captured data to generate a 2D or 3D mapping of localization information of the device that is rendered on a display unit, execute an object recognition to identify types of installed devices of interest of interest in a part of the 2D or 3D device mapping, integrate the 3D device mapping in the built environment to objects in the environment through capturing point cloud data along with 2D image or video frame data of the build environment. 1. A system for indoor mapping and navigation comprises: process captured data to generate a mapping, the mapping indicating one or more objects in a built environment;', 'recognize the one or more objects in the built environment;', 'identify, from the one or more recognized objects, types of installed devices of interest in a part of the mapping; and', 'integrate the mapping of the identified installed devices of interest into the built environment by combining point cloud data indicating location of the identified installed devices of interest with image data of the built environment., 'one or more computer-readable non-transitory storage media having instructions stored thereon that, when executed by one or more processors, cause the one or more processors to2. The system of claim 1 , further comprising a mobile device including sensors to capture range claim 1 , depth claim 1 , and position data with the mobile device including a depth perception unit claim 1 , a position estimator claim 1 , a heading estimator claim 1 , and an inertial measurement unit (IMU) to process data received by the sensors from the environment.3. The system of claim 1 , wherein the instructions cause the one or more processors to recognize the one or more objects in the built environment ...

More details
17-03-2016 publication date

METHOD AND APPARATUS TO CREATE AND CONSUME A WORKFLOW AS A VISUAL CHECKLIST

Number: US20160078642A1
Author: Nigg Alexander
Assignee:

Disclosed are a method and an apparatus, either of which enables a user who wants to communicate instructions for performing certain tasks (possibly in a specific way) to other people to do so easily, precisely, and in detail. The technique enables the receiving party to easily consume, i.e., understand, follow, and communicate completion of, these instructions, while being able to communicate with the person who generated the instructions. As well, the technique generates an audit trail for the person who created the instructions. The person communicating the instructions for performing certain tasks can automatically track the completion of the tasks, as well as view analytics related to the tasks, their completion, and the actions of the person(s) performing them. 1. A method comprising: receiving, at an originating device, a user input indicative of a workflow, the workflow including a set of corresponding tasks to be completed; receiving, at the originating device, a user input indicative of a background image, the background image depicting a context of an area where a task of the corresponding tasks is to be performed or an object to which a corresponding task of the set of corresponding tasks is to be performed; displaying, on a display of the originating device, the background image and a set of user input images, each user input image corresponding to a task in the set of corresponding tasks; and receiving, at the originating device, a user input indicative of a placement of a user input image from the set of user input images onto a user-determined location of the background image to indicate that the task corresponding to the user input image is to be performed at the area corresponding to the user-determined location of the background image; wherein each user input image is capable of being changed from one state to another state to indicate the status of the corresponding task. 2. The method of claim 1, wherein the originating device is a stationary or mobile ...

More details
07-03-2019 publication date

TIME-DEPENDENT CLIENT INACTIVITY INDICIA IN A MULTI-USER ANIMATION ENVIRONMENT

Number: US20190073816A1
Author: Shuster Brian Mark
Assignee:

A method for managing a multi-user animation platform is disclosed. A three-dimensional space within a computer memory is modeled. An avatar of a client is located within the three-dimensional space, the avatar being graphically represented by a three-dimensional figure within the three-dimensional space. The avatar is responsive to client input commands, and the three-dimensional figure includes a graphical representation of client activity. The client input commands are monitored to determine client activity. The graphical representation of client activity is then altered according to an inactivity scheme when client input commands are not detected. Following a predetermined period of client inactivity, the inactivity scheme varies non-repetitively with time. 1. A method for multi-user animation comprising:modeling a three-dimensional space within a computer memory;locating an avatar within the three-dimensional space, the avatar being responsive to input commands from a client to act within the modeled three-dimensional space;monitoring the input commands to determine a period of time the client is inactive, wherein the period of time the client is inactive is determined using cessation of any user input commands regardless of whether the commands cause movement of the avatar; anddisplaying a badge indicating the activity status of the avatar.2. The method of claim 1 , further comprising displaying the avatar engaging without transparency.3. The method of claim 1 , wherein the at least part of an activity period is the entire period taking place between the last client interaction with the avatar and updating of the badge.4. The method of claim 1 , wherein the badge is removed after a period of inactivity.5. The method of claim 1 , further comprising measuring inactivity of the client using cessation of interaction with the avatar as a starting time.6. 
The method of claim 1 , further comprising allowing a user of the client to resume interaction with the three- ...

More details
17-03-2016 publication date

3D MODELED VISUALISATION OF A PATIENT INTERFACE DEVICE FITTED TO A PATIENT'S FACE

Number: US20160078687A1
Assignee:

An electronic apparatus includes a display generation unit configured to generate a display area in a user interface, the display area being configured to display a 3-D model of a patient's face and a 3-D model of a patient interface device fitted to the 3-D model of the patient's face; and a transparency adjustment unit configured to generate a transparency adjustment tool in the user interface, the transparency adjustment tool being operable to adjust the transparency of a subset of components of the 3-D model of the patient interface device displayed in the 3-D display area of the user interface. 1. An electronic apparatus comprising: a display generation unit configured to generate a display area in a user interface, the display area being configured to display a 3-D model of a patient's face and a 3-D model of a patient interface device fitted to the 3-D model of the patient's face; and a transparency adjustment unit configured to generate a transparency adjustment tool in the user interface, the transparency adjustment tool being operable to adjust the transparency of a subset of components of the 3-D model of the patient interface device displayed in the 3-D display area of the user interface. 2. The electronic apparatus of claim 1, wherein the subset of components of the 3-D model of the patient interface device is a subset of components of the 3-D model of the patient interface device that do not contact the 3-D model of the patient's face. 3. The electronic apparatus of claim 1, wherein the components of the 3-D model of the patient interface device include a cushion, and the subset of components of the 3-D model of the patient interface device are components of the 3-D model of the patient interface device other than the cushion. 4. The electronic apparatus of claim 1, wherein the components of the 3-D model of the patient interface device include a cushion and a forehead cushion, and the subset of components of the 3-D ...

More details
15-03-2018 publication date

VISUAL EFFECT AUGMENTATION OF PHOTOGRAPHIC IMAGES

Number: US20180075575A1
Assignee:

A method, system, and/or computer program product augment and display a photographic image based on a context of a subject of the photographic image. One or more processors receive a photographic image that was captured by a camera. The processor(s) determine a context of the photographic image, where the context is captured by a context sensor at a location of a subject whose image is captured in the photographic image, and where the context describes a state of the subject whose image is captured in the photographic image. The processor(s) augment the photographic image with an additional feature to create an augmented photographic image based on the context captured by the context sensor. The processor(s) then display the augmented photographic image on a viewing device. 1. A method comprising:receiving, by one or more processors, a photographic image that was captured by a camera;determining, by one or more processors, a context of the photographic image, wherein the context is captured by a context sensor at a location of a subject whose image is captured in the photographic image, and wherein the context describes a state of the subject whose image is captured in the photographic image;augmenting, by one or more processors, the photographic image with an additional feature to create an augmented photographic image based on the context captured by the context sensor; anddisplaying, by one or more processors, the augmented photographic image on a viewing device.2. The method of claim 1 , wherein the context is a physiological state of a person whose image is captured in the photographic image claim 1 , wherein the physiological state of the person is detected by a biometric sensor associated with the person claim 1 , and wherein said augmenting the photographic image comprises:adding, by one or more processors, a sound track to the photographic image, wherein the sound track evokes the physiological state of the person captured in the photographic image.3. 
The ...

More details
16-03-2017 publication date

NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM

Number: US20170076476A1
Assignee:

A storage of an image processing device stores a background image and a first image and a second image that are foreground images with respect to the background image, and a first basic mask image formed from pixels having a pixel value of ‘1’ indicating opacity and pixels having a pixel value of ‘0’ indicating transparency. An acquirer acquires each of these images. An image generator generates a first intermediate image by compositing the first image onto the background image only in pixels that are in the same positions as the pixels in the first basic mask image having the pixel value of ‘1’. The image generator then generates a first composite image by overwriting the second image onto the first intermediate image only in the pixels that are in the same positions as the pixels in the first basic mask image having the pixel value of ‘1’. 1. An image processing device, comprising: a storage configured to store a background image, a first image and a second image that are foreground images with respect to the background image and have predetermined positions relative to the background image, and a first mask image formed from pixels having a first pixel value indicating opacity and pixels having a zero pixel value indicating transparency; an acquirer configured to acquire the background image, the first image, the second image, and the first mask image from the storage; and an image generator configured to generate a first composite image in which the second image acquired by the acquirer is composited, onto the background image acquired by the acquirer, in the pixels that are in the same positions as the pixels in the first mask image having the first pixel value, and the first image acquired by the acquirer is composited, onto the background image acquired by the acquirer, where the second image is not composited, in the pixels that are in the same positions as the pixels in the first mask
...
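The two-step, mask-gated compositing the abstract describes can be sketched per pixel on flat lists: first the first foreground is composited onto the background wherever the mask is 1, then the second foreground overwrites that intermediate in the same mask positions. The `None`-means-no-content convention for the second image is an assumption added so the first image can remain visible, not something the source states.

```python
def composite_first(background, first, mask):
    """Step 1: composite the first foreground onto the background
    only in pixels where the binary mask is 1."""
    return [f if m == 1 else b for b, f, m in zip(background, first, mask)]

def overwrite_second(intermediate, second, mask):
    """Step 2: overwrite the second foreground onto the intermediate image
    where the mask is 1 and the second image has content there
    (None = no content; this convention is an illustrative assumption)."""
    return [s if (m == 1 and s is not None) else p
            for p, s, m in zip(intermediate, second, mask)]
```

Running the two steps in order gives the layering the abstract describes: background where the mask is 0, first image where the mask is 1, and second image on top wherever it has content.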

More details
16-03-2017 publication date

METHOD FOR PREVENTING SELECTED PIXELS IN A BACKGROUND IMAGE FROM SHOWING THROUGH CORRESPONDING PIXELS IN A TRANSPARENCY LAYER

Number: US20170076483A1
Author: Vlahos Paul E.
Assignee:

The present invention converts an image into a transparency, or “foreground image layer”, on which the readability of text and other detail is preserved after compositing with a background, while maintaining color information of broad areas of the image. In an embodiment, a matte is determined for the background image to reduce transparencies in the foreground layer, so as to prevent irrelevant parts of the background image from showing through. This is in distinction to only using the original foreground image data (prior to its transformation to a layer) to compute a matte (or mask, or alpha channel) to form a foreground layer. 1. A method for reducing transparency in the foreground layer comprising: a) determining a matte for a presenter's image, forming a presenter's layer; b) determining a matte for a presentation material image, forming a presentation layer; c) compositing the presenter's layer over the presentation material image; d) compositing the presentation layer over the composited presenter's layer and presentation material image. In long distance learning, presentation material is displayed on a screen or board, and the presenter stands to the side or at times in front of the board or screen, talking about the subject and pointing to illustrations or text on the screen that would enhance the effectiveness of his presentation. In order to encode and transmit a visual of his lecture to a distant location, the presentation material is sent electronically, preferably from its original digital source if available, rather than a likely degraded “second generation” image from a camera capturing the original presentation screen. However, it is also desired that the presenter be shown as well, so as to make the material more relatable on a human level, through his gestures, social cues, and real time reaction to the material, and to the audience, bringing the lecture to life. At the remote distant location, the presenter is shown along with the presentation ...
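The four-step claim order can be expressed with the standard "over" operator applied twice per pixel: the presenter goes over the presentation material, and the presentation layer then goes over that result so its opaque detail (text) reappears on top of the presenter. This is a single-channel sketch under that reading; the helper names are assumptions, and the matte computation itself is not shown.

```python
def over(fg, fg_alpha, bg):
    """Porter-Duff 'over' for one channel with straight (unpremultiplied) alpha."""
    return fg_alpha * fg + (1.0 - fg_alpha) * bg

def composite_lecture(presentation_px, presenter_px,
                      presenter_alpha, presentation_alpha):
    """Sketch of the claimed order: presenter layer over the presentation
    material image (step c), then the presentation layer over that result
    (step d), restoring opaque presentation detail on top of the presenter."""
    step_c = over(presenter_px, presenter_alpha, presentation_px)
    return over(presentation_px, presentation_alpha, step_c)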

More details
16-03-2017 publication date

OPERATION SUPPORT METHOD, OPERATION SUPPORT PROGRAM, AND OPERATION SUPPORT SYSTEM

Number: US20170076491A1
Assignee: FUJITSU LIMITED

An operation support method is disclosed. A three dimensional panorama image is generated by overlapping multiple images with each other based on posture information of a camera and a feature point map of the multiple images captured by the camera. The three dimensional panorama image is displayed at a first display device. At a second display device, position information of an indicated target is output based on current posture information of the camera in response to an indication of the target on the three dimensional panorama image. 1. An operation support method, comprising: generating a three dimensional panorama image by overlapping multiple images with each other based on posture information of a camera and a feature point map of the multiple images captured by the camera, and displaying the three dimensional panorama image at a first display device; and outputting, at a second display device, position information of a target indicated based on current posture information of the camera in response to an indication of the target on the three dimensional panorama image. 2. The operation support method according to claim 1, further comprising: calculating an aspect ratio from a focal length of the camera; calculating a field angle based on the focal length and an optical center of the camera; generating a frustum of the camera by using the calculated aspect ratio and the field angle; converting an image into a three dimensional image by conducting a texture mapping of the image based on the generated frustum; and generating the three dimensional panorama image by arranging the three dimensional image in accordance with a direction indicated by the posture information and by overlapping a plurality of the three dimensional images based on the feature point map, wherein the posture information indicates a vertical angle and a horizontal angle. 3. The operation support method according to claim 2, further comprising: specifying a closest feature point to a position ...
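The field-angle and aspect-ratio steps in claim 2 follow from a pinhole camera model: with focal lengths and optical center in pixel units, the field angle is twice the arctangent of the half-extent over the focal length. The mapping of the claim onto these formulas, and the names below, are assumptions for illustration.

```python
import math

def field_angles(fx, fy, cx, cy):
    """Horizontal and vertical field angles (radians) for a pinhole camera
    with focal lengths (fx, fy) and optical center (cx, cy) in pixels,
    assuming the optical center sits at half the image size."""
    return 2.0 * math.atan(cx / fx), 2.0 * math.atan(cy / fy)

def aspect_ratio(fx, fy, width, height):
    """Aspect ratio of the view frustum derived from the camera intrinsics."""
    return (width / fx) / (height / fy)
```

These two values are exactly what a perspective-frustum constructor needs, which is how they would feed the texture-mapping step of the claim.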

More details
15-03-2018 publication date

Devices, systems, and methods for generating multi-modal images of a synthetic scene

Number: US20180077376A1
Assignee: Canon Inc

Devices, systems, and methods obtain an object model, add the object model to a synthetic scene, add a texture to the object model, add a background plane to the synthetic scene, add a support plane to the synthetic scene, add a background image to one or both of the background plane and the support plane, and generate a pair of images based on the synthetic scene, wherein a first image in the pair of images is a depth image of the synthetic scene, and wherein a second image in the pair of images is a color image of the synthetic scene.

More details
05-03-2020 publication date

PORTABLE DEVICE FOR ACQUIRING IMAGES OF AN ENVIRONMENT

Number: US20200077079A1
Assignee: SOLETANCHE FREYSSINET

Portable device for acquiring images of an environment, in particular a tunnel, the device comprising an acquiring module comprising a rod and at least two acquiring stages placed at different heights on the rod, each acquiring stage comprising a plurality of cameras configured to each acquire an image of the scene, the viewing axes of the cameras of an acquiring stage being angularly distributed about the axis of the rod so that the acquired images overlap angularly. 1. A portable device for acquiring images of an environment, in particular a tunnel, the device comprising: an acquiring module including a rod and at least two acquiring stages placed at different heights on the rod, each of the at least two acquiring stages including a plurality of cameras configured to each acquire an image of a scene, viewing axes of the cameras of one of the at least two acquiring stages being angularly distributed over an axis of the rod so that the acquired images overlap angularly, a spacing between the at least two acquiring stages being adjustable. 2. The device according to claim 1, wherein the rod is carried by an operator and includes, in a lower portion thereof, a foot in contact with a ground. 3. The device according to claim 1, wherein the rod includes at least three acquiring stages. 4. The device according to claim 1, wherein the cameras of each of the at least two stages are distributed over a longitudinal axis of the rod, and over a total angular sector comprised between 90° and 210°. 5. The device according to claim 1, wherein the cameras of each of the at least two acquiring stages are fixed with respect to one another. 6. The device according to claim 1, wherein the acquiring module includes at least six cameras. 7. The device according to claim 1, wherein at least one of the at least two acquiring stages includes a casing in which the cameras of the at least one of the at least two acquiring stages are housed. 8.
( ...

More details
18-03-2021 publication date

METHOD, ELECTRONIC DEVICE, AND STORAGE MEDIUM FOR DISPLAYING CHARGING STATE IN START OF CHARGING

Number: US20210083498A1
Assignee:

Provided is an electronic device including: a battery, a display, at least one processor, and a memory, the memory storing instructions that, when executed, cause the at least one processor to: identify a charging scheme for the battery and a state of the display upon detection of a charging start event, display a graphic object on the display using a first display scheme indicating a first charging scheme based on the state of the display based on the charging scheme being the first charging scheme, and display the graphic object on the display using a second display scheme indicating a second charging scheme based on the state of the display based on the charging scheme being a second charging scheme. 1. An electronic device comprising:a battery;a display;at least one processor; anda memory,wherein the memory stores instructions which, when executed, cause the at least one processor to:based on a charging start event being detected, identify a charging scheme for the battery and a state of the display;in response to the charging scheme being a first charging scheme, display a graphic object on the display using a first display scheme indicating the first charging scheme based on the state of the display; andin response to the charging scheme being a second charging scheme, display the graphic object on the display using a second display scheme indicating the second charging scheme based on the state of the display.2. The electronic device of claim 1 , wherein the first display scheme includes a scheme in which at least one of a first movement speed related to the first charging scheme claim 1 , a color of the graphic object claim 1 , or a color effect of the graphic object changes claim 1 , andthe second display scheme includes a scheme in which at least one of a second movement speed related to the second charging scheme, the color of the graphic object, or the color effect of the graphic object changes.3. The electronic device of claim 1 , wherein the first ...

More details
22-03-2018 publication date

SELECTIVELY DETERIORATE EBOOK FOR SECONDARY MARKET

Number: US20180082403A1
Assignee:

A method, an eBook, and an apparatus are provided. The method includes calculating, by a processor, a usage metric describing a timing at which an eBook has been displayed for viewing by a user. The method further includes selectively deteriorating, by the processor, a look of the eBook for a secondary market based on the usage metric of the eBook. 1. A method , comprising:calculating, by a processor, a usage metric describing a timing at which an eBook has been displayed for viewing by a user, the usage metric including a duration during which portions of the eBook have been displayed for viewing; andselectively deteriorating, by the processor, a displayed look of the eBook for a secondary market based on the usage metric of the eBook.2. The method of claim 1 , wherein the timing corresponds to a duration during which the eBook has been displayed for viewing.3. The method of claim 1 , wherein the timing corresponds to a number of times the eBook has been displayed for viewing.4. The method of claim 1 , wherein the timing corresponds to a time of day when the eBook is viewed by the user.5. The method of claim 1 , wherein the usage metric is further based on how many bookmarks have been used for the eBook.6. The method of claim 1 , wherein the usage metric is further based on an amount of light exposure to which the eBook was subjected.7. The method of claim 1 , wherein the usage metric is further based on whether or how many times the eBook has been resold.8. The method of claim 1 , wherein said deteriorating step comprises selecting a subset of deterioration filters to apply to the eBook from a set of deterioration filters that provide different types of deterioration.9. The method of claim 1 , wherein the look of the eBook is selectively deteriorated in different ways based on the usage metric.10. 
The method of claim 9 , wherein the method is further applied to another copy of the eBook such that the eBook and the other copy of the eBook differ in deterioration ...

More details
25-03-2021 publication date

INTRACARDIAC ELECTROCARDIOGRAM PRESENTATION

Number: US20210085200A1
Assignee:

In one embodiment, a medical system includes a catheter to be inserted into a chamber of a heart of a living subject, and including catheter electrodes to contact tissue at respective locations within the chamber of the heart, a display, and processing circuitry to receive signals from the catheter, and in response to the signals, sample respective voltage values of the signals at respective timing values, and render to the display respective intracardiac electrograms (IEGM) presentation strips representing electrical activity in the tissue that is sensed by the catheter electrodes at the respective locations, each of the IEGM presentation strips including a linear array of respective shapes associated with, and arranged in a temporal order of, respective ones of the timing values, fillers of the respective shapes being formatted responsively to respective ones of the sampled voltage values of a respective one of the signals sampled at respective ones of the timing values. 1. A medical system, comprising: a catheter configured to be inserted into a chamber of a heart of a living subject, and including catheter electrodes configured to contact tissue at respective locations within the chamber of the heart; a display; and processing circuitry configured to receive signals from the catheter and, in response to the signals: sample respective voltage values of the signals at respective timing values; and render to the display respective intracardiac electrograms (IEGM) presentation strips representing electrical activity in the tissue that is sensed by the catheter electrodes at the respective locations, each of the IEGM presentation strips including a linear array of respective shapes associated with, and arranged in a temporal order of, respective ones of the timing values, fillers of the respective shapes being formatted responsively to respective ones of the sampled voltage values of a respective one of the signals sampled at respective ones of the timing values. 2 ...
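The strip-rendering idea above can be sketched as a mapping from sampled voltages to per-shape fill intensities, in temporal order. This is an illustrative sketch only: the linear voltage-to-grey mapping and the assumed ±5 mV range are assumptions, not the patented formatting.

```python
def voltage_to_fills(voltages, v_min=-5.0, v_max=5.0):
    """Map sampled voltages (e.g. mV) to fill intensities in [0, 255],
    one per shape in the strip's linear array, in temporal order.

    Note: the linear mapping and the [v_min, v_max] range are assumptions."""
    span = v_max - v_min
    fills = []
    for v in voltages:
        t = min(max((v - v_min) / span, 0.0), 1.0)  # clamp to [0, 1]
        fills.append(round(t * 255))
    return fills
```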

Подробнее
23-03-2017 дата публикации

DETECTED OBJECT TRACKER FOR A VIDEO ANALYTICS SYSTEM

Номер: US20170083790A1
Принадлежит:

Techniques are disclosed which provide a detected object tracker for a video analytics system. As disclosed, the detected object tracker provides a robust foreground object tracking component for a video analytics system which allows other components of the video analytics system to more accurately evaluate the behavior of a given object (as well as to learn to identify different instances or occurrences of the same object) over time. More generally, techniques are disclosed for identifying which pixels of successive video frames depict the same foreground object. Logic implementing certain functions of the detected object tracker can be executed on either a conventional processor (e.g., a CPU) or a hardware acceleration processing device (e.g., a GPU), allowing multiple camera feeds to be evaluated in parallel. 1. A computer-implemented method for tracking foreground objects depicted in a scene, the method comprising: launching a tracker component of a video analytics system executed on one or more processing units of the video analytics system; performing, by the tracker component, a geometric matching between a first set of bounded regions associated with a current video frame of the scene and a second set of bounded regions associated with a previous video frame of the scene, wherein the geometric matching assigns one or more of the first set of bounded regions to a known set of foreground objects and one or more of the first set of bounded regions to a discovered set of foreground objects; retrieving, by the tracker component, a third set of bounded regions, each corresponding to a missing foreground object tracked in one or more previous video frames of the scene; and processing, on one or more processing pipelines of a hardware acceleration device, one or more of the bounded regions in the known set of foreground objects, in the discovered set of foreground objects, and in the missing set of foreground objects to determine a set of tracked foreground objects ...
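The "geometric matching" step described above can be sketched as an intersection-over-union (IoU) test between current-frame and previous-frame bounded regions, splitting current regions into "known" and "discovered" sets. The box format and the 0.3 threshold are illustrative assumptions, not the patented method.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def geometric_match(current, previous, threshold=0.3):
    """Assign current-frame regions to the 'known' set (sufficient overlap
    with some previous-frame region) or the 'discovered' set (no match)."""
    known, discovered = [], []
    for box in current:
        best = max((iou(box, p) for p in previous), default=0.0)
        (known if best >= threshold else discovered).append(box)
    return known, discovered
```

A region with no overlapping predecessor starts a newly discovered object track; matched regions continue known tracks.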

Подробнее
14-03-2019 дата публикации

VEHICLE VISION SYSTEM WITH CUSTOMIZED DISPLAY

Номер: US20190082157A1
Автор: Pflug Goerg
Принадлежит:

A vision system for a vehicle includes a plurality of cameras and a processor operable to process image data captured by the cameras to generate images derived from image data captured by at least some of the cameras. The processor generates a vehicle representation. A display screen, viewable by a driver of the vehicle, displays the generated images and the vehicle representation as would be viewed from a virtual camera viewpoint exterior to and higher than the vehicle itself. A portion of the vehicle representation is rendered as displayed to be at least partially transparent to enable viewing at the display screen of an object present exterior of the vehicle that would otherwise be partially hidden by non-transparent display of that portion of the vehicle representation. 1. A method for displaying images representative of an environment at least partially surrounding a vehicle , said method comprising:disposing a plurality of cameras at a vehicle;wherein disposing the plurality of cameras comprises disposing a front camera at the vehicle, and wherein the front camera, when disposed at the vehicle, has a field of view at least forward of the vehicle;wherein disposing the plurality of cameras comprises disposing a rear camera at the vehicle, and wherein the rear camera, when disposed at the vehicle, has a field of view at least rearward of the vehicle;wherein disposing the plurality of cameras comprises disposing a driver side camera at the vehicle, and wherein the driver side camera, when disposed at the vehicle, has a field of view at least sideward of the vehicle;wherein disposing the plurality of cameras comprises disposing a passenger side camera at the vehicle, and wherein the passenger side camera, when disposed at the vehicle, has a field of view at least sideward of the vehicle;providing a processor for processing image data captured by the plurality of cameras;providing a display screen in the vehicle so as to be viewable by a driver of the vehicle; ...
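The partial-transparency rendering described above amounts to alpha-blending the vehicle-representation pixels over the camera-derived scene pixels where an exterior object would otherwise be hidden. Below is a standard "over" compositing sketch; the blend factor and pixel format are assumptions, not details from the patent.

```python
def blend_over(vehicle_rgb, scene_rgb, alpha):
    """Alpha-blend a vehicle-representation pixel over a scene pixel.
    alpha=1.0 draws the vehicle fully opaque; alpha=0.0 fully transparent,
    letting the exterior object show through."""
    return tuple(round(alpha * v + (1 - alpha) * s)
                 for v, s in zip(vehicle_rgb, scene_rgb))
```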

Подробнее
24-03-2016 дата публикации

IMAGING APPARATUS AND METHOD

Номер: US20160088238A1
Принадлежит: MBDA UK LIMITED

An imaging apparatus and method are provided for improving discrimination between parts of a scene, enabling enhancement of an object in the scene. A camera unit is arranged to capture first and second images from the scene in first and second distinct and spectrally spaced-apart wavebands. An image processing unit processes the images so captured and processes polarimetric information in the images to enable better discrimination between parts of the scene. An image of the scene, including a graphical display of the polarimetric information, may be displayed on a visual display unit, thus enhancing an object in the scene for viewing by a user. Correlation parameters indicating, possibly on a pixel-by-pixel basis, the correlation between the actual image intensity at each angle of polarisation and a modelled expected image intensity may be used to enhance the visibility of an object. 1. Imaging apparatus for improving discrimination between parts of a scene, wherein the imaging apparatus comprises at least one camera arranged to capture a first image from the scene at a first region of wavelengths, corresponding to a first waveband, and a second image from the scene at a second region of wavelengths, corresponding to a second distinct waveband, an image processing unit for processing the images so captured, and a visual display unit, wherein the visual display unit is configured to display an image of the scene, the imaging apparatus is so arranged that the images detected by the at least one camera include polarimetric information that can be extracted by the image processing unit, the image processing unit is arranged to use the polarimetric information to enhance the image of the scene displayed by the visual display unit so as to provide better discrimination between parts of the scene, the enhancement added by the image processing unit being controllable by a user. 2. An apparatus according to claim 1, wherein the image processing unit is arranged to ...
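The per-pixel correlation parameter described above can be sketched with the standard partial-polarisation model I(θ) = ½(S0 + S1·cos 2θ + S2·sin 2θ): estimate the linear Stokes parameters from intensities at four polariser angles, then report the Pearson correlation between measured and modelled intensities. The four-angle Stokes estimate and the use of Pearson correlation are illustrative assumptions, not the patented procedure.

```python
import math

def stokes_model(i0, i45, i90, i135):
    """Estimate linear Stokes parameters from intensities at polariser
    angles 0, 45, 90, 135 degrees; return the modelled intensity I(theta)."""
    s0 = (i0 + i45 + i90 + i135) / 2.0
    s1 = i0 - i90
    s2 = i45 - i135
    return lambda theta: 0.5 * (s0 + s1 * math.cos(2 * theta)
                                + s2 * math.sin(2 * theta))

def correlation(measured, modelled):
    """Pearson correlation between measured and modelled intensities;
    values near 1 indicate a pixel well described by the polarisation model."""
    n = len(measured)
    mx, my = sum(measured) / n, sum(modelled) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(measured, modelled))
    vx = math.sqrt(sum((x - mx) ** 2 for x in measured))
    vy = math.sqrt(sum((y - my) ** 2 for y in modelled))
    return cov / (vx * vy) if vx and vy else 0.0
```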

Подробнее
12-03-2020 дата публикации

METHODS, DEVICES, AND SYSTEMS FOR DESIGNING AND CUSTOMIZING A VIRTUAL DECOR

Номер: US20200082030A1
Автор: Adams Wesley T.
Принадлежит:

Methods, devices, and systems for providing a virtual construction and design of an interior wall of a home or office are described. The virtual construction and design provides for various substrate layers as they would exist within the interior wall, interactions between substrate layers, as well as environmental effects. The virtual construction can be implemented on a website so that a user of the website is able to create, select, and purchase customized wallpaper patterns with various color effects, three-dimensional effects, aging effects, texture effects, and environmental effects. 1. A computer-implemented method for virtual design of wallpaper, comprising: providing a plurality of cross-layers; providing one or more interactions between cross-layers; and/or providing one or more environmental and/or aging variables which affect one or more of the cross-layers; wherein each cross-layer represents a substrate in a virtual construction; and one of the cross-layers represents wallpaper comprising a pattern. 2. The method of claim 1, wherein the cross-layers represent substrates comprising one or more of plywood, concrete, stucco, paint, linen, paper, cardboard, ink mask, ink, varnish, vinyl, adobe, aluminum, steel, copper, other metals, cement, brick, drywall, plaster, gypsum board, paint, wood finish, paint finish, veneer, marble, ceramic, stone, plastics, foam, fabric, glass, fiberglass, or any combination of these. 3. The method of claim 1, wherein the environmental variables comprise water, sunlight, fluorescent lighting, oxygen, dirt, smoke, wear and tear, or any combination of these. 4. The method of claim 1 ...

Подробнее
12-03-2020 дата публикации

Method and System for Virtual Sensor Data Generation with Depth Ground Truth Annotation

Номер: US20200082622A1
Принадлежит:

Methods and systems for generating virtual sensor data for developing or testing computer vision detection algorithms are described. A system and a method may involve generating a virtual environment. The system and the method may also involve positioning a virtual sensor at a first location in the virtual environment. The system and the method may also involve recording data characterizing the virtual environment, the data corresponding to information generated by the virtual sensor sensing the virtual environment. The system and the method may further involve annotating the data with a depth map characterizing a spatial relationship between the virtual sensor and the virtual environment. 1. A method, comprising: performing a process to develop, test or train a computer vision detection algorithm by modeling a real-world environment with a virtual environment, the process comprising: generating, by a processor, a virtual environment with a virtual sensor therein; positioning, by the processor, the virtual sensor on a mobile virtual object in the virtual environment; and generating, by the processor, simulation-generated data characterizing the virtual environment as perceived by the virtual sensor as the mobile virtual object and the virtual sensor move around in the virtual environment, wherein the simulation-generated data represents information collected by one or more real-world sensors in the real-world environment. 2. The method of claim 1, wherein the virtual environment comprises a plurality of virtual objects distributed therewithin, each of the virtual objects either stationary or mobile relative to the virtual sensor, and each of the virtual objects sensible by the virtual sensor. 3. The method of claim 2, wherein the spatial relationship comprises distance information of one or more of the plurality of virtual objects with respect to the virtual sensor. 4. The method of claim 2, wherein the virtual sensor comprises a virtual ...
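The depth-ground-truth annotation step above exploits what makes simulation attractive: for a virtual sensor at a known pose, the simulator can label each virtual object with its exact distance, something a real sensor could only estimate. A minimal sketch, with the object and sensor representations as assumptions:

```python
import math

def annotate_depth(sensor_pos, objects):
    """Annotate each virtual object with its ground-truth Euclidean
    distance to the virtual sensor (the per-object depth annotation).

    sensor_pos: (x, y, z); objects: dicts with a 'position' (x, y, z)."""
    return [
        {**obj, "depth": math.dist(sensor_pos, obj["position"])}
        for obj in objects
    ]
```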

Подробнее
29-03-2018 дата публикации

METHOD OF ANALYZING AIR QUALITY

Номер: US20180087919A1
Автор: Bertaux Gregory
Принадлежит:

A method for identifying particulates in the air, including drawing a predetermined volume of air into a housing defining an airway, flowing the predetermined volume of air over a first adhesive capture member positioned in the airway to yield a first test sample, generating a first optical image of the first test sample with a camera, storing the first optical image in a memory, analyzing the first optical image with the microprocessor to identify captured particulates, automatically counting the identified particulates, storing the first adhesive capture member to preserve the first test sample, and positioning a second adhesive capture member in the airway. 1. A method for air monitoring and analysis comprising: a. providing an air quality analysis system, said air quality analysis system further comprising: i. a housing having an inlet and an outlet and defining an airflow pathway therebetween; ii. an air flow actuator positioned within the housing for urging air flow from the inlet through the air flow pathway and out the outlet; iii. a particulate collection device positioned in the air flow pathway; iv. an image capture device positioned to optically interrogate the particulate collection device; and v. a transceiver for receiving images from the image capture device and for transmitting said images to a remote location; b. drawing air into the inlet; c. flowing air over the particulate capture device to yield a test sample; d. optically interrogating the test sample to generate test data; e. transmitting test data to a remote location. 2. The method for air monitoring and analysis of claim 1, wherein the particulate capture device is a membrane having an adhesive surface, wherein the adhesive surface is positioned to face the inlet. 3. The method for air monitoring and analysis of further comprising the steps of: f. analyzing the test data at the remote location; and g. producing an air quality analysis report. 4.
The method for air monitoring and ...
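The "identify and automatically count captured particulates" step above can be sketched as a connected-component count over a thresholded grayscale image: each blob of dark pixels on the adhesive capture member is counted as one particle. The threshold value and 4-connectivity are illustrative assumptions.

```python
def count_particulates(image, threshold=128):
    """Count 4-connected components of pixels darker than `threshold`
    in a 2-D grayscale image (list of rows of 0-255 values)."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if image[y][x] < threshold and not seen[y][x]:
                count += 1            # found a new particle blob
                stack = [(y, x)]      # flood-fill to mark its pixels
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and image[cy][cx] < threshold
                            and not seen[cy][cx]):
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return count
```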

Подробнее
25-03-2021 дата публикации

Virtuality-reality overlapping method and system

Номер: US20210090339A1
Принадлежит: INSTITUTE FOR INFORMATION INDUSTRY

A virtuality-reality overlapping method is provided. A point cloud map related to a real scene is constructed. Respective outline border vertexes of a plurality of objects are located by using 3D object detection. Based on the outline border vertexes of the objects, the point cloud coordinates of the final candidate outline border vertexes are located according to the screening result of a plurality of projected key frames. Then, the point cloud map is projected to the real scene for overlapping a virtual content with the real scene.
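The final projection step above can be sketched as a pinhole projection of camera-frame point-cloud coordinates into the image, giving the pixel locations where virtual content should be drawn. The pinhole model, focal lengths, and principal point are illustrative assumptions; the patent does not specify the camera model.

```python
def project_points(points, fx, fy, cx, cy):
    """Pinhole projection of camera-frame 3-D points (x, y, z) to
    pixel coordinates (u, v); points behind the camera are skipped."""
    pixels = []
    for x, y, z in points:
        if z <= 0:  # behind the image plane: not visible
            continue
        pixels.append((fx * x / z + cx, fy * y / z + cy))
    return pixels
```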

Подробнее
25-03-2021 дата публикации

Data processing for augmented reality

Номер: US20210090348A1
Принадлежит: Apical Ltd, ARM LTD

A method of data processing for an augmented reality system. The method comprises obtaining augmented reality data output by an augmented reality application operating at a second trust level. The augmented reality data is for modifying a representation of a real-world environment for a user of the augmented reality system. The method also comprises obtaining object recognition data determined by an object recognition system operating at a first trust level. The object recognition data comprises an indication of an object belonging to a predetermined class of objects being present in the real-world environment. The method also comprises triggering modification of the augmented reality data in response to the object recognition data, based on prioritization of the first trust level over the second trust level.
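The "prioritization of the first trust level over the second" described above can be sketched as a policy in which results from the trusted object-recognition system override content from the less-trusted AR application, for example by suppressing any virtual overlay that would obscure a recognised object of the predetermined class. The data shapes and the suppression policy are assumptions for illustration.

```python
def filter_ar_overlays(overlays, recognised_regions):
    """Trigger modification of AR data in response to object recognition:
    drop any overlay whose region intersects a recognised-object region.
    Regions are axis-aligned boxes (x1, y1, x2, y2)."""
    def overlaps(a, b):
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]
    return [o for o in overlays
            if not any(overlaps(o["region"], r) for r in recognised_regions)]
```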

Подробнее
31-03-2016 дата публикации

High Dynamic Range Image Composition Using Multiple Images

Номер: US20160093029A1
Принадлежит:

High dynamic range image composition is described using multiple images. Some embodiments relate to a system with a buffer to receive each of three different images of a scene, each image having a different amount of light exposure to the scene, a general purpose processor to estimate the alignment between the three images, and an imaging processor to warp the images based on the estimated alignment and to combine the three images to produce a single high dynamic range image. 1. A high dynamic range image processing system comprising: a buffer to receive each of three different images of a scene, each image having a different amount of light exposure to the scene; a general purpose processor to estimate the alignment between the three images; and an imaging processor to warp the images based on the estimated alignment and to combine the three images to produce a single high dynamic range image. 2. The system of claim 1, wherein the general purpose processor estimates alignment for a second pairing of the three images while the imaging processor warps images for a first pairing of the three images. 3. The system of claim 1, wherein the imaging processor groups the three images into two pairs, a first pair and a second pair, each pair including a reference image selected from the three images, and wherein the general purpose processor operates on the second pair while the imaging processor operates on the first pair. 4. The system of claim 3, wherein the general purpose processor estimates pairwise image alignment. 5. The system of claim 4, wherein the imaging processor performs pairwise image warping, de-ghosting, and chroma processing. 6. The system of claim 1, wherein the general purpose processor is a central processing unit and the imaging processor is incorporated into a camera module. 7.-22. (canceled) The present description pertains to compositing multiple images to create a high dynamic range image. Small digital ...
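The final combination step above can be sketched per pixel: once the three exposures are aligned and warped, their values are merged with weights that favour well-exposed (mid-range) pixels. The hat-shaped weighting and exposure-time normalisation below are a common textbook approach, used here as an assumption since the abstract does not specify the merge.

```python
def hat_weight(v):
    """Weight in [0, 1] for an 8-bit pixel value, highest at mid-range
    (well-exposed) and lowest at under/over-exposed extremes."""
    return 1.0 - abs(v / 127.5 - 1.0)

def merge_exposures(pixels, exposure_times):
    """Merge one pixel's values from differently exposed, already-aligned
    images into a single relative-radiance value via a weighted average
    of exposure-normalised values."""
    num = den = 0.0
    for v, t in zip(pixels, exposure_times):
        w = hat_weight(v)
        num += w * (v / t)
        den += w
    return num / den if den else 0.0
```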

Подробнее
31-03-2016 дата публикации

MULTI-SPECTRAL IMAGE LABELING WITH RADIOMETRIC ATTRIBUTE VECTORS OF IMAGE SPACE REPRESENTATION COMPONENTS

Номер: US20160093056A1
Автор: Ouzounis Georgios
Принадлежит:

Automatic characterization or categorization of portions of an input multispectral image based on a selected reference multispectral image. Sets (e.g., vectors) of radiometric descriptors of pixels of each component of a hierarchical representation of the input multispectral image can be collectively manipulated to obtain a set of radiometric descriptors for the component. Each component can be labeled as a (e.g., relatively) positive or negative instance of at least one reference multispectral image (e.g., mining materials, crops, etc.) through a comparison of the set of radiometric descriptors of the component and a set of radiometric descriptors for the reference multispectral image. Pixels may be labeled (e.g., via color, pattern, etc.) as positive or negative instances of the land use or type of the reference multispectral image in a resultant image based on components within which the pixels are found. 1. A method for use in classifying areas of interest in overhead imagery, comprising: organizing, using a processor, a plurality of pixels of at least one input multispectral image of a geographic area into a plurality of components of a hierarchical image representation; deriving, using the processor, at least one set of radiometric descriptors for each component of the plurality of components; obtaining at least one set of radiometric descriptors for a reference multispectral image, wherein pixels of the reference multispectral image identify at least one land use or land type; determining, using the processor, for each component of the hierarchical image representation structure, a similarity metric between the set of radiometric image descriptors for the component and the set of radiometric image descriptors of the reference multispectral image, wherein the determined similarity metrics indicate a degree to which the pixels of each component identify the at least one land use or land type of the reference multispectral image. 2. The method of claim 1, further ...
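The per-component similarity test above can be sketched by comparing each component's radiometric descriptor vector against the reference image's vector and thresholding the result. Cosine similarity and the 0.9 threshold are plausible illustrative choices; the abstract does not fix a particular metric.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def label_components(components, reference, threshold=0.9):
    """Label each component (name -> descriptor vector) as a positive or
    negative instance of the reference land use/type."""
    return {name: cosine_similarity(vec, reference) >= threshold
            for name, vec in components.items()}
```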

Подробнее
31-03-2016 дата публикации

Deep image identifiers

Номер: US20160093112A1
Принадлежит: Lucasfilm Entertainment Co Ltd

A method may include receiving a plurality of objects from a 3-D virtual scene. The plurality of objects may be arranged in a hierarchy. The method may also include generating a plurality of identifiers for the plurality of objects. The plurality of identifiers may include a first identifier for a first object in the plurality of objects, and the identifier may be generated based on a position of the first object in the hierarchy. The method may additionally include performing a rendering operation on the plurality of objects to generate a deep image. The deep image may include a plurality of samples that correspond to the first object. The method may further include propagating the plurality of identifiers through the rendering operation such that each of the plurality of samples in the deep image that correspond to the first object are associated with the identifier.
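The identifier-generation step above, where an object's identifier is derived from its position in the scene hierarchy, can be sketched as a recursive walk that assigns each object a path-based identifier; every deep-image sample produced from that object can then carry the same identifier through rendering. The `{name: children}` hierarchy shape and the path-joining scheme are illustrative assumptions.

```python
def assign_identifiers(node, prefix=""):
    """Walk a scene hierarchy given as nested dicts {name: children-dict}
    and return a mapping from object name to a hierarchy-path identifier."""
    ids = {}
    for name, children in node.items():
        path = f"{prefix}/{name}" if prefix else name
        ids[name] = path
        ids.update(assign_identifiers(children, path))
    return ids
```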

Подробнее