Total found: 4504. Showing 100.
11-10-2012 publication date

Method for tuning patient-specific cardiovascular simulations

Number: US20120259608A1
Assignee: Leland Stanford Junior University

Computational methods are used to create cardiovascular simulations having desired hemodynamic features. Cardiovascular modeling methods produce descriptions of blood flow and pressure in the heart and vascular networks. Numerical methods optimize and solve nonlinear equations to find parameter values that result in desired hemodynamic characteristics, including related flow and pressure at various locations in the cardiovascular system, movements of soft tissues, and changes for different physiological states. The modeling methods employ simplified models to approximate the behavior of more complex models, with the goal of reducing computational expense. The user describes the desired features of the final cardiovascular simulation and provides minimal input, and the system automates the search for the final patient-specific cardiovascular model.
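The tuning loop described above can be sketched with a toy surrogate model. The Windkessel-style pressure relation, the flow value, and the target pressure below are all invented for illustration; the patent's actual models and optimizers are far richer.

```python
# Hypothetical sketch of the tuning idea: a cheap lumped surrogate
# (mean pressure = venous pressure + flow * resistance) stands in for the
# expensive 3D model, and bisection finds the resistance R that produces a
# desired mean arterial pressure. All numbers are invented.

def mean_pressure(R, Q=83.0, P_venous=5.0):
    """Surrogate model: mean pressure (mmHg) from flow Q (mL/s) and R."""
    return P_venous + Q * R          # Ohm's-law analogue of the vasculature

def tune_resistance(target_p, lo=0.1, hi=3.0, tol=1e-6):
    """Bisection on R; valid because mean_pressure is monotonic in R."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_pressure(mid) < target_p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

R_opt = tune_resistance(target_p=93.0)   # aim for ~93 mmHg mean pressure
```

The same pattern — optimize a cheap surrogate, then hand the result to the full model — is what lets the search run with minimal user input.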

06-12-2012 publication date

Global Composition System

Number: US20120306912A1
Assignee: Microsoft Corp

A global composition system is described. In one or more implementations, the global composition system may be configured to perform rendering for a plurality of applications. For example, the global composition system may be configured to expose one or more application programming interfaces (APIs) that are accessible to the applications. The APIs may then be used to cause a single composition engine to perform the rendering for the plurality of applications. A single composition engine can support a variety of different functionality, such as efficient rendering based on knowledge of what elements are provided by each of the applications and how those items relate for rendering to a display device.

27-12-2012 publication date

Boundary Handling for Particle-Based Simulation

Number: US20120330628A1
Assignee: Siemens Corp

Boundary handling is performed in particle-based simulation. Slab cut ball processing defines the boundary volumes for interaction with particles in particle-based simulation. The slab cut balls are used for collision detection of a solid object with particles. The solid object may be divided into a plurality of independent slab cut balls for efficient collision detection without a bounding volume hierarchy. The division of the solid object may be handled in repeating binary division operations. Processing speed may be further increased by determining the orientation of each slab cut ball based on the enclosed parts of the boundary rather than testing multiple possible orientations.
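A slab cut ball, as described above, is a sphere intersected with the region between two parallel planes, which makes the particle rejection test two cheap comparisons. The sketch below is a minimal illustration of that test, not the patented method; the geometry is invented.

```python
# Minimal sketch: a "slab cut ball" is a ball clipped by a slab (two parallel
# planes with unit normal `axis`). A particle collides with the volume only
# if it lies inside the ball AND between the planes.

def in_slab_cut_ball(p, center, radius, axis, d_lo, d_hi):
    """p, center, axis are 3-tuples; axis is the unit slab normal.
    d_lo/d_hi bound the signed distance of the slab along axis."""
    dx = [p[i] - center[i] for i in range(3)]
    if sum(c * c for c in dx) > radius * radius:   # outside the ball
        return False
    d = sum(dx[i] * axis[i] for i in range(3))     # signed distance along axis
    return d_lo <= d <= d_hi                       # inside the slab?

ball = dict(center=(0.0, 0.0, 0.0), radius=1.0,
            axis=(0.0, 0.0, 1.0), d_lo=-0.25, d_hi=0.25)
```

Because each slab cut ball is tested independently, a set of them can cover a solid object without any bounding volume hierarchy, as the abstract notes.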

07-02-2013 publication date

System and method for animating collision-free sequences of motions for objects placed across a surface

Number: US20130033501A1
Assignee: Autodesk Inc

Embodiments of the invention set forth a technique for animating objects placed across a surface of a graphics object. A CAD application receives a set of motions and initially applies a different motion in the set of motions to each object placed across the surface of the graphics object. The CAD application calculates bounding areas of each object according to the current motion applied thereto, which are subsequently used by the CAD application to identify collisions that are occurring or will occur between the objects. Identified collisions are cured by identifying valid motions in the set of motions that can be applied to a colliding object and then calculating bounding areas for the valid motions to select a valid motion that, when applied to the object, does not cause the object to collide with any other objects.
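The "cure" step above — try valid motions until one produces a non-colliding bounding area — can be sketched in miniature. Bounding areas are reduced to circles here, and the motion set, radii and neighbour placement are made up for the demo.

```python
# Sketch of collision curing: each candidate motion is summarized by a
# bounding-circle radius, and we pick the first motion whose circle does not
# overlap any other object's circle. All values are invented.

def circles_overlap(c1, r1, c2, r2):
    return (c1[0] - c2[0]) ** 2 + (c1[1] - c2[1]) ** 2 < (r1 + r2) ** 2

def cure_collision(pos, motions, others):
    """motions: {name: bounding radius}; others: list of (pos, radius)."""
    for name, r in motions.items():
        if not any(circles_overlap(pos, r, p, pr) for p, pr in others):
            return name          # first valid motion for this object
    return None                  # no motion in the set avoids collision

motions = {"spin": 2.0, "sway": 1.2, "bob": 0.5}
others = [((2.5, 0.0), 1.0)]     # neighbour at distance 2.5 with radius 1
chosen = cure_collision((0.0, 0.0), motions, others)   # "spin" collides,
                                                       # "sway" does not
```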

27-06-2013 publication date

Computing the mass of an object

Number: US20130163836A1
Assignee: STMICROELECTRONICS SRL

The mass of an object may be estimated based on intersection points of a representation of a surface in an image space with cubes defining the image space, the surface representing a surface of an object. The representation may be, for example, based on marching cubes. The mass may be estimated by estimating a mass contribution of a first set of cubes contained entirely within the representation of the surface, estimating a mass contribution of a second set of cubes having intersection points with the representation of the surface, and summing the estimated mass contribution of the first set of cubes and the estimated mass contribution of the second set of cubes. The object may be segmented from other portions of an image prior to estimating the mass of the object.
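The two-bucket estimate above — full credit for cubes wholly inside the surface, partial credit for cubes the surface passes through — can be checked on a sphere of known mass. As a crude stand-in for using the exact intersection points, the sketch weights each boundary cube by the fraction of its corners inside; all parameters are invented.

```python
# Rough sketch of the mass estimate: sum full cube volumes for interior
# cubes, fractional volumes for boundary cubes. `inside` is the indicator
# function of the object; density is uniform. Not the patented algorithm,
# just the same two-bucket idea.

def estimate_mass(inside, lo, hi, h, density=1.0):
    mass = 0.0
    n = int(round((hi - lo) / h))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x, y, z = lo + i * h, lo + j * h, lo + k * h
                corners = sum(inside(x + a * h, y + b * h, z + c * h)
                              for a in (0, 1) for b in (0, 1) for c in (0, 1))
                if corners == 8:           # cube entirely inside the surface
                    mass += density * h ** 3
                elif corners > 0:          # surface cuts this cube
                    mass += density * h ** 3 * corners / 8.0
    return mass

sphere = lambda x, y, z: x * x + y * y + z * z <= 1.0
m = estimate_mass(sphere, -1.2, 1.2, 0.1)   # true mass = 4/3*pi ~ 4.19
```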

18-07-2013 publication date

Systems and Methods of Analysis of Granular Elements

Number: US20130185035A1
Authors: Jose Andrade, Keng-Wit Lim

Systems and methods are described for performing mechanical analysis of particulate systems by describing particle morphology of particles within the system using Non-Uniform Rational Basis Spline (NURBS). One embodiment includes generating a NURBS description for the particle morphology of a plurality of particles within a particulate system, determining contact points between at least two particles based on the NURBS description, determining a magnitude of the contact between the at least two particles based on the NURBS description, determining normal forces and associated moments based upon the contact points and the magnitude of the contact between the at least two particles, determining tangential forces and associated moments based upon the contact points and the magnitude of the contact between the at least two particles, and performing mechanical analysis of the particulate system based on the contact between the at least two particles and the resulting forces and associated moments.
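A reason NURBS suit particle morphology, as above, is that rational splines represent conic sections exactly. As a small self-contained check of that property (not of the patented contact algorithm), the sketch evaluates a quadratic rational Bézier — a one-segment NURBS — whose weights make it an exact quarter circle, so every sampled point lies at unit distance from the origin.

```python
import math

# Quadratic rational Bezier evaluation (the simplest NURBS): with control
# points on the unit square's corner and middle weight cos(45 deg), the
# curve is an exact quarter circle.

def rational_bezier2(t, pts, w):
    """pts: 3 control points, w: 3 weights, t in [0, 1]."""
    b = [(1 - t) ** 2, 2 * t * (1 - t), t ** 2]        # Bernstein basis
    denom = sum(b[i] * w[i] for i in range(3))
    x = sum(b[i] * w[i] * pts[i][0] for i in range(3)) / denom
    y = sum(b[i] * w[i] * pts[i][1] for i in range(3)) / denom
    return x, y

pts = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]             # quarter-circle arc
w = [1.0, math.sqrt(0.5), 1.0]                         # conic weights
radii = [math.hypot(*rational_bezier2(t / 10, pts, w)) for t in range(11)]
```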

28-11-2013 publication date

Automatic flight control for uav based solid modeling

Number: US20130317667A1
Author: Ezekiel Kruglick
Assignee: EMPIRE TECHNOLOGY DEVELOPMENT LLC

Technologies are generally described for controlling a flight path of a UAV based image capture system for solid modeling. Upon determining an initial movement path based on an estimate of a structure to be modeled, images of the structure to be modeled may be captured and surface hypotheses formed for unobserved surfaces based on the captured images. A normal vector and a viewing cone may be computed for each hypothesized surface. A set of desired locations may be determined based on the viewing cones for the entire structure to be modeled and a least impact path for the UAV determined based on the desired locations and desired flight parameters.

03-04-2014 publication date

Notification system for providing awareness of an interactive surface

Number: US20140091937A1
Assignee: AT&T INTELLECTUAL PROPERTY I LP

A system for providing awareness of an interactive surface is disclosed. The system may include a processor that is communicatively linked to an interactive surface. The processor may determine a position and a velocity of an object that is within range of the interactive surface based on one or more of media content, vibrations, air movement, sounds, and global positioning data associated with the object. Additionally, the processor may determine if the object has a trajectory that would cause the object to collide with the interactive surface based on the information associated with the object. If the processor determines that the object has a trajectory that would cause the object to collide with the interactive surface, the processor can generate a notification.
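The trajectory test above can be sketched with straight-line motion toward a flat surface. The surface is assumed to lie in the plane z = 0 over a rectangular extent; positions, velocities and extents below are invented for the demo.

```python
# Simplified trajectory check: given position and velocity, predict whether
# the object reaches the surface plane within its rectangular extent, and
# when. A real system would fold in the sensed data listed in the abstract.

def predict_impact(pos, vel, x_range, y_range):
    """Return (will_collide, time_to_impact) for a surface in plane z = 0."""
    if vel[2] >= 0 or pos[2] <= 0:
        return False, None                  # moving away, or already past
    t = -pos[2] / vel[2]                    # time until z reaches 0
    x = pos[0] + vel[0] * t
    y = pos[1] + vel[1] * t
    hit = x_range[0] <= x <= x_range[1] and y_range[0] <= y <= y_range[1]
    return hit, (t if hit else None)

hit, t = predict_impact(pos=(0.0, 0.0, 2.0), vel=(0.5, 0.0, -1.0),
                        x_range=(-2.0, 2.0), y_range=(-2.0, 2.0))
```

A positive result here is what would trigger the notification described in the claims.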

07-01-2016 publication date

Apparatus of non-touch optical detection of vital signs from multiple filters

Number: US20160000381A1
Assignee: ARC Devices Ltd

A microprocessor is operably coupled to a camera from which patient vital signs are determined. A temporal variation of images from the camera is generated from multiple filters and then amplified from which the patient vital sign, such as heart rate or respiratory rate, can be determined and then displayed or stored.
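The amplify-then-measure idea above can be illustrated on a single pixel's brightness trace: subtracting a moving average isolates the small temporal variation, a gain amplifies it, and counting zero crossings of the variation estimates the rate. The 1.2 Hz test signal (72 bpm) and 30 fps sampling are invented; real systems use proper bandpass filter banks rather than this crude moving-average split.

```python
import math

# Toy temporal-variation amplification on a synthetic pixel trace.
FPS = 30
signal = [100.0 + 2.0 * math.sin(2 * math.pi * 1.2 * n / FPS)
          for n in range(300)]                       # 10 s of samples

def moving_average(s, win):
    # Trailing moving average (window shrinks at the start of the trace).
    return [sum(s[max(0, i - win + 1):i + 1]) / (i - max(0, i - win + 1) + 1)
            for i in range(len(s))]

baseline = moving_average(signal, 15)
variation = [s - b for s, b in zip(signal, baseline)]      # "filtered" band
amplified = [b + 10.0 * v for b, v in zip(baseline, variation)]  # gain = 10

# Rate estimate: zero crossings of the variation, 2 crossings per cycle.
crossings = sum(1 for a, b in zip(variation, variation[1:]) if a * b < 0)
bpm = crossings / 2 / (len(signal) / FPS) * 60
```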

05-01-2017 publication date

METHOD AND APPARATUS FOR FREEFORM CUTTING OF DIGITAL THREE DIMENSIONAL STRUCTURES

Number: US20170004659A1
Assignee:

A method of editing a digital three-dimensional structure associated with one or more two-dimensional textures in real time is disclosed. The structure and textures are processed and output in a user interface, and user input read in the user interface is processed into a cut shape of the three-dimensional structure. A simplified structure is generated based on the three-dimensional structure, and points of the cut shape are associated with the simplified structure to generate a curve. Points of the curve corresponding to edges of the curve on the simplified structure are determined, and geometrical characteristics and texture coordinates of the new points are calculated. A new three-dimensional structure is generated along the curve and layers of the structure are joined, so that the cut and layered structure can be rendered in the user interface. An apparatus embodying the method is also disclosed.

1. An apparatus for editing a digital three-dimensional structure associated with one or more two-dimensional textures in real time, comprising:
a. data storage means adapted to store the digital three-dimensional structure and the one or more two-dimensional textures;
b. data processing means adapted to:
process the stored digital three-dimensional structure and the one or more two-dimensional textures and output same in a user interface,
read user input in the user interface and process the user input data into a cut shape of the three-dimensional structure,
generate a simplified structure based on the three-dimensional structure, associate points of the cut shape with the simplified structure to generate a curve,
determine new points of the curve corresponding to edges of the curve on the simplified structure, calculate geometrical characteristics and texture coordinates of the new points, and
generate a new three-dimensional structure along the curve and join layers of the structure; and
c. display means for displaying the user interface.
2. ...

07-01-2016 publication date

Aligning Ground Based Images and Aerial Imagery

Number: US20160005145A1
Assignee:

Systems and methods for aligning ground based images of a geographic area taken from a perspective at or near ground level and a set of aerial images taken from, for instance, an oblique perspective, are provided. More specifically, candidate aerial imagery can be identified for alignment with the ground based image. Geometric data associated with the ground based image can be obtained and used to warp the ground based image to a perspective associated with the candidate aerial imagery. One or more feature matches between the warped image and the candidate aerial imagery can then be identified using a feature matching technique. The matched features can be used to align the ground based image with the candidate aerial imagery.

1.-20. (canceled)
21. A computer-implemented method, comprising:
accessing, by one or more computing devices, a first image of a scene captured from a perspective at or near ground level;
accessing, by the one or more computing devices, a second image of the scene, the second image captured from an aerial perspective;
projecting an image plane of the first image to an image plane associated with the second image to transform the first image to a warped image, the warped image having a perspective associated with the second image;
identifying, by the one or more computing devices, one or more feature matches between the warped image and the second image; and
determining, by the one or more computing devices, a pose associated with the first image based at least in part on the one or more feature matches.
22. The method of claim 21, wherein the first image is transformed to the warped image based at least in part on geometric data associated with the first image.
23. The method of claim 22, wherein the geometric data comprises a depth map.
24. The method of claim 21, further comprising normalizing, by the one or more computing devices, the second image before identifying the one or more feature matches between the warped image and ...
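The projection step above maps the ground image's plane into the aerial view. A minimal sketch of that operation is applying a 3×3 homography in homogeneous coordinates; the matrix below (a scale plus shift) is invented for illustration, not derived from real geometric data.

```python
# Applying a planar homography H to 2-D points in homogeneous coordinates.
# In the real pipeline H would come from the ground image's geometric data
# (e.g. a depth map) and the aerial camera's pose; here it is made up.

def apply_homography(H, pt):
    x, y = pt
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    wh = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / wh, yh / wh                 # back to inhomogeneous coords

H = [[2.0, 0.0, 5.0],       # scale x by 2, shift by +5
     [0.0, 2.0, -3.0],      # scale y by 2, shift by -3
     [0.0, 0.0, 1.0]]
warped = [apply_homography(H, p) for p in [(0, 0), (1, 1), (10, 5)]]
```

Feature matching then runs between the warped points/patches and the aerial image, as the claims describe.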

07-01-2016 publication date

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM

Number: US20160005203A1
Assignee:

In a case where the position of a recognized cell is shifted from the position of the ruled line of the actual cell, deleting the recognized cell deletes part of the actual cell's ruled line. According to an aspect of the present invention, straight lines are detected from regions around the four sides constituting the recognized cell, and the inside of the region surrounded by the detected straight lines is deleted.

1. An information processing apparatus comprising:
a detection unit that detects straight lines from regions around four sides that constitute a recognized cell; and
a deletion unit that deletes color information of an inside of a region surrounded by the detected four straight lines.
2. The information processing apparatus according to claim 1, wherein the regions around the four sides constituting the recognized cell are regions enlarged in orthogonal directions to respective sides while the respective sides are set as references.
3. The information processing apparatus according to claim 1, wherein the straight line is an edge of a ruled line of an original cell corresponding to the recognized cell.
4. The information processing apparatus according to claim 3, wherein the detection unit includes
an edge detection unit that detects edge pixels from the regions around the four sides, and
a ruled line detection unit that detects the straight lines based on a number of duplications of lines passing through the detected respective edge pixels.
5. The information processing apparatus according to claim 4, wherein, in a case where a plurality of lines are detected from the region around one side, the ruled line detection unit detects an innermost line as the straight line among the plurality of lines while a center position of the recognized cell is set as a reference.
6. The information processing apparatus according to claim 1, wherein the recognized cell is a circumscribed rectangle of a white pixel block detected from a table region included in a scanned ...
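The "number of duplications" detection in the claims is essentially a vote: within a search band around one side of the recognized cell, every edge pixel votes for the line it lies on, and the line with the most votes wins. The toy below restricts this to axis-aligned rows; the edge pixels are fabricated.

```python
# Toy ruled-line detection: edge pixels vote for their row inside a search
# band around one side of the recognized cell; the best-supported row is
# taken as the actual ruled line, which may be shifted from the recognized
# position. Real code would handle all four sides and non-axis-aligned lines.

def detect_line_row(edges, band):
    """edges: set of (row, col) edge pixels; band: iterable of rows."""
    votes = {r: 0 for r in band}
    for r, c in edges:
        if r in votes:
            votes[r] += 1
    return max(votes, key=votes.get)

# Recognized cell's top side sits at row 10, but the real ruled line is at 12:
edges = {(12, c) for c in range(3, 40)} | {(9, 5), (11, 20)}   # + noise
row = detect_line_row(edges, band=range(8, 14))
```

Deletion then targets the region bounded by the four detected lines rather than the shifted recognized cell.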

07-01-2021 publication date

LIVE CUBE PREVIEW ANIMATION

Number: US20210005002A1
Assignee:

Rendering potential collisions between virtual objects and physical objects if animations are implemented. A method includes receiving user input selecting a virtual object to be animated. The method further includes receiving user input selecting an animation path for the virtual object. The method further includes receiving user input placing the virtual object to be animated and the animation path in an environment including physical objects. The method further includes, prior to animating the virtual object, displaying the virtual object and the animation path in a fashion that shows the interaction of the virtual object with one or more physical objects in the environment.

1. A computer system comprising:
one or more processors; and
one or more computer-readable media having stored thereon instructions that are executable by the one or more processors to configure the computer system to render potential collisions between virtual objects and physical objects if animations are implemented, including instructions that are executable to configure the computer system to perform at least the following:
receiving user input selecting a virtual object to be animated;
receiving user input selecting an animation path for the virtual object;
receiving user input placing the virtual object to be animated and the animation path in an environment including physical objects; and
prior to animating the virtual object, displaying the virtual object and the animation path in a fashion that shows the interaction of the virtual object with one or more physical objects in the environment.
2. The computer system of claim 1, wherein the one or more computer-readable media further have stored thereon instructions that are executable by the one or more processors to configure the computer system to perform at least the following:
detecting a potential collision between the virtual object and a physical object if the animation were performed; and
as a result, highlighting the ...
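Previewing collisions before playing the animation, as above, amounts to sampling the path and testing each sample against the physical objects. The sketch treats both the animated object and the obstacles as spheres; the path, radii and obstacle placement are invented.

```python
# Sketch of the preview: sample the animation path, test each sample against
# physical obstacles (spheres), and return the sample indices that would
# collide so they can be highlighted before the animation runs.

def colliding_samples(path, obj_radius, obstacles):
    """obstacles: list of (center, radius); returns colliding indices."""
    hits = []
    for i, (x, y, z) in enumerate(path):
        for (cx, cy, cz), r in obstacles:
            d2 = (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
            if d2 < (obj_radius + r) ** 2:
                hits.append(i)
                break
    return hits

path = [(t / 4.0, 0.0, 0.0) for t in range(9)]   # straight line, x from 0..2
wall = [((1.0, 0.0, 0.0), 0.3)]                  # one physical obstacle
hits = colliding_samples(path, 0.1, wall)        # middle of the path collides
```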

07-01-2021 publication date

APPARATUS AND METHOD FOR EFFICIENTLY STORING RAY TRAVERSAL DATA

Number: US20210005009A1
Assignee:

Apparatus and method for preventing re-traversal of a prior path on a restart. For example, one embodiment of an apparatus comprises: a ray generator to generate a plurality of rays in a first graphics scene; a bounding volume hierarchy (BVH) generator to construct a BVH comprising a plurality of hierarchically arranged nodes, wherein the BVH comprises a specified number of child nodes at a current BVH level beneath a parent node in the hierarchy; traversal/intersection circuitry to traverse one or more of the rays through the hierarchically arranged nodes of the BVH to form a current traversal path and intersect the one or more rays with primitives contained within the nodes; and traversal tracking circuitry to maintain a path encoding array to store path data related to the current traversal path, the path data comprising an index of a currently traversed child node; wherein the traversal/intersection circuitry is to prevent one or more subsequent rays from re-intersecting primitives from which they originated and/or avoid re-traversing the current traversal path based on the path data in the path encoding array. 1. An apparatus comprising:a ray generator to generate a plurality of rays in a first graphics scene;a bounding volume hierarchy (BVH) generator to construct a BVH comprising a plurality of hierarchically arranged nodes, wherein the BVH comprises a number of child nodes at a current BVH level beneath a parent node in the BVH; andcircuitry to traverse one or more of the rays through the hierarchically arranged nodes of the BVH forming a current traversal path and to intersect the one or more of the rays with primitives contained within the nodes, wherein the circuitry is to process entries from the top of a first data structure comprising entries each associated with a child node at the current BVH level, the entries in the first data structure being ordered from top to bottom based on a sorted distance of each respective child node.2. The apparatus of ...
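The claim's "entries ordered from top to bottom based on a sorted distance of each respective child node" can be sketched as a small sorted stack: hit children are pushed so that the nearest child is popped and traversed first. This is an interpretive software sketch, not the claimed hardware; nodes are reduced to (name, entry_distance) pairs, and a real traverser would compute entry distance from a ray/AABB test.

```python
# Distance-ordered short stack for BVH traversal: push all hit children of
# the current node at once, keep the stack sorted farthest-first so pop()
# always yields the nearest unvisited child.

class TraversalStack:
    def __init__(self):
        self._items = []                       # kept sorted, farthest first

    def push_children(self, children):
        """children: iterable of (node_name, entry_distance)."""
        self._items.extend(children)
        self._items.sort(key=lambda c: -c[1])  # nearest ends up on top

    def pop(self):
        return self._items.pop()

stack = TraversalStack()
stack.push_children([("B", 4.0), ("A", 1.5), ("C", 2.75)])
order = [stack.pop()[0] for _ in range(3)]     # nearest-first processing
```

Recording which child index is currently being traversed, per level, is what the path encoding array adds on top of this, so a restart can skip the already-visited prefix.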

02-01-2020 publication date

Compensating for Camera Pose in a Spherical Image

Number: US20200005433A1
Assignee:

A method and apparatus for identifying the orientation of an image in a spherical format. A spherical format is created of an image obtained by a camera, the spherical format comprising a notional sphere that has a centre corresponding to the position from which the image was obtained by the camera. A first surface represented in the image had a first orientation and was at a first distance from the camera when the image was obtained. A plurality of lines are obtained in the spherical format, each line defined by two endpoints in spherical coordinates, wherein each line intersects with at least one other line. For each of a plurality of rotational definitions of the sphere, the spherical coordinates of the endpoints are transformed into a Cartesian coordinate system relative to the rotational definition, thereby creating rotated lines, and a cumulative deviation, from a predetermined angle, is determined of angles between intersecting rotated lines. A preferred rotational definition is identified as the rotational definition that has the smallest cumulative deviation, and it can be used to adjust calculations carried out with respect to spherical coordinates within the spherical format. 1. A method of identifying the orientation of an image in a spherical format , comprising the steps of:creating a spherical format of an image obtained by a camera, wherein said spherical format comprises a notional sphere that has a centre corresponding to the position from which said image was obtained by the camera, and wherein a first surface represented in said image had a first orientation and was at a first distance from the camera when the image was obtained;obtaining a plurality of lines in said spherical format, each line defined by two endpoints in spherical coordinates, wherein each line intersects with at least one other line;for each of a plurality of rotational definitions of said sphere:transforming the spherical coordinates of said endpoints into a Cartesian ...

04-01-2018 publication date

SYSTEM AND METHOD FOR PROCESSING DIGITAL VIDEO

Number: US20180005449A1
Assignee:

A computer-implemented method of displaying frames of digital video is provided. The method includes processing contents in one or more predetermined regions of the frames to detect predetermined non-image data. In the event that the predetermined non-image data is undetected within the one or more predetermined regions of a particular frame being processed, subjecting the particular frame to a predetermined texture-mapping onto a predetermined geometry and displaying the texture-mapped frame; and otherwise subjecting the particular frame to cropping to remove the non-image data, flat-projecting the cropped frame and displaying the flat-projected cropped frame. A computer-implemented method of processing digital video is also provided. The method includes causing frames of the digital video to be displayed; for a period beginning prior to an estimated time of display of an event-triggering frame: processing contents in one or more predetermined regions of the frames to detect predetermined non-image data therefrom. In the event that the predetermined non-image data is undetected within the one or more predetermined regions in a particular frame being processed, deeming the particular frame to be the event-triggering frame and executing one or more events associated with the event-triggering frame at the time of display of the event-triggering frame.

1. A computer-implemented method of displaying frames of digital video, the method comprising:
processing contents in one or more predetermined regions of the frames to detect predetermined non-image data;
in the event that the predetermined non-image data is undetected within the one or more predetermined regions of a particular frame being processed, subjecting the particular frame to a predetermined texture-mapping onto a predetermined geometry and displaying the texture-mapped frame; and otherwise:
subjecting the particular frame to cropping to remove the non-image data, flat-projecting the cropped frame and displaying the flat-projected cropped frame.
...

02-01-2020 publication date

THREE-DIMENSIONAL BOUNDING BOX FROM TWO-DIMENSIONAL IMAGE AND POINT CLOUD DATA

Number: US20200005485A1
Assignee:

A three-dimensional bounding box is determined from a two-dimensional image and a point cloud. A feature vector associated with the image and a feature vector associated with the point cloud may be passed through a neural network to determine parameters of the three-dimensional bounding box. Feature vectors associated with each of the points in the point cloud may also be determined and considered to produce estimates of the three-dimensional bounding box on a per-point basis.

1. (canceled)
2. A computer-implemented method comprising:
receiving sensor data comprising a plurality of measurements of an environment;
inputting at least a portion of the sensor data into a machine learned model;
determining, as a first feature vector and based at least in part on a first portion of the machine learned model, a first set of values associated with a measurement of the plurality of measurements;
determining, as a second feature vector and based at least in part on a second portion of the machine learned model, a second set of values associated with the plurality of measurements;
combining, as a combined feature vector, the first feature vector and the second feature vector;
inputting the combined feature vector into a third portion of the machine learned model; and
receiving, from the third portion of the machine learned model, information associated with an object represented in the sensor data.
3. The computer-implemented method of claim 2, further comprising:
receiving, from an image sensor, image data of the environment;
determining a portion of the image data associated with the object;
determining, based at least in part on the portion of the image data associated with the object, a subset of the sensor data associated with the portion of the image data;
inputting the portion of the image data into a fourth portion of the machine learned model;
receiving, from the fourth portion of the machine learned model, an appearance feature vector; and
inputting the appearance feature vector ...

02-01-2020 publication date

SYNTHESIZING AN IMAGE FROM A VIRTUAL PERSPECTIVE USING PIXELS FROM A PHYSICAL IMAGER ARRAY WEIGHTED BASED ON DEPTH ERROR SENSITIVITY

Number: US20200005521A1
Assignee:

A method assigns weights to physical imager pixels in order to generate photorealistic images for virtual perspectives in real-time. The imagers are arranged in three-dimensional space such that they sparsely sample the light field within a scene of interest. This scene is defined by the overlapping fields of view of all the imagers or for subsets of imagers. The weights assigned to imager pixels are calculated based on the relative poses of the virtual perspective and physical imagers, properties of the scene geometry, and error associated with the measurement of geometry. This method is particularly useful for accurately rendering numerous synthesized perspectives within a digitized scene in real-time in order to create immersive, three-dimensional experiences for applications such as performing surgery, infrastructure inspection, or remote collaboration. 1. A method for synthesizing an image corresponding to a virtual perspective of a scene , the method comprising:obtaining a plurality of images of the scene from respective physical imagers;determining an initial predicted surface geometry of the scene;determining, for a virtual pixel of the image corresponding to the virtual perspective of the scene based on the initial predicted surface geometry, a plurality of candidate pixels from different ones of the physical imagers that are predicted to correspond to a same world point in the scene as the virtual pixel;detecting for each of the plurality of candidate pixels, respective sensitivities to an error in the initial predicted surface geometry;determining pixel weights for each of the plurality of candidate pixels based on the respective sensitivities; anddetermining a value for the virtual pixel based on a weighted combination of the plurality of candidate pixels using the pixel weights.2. The method of claim 1 , wherein determining the respective sensitivities comprises:determining a plurality of error surfaces corresponding to different depth errors in the ...
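The weighting step above — candidate pixels whose values are most sensitive to a depth error get the least weight — can be sketched with inverse-sensitivity weights normalized to sum to one. The sensitivities and pixel values below are made-up numbers, not outputs of any real geometry pipeline.

```python
# Sketch of sensitivity-based pixel weighting for view synthesis: each
# candidate pixel i has a sensitivity s_i (how much its value changes under
# a depth error); weights are proportional to 1/s_i and normalized, and the
# virtual pixel is the weighted combination.

def sensitivity_weights(sens, eps=1e-6):
    inv = [1.0 / (s + eps) for s in sens]     # eps guards against s == 0
    total = sum(inv)
    return [w / total for w in inv]

def synthesize_pixel(values, sens):
    w = sensitivity_weights(sens)
    return sum(v * wi for v, wi in zip(values, w))

weights = sensitivity_weights([0.1, 0.4, 0.4])      # first pixel most robust
pixel = synthesize_pixel([10.0, 20.0, 30.0], [0.1, 0.4, 0.4])
```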

02-01-2020 publication date

APPARATUS AND METHOD FOR CONSTRUCTING A VIRTUAL 3D MODEL FROM A 2D ULTRASOUND VIDEO

Number: US20200005528A1
Assignee:

A method for creating a three-dimensional image of an object from a two-dimensional ultrasound video is provided. The method includes acquiring a plurality of two-dimensional ultrasound images of the object and recording a plurality of videos based on the acquired two-dimensional ultrasound images. Each of the videos includes a plurality of frames. The method further includes separating each of the plurality of frames, cropping each of the plurality of frames to isolate structures intended to be reconstructed, selecting a frame near a center of the object and rotating the image to create a main horizontal landmark, and aligning each frame to the main horizontal landmark. The method also includes removing inter-frame jitter by aligning each of the plurality of frames relative to a previous frame of the plurality of frames, reducing the noise of each of the frames, and stacking each of the frames into a three-dimensional volume.

1. A method for creating a three-dimensional image of an object from a two-dimensional ultrasound video, the method comprising:
acquiring a plurality of two-dimensional ultrasound images of the object;
recording a plurality of videos based on the acquired two-dimensional ultrasound images, each of the plurality of videos comprising a plurality of frames;
separating each of the plurality of frames;
cropping each of the plurality of frames to isolate structures intended to be reconstructed;
selecting a frame near a center of the object and rotating the image to create a main horizontal landmark;
aligning each of the plurality of frames to the main horizontal landmark; and
stacking each of the aligned plurality of frames into a three-dimensional volume.
2. The method according to claim 1, further comprising:
removing inter-frame jitter by aligning each of the plurality of frames relative to a previous frame of the plurality of frames.
3. The method according to claim 1, further comprising:
reducing a noise of each of the plurality of frames.
4. The method ...
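The jitter-removal step above registers each frame against its predecessor before stacking. The toy below does this in one dimension: each "frame" is a row of samples, the compensating shift is the integer offset minimizing the sum of absolute differences, and the aligned rows stand in for the stacked 3-D volume. Frames and shifts are synthetic.

```python
# Toy inter-frame jitter removal: align each frame to the previous one by
# the integer shift minimizing SAD, then stack the aligned frames.

def best_shift(prev, cur, max_shift=3):
    def sad(shift):
        return sum(abs(prev[i] - cur[i + shift])
                   for i in range(max_shift, len(prev) - max_shift))
    return min(range(-max_shift, max_shift + 1), key=sad)

def align_and_stack(frames):
    volume = [frames[0]]
    for cur in frames[1:]:
        s = best_shift(volume[-1], cur)
        volume.append(cur[s:] + cur[:s])   # apply the compensating shift
    return volume

base = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0]      # bright structure mid-frame
jittered = base[-1:] + base[:-1]           # same frame, shifted right by one
vol = align_and_stack([base, jittered])    # second frame snaps back onto base
```

The same SAD-style registration, done on full 2-D frames, is what removes the jitter before the frames are stacked into the reconstructed volume.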

02-01-2020 publication date

REAL-TIME COLLISION DEFORMATION

Number: US20200005537A1
Assignee: DreamWorks Animation LLC

Systems and methods deforming a mesh of a target object in real-time in response to a collision with a collision object are disclosed. An embodiment includes determining an inwardly deformed position of a first vertex of the mesh based on an intersection point of a boundary associated with the collision object with a ray, the ray connecting a point of an internal element of the target object with a reference position of the first vertex, wherein the inwardly deformed position of the first vertex corresponds to a first deformation magnitude of the first vertex from the reference position to the inwardly deformed position. 1. A method for deforming a mesh of a target object in real-time in response to a collision with a collision object , the method comprising:determining an inwardly deformed position of a first vertex of the mesh based on an intersection point of a boundary associated with the collision object with a ray, the ray connecting a point of an internal element of the target object with a reference position of the first vertex,wherein the inwardly deformed position of the first vertex corresponds to a first deformation magnitude of the first vertex from the reference position to the inwardly deformed position.2. The method of claim 1 , further comprising:displaying the first vertex at the inwardly deformed position based on the intersection point of the boundary with the ray,wherein the inwardly deformed position of the first vertex is offset from the reference position by the first deformation magnitude.3. The method of claim 1 , further comprising:determining a second deformation magnitude of a second vertex of the mesh based on the first deformation magnitude, a geodesic distance between a reference position of the second vertex and the reference position of the first vertex, and a bulge magnitude based on the geodesic distance; anddetermining an outwardly deformed position of the second vertex based on the determined second deformation magnitude.4. 
The ...
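Geometrically, the inward deformation described above reduces to a ray-boundary intersection test. Below is a minimal sketch that assumes a spherical collision boundary (the patent allows any boundary) and hypothetical function names.

```python
import math

def deform_vertex_inward(internal_pt, vertex, sphere_center, radius):
    """Move the vertex toward the internal point if the collision sphere's
    boundary intersects the ray internal_pt -> vertex before the vertex."""
    d = [v - c for v, c in zip(vertex, internal_pt)]          # ray direction
    o = [c - s for c, s in zip(internal_pt, sphere_center)]   # ray origin offset
    a = sum(x * x for x in d)
    b = 2.0 * sum(x * y for x, y in zip(o, d))
    c = sum(x * x for x in o) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:                      # ray misses the boundary: no deformation
        return vertex, 0.0
    t = (-b - math.sqrt(disc)) / (2.0 * a)   # nearest intersection parameter
    if not (0.0 <= t < 1.0):          # boundary lies beyond the vertex
        return vertex, 0.0
    deformed = tuple(ip + t * dx for ip, dx in zip(internal_pt, d))
    magnitude = math.dist(vertex, deformed)  # "first deformation magnitude"
    return deformed, magnitude
```

The returned magnitude is exactly the offset from the reference position to the inwardly deformed position, which claim 3 reuses to drive the outward bulge of neighboring vertices.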

03-01-2019 publication date

APPARATUS HAVING A DIGITAL INFRARED SENSOR

Number: US20190005642A1
Assignee: Arc Devices, LTD

An apparatus that senses temperature from a digital infrared sensor is described. A digital signal representing a temperature, without conversion from analog, is transmitted from the digital infrared sensor, received by a microprocessor, and converted to body core temperature by the microprocessor.

1. A device comprising: a first circuit board including: a microprocessor; a battery that is operably coupled to the microprocessor; a display device that is operably coupled to the microprocessor; and a first digital interface that is operably coupled to the microprocessor; and a second circuit board including: a second digital interface that is operably coupled to the first digital interface; and a digital infrared sensor that is operable to receive an infrared signal, the digital infrared sensor also being operably coupled to the second digital interface, the digital infrared sensor having ports that provide digital readout signals that are representative of the infrared signal that is received by the digital infrared sensor, wherein the microprocessor is operable to receive from the ports of the digital infrared sensor the digital readout signals that are representative of the infrared signal and the microprocessor is operable to determine a temperature from the digital readout signals that are representative of the infrared signal, and wherein no analog-to-digital converter is operably coupled between the digital infrared sensor and the microprocessor.
2. The device of claim 1, wherein the display device further comprises: a green traffic light operable to indicate that the temperature is good; an amber traffic light operable to indicate that the temperature is low; and a red traffic light operable to indicate that the temperature is high.
3. The device of claim 1, further comprising: the digital infrared sensor having no analog sensor readout ports.
4. The device of claim 1, further comprising: a camera that is operably coupled to the microprocessor and providing ...

03-01-2019 publication date

METHODS AND APPARATUS TO DEFINE AUGMENTED CONTENT REGIONS FOR AUGMENTED REALITY SYSTEMS

Number: US20190005695A1
Assignee:

Methods and apparatus to generate augmented content regions for augmented reality (AR) systems are disclosed. An example method includes receiving from a plurality of AR devices data representing a plurality of sight lines captured using the plurality of AR devices, identifying a plurality of commonalities of the plurality of sight lines based on the data representing the plurality of sight lines, and defining an augmented content region based on the plurality of commonalities. 1. A method , comprising:receiving from a plurality of augmented reality (AR) devices data representing a plurality of sight lines captured using the plurality of AR devices;identifying a plurality of commonalities of the plurality of sight lines based on the data representing the plurality of sight lines; anddefining an augmented content region based on the plurality of commonalities.2. The method of claim 1 , wherein identifying the plurality of commonalities includes determining a plurality of intersections of the plurality of sight lines.3. The method of claim 1 , wherein identifying the plurality of commonalities includes determining a plurality of overlaps of the plurality of sight lines.4. The method of claim 1 , further including associating AR content with the defined augmented content region.5. The method of claim 4 , further including providing the AR content to an additional AR device when an additional sight line captured using the additional AR device intersects a surface of the defined augmented content region.6. The method of claim 4 , further including receiving the AR content.7. The method of claim 4 , further including:providing a notification of the augmented content region; andproviding a request for the AR content.8. The method of claim 4 , further including creating the AR content based on an image taken of at least one of a part of the augmented content region claim 4 , the augmented content region claim 4 , near the augmented content region claim 4 , or inside the ...
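The commonality step above can be sketched as pairwise 2-D line intersections whose centroid seeds the augmented content region. The region definition (centroid plus largest spread as a radius) and all names are illustrative assumptions, not the patent's method.

```python
def intersect(p1, p2, p3, p4):
    """Intersection point of line p1-p2 with line p3-p4, or None if parallel."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(den) < 1e-12:
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def content_region(sight_lines):
    """Commonalities = pairwise intersections; the region is their centroid
    plus the largest coordinate spread as a radius."""
    pts = []
    for i in range(len(sight_lines)):
        for j in range(i + 1, len(sight_lines)):
            p = intersect(*sight_lines[i], *sight_lines[j])
            if p is not None:
                pts.append(p)
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    r = max(max(abs(p[0] - cx), abs(p[1] - cy)) for p in pts)
    return (cx, cy), r
```

Three sight lines converging on one point yield a degenerate (zero-radius) region centered on that point.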

03-01-2019 publication date

TECHNOLOGIES FOR TIME-DELAYED AUGMENTED REALITY PRESENTATIONS

Number: US20190005723A1
Assignee:

Technologies for time-delayed augmented reality (AR) presentations include determining a location of a plurality of user AR systems located within a presentation site and determining a time delay of an AR sensory stimulus event of an AR presentation to be presented in the presentation site for each user AR system based on the location of the corresponding user AR system within the presentation site. The AR sensory stimulus event is presented to each user AR system based on the determined time delay associated with the corresponding user AR system. Each user AR system generates the AR sensory stimulus event based on a timing parameter that defines the time delay for the corresponding user AR system such that the generation of the AR sensory stimulus event is time-delayed based on the location of the user AR system within the presentation site.

1. An augmented reality (AR) server for presenting a time-delayed AR presentation, the AR server comprising: a user location mapper to determine a location of a plurality of user AR systems located within a presentation site; and an AR presentation manager to (i) identify an AR sensory stimulus event of an AR presentation to be presented within the presentation site, (ii) determine a time delay of the AR sensory stimulus event for each user AR system based on the location of the corresponding user AR system within the presentation site, and (iii) present the AR sensory stimulus event to each user AR system based on the determined time delay associated with the corresponding user AR system.
2. The AR server of claim 1, further comprising a master network clock, wherein to determine the time delay of the AR sensory stimulus event comprises to synchronize a network clock of each user AR system to the master network clock.
3. The AR server of claim 1, wherein to determine the time delay of the AR sensory stimulus event comprises to determine a time delay of the AR sensory stimulus event for each user AR system based on a ...

05-01-2017 publication date

COLLABORATIVE PRESENTATION SYSTEM

Number: US20170006260A1
Assignee: Microsoft Technology Licensing, LLC

Embodiments of collaborative presentation systems are provided. An example collaborative presentation system includes a display device, an image sensor, a network interface, a logic device, and a storage device holding instructions executable by the logic device to retrieve a presentation file that is executed to display a presentation on the display device, receive image frames from the image sensor, the image frames including the display device, the displayed presentation, and a presenter, and extract the presenter from the image frames to generate an extracted presenter image. The instructions are further executable to adjust an appearance of the extracted presenter image to form an adjusted presenter image, generate an updated presentation file, the updated presentation file being executable to display the presentation overlaid with the adjusted presenter image, and transmit, via the network interface, the updated presentation file to a remote presentation participant device.

1. A collaborative presentation system comprising: a display device; an image sensor configured to image a region including the display device; a network interface; a logic device; and a storage device holding instructions executable by the logic device to: retrieve a presentation file that is executed to display a presentation on the display device, the presentation file including one or more displayable elements; receive a plurality of image frames from the image sensor, each image frame including an image of the display device, an image of a currently-displayed displayable element of the presentation, and an image of a presenter; for each image frame of the plurality of received image frames: extract the image of the presenter from the image frame to generate an extracted presenter image, adjust an appearance of the extracted presenter image to form an adjusted presenter image, and overlay the adjusted presenter image over the currently-displayed displayable element of the presentation; generate an updated presentation file, the updated presentation ...

20-01-2022 publication date

Method and Apparatus for Vertex Reconstruction based on Terrain Cutting, Processor and Terminal

Number: US20220016526A1
Author: Kunda ZHONG, Yongsheng Ye
Assignee: Netease Hangzhou Network Co Ltd

A method and apparatus for vertex reconstruction based on terrain cutting, a processor and a terminal are provided. The method includes: acquiring position information of a unit block to be removed, where terrain resources in a game scene are cut into multiple terrain chunks and each terrain chunk is cut into multiple unit blocks; determining vertex data of at least one adjacent unit block of the unit block to be removed according to the position information; and reconstructing at least one triangular patch to be rendered according to the vertex data. The present disclosure solves the technical problem that the terrain change modes provided in the related art are confined to a single plane region and therefore lack flexibility and realism.

08-01-2015 publication date

REAL-TIME 3D COMPUTER VISION PROCESSING ENGINE FOR OBJECT RECOGNITION, RECONSTRUCTION, AND ANALYSIS

Number: US20150009214A1
Assignee:

Methods and systems are described for generating a three-dimensional (3D) model of a fully-formed object represented in a noisy or partial scene. An image processing module of a computing device receives images captured by a sensor. The module generates partial 3D mesh models of physical objects in the scene based upon analysis of the images, and determines a location of at least one target object in the scene by comparing the images to one or more 3D reference models and extracting a 3D point cloud of the target object. The module matches the 3D point cloud of the target object to a selected 3D reference model based upon a similarity parameter, and detects one or more features of the target object. The module generates a fully formed 3D model of the target object using partial or noisy 3D points from the scene, extracts the detected features of the target object and features of the 3D reference models that correspond to the detected features, and calculates measurements of the detected features. 1. A computerized method for generating a fully-formed three-dimensional (3D) model of an object represented in a noisy or partial scene , the method comprising:receiving, by an image processing module of a computing device, a plurality of images captured by a sensor coupled to the computing device, the images depicting a scene containing one or more physical objects;generating, by the image processing module, partial 3D mesh models of the physical objects in the scene based upon analysis of the plurality of images;determining, by the image processing module, a location of at least one target object in the scene by comparing the plurality of images to one or more 3D reference models and extracting a 3D point cloud of the target object;matching, by the image processing module, the 3D point cloud of the target object to a selected 3D reference model based upon a similarity parameter;detecting, by the image processing module, one or more features of the target object based ...
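The matching step above pairs an extracted point cloud with the most similar 3D reference model under a "similarity parameter". The excerpt does not specify the metric, so the sketch below assumes a mean nearest-neighbor distance and a hypothetical dissimilarity threshold.

```python
import math

def mean_nearest_distance(cloud, reference):
    """Mean distance from each cloud point to its nearest reference point."""
    total = 0.0
    for p in cloud:
        total += min(math.dist(p, q) for q in reference)
    return total / len(cloud)

def match_reference_model(cloud, reference_models, max_dissimilarity=0.5):
    """Pick the reference model most similar to the extracted point cloud,
    or None if even the best candidate is too dissimilar."""
    scored = [(mean_nearest_distance(cloud, pts), name)
              for name, pts in reference_models.items()]
    score, name = min(scored)
    return name if score <= max_dissimilarity else None
```

A production matcher would use an ICP-style alignment before scoring; this sketch only illustrates the select-by-similarity decision.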

20-01-2022 publication date

METHODS AND SYSTEMS FOR CONSTRUCTING RAY TRACING ACCELERATION STRUCTURES

Number: US20220020200A1
Author: Fenney Simon
Assignee:

A computer-implemented method of creating a bounding volume hierarchy (BVH) for a model defined with respect to a local coordinate system for the model. The method includes defining BVH branch nodes within the model; establishing a plurality of local transformation matrices for the BVH; and, for each BVH branch node, determining a first bounding volume and associating the branch node with one of the plurality of local transformation matrices that maps between the first bounding volume and a second bounding volume in the local coordinate system.

1. A computer-implemented method of creating a bounding volume hierarchy (BVH) for a model defined with respect to a local coordinate system for the model, the method comprising: defining a plurality of BVH nodes within the model; establishing a plurality of local transformation matrices for the BVH; and for each of the plurality of BVH nodes, determining a first bounding volume, and associating the node with one of the plurality of local transformation matrices that maps between the first bounding volume and a second bounding volume in the local coordinate system.
2. The method according to claim 1, wherein the plurality of local transformation matrices are a fixed set of matrices for the model that are predetermined before defining the BVH, or a fixed set of matrices for the model that are determined, at least in part, based on an analysis of the plurality of BVH nodes, wherein optionally the plurality of local transformation matrices each represent a different, optionally affine, mapping.
3. The method according to claim 1, wherein determining a first bounding volume comprises selecting a bounding volume from a set of candidate bounding volumes.
4. The method according to claim 3, wherein each candidate bounding volume is associated with a different one of the plurality of local transformation matrices.
5. The method according to claim 4, wherein selecting comprises ...
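Associating each node with one matrix from a fixed candidate set can be sketched in 2-D: pick the candidate whose frame yields the tightest axis-aligned bound. The two-matrix candidate set and the area criterion are assumptions for illustration, not the patent's choices.

```python
import math

s = math.sqrt(0.5)
ROT_0 = ((1.0, 0.0), (0.0, 1.0))          # identity frame
ROT_45 = ((s, -s), (s, s))                # 45-degree rotated frame
CANDIDATE_MATRICES = [ROT_0, ROT_45]      # fixed set established per model

def apply(m, p):
    return (m[0][0] * p[0] + m[0][1] * p[1],
            m[1][0] * p[0] + m[1][1] * p[1])

def aabb_area(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def choose_local_matrix(node_points):
    """Return the index of the candidate matrix whose frame gives the
    smallest axis-aligned bounding area for this node's geometry."""
    return min(range(len(CANDIDATE_MATRICES)),
               key=lambda i: aabb_area([apply(CANDIDATE_MATRICES[i], p)
                                        for p in node_points]))
```

Diagonal geometry favors the rotated frame; axis-aligned geometry keeps the identity.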

27-01-2022 publication date

NON-BLOCKING TOKEN AUTHENTICATION CACHE

Number: US20220028160A1
Assignee:

Techniques are disclosed relating to a non-blocking token authentication cache. In various embodiments, a server computer system receives a request for service from a client device, with the request including an authentication token issued by an authentication service. The server computer system accesses a cache of previously received validation responses from the authentication service to determine whether one of the validation responses indicates that the authentication token has already been validated by the authentication service. In response to determining that the cache includes a validation response indicating that the authentication token has already been validated by the authentication service, the server computer system first provides a response to the request for service to the client device, and then contacts the authentication service to determine whether the authentication token is still valid.

1. A method, comprising: receiving, by a server computer system from a client device, a request for service, wherein the request includes an authentication token issued by an authentication service; accessing, at the server computer system, a cache of previously received validation responses from the authentication service to determine whether one of the validation responses indicates that the authentication token has already been validated by the authentication service; and in response to determining that the cache includes a validation response indicating that the authentication token has already been validated by the authentication service, the server computer system: providing, to the client device, a response to the request for service, wherein the response is provided based on the validation response and not on token validity information about the authentication token that is stored by the authentication service; and contacting the authentication service to determine whether the authentication token is still valid and should be revalidated.
2. The method ...
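The serve-first, revalidate-after behavior can be sketched as follows. `FakeAuthService`, the string responses, and the synchronous revalidation call are all stand-ins for illustration; a real server would revalidate asynchronously after flushing the response.

```python
class FakeAuthService:
    """Stand-in authentication service (assumption for the sketch)."""
    def __init__(self):
        self.calls = 0
        self.valid_tokens = {"tok-1"}

    def validate(self, token):
        self.calls += 1
        return token in self.valid_tokens


class TokenCacheServer:
    """Answer from the validation cache first, then revalidate the token."""
    def __init__(self, auth_service):
        self.auth = auth_service
        self.cache = {}                    # token -> cached validation response

    def handle_request(self, token):
        if self.cache.get(token) == "valid":
            response = "service-response"       # respond without blocking on auth
            if not self.auth.validate(token):   # then contact the auth service
                del self.cache[token]           # drop tokens revoked meanwhile
            return response
        # Cache miss: the first request must block on the authentication service.
        if self.auth.validate(token):
            self.cache[token] = "valid"
            return "service-response"
        return "unauthorized"
```

Note the trade-off this illustrates: a just-revoked token is honored one last time from the cache, after which the stale entry is evicted.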

11-01-2018 publication date

Method and apparatus for receiving a broadcast radio service offer from an image

Number: US20180012097A1
Assignee: Blinker Inc

Some aspects of the invention relate to a mobile apparatus including an image sensor configured to convert an optical image into an electrical signal. The optical image includes an image of a vehicle license plate. The mobile apparatus includes a license plate detector configured to process the electrical signal to recover information from the vehicle license plate image. The mobile apparatus includes an interface configured to transmit the vehicle license plate information to a remote apparatus and receive a broadcast radio service offer corresponding to the vehicle license plate image in response to the transmission.

14-01-2021 publication date

Image Evaluation and Dynamic Cropping System

Number: US20210012132A1
Assignee:

Systems for image evaluation and dynamic cropping are provided. In some examples, a system may receive an instrument or image of an instrument. Identifying information may be extracted from the instrument or image of the instrument. Based on the extracted identifying information, a check/check image profile may be retrieved. In some examples, expected size and/or shape data may be extracted from the check/check image profile. The extracted expected size and/or shape data may be compared to size and/or shape data from the received instrument or image of the instrument to identify any anomalies (e.g., to determine whether the expected size and/or shape data matches the size and/or shape data of the received instrument or image of the instrument). If the expected size and/or shape data does not match size and/or shape data from the received instrument or image of the instrument, the instrument or image of the instrument may be programmatically modified and a modified image of the instrument may be generated.

1. A computing platform, comprising: at least one processor; a communication interface communicatively coupled to the at least one processor; and memory storing instructions that, when executed by the at least one processor, cause the computing platform to: receive an image of a document; extract, from the received image of the document, identifying information; retrieve, based on the extracted identifying information, a document profile; extract, from the document profile, expected data of the document; compare data of the document in the received image of the document to the extracted expected data of the document; determine, based on the comparing, whether an anomaly exists between the data of the document in the received image and the extracted expected data of the document; responsive to determining that an anomaly does not exist, evaluate validity of the document based on the image of the document; and responsive to determining that an anomaly exists: programmatically modify, based on one or more machine learning datasets, the received image of the document; and generate a modified image of the ...

10-01-2019 publication date

METHOD AND APPARATUS FOR IDENTIFYING FRAGMENTED MATERIAL PORTIONS WITHIN AN IMAGE

Number: US20190012768A1
Assignee: Motion Metrics International Corp.

A method and apparatus for processing an image of fragmented material to identify fragmented material portions within the image are disclosed. The method involves receiving pixel data associated with an input plurality of pixels representing the image of the fragmented material. The method also involves processing the pixel data using a convolutional neural network, the convolutional neural network having a plurality of layers and producing a pixel classification output indicating whether pixels in the input plurality of pixels are located at one of an edge of a fragmented material portion, inwardly from the edge, and at interstices between fragmented material portions. The convolutional neural network includes at least one convolution layer configured to produce a convolution of the input plurality of pixels, the convolutional neural network having been previously trained using a plurality of training images including previously identified fragmented material portions. The method further involves processing the pixel classification output to associate identified edges with fragmented material portions.

1. A method for processing an image of fragmented material to identify fragmented material portions within the image, the method comprising: receiving pixel data associated with an input plurality of pixels representing the image of the fragmented material; processing the pixel data using a convolutional neural network, the convolutional neural network having a plurality of layers and producing a pixel classification output indicating whether pixels in the input plurality of pixels are located at one of: an edge of a fragmented material portion; inward from the edge of a fragmented material portion; and an interstice between fragmented material portions; wherein the convolutional neural network includes at least one convolution layer configured to produce a convolution of the input plurality of pixels, the convolutional neural network having been previously ...

12-01-2017 publication date

Image Taping in a Multi-camera Array

Number: US20170013209A1
Assignee:

Multiple cameras are arranged in an array at a pitch, roll, and yaw that allow the cameras to have adjacent fields of view such that each camera is pointed inward relative to the array. The read window of an image sensor of each camera in a multi-camera array can be adjusted to minimize the overlap between adjacent fields of view, to maximize the correlation within the overlapping portions of the fields of view, and to correct for manufacturing and assembly tolerances. Images from cameras in a multi-camera array with adjacent fields of view can be manipulated using low-power warping and cropping techniques, and can be taped together to form a final image. 1. A method comprising:capturing a plurality of images with each camera in a camera array comprising a plurality of cameras, each image comprising at least one portion overlapping with a corresponding portion of a corresponding image;aligning overlapping portions of corresponding images to produce a set of aligned images;for each aligned image, performing a warp operation on the aligned image to produce a warped image, wherein a magnitude of the warp operation on a portion of the aligned image increases with an increase in distance from the portion of the aligned image to the nearest overlapping portion of the aligned image;taping each warped image together based on the overlapping portions of the warped images to form a combined image; andcropping the combined image to produce a rectangular final image.2. The method of claim 1 , wherein the plurality of images is captured synchronously.3. The method of claim 1 , wherein the warp operation comprises a plurality of local warps performed on portions of each aligned image.4. The method of claim 1 , wherein a center of the warp operation performed on an aligned image is based on an overlapping portion of the image.5. 
The method of claim 1 , wherein aligning overlapping portions of corresponding images comprises aligning an object of interest within overlapping portions ...
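The warp rule above, with magnitude growing with distance from the nearest overlapping portion, can be sketched per image column. The linear ramp, normalization, and parameter names are assumptions; the patent states only that the magnitude increases with distance.

```python
def warp_shift(col, overlap_start, max_shift):
    """Per-column shift applied during warping: zero at the overlapping
    portion (columns >= overlap_start) and growing linearly with distance
    from it, up to max_shift at the far edge of the image."""
    distance = max(0, overlap_start - col)   # distance to nearest overlap
    return max_shift * distance / overlap_start
```

Applying a shift profile like this keeps the overlapping seam untouched, so taping adjacent warped images remains pixel-accurate at the join.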

09-01-2020 publication date

METHOD AND ELECTRONIC DEVICE FOR GENERATING AN INDEX OF SEGMENTS OF POLYGONS

Number: US20200013167A1
Assignee:

A method and electronic device for generating an index of segments of a polygon are disclosed. The method comprises segmenting a reference zone, which covers at least a portion of a map enclosing all segments of the polygon, into first level zones. Responsive to at least one segment being at least partially located within more than one first level zone, the method comprises indexing the at least one segment in association with the reference zone. The method also comprises, until a terminal condition is met, iteratively: (i) segmenting a given zone into subsequent level zones, where the given zone is a parent zone to the subsequent level zones, and (ii) responsive to at least one other segment being at least partially located within more than one subsequent level zone, indexing the at least one other segment in association with the given zone.

2. The method of claim 1, wherein, in response to the terminal condition being met, the method further comprises: indexing, by the electronic device, the segments located within the only one respective subsequent level zone in association with the only one respective subsequent level zone.
3. The method of claim 1, wherein the terminal condition is met when at least one of: a number of pixels included in at least one lowest level zone is below a pre-determined minimum number of pixels to be enclosed in the at least one lowest level zone; and a number of segments located entirely in the at least one lowest level zone is below a pre-determined minimum number of segments to be located entirely in the at least one lowest level zone.
4. The method of claim 1, wherein the method further comprises: receiving, by the electronic device, an indication of a location of a target point on the map; a lowest level target zone corresponding to the location of the target point based on geo-markers of zones in the index, and the pre-determined direction having been determined based on a geographic association between child zones of a ...
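The zone-splitting rule, in which a segment spanning several child zones stays indexed at the parent, is essentially a quadtree insert. A sketch with an assumed terminal condition on zone width (the patent's terminal conditions involve pixel and segment counts):

```python
def quadrants(zone):
    """Split an axis-aligned zone (x0, y0, x1, y1) into four child zones."""
    x0, y0, x1, y1 = zone
    mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    return [(x0, y0, mx, my), (mx, y0, x1, my),
            (x0, my, mx, y1), (mx, my, x1, y1)]

def contains(zone, seg):
    """True if both endpoints of the segment fall inside the zone."""
    x0, y0, x1, y1 = zone
    return all(x0 <= x < x1 and y0 <= y < y1 for x, y in seg)

def index_segment(seg, zone, index, min_size=2.0):
    """Index seg at the smallest zone that fully contains it; a segment
    spanning several child zones stays at the parent zone."""
    if zone[2] - zone[0] <= min_size:          # terminal condition (assumed)
        index.setdefault(zone, []).append(seg)
        return
    inside = [q for q in quadrants(zone) if contains(q, seg)]
    if len(inside) == 1:
        index_segment(seg, inside[0], index, min_size)
    else:                                      # spans multiple child zones
        index.setdefault(zone, []).append(seg)
```

A segment crossing the zone midlines stays at the reference (root) zone, while a short segment descends to its lowest enclosing zone.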

09-01-2020 publication date

Efficient Label Insertion and Collision Handling

Number: US20200013210A1
Assignee:

Techniques are described for efficient label insertion and collision handling. A bounding geometry for a label to be graphically displayed on a display screen as part of an electronic map is determined, wherein the bounding geometry comprises a circle. The bounding geometry is inserted into a grid index, wherein the grid index represents a viewport of the electronic map. Disjoint regions of the grid index intersected by the bounding geometry are identified, wherein each disjoint region represents a different portion of the viewport. For each intersected disjoint region, it is identified whether there is at least one collision between the bounding geometry and one or more existing bounding geometries in the disjoint region; and responsive to identifying whether there is at least one collision in the intersected disjoint region, a target opacity of the label is set.

1. A computer-implemented method comprising: determining a bounding geometry for a label as part of an electronic map; inserting the determined bounding geometry into a grid index, wherein the grid index represents a viewport of the electronic map; identifying disjoint regions of the grid index intersected by the determined bounding geometry, wherein different disjoint regions represent different portions of the viewport; for one or more intersected disjoint regions: identifying a position of the determined bounding geometry within the intersected disjoint region; identifying positions of one or more existing bounding geometries in the disjoint region; and determining whether the identified position of the determined bounding geometry and the identified positions of at least one of the one or more existing bounding geometries overlap; and responsive to identifying that the determined bounding geometry overlaps at least one of the one or more existing bounding geometries, setting a target opacity of the label.
2. The method of claim 1, wherein the determined bounding geometry overlaps at least one of the ...
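The grid-index collision check can be sketched with circular bounding geometries hashed into fixed-size cells. The cell size, the 0.0/1.0 opacity values, and inserting even on collision are assumptions for illustration.

```python
import math

class LabelGrid:
    """Grid of disjoint cells over the viewport; labels are bounding circles."""
    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = {}                       # (i, j) -> list of (x, y, r)

    def _cells_for(self, x, y, r):
        """Cells intersected by the circle's axis-aligned bounding box."""
        c = self.cell_size
        return [(i, j)
                for i in range(int((x - r) // c), int((x + r) // c) + 1)
                for j in range(int((y - r) // c), int((y + r) // c) + 1)]

    def insert(self, x, y, r):
        """Insert a label's bounding circle; return its target opacity."""
        keys = self._cells_for(x, y, r)
        collided = any(math.hypot(x - ox, y - oy) < r + orr
                       for k in keys for ox, oy, orr in self.cells.get(k, []))
        for k in keys:
            self.cells.setdefault(k, []).append((x, y, r))
        return 0.0 if collided else 1.0
```

Only the cells a circle actually touches are scanned, which is the point of partitioning the viewport into disjoint regions.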

09-01-2020 publication date

CREATING MULTI-DIMENSIONAL OBJECT REPRESENTATIONS

Number: US20200013219A1
Assignee:

Objects can be rendered in three-dimensions and viewed and manipulated in an augmented reality environment. Background images are subtracted from object images from multiple viewpoints to provide baseline representations of the object. Morphological operations can be used to remove errors caused by misalignment of an object image and background image. Using two different contrast thresholds, pixels can be identified that can be said at two different confidence levels to be object pixels. An edge detection algorithm can be used to determine object contours. Low confidence pixels can be associated with the object if they can be connected to high confidence pixels without crossing an object contour. Segmentation masks can be created from high confidence pixels and properly associated low confidence pixels. Segmentation masks can be used to create a three-dimensional representation of the object. 1. A computer-implemented method comprising:under the control of one or more computer systems configured with executable instructions,capturing a background image for each of a plurality of cameras, a background image portraying a background;capturing a plurality of object images, including at least one object image for each of the plurality of cameras, an object image portraying a viewpoint of an object against the background;creating a difference image by subtracting the background image of the viewpoint from the at least one object image of the viewpoint;determining high confidence pixels, the high confidence pixels being pixels that exceed a first threshold contrast with background image;determining low confidence pixels, the low confidence pixels being pixels that exceed a second threshold contrast with the background image, the second threshold contrast being lower than the first threshold contrast;determining pixels associated with the object, including high confidence pixels and a subset of low confidence pixels; andcreating a plurality of segmentation masks ...
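The two-threshold rule described above is hysteresis thresholding: keep high-confidence pixels plus any low-confidence pixels connected to them. This sketch omits the object-contour check the text describes and uses 4-connectivity on a difference image; thresholds are assumptions.

```python
from collections import deque

def segmentation_mask(diff, high=50, low=20):
    """Keep pixels whose contrast exceeds `high`, plus pixels exceeding
    `low` that are 4-connected to them (hysteresis flood fill)."""
    rows, cols = len(diff), len(diff[0])
    seeds = [(r, c) for r in range(rows) for c in range(cols)
             if diff[r][c] >= high]
    mask, queue = set(seeds), deque(seeds)
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in mask and diff[nr][nc] >= low):
                mask.add((nr, nc))
                queue.append((nr, nc))
    return mask
```

Low-confidence pixels with no connected path to a high-confidence pixel are left out of the mask, mirroring the association rule in the abstract.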

15-01-2015 publication date

METHOD FOR CUTTING OUT CHARACTER, CHARACTER RECOGNITION APPARATUS USING THIS METHOD, AND PROGRAM

Number: US20150015603A1
Author: Fujieda Shiro
Assignee: Omron Corporation

A method for cutting out, from a gray-scale image generated by capturing an image of a character string, each character in the character string for recognition, includes a first step of repeating projection processing for projecting a highest or lowest gray level in a line along a direction crossing the character string in the gray-scale image, onto an axis along the character string, with the lowest gray level selected when a character in the gray-scale image is darker than a background, the highest gray level selected when the character in the gray-scale image is brighter than the background, and a projection target position moved along the character string. 1. A method for cutting out , from a gray-scale image generated by capturing an image of a character string , each character in the character string for recognition , the method comprising:a first step of repeating projection processing for projecting a highest or lowest gray level in a line along a direction crossing the character string in the gray-scale image, onto an axis along the character string, with the lowest gray level selected when a character in the gray-scale image is darker than a background, the highest gray level selected when the character in the gray-scale image is brighter than the background, and a projection target position moved along the character string;a second step of extracting a local maximum value and a local minimum value from a projected pattern generated by the first step, and setting, between a variable range of the local maximum value and a variable range of the local minimum value, a straight line inclined in accordance with variation of the values; anda third step of cutting out an image in a cut out target range in the gray-scale image with a range, in which a gray level higher than the straight line in the projected pattern is projected, set as the cut out target range when the highest gray level is projected in the first step, and a range, in which a gray level lower ...
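The projection step can be sketched for dark characters on a light background: project the per-column minimum gray level, then cut out column ranges where the projection stays dark. A fixed threshold stands in for the patent's inclined separating line between the local maxima and minima.

```python
def cut_out_characters(img, threshold=128):
    """Project the per-column minimum gray level (dark characters on a
    light background) and return the column ranges below the threshold."""
    projection = [min(row[c] for row in img) for c in range(len(img[0]))]
    ranges, start = [], None
    for c, g in enumerate(projection):
        if g < threshold and start is None:
            start = c                         # character run begins
        elif g >= threshold and start is not None:
            ranges.append((start, c - 1))     # character run ends
            start = None
    if start is not None:
        ranges.append((start, len(projection) - 1))
    return ranges
```

For bright characters on a dark background, the patent projects the maximum instead and cuts above the separating line; the run-extraction logic is symmetric.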

15-01-2015 publication date

IMAGE GENERATION DEVICE, CAMERA DEVICE, IMAGE DISPLAY DEVICE, AND IMAGE GENERATION METHOD

Номер: US20150015738A1
Автор: Kuwada Junya
Принадлежит: Panasonic Corporation

A camera device is provided with: an imaging unit for generating an area image obtained by shooting an area from above; and a display image generation unit for generating a display image of a target moving in the area using a clip image which is clipped from the area image. In this case, a rotation angle of a current frame is calculated on the basis of the rotation angle of the previous frame and a reference angle of the current frame. As a result, a rapid change in an orientation of the target displayed in the display image can be suppressed. 1. An image generation device for generating a clip image from a wide-angle image, comprising: a reference angle determination unit for obtaining a reference angle of a clip area in the wide-angle image; a rotation angle storage unit for storing a rotation angle of a previous clip image; a rotation angle calculation unit for obtaining the rotation angle with respect to the clip area on the basis of a previous rotation angle and the reference angle; and an image clip unit for generating the clip image with respect to the clip area on the basis of the rotation angle obtained by the rotation angle calculation unit, wherein the rotation angle calculation unit executes control so that a change amount of the rotation angle of the clip area does not exceed a predetermined angle. 2. The image generation device according to claim 1, wherein the change amount of the rotation angle of the clip area is a difference between the previous rotation angle and the reference angle. 3. The image generation device according to claim 1, wherein the rotation angle calculation unit makes the predetermined angle the change amount of the rotation angle of the clip area when the change amount of the rotation angle of the clip area exceeds the predetermined angle. 4. The image generation device according to claim 1, further comprising: a reference position determination unit for determining a reference position of a clip area in a wide-angle image. 5. The image ...
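The clamping behavior of claims 1-3 (the per-frame change of the clip-area rotation must not exceed a predetermined angle) can be sketched as follows. The function name, the use of degrees, and the 5-degree default step are assumptions for illustration.

```python
def next_rotation(prev_angle, reference_angle, max_step=5.0):
    """Clamp the per-frame change of the clip-area rotation angle.

    The change amount is the difference between the previous rotation
    angle and the current frame's reference angle; when it exceeds
    max_step degrees it is limited to max_step, which suppresses a
    sudden re-orientation of the displayed target.
    """
    delta = reference_angle - prev_angle
    # wrap into (-180, 180] so the clip area always turns the short way
    delta = (delta + 180.0) % 360.0 - 180.0
    if abs(delta) > max_step:
        delta = max_step if delta > 0 else -max_step
    return (prev_angle + delta) % 360.0
```

A small requested change (3 degrees) passes through unchanged, while a 90-degree jump in the reference angle advances only by the clamped step per frame.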

Publication date: 03-02-2022

Vehicular multi-camera surround view system with video display

Number: US20220032843A1
Author: Niall R. Lynam
Assignee: MAGNA ELECTRONICS INC

A vehicular multi-camera surround view system includes a front camera, a driver-side camera, a passenger-side camera and a rear backup camera. Image data captured by the cameras is conveyed to an electronic control unit. The electronic control unit is operable to combine image data conveyed from the front camera, the driver-side camera, the passenger-side camera and the rear backup camera to form composite video images, which are output for display at a display device. Rear backup video images are displayed no later than two seconds after the driver of the vehicle first changes propulsion of the vehicle during a new ignition cycle to reverse mode. Upon changing propulsion of the vehicle during the new ignition cycle to reverse mode to commence a backup event subsequent to the first backup event, the video display screen displays video images derived, at least in part, from image data captured by the rear backup camera.

Publication date: 11-01-2018

Modification of post-viewing parameters for digital images using image region or feature information

Number: US20180013950A1
Assignee: Fotonation Ireland Ltd

A method of generating one or more new digital images using an original digitally-acquired image including a selected image feature includes identifying within a digital image acquisition device one or more groups of pixels that correspond to the selected image feature based on information from one or more preview images. A portion of the original image is selected that includes the one or more groups of pixels. The technique includes automatically generating values of pixels of one or more new images based on the selected portion in a manner which includes the selected image feature within the one or more new images.

Publication date: 21-01-2021

Digital bone reconstruction method

Number: US20210015620A1
Assignee: Synthes GmbH

A digital bone reconstruction method that involves receiving medical image data of a bone; displaying on a user interface the bone image; automatically generating, using a processor, a first virtual 3D surface contour of a reconstructed image of the bone having a first geometry and including a plurality of editable control regions; and adjusting at least one of the editable control regions on the first virtual 3D surface contour based on user input to produce a second virtual 3D surface contour of the reconstructed image of the bone having a second geometry.

Publication date: 19-01-2017

METHOD AND DEVICE FOR PROCESSING A PICTURE

Number: US20170018106A1
Assignee:

A method for processing a picture comprising at least one face is provided. The method comprises: obtaining (S) a cropping window in the picture; and processing (S) the picture by cropping the picture part delimited by the cropping window; wherein the method further comprises detecting (S) the at least one face, determining (S) a weight for the detected at least one face and modifying (S) the position of the cropping window in the picture based on the weight, wherein the weight is determined at least based on the size of the corresponding detected face. 2. The method of claim 1, wherein ordering said plurality of objects of interest in a decreasing order of said weights to form an ordered list of objects of interest is followed by calculating the differences between the weights of two consecutive objects of interest in the ordered list and removing, from the ordered list of objects of interest, the objects of interest following a difference above a threshold value. 3. The method according to claim 1, wherein determining a weight for each object of interest of said plurality of objects of interest comprises, for one object of interest: determining a level of sharpness; determining a level of depth; determining a level of saliency; and calculating said weight as a linear combination of said level of sharpness, said level of depth, said level of saliency and said size. 4. The method according to claim 1, wherein the object of interest is a face. 6. The device of claim 5, wherein ordering said plurality of objects of interest in a decreasing order of said weights to form an ordered list of objects of interest is followed by calculating the differences between the weights of two consecutive objects of interest in the ordered list and removing, from the ordered list of objects of interest, the objects of interest following a difference above a threshold value. 7. The device according to claim 6, wherein determining a weight for each object of ...
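The ordering-and-cut step of claims 2-3 (weights as a linear combination of sharpness, depth, saliency and size; sort decreasing; drop everything after the first large gap between consecutive weights) can be sketched like this. The equal default weighting and the gap threshold are illustrative assumptions.

```python
def select_objects(objects, w=(0.25, 0.25, 0.25, 0.25), gap=0.3):
    """Return the retained weights of the objects of interest.

    Each object is a (sharpness, depth, saliency, size) tuple; its
    weight is a linear combination of the four levels. The weights are
    sorted in decreasing order and the ordered list is cut at the first
    consecutive-weight difference above `gap`, removing the objects
    that follow the large drop.
    """
    weights = sorted(
        (sum(wi * fi for wi, fi in zip(w, feats)) for feats in objects),
        reverse=True,
    )
    kept = [weights[0]]
    for a, b in zip(weights, weights[1:]):
        if a - b > gap:
            break          # everything after this drop is discarded
        kept.append(b)
    return kept
```

With weights 0.9, 0.85 and 0.3 and a gap threshold of 0.3, the 0.05 difference keeps the second object but the 0.55 drop removes the third.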

Publication date: 03-02-2022

METHODS AND APPARATUS FOR PIXEL PACKING

Number: US20220036634A1
Assignee:

A method of packing coverage in a graphics processing unit (GPU) may include receiving an indication for a portion of an image, determining, based on the indication, a packing technique for the portion of the image, and packing coverage for the portion of the image based on the packing technique. The indication may include one or more of: an importance, a quality, a level of interest, a level of detail, or a variable-rate shading (VRS) level. The indication may be received from an application. The packing technique may include array merging. The array merging may include quad merging. The packing technique may include pixel piling. The packing technique may be a first packing technique, and the method may further include determining, based on the indication, a second packing technique for the portion of the image, and packing coverage for the portion of the image based on the second packing technique. 1. A method of processing coverage in a graphics processing unit (GPU), the method comprising: receiving first coverage for at least a portion of a first primitive; receiving second coverage for at least a portion of a second primitive, wherein the portion of the first primitive and the portion of the second primitive are associated with a portion of an image; receiving an indication for the portion of the image; determining, based on the indication, a technique for combining the first coverage and the second coverage; and combining the first coverage and the second coverage in an array based on the technique. 2. The method of claim 1, wherein the technique comprises array merging. 3. The method of claim 1, wherein the technique comprises pixel piling. 4. The method of claim 1, wherein the first primitive and the second primitive belong to the same draw call. 5. The method of claim 1, wherein the technique shifts coverage from the first primitive to the second primitive. 6. The method of claim 1, wherein determining comprises: selecting a set of criteria for the technique ...

Publication date: 03-02-2022

SIMD Group Formation Techniques for Primitive Testing associated with Ray Intersect Traversal

Number: US20220036638A1
Assignee: Apple Inc

Disclosed techniques relate to primitive testing associated with ray intersection processing for ray tracing. In some embodiments, shader circuitry executes a first SIMD group that includes a ray intersect instruction for a set of rays. Ray intersect circuitry traverses, in response to the ray intersect instruction, multiple nodes in a spatially organized acceleration data structure (ADS). In response to reaching a node of the ADS that indicates one or more primitives, the apparatus forms a second SIMD group that executes one or more instructions to determine whether a set of rays that have reached the node intersect the one or more primitives. The shader circuitry may execute the first SIMD group to shade one or more primitives that are indicated as intersected based on results of execution of the second SIMD group. Thus, disclosed techniques may use both dedicated ray intersect circuitry and dynamically formed SIMD groups executed by shader processors to detect ray intersection.

Publication date: 03-02-2022

Ray Intersect Circuitry with Parallel Ray Testing

Number: US20220036639A1
Assignee:

Disclosed techniques relate to ray intersection processing for ray tracing. In some embodiments, ray intersection circuitry traverses a spatially organized acceleration data structure and includes bounding region circuitry configured to test, in parallel, whether a ray intersects multiple different bounding regions indicated by a node of the data structure. Shader circuitry may execute a ray intersect instruction to invoke traversal by the ray intersect circuitry and the traversal may generate intersection results. The shader circuitry may shade intersected primitives based on the intersection results. Disclosed techniques that share processing between intersection circuitry and shader processors may improve performance, reduce power consumption, or both, relative to traditional techniques. 1. An apparatus, comprising: graphics shader circuitry configured to execute a ray intersect instruction that indicates origin and direction information for a set of one or more rays in a graphics scene; and ray intersect circuitry configured to: traverse, in response to the ray intersect instruction, multiple nodes in a spatially organized acceleration data structure, wherein nodes of the data structure indicate coordinates corresponding to bounding regions in the graphics scene; and test in parallel, using bounding region test circuitry during the traversal, whether a ray in the set of rays intersects multiple different bounding regions indicated by a node of the data structure; wherein the apparatus is configured to determine, based on the traversal and tests, information specifying one or more graphics primitives intersected by respective rays in the set of one or more rays; and wherein the graphics shader circuitry is configured to shade the specified one or more graphics primitives based on intersecting rays. 2. The apparatus of claim 1, wherein the bounding region test circuitry is configured to test multiple rays in parallel against the multiple different bounding regions ...
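The parallel bounding-region test of claim 1 can be mirrored in software with a vectorized slab test: all child bounding boxes of one node are tested against a ray in a single vectorized evaluation. This is a sketch of the technique, not the patented circuitry; the direction is assumed to have no zero component so the reciprocal is finite.

```python
import numpy as np

def ray_vs_boxes(origin, direction, box_min, box_max):
    """Test one ray against many axis-aligned bounding regions at once.

    box_min/box_max are (N, 3) arrays holding the N child bounding
    boxes of one acceleration-structure node; the classic slab test is
    evaluated for all N boxes in one shot.
    """
    inv = 1.0 / np.asarray(direction, dtype=float)
    t0 = (box_min - origin) * inv
    t1 = (box_max - origin) * inv
    t_near = np.minimum(t0, t1).max(axis=1)   # latest slab entry per box
    t_far = np.maximum(t0, t1).min(axis=1)    # earliest slab exit per box
    return (t_near <= t_far) & (t_far >= 0.0)
```

A ray along the (1, 1, 1) diagonal from the origin hits a box spanning (1, 1, 1) to (2, 2, 2), misses a box behind the origin, and misses a box offset off the diagonal, giving `[True, False, False]`.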

Publication date: 03-02-2022

LIGHT PROBE GENERATION METHOD AND APPARATUS, STORAGE MEDIUM, AND COMPUTER DEVICE

Number: US20220036643A1
Author: LIU Dian, QU Yu Cheng

A light probe generation method and apparatus, a storage medium, and a computer device are provided. The method includes: selecting shadow points of a target object in a virtual scene; converting the selected shadow points into a voxelized shadow voxel object; reducing a quantity of vertexes in the shadow voxel object to obtain a shadow voxel object after the reduction; and generating a light probe at a vertex position of the shadow voxel object after the vertex reduction. 1. A light probe generation method, performed by a computer device, the method comprising: selecting shadow points of a target object in a virtual scene; converting the selected shadow points into a voxelized shadow voxel object; reducing a quantity of vertexes in the voxelized shadow voxel object; and generating a light probe at a vertex position of a shadow voxel object obtained after vertex reduction. 2. The method according to claim 1, wherein the selecting the shadow points comprises: generating a lattice in a bounding box of the virtual scene; and determining, with respect to a light source that projects a light ray from the bounding box to the virtual scene, that the light ray is intersected with points in the lattice and the light ray is intersected with the target object, and selecting the intersected points as the shadow points of the target object. 3. The method according to claim 2, further comprising: emitting a ray in a direction of the light source by using the points in the lattice as reference points; and based on the ray being intersected with target points in the target object, selecting the target points and corresponding reference points as the shadow points of the target object. 4. The method according to claim 2, further comprising: determining a shadow region formed by a virtual object in the light ray; generating a polyhedron enclosing the shadow region; and the converting the selected shadow points comprises: converting the polyhedron enclosing the shadow region into the voxelized ...
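The shadow-point selection of claims 2-3 (emit a ray from each lattice point toward the light source and keep the point when the ray hits the target object) can be sketched with a spherical occluder standing in for the target object, which keeps the intersection test short. The sphere, the segment-based test, and all names are illustrative assumptions.

```python
def shadow_points(lattice, light, sphere_center, sphere_radius):
    """Pick lattice points shadowed by a spherical occluder.

    For each lattice point the segment toward the light source is
    intersected with the sphere (standard quadratic ray/sphere test);
    the point is kept as a shadow point when the segment hits the
    occluder before reaching the light.
    """
    shadowed = []
    for p in lattice:
        d = tuple(l - pi for l, pi in zip(light, p))       # segment p -> light
        f = tuple(pi - ci for pi, ci in zip(p, sphere_center))
        a = sum(di * di for di in d)
        b = 2.0 * sum(fi * di for fi, di in zip(f, d))
        c = sum(fi * fi for fi in f) - sphere_radius ** 2
        disc = b * b - 4.0 * a * c
        if disc < 0:
            continue                                       # segment misses the sphere
        t1 = (-b - disc ** 0.5) / (2.0 * a)
        t2 = (-b + disc ** 0.5) / (2.0 * a)
        if (0.0 <= t1 <= 1.0) or (0.0 <= t2 <= 1.0):
            shadowed.append(p)
    return shadowed
```

With the light straight above a unit sphere, a lattice point directly below the sphere is shadowed while a point well off to the side is not.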

Publication date: 03-02-2022

IMAGE PROCESSING APPARATUS AND METHOD

Number: US20220036654A1
Assignee: SONY CORPORATION

There is provided an image processing apparatus and an image processing method that are capable of suppressing an increase in loads when a point cloud is generated from a mesh. Point cloud data is generated by positioning points at intersection points between a surface of a mesh and vectors each including, as a start origin, position coordinates corresponding to a specified resolution. For example, intersection determination is performed between the surface of the mesh and each of the vectors, and in a case where the surface and the vector are determined to intersect each other, the coordinates of the intersection point are calculated. The present disclosure can be applied to an image processing apparatus, electronic equipment, an image processing method, a program, or the like. 1. An image processing apparatus comprising: a point cloud generating section that generates point cloud data by positioning a point at an intersection point between a surface of a mesh and a vector including, as a start origin, position coordinates corresponding to a specified resolution. 2. The image processing apparatus according to claim 1, wherein the point cloud generating section performs intersection determination between the surface and the vector, and in a case of determining that the surface and the vector intersect each other, calculates coordinates of the intersection point. 3. The image processing apparatus according to claim 2, wherein the point cloud generating section performs the intersection determination between the surface and the vector in each of positive and negative directions of each of three axial directions perpendicular to one another. 4. The image processing apparatus according to claim 3, wherein, in a case where multiple intersection points have overlapping coordinate values, the point cloud generating section deletes all intersection points included in a group of the intersection points overlapping each other, except any one of the intersection ...
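The core operation above, intersecting axis-aligned vectors launched on a resolution grid with the mesh surface, can be sketched with the Möller-Trumbore ray/triangle test. Only the +z direction is cast here (the patent tests all six axial directions); grid spacing, extent and names are illustrative assumptions.

```python
def ray_triangle(orig, direc, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore: return the hit parameter t, or None on a miss."""
    e1 = [v1[i] - v0[i] for i in range(3)]
    e2 = [v2[i] - v0[i] for i in range(3)]
    def cross(a, b):
        return [a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]]
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    p = cross(direc, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None                       # ray parallel to the triangle
    t_vec = [orig[i] - v0[i] for i in range(3)]
    u = dot(t_vec, p) / det
    if u < 0 or u > 1:
        return None
    q = cross(t_vec, e1)
    v = dot(direc, q) / det
    if v < 0 or u + v > 1:
        return None
    return dot(e2, q) / det

def sample_points(tri, step, extent):
    """Cast +z rays on an (x, y) grid of spacing `step`; collect hit points."""
    pts = []
    n = int(extent / step) + 1
    for ix in range(n):
        for iy in range(n):
            o = (ix * step, iy * step, -1.0)
            t = ray_triangle(o, (0.0, 0.0, 1.0), *tri)
            if t is not None and t >= 0:
                pts.append((o[0], o[1], o[2] + t))
    return pts
```

For a right triangle in the z = 0 plane with legs of length 2 and a unit grid, six of the nine grid rays hit the surface, each producing one point at z = 0.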

Publication date: 19-01-2017

IMAGE PROCESSING METHOD, IMAGE PROCESSING DEVICE AND DISPLAY SYSTEM

Number: US20170019595A1
Assignee: PROLIFIC TECHNOLOGY INC.

An image processing device including an image obtaining circuit, a storage module and an image processing module is provided. The image obtaining circuit is for receiving a first fisheye image and a second fisheye image. The storage module is for storing a fisheye lens information. The image processing module is coupled to the image obtaining circuit and the storage module for generating a first converted image and a second converted image by converting the first and second fisheye images with panoramic coordinate conversion according to the fisheye lens information and stitching the first and second converted images to generate a panoramic image. 1. An image processing device, comprising: an image obtaining circuit for receiving a first fisheye image and a second fisheye image; a storage module for storing a fisheye lens information; and an image processing module coupled to the image obtaining circuit and the storage module for converting the first and second fisheye images with panoramic coordinate conversion according to the fisheye lens information to generate a first converted image and a second converted image and stitching the first and second converted images to generate a panoramic image. 2. The image processing device according to claim 1, wherein the image processing module comprises: an element for calculating coordinate conversion relationship between the first and second fisheye images and the panoramic image according to the fisheye lens information; and an element for converting the first and second fisheye images into the first and second converted images according to the coordinate conversion relationship. 3. The image processing device according to claim 1, wherein the image processing module comprises: an element for cropping the first and second converted images to generate a first cropped image and a second cropped image respectively; an element for smoothing the edges of the first and second cropped images to generate a first to-be-stitched image ...
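The coordinate conversion relationship of claim 2 can be sketched as a backward mapping from each panorama pixel to a fisheye pixel. An equidistant lens model (r = focal * theta) and the axis conventions stand in for the stored "fisheye lens information"; all parameters are illustrative assumptions.

```python
import math

def pano_to_fisheye(u, v, pano_w, pano_h, fish_cx, fish_cy, focal):
    """Map an equirectangular panorama pixel to an equidistant-fisheye pixel.

    (u, v) index the panorama; longitude/latitude give a 3D view
    direction, which the lens model maps to a radius r = focal * theta
    around the fisheye image center (fish_cx, fish_cy). The fisheye is
    assumed to look along +z.
    """
    lon = (u / pano_w) * 2.0 * math.pi - math.pi     # [-pi, pi)
    lat = (v / pano_h) * math.pi - math.pi / 2.0     # [-pi/2, pi/2)
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    theta = math.acos(max(-1.0, min(1.0, z)))        # angle off the optical axis
    r = focal * theta
    phi = math.atan2(y, x)
    return fish_cx + r * math.cos(phi), fish_cy + r * math.sin(phi)
```

The panorama center (looking straight down the optical axis) maps exactly to the fisheye image center, and a point 90 degrees off-axis maps to a radius of focal * pi/2 pixels.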

Publication date: 17-01-2019

Hybrid Hierarchy of Bounding and Grid Structures for Ray Tracing

Number: US20190019325A1
Assignee:

Methods and ray tracing units are provided for performing intersection testing for use in rendering an image of a 3D scene. A hierarchical acceleration structure may be traversed by: traversing one or more upper levels of nodes of the hierarchical acceleration structure according to a first traversal technique, the first traversal technique being a depth-first traversal technique; and traversing one or more lower levels of nodes of the hierarchical acceleration structure according to a second traversal technique, the second traversal technique not being a depth-first traversal technique. Results of traversing the hierarchical acceleration structure are used for rendering the image of the 3D scene. The upper levels of the acceleration structure may be defined according to a spatial subdivision structure, whereas the lower levels of the acceleration structure may be defined according to a bounding volume structure. 1. A computer-implemented method of rendering an image of a 3D scene using a ray tracing system , the method comprising: traversing one or more upper levels of nodes of the hierarchical acceleration structure according to a first traversal technique, said first traversal technique being a depth-first traversal technique; and', 'traversing one or more lower levels of nodes of the hierarchical acceleration structure according to a second traversal technique, said second traversal technique not being a depth-first traversal technique; and, 'performing intersection testing comprising traversing a hierarchical acceleration structure byusing results of said traversing the hierarchical acceleration structure to render the image of the 3D scene.2. The method of wherein the second traversal technique is based on a breadth-first traversal technique claim 1 , wherein intersection testing of nodes with rays is scheduled based on availability of node data and ray data.3. (canceled)4. The method of wherein said one or more upper levels of nodes are at the top of the ...
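The two-technique traversal described above, depth-first for the upper levels and not depth-first below, can be sketched on a plain tree, with breadth-first standing in for the unspecified second technique. The switch depth, the `children` mapping, and all names are illustrative assumptions, not the patent's structures.

```python
from collections import deque

def hybrid_traverse(root, children, switch_depth):
    """Visit upper levels depth-first and lower subtrees breadth-first.

    Nodes above `switch_depth` are visited with an explicit DFS stack;
    once a node at the switch depth is reached, its whole subtree is
    drained with a FIFO queue instead. `children` maps each node to its
    child list.
    """
    order = []
    stack = [(root, 0)]
    while stack:
        node, depth = stack.pop()
        if depth < switch_depth:
            order.append(node)
            # reversed so the leftmost child is visited first (classic DFS)
            stack.extend((c, depth + 1) for c in reversed(children.get(node, [])))
        else:
            queue = deque([node])          # breadth-first below this point
            while queue:
                n = queue.popleft()
                order.append(n)
                queue.extend(children.get(n, []))
    return order
```

On a small tree, the root is visited depth-first and each depth-1 subtree is then flushed level by level, so siblings inside a subtree appear before their grandchildren.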

Publication date: 17-01-2019

Hybrid Hierarchy of Bounding and Grid Structures for Ray Tracing

Number: US20190019326A1
Assignee:

Methods and ray tracing units are provided for performing intersection testing for use in rendering an image of a 3-D scene. A hierarchical acceleration structure may be traversed by: traversing one or more upper levels of nodes of the hierarchical acceleration structure according to a first traversal technique, the first traversal technique being a depth-first traversal technique; and traversing one or more lower levels of nodes of the hierarchical acceleration structure according to a second traversal technique, the second traversal technique not being a depth-first traversal technique. Results of traversing the hierarchical acceleration structure are used for rendering the image of the 3-D scene. The upper levels of the acceleration structure may be defined according to a spatial subdivision structure, whereas the lower levels of the acceleration structure may be defined according to a bounding volume structure. 1. A computer-implemented method of generating a hierarchical acceleration structure and using the hierarchical acceleration structure for intersection testing as part of rendering an image of a 3D scene using a ray tracing system , the method comprising:receiving primitive data for primitives located in the 3D scene;determining nodes of the hierarchical acceleration structure based on the received primitive data, wherein one or more upper levels of nodes of the hierarchical acceleration structure are defined according to a spatial subdivision structure, and wherein one or more lower levels of nodes of the hierarchical acceleration structure are defined according to a bounding volume structure;storing the hierarchical acceleration structure for use in intersection testing; andperforming intersection testing, using the hierarchical acceleration structure, as part of rendering an image of the 3D scene.2. The method of wherein said determining nodes of the hierarchical acceleration structure comprises identifying which primitives are present within ...

Publication date: 16-01-2020

SYSTEMS AND METHODS FOR NAVIGATING A USER IN A VIRTUAL REALITY MODEL

Number: US20200020159A1
Author: YANG Yuke
Assignee: Ke.com (Beijing) Technology Co., Ltd

Systems and methods for navigating a user in a three-dimensional model of a property are disclosed. An exemplary system includes a storage device configured to store data associated with a plurality of point positions. Each point position corresponds to a camera position at which images of the property are captured by the camera. The system further includes at least one processor configured to determine connectivity among the point positions and determine a plurality of candidate routes connecting a first introduction position and a second introduction position. Each candidate route links a subset of the point positions based on the connectivity. The at least one processor is further configured to select a route from the candidate routes and sequentially display views corresponding to the point positions on the selected route. The views at each point position are rendered from the images captured at the camera position corresponding to the point position. 1. A system for navigating a user in a three-dimensional housing model of a property, comprising: a storage device configured to store data associated with a plurality of point positions, each point position corresponding to a camera position at which images of the property are captured by a camera; and at least one processor configured to: determine connectivity among the point positions; determine a plurality of candidate routes connecting a first introduction position and a second introduction position, and automatically select, for each candidate route, a subset of the point positions distinct from the first and second introduction positions based on the connectivity, each candidate route passing the selected subset of the point positions; select a route from the candidate routes; and sequentially display views corresponding to the point positions on the selected route, wherein the views at each point position are rendered from the images captured at the camera position corresponding to the point position. ...
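The route-finding step above can be sketched as a search over the connectivity graph of point positions. Breadth-first search returns the candidate route passing the fewest intermediate point positions; choosing the shortest candidate is one plausible selection rule, since the claims leave the criterion open. Names are illustrative.

```python
from collections import deque

def shortest_route(connectivity, start, goal):
    """Return the shortest route of point positions from start to goal.

    `connectivity` maps each point position to the positions it is
    connected to; a BFS over this graph yields the route with the
    fewest hops, or None when the two positions are not connected.
    """
    queue = deque([[start]])
    seen = {start}
    while queue:
        route = queue.popleft()
        if route[-1] == goal:
            return route
        for nxt in connectivity.get(route[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(route + [nxt])
    return None
```

For a small floor graph the direct hall route beats the detour through the bedroom, and an unreachable goal yields `None`.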

Publication date: 21-01-2021

METHOD AND APPARATUS FOR OBJECT DETECTION INTEGRATING 2D IMAGE RECOGNITION AND 3D SCENE RECONSTRUCTION

Number: US20210019906A1
Assignee:

Example implementations described herein are directed to the projection of two dimensional (2D) image recognition results to three dimensional (3D) space by using 3D reconstructed data to realize accurate object counting, identification, scene re-organization, and so on in accordance with the desired implementation. Through the example implementations described herein, more accurate object detection can be provided than regular 2D object detection. 1. A method, comprising: conducting raycasting on a plurality of images to generate a point cloud; executing two dimensional (2D) object detection on the plurality of images; and, for the 2D object detection recognizing an object: determining a location of the object in three dimensional (3D) space from the point cloud; classifying the object from the 2D object detection; and, for the location not overlapping another marker, placing a marker in the 3D space to represent the object based on the classifying. 2. The method of claim 1, wherein the plurality of images are associated with one or more of a position and acceleration of a device that captured the plurality of images; wherein the method further comprises projecting the 3D space for display on the device based on the one or more of the position and acceleration of the device. 3. The method of claim 2, further comprising, for the point cloud not meeting a sufficient density, projecting additional points from a database of previously raycast point clouds based on the one or more of the position and acceleration of the device. 4. The method of claim 2, further comprising: searching the 3D space for one or more vacant areas; and generating a recommendation for the device comprising a position and angle to conduct image capture based on the one or more vacant areas. 5. The method of claim 2, further comprising providing an interface to the device configured to add or remove one or more objects detected in the 2D object detection from the plurality of ...
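The marker-placement condition of claim 1 (place a marker only when the 3D location does not overlap an already-placed marker) can be sketched as a minimum-distance check. The distance threshold and the (label, position) representation are illustrative assumptions.

```python
def place_markers(detections, min_dist=0.5):
    """Place one 3D marker per detection unless it overlaps an earlier one.

    Each detection is (label, (x, y, z)); a new marker is kept only
    when it lies at least `min_dist` from every marker already placed,
    so repeated detections of the same physical object collapse into a
    single marker.
    """
    markers = []
    for label, pos in detections:
        overlapping = any(
            sum((a - b) ** 2 for a, b in zip(pos, m_pos)) < min_dist ** 2
            for _, m_pos in markers
        )
        if not overlapping:
            markers.append((label, pos))
    return markers
```

Two chair detections 0.1 m apart collapse into one marker, while a desk 2 m away gets its own.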

Publication date: 21-01-2021

METHOD FOR AUTOMATICALLY GENERATING HIERARCHICAL EXPLODED VIEWS BASED ON ASSEMBLY CONSTRAINTS AND COLLISION DETECTION

Number: US20210019956A1
Assignee:

A method for automatically generating hierarchical exploded views based on assembly constraints and collision detection, in which parts to be exploded are layered in explosion sequence according to a design result of the 3D assembly process planning, and the parts to be exploded in each layer are grouped based on the type and the disassembly direction; a feasible explosion direction of the parts in each layer is determined according to assembly constraints and collision detection; the explosion sequence and explosion direction of the parts in each layer are determined; and then the layered explosion is performed at a certain distance. Ball markers and a part-list are generated after all the parts are exploded. 1. A method for automatically generating hierarchical exploded views based on assembly constraints and collision detection, comprising: (1) layering a product or components to be exploded, and determining parts to be exploded in each layer; (2) grouping the parts to be exploded in each layer, wherein parts to be exploded in each group are the same or of the same type, and have the same disassembly direction; (3) performing a trial explosion on the parts to be exploded in each layer after grouped, to determine a feasible trial explosion direction of the parts to be exploded in each layer, thereby determining an explosion sequence and an explosion direction of the parts to be exploded in each layer, and performing a hierarchical explosion at a certain distance; wherein the trial explosion is performed through steps of: constructing an assembly constraint feature library; determining a trial explosion direction of a part to be exploded based on an assembly constraint feature thereof, moving the part a distance along the trial explosion direction thereof, and checking whether the part, after being moved, interferes with other parts; if no interference occurs, it indicates that the part is able to be exploded in the trial explosion direction thereof in a current state, and
...

Publication date: 26-01-2017

METHOD AND SYSTEM FOR VORTICLE FLUID SIMULATION

Number: US20170024922A1
Author: Angelidis Alexis
Assignee:

The disclosure provides an approach for animating gases. A dynamic model is employed that accounts for stretching of gas vorticles in a stable manner, handles isolated particles and buoyancy, permits deformable boundaries of objects the gas flows past, and accounts for vortex shedding. The model captures stretching of vorticity by applying a vector at the center of a stretched vorticle. High frequency eddies resulting from stretching may be filtered by unstretching the vorticle while preserving mean energy and enstrophy. To model boundary pressure, a boundary may be imposed by embedding into the gas the surface boundary and setting boundary conditions based on velocity of the boundary and the Green's function of the Laplacian. For computational efficiency, a vorticle cutoff proportional to a vorticle's size may be imposed. Vorticles determined to be similar based on predefined criteria and a distance threshold may be fused. 1. A method for rendering animation frames depicting gaseous matter, comprising, for each of a plurality of time steps: creating vorticles via at least one of vortex shedding, buoyancy, and emitting vorticles; determining a harmonic field which makes flow of the gas an ideal nonviscous flow; determining a velocity field which is a sum of the harmonic field and a field induced by one or more of the vorticles on boundary points placed on one or more moving objects; advecting visual particles, density particles, and the vorticles using the velocity field; and rendering the visual particles in an image frame. 2. The method of claim 1, wherein a basis of the vorticles permits stable stretching of the vorticles. 3. The method of claim 2, wherein stretching and squashing of the vorticles is performed in a manner that maintains constant enstrophy and mean energy. 4. The method of claim 1, wherein the velocity field is determined using a dynamic model that permits user control of at least one of external forces, buoyancy, rigid and deformable ...

Publication date: 25-01-2018

ENHANCING DOCUMENTS PORTRAYED IN DIGITAL IMAGES

Number: US20180024974A1
Assignee:

The present disclosure is directed toward systems and methods that efficiently and effectively generate an enhanced document image of a displayed document in an image frame captured from a live image feed. For example, systems and methods described herein apply a document enhancement process to a displayed document in an image frame that results in an enhanced document image that is cropped, rectified, un-shadowed, and with dark text against a mostly white background. Additionally, systems and methods described herein determine whether a stored digital content item includes a displayed document. In response to determining that a stored digital content item does include a displayed document, systems and methods described herein generate an enhanced document image of a displayed document included in the stored digital content item. 1. A computing device comprising: at least one processor; and a non-transitory computer-readable medium storing instructions thereon that, when executed by the at least one processor, cause the computing device to: detect a displayed document within a live image feed associated with the computing device; based on detecting the displayed document within the live image feed, generate an enhanced document image corresponding to the displayed document; and provide, for presentation on a display of the computing device, the enhanced document image. 2. The computing device as recited in claim 1, wherein the non-transitory computer-readable medium further comprises instructions thereon that, when executed by the at least one processor, cause the computing device to: capture an image frame from the live image feed, the image frame comprising the displayed document; and wherein generating the enhanced document comprises modifying the image frame with respect to the displayed document within the image frame. 3. The computing device as recited in claim 2, wherein modifying the image frame comprises: detecting, without receiving user input, ...
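The un-shadowing step described above (dark text against a mostly white background under uneven lighting) can be sketched with a simple flat-field correction. The background is estimated as a per-block maximum (text is sparse and dark, so the block maximum approximates paper brightness even under a shadow) and the image is divided by it. This is one plain stand-in for the enhancement; the patent does not specify the pipeline at this level, and the block size is an assumption.

```python
import numpy as np

def unshadow(gray, block=8):
    """Flatten uneven lighting so paper renders white and text stays dark.

    Estimates the background as the per-block maximum gray level, then
    normalizes each pixel by its local background and rescales to the
    0..255 range.
    """
    g = np.asarray(gray, dtype=float)
    h, w = g.shape
    bg = np.empty_like(g)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = g[y:y+block, x:x+block]
            bg[y:y+block, x:x+block] = tile.max()
    out = np.clip(g / np.maximum(bg, 1.0) * 255.0, 0, 255)
    return out.astype(np.uint8)
```

On a page whose right half sits in a shadow (paper 100 instead of 200), both halves normalize to a white background while the text keeps the same relative darkness.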

Publication date: 10-02-2022

SCALING METHOD AND APPARATUS, DEVICE AND MEDIUM

Number: US20220044355A1
Author: Wu Di
Assignee:

The present invention relates to a scaling method, including: for each of coordinate axis directions of a complex object: assigning a scale mode to each sub-object in the complex object in the direction; for each cross section of the complex object perpendicular to the direction, calculating a scale ratio limit of the cross section; combining adjacent cross sections with the same scale ratio to obtain a segmented scale ratio range of the complex object in the direction; according to an adjustment target value of the complex object in the direction, calculating the segmented scale ratio of the complex object in the direction; and according to the scale ratio of each sub-object in each direction, calculating a new position range of each sub-object and adjusting a size of the complex object. In addition, the present invention further relates to a scaling apparatus, a device, and a medium. 1. A scaling method, comprising: for each of coordinate axis directions of a complex object: assigning a scale mode to each sub-object in the complex object in the direction, wherein the scale mode comprises: proportional stretching, non-stretching and unit repetition; for each cross section of the complex object perpendicular to the direction, calculating a scale ratio limit of the cross section; combining adjacent cross sections with the same scale ratio to obtain a segmented scale ratio range of the complex object in the direction; according to an adjustment target value of the complex object in the direction, calculating a segmented scale ratio of the complex object in the direction; and according to a scale ratio of each sub-object in each direction, calculating a new position range of each sub-object and adjusting a size of the complex object.
2. The scaling method according to claim 1, wherein for each cross section of the complex object perpendicular to the direction, the calculating a scale ratio limit of the cross section comprises: determining whether the cross section ...
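The per-axis idea above, assign a scale mode to each sub-object and then compute a ratio for the stretchable segments from the adjustment target value, can be sketched minimally in Python. Everything here is an illustrative simplification: the mode names are paraphrased, and unit repetition is treated like stretching rather than tiling units.

```python
def segmented_scale(segments, target_length):
    """Distribute a target axis length over segments by scale mode.

    segments: list of (length, mode) pairs with mode in
              {"stretch", "fixed", "repeat"}.
    Returns one scale ratio per segment. "fixed" segments keep ratio 1.0;
    the remaining delta is absorbed proportionally by the others
    (a fuller implementation would tile units for "repeat").
    """
    fixed = sum(length for length, mode in segments if mode == "fixed")
    flexible = sum(length for length, mode in segments if mode != "fixed")
    if flexible == 0:
        return [1.0 for _ in segments]          # nothing can stretch
    ratio = (target_length - fixed) / flexible  # shared ratio for flexible parts
    return [1.0 if mode == "fixed" else ratio for _, mode in segments]
```

For example, stretching a 40-unit object with fixed 10-unit caps to 60 units doubles only the middle segment.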

Publication date: 26-01-2017

HIGH-RESOLUTION CCTV PANORAMIC CAMERA DEVICE

Number: US20170026573A1
Author: LEE Young Soo
Assignee:

A high-resolution CCTV panoramic camera device is provided. The high-resolution CCTV panoramic camera device includes a plurality of lenses; a plurality of image sensors, installed to correspond to the plurality of lenses, for taking images incident through the plurality of lenses; a video signal conversion unit comprising a plurality of video processors for receiving a plurality of image signals having specific resolution, taken by and output from the plurality of image sensors, and for converting the plurality of image signals into video signals; a panoramic image synthesis unit for synthesizing the plurality of adjacent video signals output from the plurality of video processors of the video signal conversion unit to form a panoramic video image; and a video image output unit for converting the panoramic video image output from the panoramic image synthesis unit into a compressed format to be output. 1. A high-resolution CCTV panoramic camera device, comprising: a plurality of lenses; a plurality of image sensors, installed to correspond to the plurality of lenses, for taking a plurality of images incident through the plurality of lenses; a video signal conversion unit comprising a plurality of video processors for receiving a plurality of image signals having specific resolution, taken by and output from the plurality of image sensors, and converting the plurality of image signals output from the plurality of image sensors into a plurality of video signals, respectively; a panoramic image synthesis unit for obtaining a panoramic video image by synthesizing the plurality of adjacent video signals output from the plurality of video processors of the video signal conversion unit; and a video image output unit for converting the panoramic video image output from the panoramic image synthesis unit into a compressed format to be output.
2. The high-resolution CCTV panoramic camera device of claim 1, wherein the plurality of lenses are arranged to observe an area with ...
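A toy sketch of the synthesis step, assuming adjacent sensors share a known, fixed pixel overlap. Rows are plain lists of grayscale values and overlapping pixels are simply averaged; a real panoramic synthesis unit would also register the images and blend seams smoothly.

```python
def stitch_rows(left, right, overlap):
    """Join two image rows whose trailing/leading `overlap` pixels cover the
    same scene area; the shared pixels are averaged as a crude seam blend."""
    blended = [(l + r) / 2 for l, r in zip(left[-overlap:], right[:overlap])]
    return left[:-overlap] + blended + right[overlap:]
```

Stitching two 4-pixel rows with a 2-pixel overlap yields a 6-pixel panorama row.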

Publication date: 29-01-2015

DISPLAY DEVICE, CONTROL METHOD, PROGRAM AND STORAGE MEDIUM

Number: US20150029214A1
Author: Kumagai Shunichi
Assignee:

A display device superimposes and displays guide information on an actual image captured by a camera. The display device includes a specifying unit and a display control unit. The specifying unit specifies an overlapping part between guide information and a building in the actual image based on an image-capturing position, position information of the building existing in an image-capturing range of the camera, building shape information of the building, and position information of a facility or a road corresponding to the guide information. The display control unit superimposes and displays the guide information except for a cut-off part on the actual image. The cut-off part herein indicates the overlapping part where the building is to be displayed on a front side of the guide information. 1. A display device superimposing and displaying guide information on an actual image captured by a camera , comprising:a specifying unit configured to specify an overlapping part between guide information and a building in the actual image based on an image-capturing position, position information of the building existing in an image-capturing range of the camera, building shape information of the building, and position information of a facility or a road corresponding to the guide information; anda display control unit configured to superimpose and display the guide information except for a cut-off part on the actual image, the cut-off part indicating the overlapping part where the building is to be displayed on a front side of the guide information,wherein the display control unit superimposes and displays, on the actual image, a mark indicating a facility as the guide information at a position corresponding to the facility in the actual image, andwherein the display control unit omits the cut-off part at least regarding the mark corresponding to the facility in the actual image serving as a landmark of route guide.2. The display device according to claim 1 ,wherein the ...

Publication date: 23-01-2020

AUGMENTED REALITY (AR) BASED FAULT DETECTION AND MAINTENANCE

Number: US20200026257A1
Assignee: Accenture Global Solutions Limited

An AR based fault detection and maintenance system analyzes real-time video streams from a remote user device to identify a specific context level at which a user is to handle an equipment and provides instructions corresponding to the specific context level. The instructions enable generating AR simulations that guide the user in executing specific operations including repairs on faulty components of the equipment. The received video stream is initially analyzed to identify a particular equipment which is to be handled by the user. Fault prediction procedures are executed to identify faults associated with the equipment. The instructions to handle the faults are transmitted to the user device as AR simulations that provide step-by-step simulations that enable the user to execute operations as directed by the instructions. 1. An Augmented Reality (AR)-based fault detection and maintenance system comprising:at least one processor;a non-transitory computer readable medium storing machine-readable instructions that cause the at least one processor to:receive real-time video feed from a remote user device, the real-time video feed transmitting video of a facility including equipment;identify from the real-time video feed, using a trained AI-based object identifier, a faulty equipment to be worked on by a user associated with the user device;further obtain using the trained AI-based object identifier, an input image from the real-time video feed, the input image including a component to be repaired within the faulty equipment;classify the input image into one of a plurality of fault classes using an AI-based fault identifier;detect a fault associated with the component in the input image using historical data specific to the equipment and further based on weights associated with attributes of the component;determine serially, one of a plurality of context levels at which the fault has been detected based at least on the real-time video feed; andenable providing a ...

Publication date: 02-02-2017

VISION DISPLAY SYSTEM FOR VEHICLE

Number: US20170028920A1
Author: Lynam Niall R.
Assignee:

A vision display system for a vehicle includes a rear backup camera and display device that includes a video display screen. Upon an engine of the vehicle being started after initial ignition on of the vehicle, initialization of the display device is delayed or overridden upon shifting the vehicle transmission into reverse gear to commence a reversing maneuver in order to give priority to display by the video display screen of video images derived from image data captured by the rear backup camera during that reversing maneuver. Within two seconds of that shifting of the vehicle transmission of the vehicle into reverse gear, the display device displays on the video display screen video images derived from image data captured by the rear backup camera. The video images may be displayed for no more than 10 seconds after the vehicle transmission is shifted out of reverse gear. 1. A vision display system for a vehicle , said vision display system comprising:a rear backup camera disposed at a rear portion of a vehicle equipped with said vision display system;said rear backup camera having a field of view exterior and rearward of the equipped vehicle;a display device comprising a video display screen disposed in the interior cabin of the equipped vehicle and viewable by a driver of the equipped vehicle, said video display screen having a diagonal dimension of at least 5 inches;wherein said display device undergoes initialization at initial ignition on of the equipped vehicle;wherein, upon an engine of the equipped vehicle being started after initial ignition on of the equipped vehicle, initialization of said display device is delayed or overridden upon shifting of a vehicle transmission of the equipped vehicle into reverse gear to commence a reversing maneuver in order to give priority to display by said video display screen of video images derived from image data captured by said rear backup camera during that reversing maneuver;wherein, within two seconds of that ...

Publication date: 28-01-2021

SYSTEMS AND METHODS FOR SHARING IMAGE DATA EDITS

Number: US20210027510A1
Assignee: PicsArt, Inc.

Aspects presented herein include systems and methods for editing images (still or video images). In embodiments, edit information is captured and associated with an edited image (e.g., a “remix” image). The remix image and its associated edit information may be readily shared with other users. In embodiments, users can see the creator's editing steps used to achieve the end result via interactive “cards” that may be displayed with the remix image. In embodiments, a player application uses the captured edit information to allow users to “replay” some or all of those edits on an image. The remix-replay embodiments provide: (1) unique ways for capturing edits and parameter adjustments for being applied onto a different image; (2) unique ways for observing how the image was edited for learning how to replicate edits; and (3) unique ways for applying some or all of those edits during editing. 1. A computer-implemented method comprising: providing an image editing application that (1) comprises a plurality of tools for editing an original image to create an edited image; and (2) captures replay edit data comprising a sequence of edits used in generating the edited image, in which the replay edit data comprises, for each edit in the sequence of edits, an indicator of which tool was used for the edit and a set of one or more tool parameters applied for the edit; receiving the original image, the edited image, and the replay edit data; posting the edited image to be accessible by a third party and an indicator that the posted edited image has replay edit data associated with it that is available to the third party; and supplying to an image editing application associated with the third party the replay edit data to facilitate replicating, at least in part, the sequence of edits on a second original image.
2. The computer-implemented method of claim 1, wherein the replay edit data further comprises: a sequence of images depicting a progression of the original image to the edited image as ...
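The captured replay edit data can be pictured as a list of (tool indicator, tool parameters) steps that a player application re-applies to a different image. The tool registry below is invented for illustration and is not PicsArt's API; images are 1-D grayscale lists to keep the sketch short.

```python
# Hypothetical tool registry: tool id -> function(image, params) -> image.
TOOLS = {
    "brightness": lambda img, p: [min(255, v + p["amount"]) for v in img],
    "invert":     lambda img, p: [255 - v for v in img],
}

def replay_edits(image, replay_edit_data):
    """Re-apply a captured sequence of edits to another image.

    replay_edit_data: list of {"tool": <indicator>, "params": <tool params>}.
    """
    for step in replay_edit_data:
        image = TOOLS[step["tool"]](image, step.get("params", {}))
    return image
```

Because each step records both the tool and its parameters, the same sequence reproduces the creator's look on any source image.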

Publication date: 01-02-2018

ELECTRONIC DEVICE AND METHOD FOR OUTPUTTING THUMBNAIL CORRESPONDING TO USER INPUT

Number: US20180032238A1
Assignee:

An electronic device includes a display, a processor and a memory that stores an image file having image data. The image data includes at least one object and metadata. The metadata includes information about an area corresponding to the at least one object and identification information of the at least one object. The processor outputs the area of the image file, which includes the identification information corresponding to a user input as a thumbnail of the image file, in the display in response to the user input. 1. An electronic device comprising:a display;a memory configured to store an image file including image data, which includes at least one object and metadata; anda processor,wherein the metadata includes information about an area corresponding to the at least one object and identification information of the at least one object, andwherein the processor is configured to:output the area of the image file, which includes the identification information corresponding to a user input, as a thumbnail of the image file in the display in response to the user input.2. The electronic device of claim 1 , wherein the memory stores feature information indicating a feature of the at least one object claim 1 , andwherein the processor adjusts a size of the area based on a score indicating a degree of coincidence of the identification information and the feature information.3. The electronic device of claim 2 , wherein the processor is configured to:output the area as the thumbnail of the image file in the display if the score is not less than a preset value.4. The electronic device of claim 1 , wherein the processor classifies the at least one object as a first category or a second category based on the identification information.5. 
The electronic device of claim 4 , wherein the processor extends a size of the area if the at least one object is included in the first category claim 4 , and the processor reduces the size of the area if the at least one object is included ...

Publication date: 02-02-2017

MULTI-FORMAT CALENDAR DIGITIZATION

Number: US20170032558A1
Assignee:

Technologies for digitizing a physical version of a calendar include a mobile computing device. The mobile computing device receives a source image representative of a physical version of a calendar. The source image is cropped to an identified textual region of interest to generate a cropped source image. The mobile computing device analyzes the cropped source image to identify time management data included therein. A calendar event is generated based at least in part on the identified time management data. The mobile computing device stores the generated calendar event in a local calendar database. Other embodiments are described and claimed. 1. A method for digitizing a physical version of a calendar , the method comprising:receiving, by a mobile computing device, a source image representative of a physical version of a calendar;identifying, by the mobile computing device, a textual region of interest within the source image;cropping, by the mobile computing device, the source image to the textual region of interest to generate a cropped source image;analyzing, by the mobile computing device, the cropped source image to identify time management data included therein;generating, by the mobile computing device, a calendar event based at least in part on the identified time management data; andstoring, by the mobile computing device, the generated calendar event in a local calendar database of the mobile computing device.2. The method of claim 1 , wherein receiving the source image comprises capturing an image representative of the physical version of the calendar with a camera of the mobile computing device.3. The method of claim 1 , wherein receiving the source image comprises receiving the source image from a different computing device.4. The method of claim 1 , wherein identifying the textual region of interest within the source image comprises determining one or more areas within the source image that include text.5. 
The method of claim 1 , further comprising: ...
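After cropping to the textual region of interest, the recognized text still has to be turned into time management data for a calendar event. A minimal sketch, assuming OCR output in a hypothetical "MM/DD HH:MM Title" form (the abstract does not specify any particular text format):

```python
import re

def parse_calendar_line(text, default_year=2024):
    """Extract naive time-management data from one line of OCR'd calendar
    text and build an event dict; pattern and field names are illustrative."""
    m = re.match(r"(\d{1,2})/(\d{1,2})\s+(\d{1,2}):(\d{2})\s+(.+)", text.strip())
    if not m:
        return None  # no recognizable event on this line
    month, day, hour, minute, title = m.groups()
    return {
        "title": title,
        "start": (default_year, int(month), int(day), int(hour), int(minute)),
    }
```

The resulting dict is what would be stored in the local calendar database.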

Publication date: 02-02-2017

Simulated Transparent Device

Number: US20170032559A1
Assignee:

Methods and apparatuses pertaining to a simulated transparent device may involve capturing a first image of a surrounding of the display with a first camera, as well as capturing a second image of the user with a second camera. The methods and apparatuses may further involve constructing a see-through window of the first image, wherein, when presented on the display, the see-through window substantially matches the surrounding and creates a visual effect with which at least a portion of the display is substantially transparent to the user. The methods and apparatuses may further involve presenting the see-through window on the display. The constructing of the see-through window may involve computing a set of cropping parameters, a set of deforming parameters, or a combination of both, based on a spatial relationship among the surrounding, the display, and the user. 1. An apparatus , comprising:a memory configured to store one or more sets of instructions; and receiving data of an image of a surrounding of a display;', 'constructing a see-through window of the image, wherein, when presented on the display, the see-through window substantially matches the surrounding and creates a visual effect with which at least a portion of the display is substantially transparent to a user; and', 'presenting the see-through window on the display., 'a processor coupled to execute the one or more sets of instructions in the memory, the processor, upon executing the one or more sets of instructions, configured to perform operations comprising2. 
The apparatus of claim 1 , wherein:the image comprises a viewing angle, the image captured by a camera with the viewing angle, and determining a first spatial relationship denoting a location of the surrounding with respect to the display;', 'determining a second spatial relationship denoting a location of the user with respect to the display;', 'computing a set of cropping parameters, a set of deforming parameters, or both, based on the first ...

Publication date: 04-02-2016

ELECTRONIC APPARATUS AND METHOD

Number: US20160035062A1
Author: YAMAMOTO Koji
Assignee:

According to one embodiment, an electronic apparatus includes a processor. The processor is configured to display a first image and a quadrangular selection frame on a display region of a display screen. The processor is configured to deform the selection frame based on a deform selection input. The processor is configured to reduce or enlarge the first image based on a position of a first point on the selection frame moved by the deformation. 1. An electronic apparatus comprising: a processor configured to: display a first image and a quadrangular selection frame on a display region of a display screen; deform the selection frame based on a deform selection input; and reduce or enlarge the first image based on a position of a first point on the selection frame moved by the deformation. 2. The electronic apparatus of claim 1, wherein the selection frame is configured to select a region in the first image. 3. The electronic apparatus of claim 1, wherein the processor is configured to reduce the first image when a first distance between the first point and a periphery of the display region is less than a first set value. 4. The electronic apparatus of claim 1, wherein the first point is a first vertex of four vertices of the selection frame. 5. The electronic apparatus of claim 4, wherein the processor is configured to reduce the first image when the first distance between the first vertex and a periphery of the display region is less than a first set value. 6. The electronic apparatus of claim 1, wherein the processor is configured to enlarge the first image when a second distance between the first point and a center of the selection frame is less than a second set value.
7. The electronic apparatus of claim 1, wherein the first point is a first vertex of four vertices of the selection frame, and the processor is configured to enlarge the first image when a second distance between the first vertex and a diagonal line connecting two vertices of the selection frame adjacent to ...
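The decision rule behind claims 3 and 6 is: reduce the image when the moved frame point nears the display periphery, enlarge it when the point nears the frame's center. A sketch under assumed threshold values, using Chebyshev distance to the frame center as a stand-in for the claimed distance measure:

```python
def zoom_action(vertex, display_size, frame_center, near_edge=10, near_center=10):
    """Decide whether to reduce or enlarge the displayed image while the
    selection frame is being deformed (thresholds are illustrative).

    vertex: (x, y) of the moved frame point; display_size: (width, height).
    """
    x, y = vertex
    w, h = display_size
    edge_dist = min(x, y, w - x, h - y)           # distance to display periphery
    cx, cy = frame_center
    center_dist = max(abs(x - cx), abs(y - cy))   # distance to frame center
    if edge_dist < near_edge:
        return "reduce"   # first distance below first set value
    if center_dist < near_center:
        return "enlarge"  # second distance below second set value
    return "none"
```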

Publication date: 01-02-2018

GRAPHICS PROCESSING SYSTEMS

Number: US20180033191A1
Assignee: ARM LIMITED

In a graphics processing system, a bounding volume representative of the volume of all or part of a scene to be rendered is defined. Then, when rendering an at least partially transparent object that is within the bounding volume in the scene, a rendering pass for part or all of the object is performed in which the object is rendered as if it were an opaque object. In the rendering pass, for at least one sampling position on a surface of the object, the colour to be used to represent the part of the refracted scene that will be visible through the object at the sampling position is determined by using a view vector from a viewpoint position for the scene to determine a refracted view vector for the sampling position, determining the position on the bounding volume intersected by the refracted view vector, using the intersection position to determine a vector to be used to sample a graphics texture that represents the colour of the surface of the bounding volume in the scene, and using the determined vector to sample the graphics texture to determine a colour for the sampling position to be used to represent the part of the refracted scene that will be visible through the object at the sampling position and any other relevant information encoded in one or more channels of the texture.
1. A method of operating a graphics processing system when rendering a scene for output, in which a bounding volume representative of the volume of all or part of the scene to be rendered is defined; the method comprising: when rendering an at least partially transparent object that is within the bounding volume in the scene: performing a rendering pass for some or all of the object in which the object is rendered as if it were an opaque object; and in the rendering pass: using a view vector from a viewpoint position for the scene to determine a refracted view vector for the sampling position; determining the position on ...
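The two geometric steps of the rendering pass, refracting the view vector at the surface and intersecting the refracted ray with the bounding volume to obtain a texture-sampling position, can be sketched as follows. This is Snell refraction plus a slab-method ray/AABB exit point on plain tuples, not ARM's shader implementation:

```python
import math

def refract(view, normal, eta):
    """Snell refraction of unit vector `view` at a surface with unit
    `normal`; eta = n1/n2. Returns None on total internal reflection."""
    cos_i = -sum(v * n for v, n in zip(view, normal))
    k = 1 - eta * eta * (1 - cos_i * cos_i)
    if k < 0:
        return None
    return tuple(eta * v + (eta * cos_i - math.sqrt(k)) * n
                 for v, n in zip(view, normal))

def box_exit_point(origin, direction, box_min, box_max):
    """Farthest intersection of a ray with an axis-aligned bounding box;
    the returned point is what indexes the bounding-volume texture."""
    t_far = min(
        max((lo - o) / d, (hi - o) / d)
        for o, d, lo, hi in zip(origin, direction, box_min, box_max)
        if d != 0
    )
    return tuple(o + t_far * d for o, d in zip(origin, direction))
```

With eta = 1 the refracted vector equals the view vector, which gives a convenient sanity check.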

Publication date: 31-01-2019

METHOD AND APPARATUS FOR DISPLAYING ROAD NAMES, AND STORAGE MEDIUM

Number: US20190033091A1
Author: Deng Jian, Zhang Jing
Assignee:

A method and apparatus for displaying road names and a storage medium, the method comprises: performing collision detecting on and loading planned road names according to planned road name data and priority levels of the planned road names included in a navigation route; and performing collision detecting on and loading annotations of other map contents except the navigation route in a navigation map. The method and apparatus for displaying road names and the storage medium are used to enable a user to view the planned road names of the navigation route on an overview page of the navigational route, thereby improving guidance of map display for navigation. 1. A method for displaying road names , comprising:performing collision detecting on and loading planned road names according to planned road name data and priority levels of the planned road names included in a navigation route; andperforming collision detecting on and loading annotations of other map contents, wherein the other map contents are map contents in a navigation map except the navigation route,wherein the method is performed by at least one hardware processor.2. The method according to claim 1 , wherein the performing collision detecting on and loading planned road names according to planned road name data and priority levels of the planned road names included in a navigation route comprises:acquiring the planned road name data and the priority levels of the planned road names included in the navigation route, the planned road name data including the planned road names and coordinate regions of the planned road names; andperforming collision detecting on and loading the planned road names according to the coordinate regions of the planned road names and the priority levels of the planned road names.3. 
The method according to claim 2 , wherein the performing collision detecting on and loading the planned road names according to the coordinate regions of the planned road names and the priority levels of ...
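Collision detecting and loading by priority can be sketched as a greedy pass: place higher-priority road names first and drop any lower-priority name whose label box overlaps one already placed. The boxes and priority values below are illustrative:

```python
def overlaps(a, b):
    """Axis-aligned overlap test for label boxes (x0, y0, x1, y1)."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def place_labels(labels):
    """Greedy label collision detection.

    labels: list of (priority, box, name); higher priority is placed first,
    and colliding lower-priority names are skipped (not loaded).
    """
    placed = []
    for priority, box, name in sorted(labels, key=lambda l: -l[0]):
        if all(not overlaps(box, placed_box) for placed_box, _ in placed):
            placed.append((box, name))
    return [name for _, name in placed]
```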

Publication date: 17-02-2022

VIRTUAL LENS SIMULATION FOR VIDEO AND PHOTO CROPPING

Number: US20220051365A1
Assignee:

In a video capture system, a virtual lens is simulated when applying a crop or zoom effect to an input video. An input video frame is received from the input video that has a first field of view and an input lens distortion caused by a lens used to capture the input video frame. A selection of a sub-frame representing a portion of the input video frame is obtained that has a second field of view smaller than the first field of view. The sub-frame is processed to remap the input lens distortion to a desired lens distortion in the sub-frame. The processed sub-frame is then outputted. 1. A system for simulating image distortion of a virtual lens in a video, the system comprising: one or more processors; and accessing input images, the input images including fields of view of a scene, the input images depicting the scene with an input lens distortion centered in the fields of view based on lens characteristics of a lens through which the input images are captured; selecting reduced fields of view of the scene smaller than the fields of view of the input images, the reduced fields of view including lens distortion effects as a function of the input lens distortion present in the fields of view of the input images, positions of the reduced fields of view within the fields of view of the input images, and size of the reduced fields of view, wherein a first reduced field of view for a first image has a different lens distortion effect than a second reduced field of view for a second image based on different positions of the first reduced field of view and the second reduced field of view; and generating output images based on the lens distortion effects in the reduced fields of view and a desired lens distortion, the output images including portions of the input images within the reduced fields of view, the desired lens distortion being consistent with the lens characteristics of the lens, wherein generation of the output images includes remapping of the input lens ...
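The remapping idea (undo the desired virtual-lens model, rescale for the narrower field of view, then re-apply the capture lens model) can be sketched radially. The one-term polynomial distortion model r_d = r(1 + k r^2) and its linearized inverse are assumptions for illustration, not the patent's actual lens profile:

```python
def remap_radius(r_out, crop_scale, k_in, k_out):
    """Map a radius in the cropped output frame back to the input frame,
    replacing the capture lens's radial distortion with a desired one.

    r_out: radius in the output sub-frame; crop_scale: ratio of the
    sub-frame's field of view to the full field of view; k_in/k_out:
    distortion coefficients of the capture lens and the virtual lens.
    """
    r_ideal = r_out / (1 + k_out * r_out ** 2)   # undo desired distortion (approx.)
    r_scene = r_ideal * crop_scale               # account for the narrower FOV
    return r_scene * (1 + k_in * r_scene ** 2)   # re-apply the capture lens model
```

A full remap would apply this per-pixel around the distortion center and resample the sub-frame.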

Publication date: 17-02-2022

Method for Compressing Image Data Having Depth Information

Number: US20220051445A1
Author: Hillman Peter M.
Assignee:

An image dataset is compressed by combining depth values from pixel depth arrays, wherein combining criteria are based on object data and/or depth variations of depth values in the first pixel image value array and generating a modified image dataset wherein a first pixel image value array represented in a received image dataset by the first number of image value array samples is in turn represented in the modified image dataset by a second number of compressed image value array samples with the second number being less than or equal to the first number. 1. A computer-implemented method for image compression , under control of one or more computer systems configured with executable instructions , the method comprising:obtaining an image dataset in computer-readable form, wherein image data in the image dataset comprises a plurality of pixel image value arrays, wherein a first pixel image value array having a first number of image value array samples each having an image value, a depth value, and an association with an associated pixel position;determining, for the first number of image value array samples, a compressed image;determining, for the first number of image value array samples, a compressed image value array comprising a second number of compressed image value array samples, wherein the second number of compressed image value array samples is less than or equal to the first number of image value array samples and wherein compressed image value array samples are computed based on (1) the first number of image value array samples and (2) combining criteria, wherein the combining criteria are based on object data and/or depth variations of depth values in the first pixel image value array taking into account an error threshold; andgenerating a modified image dataset wherein the first pixel image value array represented in the image dataset by the first number of image value array samples is represented in the modified image dataset by the second number of ...
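A combining criterion based on depth variation against an error threshold might look like the sketch below, where a pixel's (image value, depth) samples are merged whenever adjacent depths are close enough. The merge rule (average the image values, keep the nearest depth) is an assumption, not the patent's specific combining criteria:

```python
def compress_samples(samples, depth_threshold):
    """Combine a pixel's (image_value, depth) samples into fewer samples.

    Samples whose depth lies within `depth_threshold` of the current
    group's nearest depth are merged; otherwise a new sample is started.
    """
    out = []
    for value, depth in sorted(samples, key=lambda s: s[1]):
        if out and depth - out[-1][1] <= depth_threshold:
            group_value, group_depth = out[-1]
            out[-1] = ((group_value + value) / 2, group_depth)  # keep nearest depth
        else:
            out.append((value, depth))
    return out
```

The second sample count is always less than or equal to the first, matching the modified-dataset property described above.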

Publication date: 17-02-2022

APPARATUS AND METHOD FOR EFFICIENT GRAPHICS PROCESSING INCLUDING RAY TRACING

Number: US20220051467A1
Assignee:

Apparatus and method for efficient graphics processing including ray tracing. For example, one embodiment of a graphics processor comprises: execution hardware logic to execute graphics commands and render images; an interface to couple functional units of the execution hardware logic to a tiled resource; and a tiled resource manager to manage access by the functional units to the tiled resource, a functional unit of the execution hardware logic to generate a request with a hash identifier (ID) to request access to a portion of the tiled resource, wherein the tiled resource manager is to determine whether a portion of the tiled resource identified by the hash ID exists, and if not, to allocate a new portion of the tiled resource and associate the new portion with the hash ID. 1. A graphics processor comprising:execution hardware logic to execute graphics commands and render images;an interface to couple functional units of the execution hardware logic to a tiled resource; anda tiled resource manager to manage access by the functional units to the tiled resource, a functional unit of the execution hardware logic to generate a request with a hash identifier (ID) to request access to a portion of the tiled resource,wherein the tiled resource manager is to determine whether a portion of the tiled resource identified by the hash ID exists, and if not, to allocate a new portion of the tiled resource and associate the new portion with the hash ID.2. The graphics processor of further comprising:the tiled resource manager to evict an existing portion of the tiled resource and to reallocate the existing portion as the new portion associated with the hash ID.3. The graphics processor of wherein the tiled resource manager is to implement a least recently used (LRU) eviction policy to evict an existing portion of the tiled resource used least recently.4. 
The graphics processor of claim 1, wherein if a portion of the tiled resource identified by the hash ID exists, then the tiled ...
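The lookup-or-allocate flow described above — return an existing tile for a hash ID, otherwise evict the least recently used tile and reallocate it — can be sketched in software. This is an illustrative sketch, not the patented hardware; the class name and dictionary-based tile pool are assumptions:

```python
from collections import OrderedDict

class TiledResourceManager:
    """Hash-ID tile lookup with LRU eviction (illustrative sketch).

    Tiles live in an OrderedDict so the least recently used entry is
    always at the front and can be evicted when the pool is full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.tiles = OrderedDict()  # hash_id -> tile data

    def request(self, hash_id):
        if hash_id in self.tiles:
            # Portion exists: mark it most recently used and return it.
            self.tiles.move_to_end(hash_id)
            return self.tiles[hash_id]
        if len(self.tiles) >= self.capacity:
            # Evict the least recently used portion to make room.
            self.tiles.popitem(last=False)
        # Allocate a new portion and associate it with the hash ID.
        self.tiles[hash_id] = {"hash": hash_id}
        return self.tiles[hash_id]

mgr = TiledResourceManager(capacity=2)
mgr.request(0xA); mgr.request(0xB)
mgr.request(0xA)          # touch 0xA, so 0xB becomes least recently used
mgr.request(0xC)          # pool full: evicts 0xB
print(sorted(mgr.tiles))  # [10, 12], i.e. 0xA and 0xC remain
```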

Подробнее
09-02-2017 дата публикации

System and method for local three dimensional volume reconstruction using a standard fluoroscope

Номер: US20170035379A1
Принадлежит: COVIDIEN LP

A system and method for constructing fluoroscopic-based three dimensional volumetric data from two dimensional fluoroscopic images including a computing device configured to facilitate navigation of a medical device to a target area within a patient and a fluoroscopic imaging device configured to acquire a fluoroscopic video of the target area about a plurality of angles relative to the target area. The computing device is configured to determine a pose of the fluoroscopic imaging device for each frame of the fluoroscopic video and to construct fluoroscopic-based three dimensional volumetric data of the target area in which soft tissue objects are visible using a fast iterative three dimensional construction algorithm.

Подробнее
31-01-2019 дата публикации

SYSTEMS AND METHODS TO ALTER PRESENTATION OF VIRTUAL RENDITION BASED ON REAL WORLD OBJECT

Номер: US20190035124A1
Принадлежит:

In one aspect, a device includes at least one processor and storage accessible to the at least one processor. The storage bears instructions executable by the at least one processor to present virtual objects of a virtual rendition on a display accessible to the processor and alter presentation of the virtual rendition based on the existence of a real-world object identified by the device.

Подробнее
31-01-2019 дата публикации

BODY INFORMATION ANALYSIS APPARATUS CAPABLE OF INDICATING BLUSH-AREAS

Номер: US20190035126A1
Принадлежит:

A body information analysis apparatus () capable of indicating blush areas (A1-A4) is disclosed and includes: an image capturing module (12) for capturing an external image; a processor (10) electrically connected to the image capturing module (12), stored multiple face types and multiple indicating processes respectively corresponding to each of the face types, the processor (10) determines a face type of a face when the face is recognized in the external image, and executes one of the multiple indicating processes corresponding to the determined face type, so as to indicate blush areas (A1-A4) on the face; and, a display module (111) electrically connected to the processor (10), for displaying the face in company with the indicated blush areas (A1-A4). 1. A body information analysis apparatus capable of indicating blush areas, comprising: an image capturing module (12), for capturing an external image; a processor (10) electrically connected with the image capturing module (12), recorded multiple face types and multiple indicating processes respectively corresponding to each of the face types, the processor (10) recognizing the external image, and performing positioning actions to each facial feature of a face and determining a face type of the face once the face is recognized from the external image, wherein the processor (10) executes a corresponding one of the indicating processes according to the determined face type of the recognized face for indicating blush areas (A1-A4) on the face once the face is determined as one of the multiple recorded face types; and a display module (111), electrically connected with the processor (10), displaying the face in company with the indicated blush areas (A1-A4), wherein the displayed blush areas (A1-A4) are overlapped with the displayed face. 2. The body information analysis apparatus in claim 1, wherein the processor (10) comprises: a face ...

Подробнее
31-01-2019 дата публикации

USER INTERFACE APPARATUS FOR VEHICLE AND VEHICLE

Номер: US20190035127A1
Автор: CHOI Sunghwan
Принадлежит: LG ELECTRONICS INC.

A user interface apparatus for a vehicle including an interface unit; a display; a camera configured to capture a forward view image of the vehicle; and a processor configured to display a cropped area of the forward view image on the display in which an object is present in the displayed cropped area, display a first augmented reality (AR) graphic object overlaid onto the object present in the displayed cropped area, change the cropped area based on driving situation information received through the interface unit, and change the first AR graphic object to a second AR graphic object based on the driving situation information. 1. A user interface apparatus for a vehicle, comprising: an interface unit; a display; a camera configured to capture a forward view image of the vehicle; and a processor configured to: display a cropped area of the forward view image on the display in which an object is present in the displayed cropped area, display a first augmented reality (AR) graphic object overlaid onto the object present in the displayed cropped area, change the cropped area based on driving situation information received through the interface unit, and change the first AR graphic object based on the driving situation information. 2. The user interface apparatus according to claim 1, wherein the processor is further configured to: set a center point in the cropped area; and change a center point in the forward view image based on the driving situation information. 3. The user interface apparatus according to claim 1, wherein the processor is further configured to: receive steering angle information through the interface unit, and based on the steering angle information, change the cropped area on the forward view image by moving the cropped area leftward or rightward on the forward view image. 4.
The user interface apparatus according to claim 1 , wherein the processor is further configured to:receive yaw angle information through the interface unit, andbased on the yaw angle information, change the ...

Подробнее
31-01-2019 дата публикации

IMAGE PROCESSING METHODS AND DEVICES

Номер: US20190035134A1
Автор: JIE Wei Bo

An image processing method, system, and apparatus are provided. The method includes obtaining an interaction area in a current image frame in which an interaction space of a first object is intersected with a first plane on which a second object is located, a target object in the second object being located in the interaction area. A base image of a water wave animation corresponding to the interaction area is generated, where plural ripples are displayed in the base image. By using a first target ripple of the plural ripples, the target object is moved to a position that is in the current image frame and that corresponds to a ripple position of the first target ripple in the base image, the first target ripple corresponding to the target object. 1. An image processing method, comprising: obtaining, by at least one processor, an interaction area in a current image frame in which an interaction space of a first object is intersected with a first plane on which a second object is located, a target object in the second object being located in the interaction area; generating, by the at least one processor, a base image of a water wave animation corresponding to the interaction area, a plurality of ripples being displayed in the base image; and moving, by the at least one processor by using a first target ripple of the plurality of ripples, the target object to a position that is in the current image frame and that corresponds to a ripple position of the first target ripple in the base image, the first target ripple corresponding to the target object. 2. The method according to claim 1, wherein the moving, by using a first target ripple of the plurality of ripples, the target object to a position that is in the current image frame and that corresponds to a ripple position of the first target ripple in the base image comprises: adjusting the target object from a first modality to a second modality based on the base image of the water wave animation, ...

Подробнее
31-01-2019 дата публикации

Zoom control device, zoom control method, control program, and imaging apparatus equipped with zoom control device

Номер: US20190037143A1
Автор: Akihiro Tsubusaki
Принадлежит: Canon Inc

An apparatus for recording an image output from a sensor based on an instruction from a user includes a detection unit configured to detect a subject from an image output from the sensor, and a control unit configured to set a parameter based on the detected subject and to perform automatic optical zoom control using the parameter. In a first mode, the detection unit detects a subject from a first image which is acquired after a predetermined condition is satisfied after the instruction, and the control unit performs the automatic optical zoom control using the parameter set based on the subject detected from the first image.

Подробнее
04-02-2021 дата публикации

VIRTUAL LENS SIMULATION FOR VIDEO AND PHOTO CROPPING

Номер: US20210035261A1
Принадлежит:

In a video capture system, a virtual lens is simulated when applying a crop or zoom effect to an input video. An input video frame is received from the input video that has a first field of view and an input lens distortion caused by a lens used to capture the input video frame. A selection of a sub-frame representing a portion of the input video frame is obtained that has a second field of view smaller than the first field of view. The sub-frame is processed to remap the input lens distortion to a desired lens distortion in the sub-frame. The processed sub-frame is then outputted. 1. A system that simulates image distortion of a virtual lens in a video, the system comprising: one or more processors; and accessing input images, the input images including fields of view of a scene, the input images depicting the scene with an input lens distortion centered in the fields of view based on lens characteristics of a lens through which the input images are captured, wherein the lens characteristics of the lens cause straight lines in the scene to appear as curved lines in at least a portion of the input images or cause lines of same length to appear as lines of different lengths in different portions of the input images; selecting reduced fields of view of the scene smaller than the fields of view of the input images, the reduced fields of view including lens distortion effects as a function of the input lens distortion present in the fields of view of the input images, positions of the reduced fields of view within the fields of view of the input images, and size of the reduced fields of view; and generating output images based on the lens distortion effects in the reduced fields of view and a desired lens distortion, the output images including portions of the input images within the reduced fields of view, the desired lens distortion being consistent with the lens characteristics of the lens, wherein generation of the output images includes remapping of the input
...
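One way to read the remapping step above: for each pixel of the output sub-frame, which should exhibit the *desired* distortion, invert the desired model and re-apply the *input* model to find where to sample the source frame. A minimal sketch with a single-coefficient radial model; the model, coefficient values, and function names are assumptions for illustration, not the patented method:

```python
def distort(x, y, k):
    # Radial model r' = r * (1 + k * r^2), coordinates normalized so the
    # frame center is (0, 0).
    s = 1.0 + k * (x * x + y * y)
    return x * s, y * s

def undistort(xd, yd, k, iters=25):
    # Invert the radial model by fixed-point iteration (converges for
    # the small k typical of lens models).
    x, y = xd, yd
    for _ in range(iters):
        s = 1.0 + k * (x * x + y * y)
        x, y = xd / s, yd / s
    return x, y

def remap(xo, yo, k_input, k_desired):
    # Output pixel (desired distortion) -> source pixel (input distortion).
    xu, yu = undistort(xo, yo, k_desired)
    return distort(xu, yu, k_input)

# Sanity check: when input and desired distortion match, the remap is
# (numerically) the identity.
xs, ys = remap(0.3, 0.4, k_input=0.1, k_desired=0.1)
```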

Подробнее
08-02-2018 дата публикации

INFORMATION SURFACING WITH VISUAL CUES INDICATIVE OF RELEVANCE

Номер: US20180039394A1
Принадлежит:

A user interface through which information is proactively provided utilizes visual cues indicative of the relevance of the information that is being proactively provided. Such visual cues include sizing, color, intensity, movement, and other like visual attributes. A single discrete visual element proactively presents information to the user. The user is provided with the opportunity to define discrete events, whereby information associated with such events is presented through other discrete elements. The physical orientation of such discrete elements indicates relationships between elements. Ranking functionality identifies more immediately relevant information and the rankings of related elements are increased based upon other, contextual information with which such information is associated, and on which the importance of such information is based. Information is surfaced to provide a vector through which the user responds or utilizes such surfaced information independently of specific application programs having discrete informational focus. 1.-20. (canceled) 21.
A method for notifying a user via a graphical user interface that is physically generated on a hardware display device by a computing device , the method comprising the steps of:obtaining a first notification;generating, as part of the graphical user interface, a first discrete visual element to appear as if it is floating before the user, the first discrete visual element comprising the first notification;obtaining a second notification differing from the first notification; andgenerating, as part of the graphical user interface, and proximate to the first discrete visual element, a second discrete visual element to appear as if it is floating before the user, the second discrete visual element comprising the second notification;wherein a visual proximity of the second discrete visual element to the first discrete visual element in the graphical user interface is indicative of a relationship between the ...

Подробнее
11-02-2016 дата публикации

Image Sensor Read Window Adjustment for Multi-Camera Array Tolerance

Номер: US20160042493A1
Принадлежит:

Multiple cameras are arranged in an array at a pitch, roll, and yaw that allow the cameras to have adjacent fields of view such that each camera is pointed inward relative to the array. The read window of an image sensor of each camera in a multi-camera array can be adjusted to minimize the overlap between adjacent fields of view, to maximize the correlation within the overlapping portions of the fields of view, and to correct for manufacturing and assembly tolerances. Images from cameras in a multi-camera array with adjacent fields of view can be manipulated using low-power warping and cropping techniques, and can be taped together to form a final image. 1. A method comprising:accessing image data captured by an image sensor in each of a plurality of cameras in a camera array, each image sensor comprising an image sensor window and a read window smaller than and located within the image sensor window, the image data from each image sensor representative of light incident upon the read window during capture, wherein at least a portion of a field of view of a first camera is common with a portion of a field of view of a second camera;determining an amount of correlation between first image data representative of the portion of the field of view of the first camera and second image data representative of the portion of the field of view of the second camera; andadjusting the location of one or more of the read window of the image sensor of the first camera and the read window of the image sensor of the second camera based on the determined amount of correlation.2. The method of claim 1 , wherein the camera array is a 2×1 camera array including two cameras.3. The method of claim 1 , wherein the camera array is a 2×2 camera array including four cameras.4. The method of claim 1 , wherein the common portion of the fields of view includes a common object.5. 
The method of claim 4 , wherein adjusting the location of one or more of the read window of the image sensor of the ...
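The read-window adjustment above hinges on measuring correlation between the overlapping image strips of adjacent cameras and shifting a window to maximize it. A 1D sketch under assumed data shapes (plain lists standing in for pixel rows; the function names are illustrative):

```python
def ncc(a, b):
    # Normalized cross-correlation of two equal-length sample lists.
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def best_shift(a, b, max_shift):
    # Try integer shifts of b against a and return the shift whose
    # overlapping samples correlate best -- the read-window offset.
    scores = {}
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            ov_a, ov_b = a[s:], b[:len(b) - s]
        else:
            ov_a, ov_b = a[:len(a) + s], b[-s:]
        scores[s] = ncc(ov_a, ov_b)
    return max(scores, key=scores.get)

ref = [0, 1, 2, 5, 2, 1, 0, 0, 0, 0]
delayed = [0, 0] + ref[:-2]   # the same signal, displaced by two samples
print(best_shift(delayed, ref, max_shift=3))  # 2
```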

Подробнее
11-02-2016 дата публикации

METHOD FOR MONITORING WATER LEVEL OF A WATER BODY AND SYSTEM FOR IMPLEMENTING THE METHOD

Номер: US20160042532A1
Принадлежит:

In a method for monitoring water level of a water body, a monitoring system is configured to: capture a current image that has a portion of the water body, and a remaining portion aside from the portion of the water body; process the current image into a processed image that includes a water body region corresponding to the portion of the water body, and a background region corresponding to the remaining portion of the current image; mark, on the processed image, a plurality of virtual alert points according to a predetermined water level of the water body; determine whether at least one of the virtual alert points is located within the water body region of the processed image; and generate a monitoring result according to the determination thus made. 1. A method for monitoring water level of a water body, said method to be implemented using a monitoring system that includes an image capturing module and an image processing module, said method comprising the following steps of: (a) capturing, using the image capturing module, a current image that has a portion of the water body and a remaining portion aside from the portion of the water body; (b) processing, by the image processing module, the current image into a processed image that includes a water body region corresponding to the portion of the water body, and a background region corresponding to the remaining portion of the current image; (c) marking on the processed image, by the image processing module, a plurality of virtual alert points according to a predetermined water level of the water body; (d) determining, by the image processing module, whether at least one of the virtual alert points is located within the water body region of the processed image; and (e) generating, by the image processing module, a monitoring result according to the determination made in step (d). 2. The method of claim 1, wherein, in step (b), the image processing module processes the current image into the ...
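The decision in steps (c)-(e) reduces to a point-in-mask test: place alert points at the predetermined level and check whether any falls inside the segmented water region. A toy sketch, assuming a boolean mask and (col, row) point tuples:

```python
def check_alert_points(water_mask, alert_points):
    # water_mask[row][col] is True where the pixel was classified as
    # water body region; alert_points are (col, row) positions marked at
    # the predetermined water level.
    flooded = [(x, y) for (x, y) in alert_points if water_mask[y][x]]
    # Monitoring result: raise the alarm when at least one alert point
    # lies inside the water body region.
    return {"alarm": bool(flooded), "flooded_points": flooded}

mask = [
    [False, False, False],
    [False, True,  False],
    [True,  True,  True ],
]
result = check_alert_points(mask, [(0, 0), (1, 1), (2, 2)])
print(result["alarm"], result["flooded_points"])  # True [(1, 1), (2, 2)]
```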

Подробнее
08-02-2018 дата публикации

METHOD AND SYSTEM FOR DETERMINING COMPLEX SITUATION ACCORDING TO TIME FLOW OF EVENTS OCCURRING IN EACH DEVICE

Номер: US20180039839A1
Принадлежит: SK Holdings Co., Ltd.

Provided is a method and system for determining a complex situation according to the time flow of events occurring in each device. In a method for determining a complex situation according to an embodiment of the present invention, a current situation is determined by detecting events occurring in devices and referring to a complex situation determination rule. The complex situation determination rule is a rule in which the current situation is mapped according to the events, which have occurred in the devices, and time intervals between the events. The present invention enables a more reliable determination of the current situation by complexly considering not only the events occurring in the devices but also time intervals between the events. 1. A method for determining a complex situation , the method comprising the steps of:detecting events occurring in devices; anddetermining a current situation by referring to a result of the detecting in the step of detecting and a complex situation analytic rule, andwherein the complex situation analytic rule is a rule which maps a current situation according to the events occurring in the devices and time intervals between the events.2. The method of claim 1 , wherein the complex situation analytic rule comprises a first condition and a first situation which is mapped onto the first condition claim 1 , andwherein the first condition comprises a first detection condition in which a first event is detected in a first device, and a second detection condition in which a second event is detected in a second device before a first time elapses after the first event of the first device.3. The method of claim 2 , wherein the first condition further comprises a third detection condition in which a third event is detected in a third device before a second time elapses after the second event of the second device.4. 
The method of claim 2 , wherein the first condition further comprises a fourth detection condition in which a fourth event ...
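The rule structure described above — an ordered chain of per-device events, each constrained by a maximum time gap since the previous one — can be evaluated in a single pass over a time-ordered log. A sketch under assumed data shapes (the tuple layouts are illustrative, not from the patent):

```python
def match_rule(steps, log):
    # steps: [(device, event, max_gap)] where max_gap bounds the seconds
    # since the previously matched step (ignored for the first step).
    # log: time-sorted [(timestamp, device, event)].
    t_prev, idx = None, 0
    for t, dev, ev in log:
        need_dev, need_ev, max_gap = steps[idx]
        in_time = t_prev is None or t - t_prev <= max_gap
        if dev == need_dev and ev == need_ev and in_time:
            t_prev, idx = t, idx + 1
            if idx == len(steps):
                return True   # every step matched within its interval
    return False

# Hypothetical "leaving home" situation: the door opens, then the light
# goes off within 10 seconds.
rule = [("door", "open", 0), ("light", "off", 10)]
log = [(100, "door", "open"), (104, "light", "off")]
print(match_rule(rule, log))  # True
```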

Подробнее
08-02-2018 дата публикации

Method for detecting collision between cylindrical collider and convex body in real-time virtual scenario, terminal, and storage medium

Номер: US20180040101A1
Автор: Xini KUANG
Принадлежит: Tencent Technology Shenzhen Co Ltd

A method for detecting a collision between a cylindrical collider and a convex body in a real-time virtual scenario performed at a computer includes: converting a cylindrical collider into a preset polygonal prism concentric to the cylindrical collider; transforming the preset polygonal prism to a local coordinate system of the convex body; obtaining a projection of the cylindrical collider on one or more testing axes according to each testing axis and the location of the preset polygonal prism in the local coordinate system of the convex body, and obtaining a projection of the convex body on each testing axis; and in accordance with a determination that the projections of the cylindrical collider and the convex body intersect with each other on each testing axis, moving the cylindrical collider away from the convex body in the real-time virtual scenario to avoid the collision.
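The projection test at the heart of the method is the separating-axis theorem: project both convex shapes onto each candidate axis and declare a collision only if the projections overlap on every axis. A 2D sketch for brevity (the patent works in 3D, where the test axes come from the prism's face normals and edge cross-products; the polygons here are illustrative):

```python
def project(poly, axis):
    # Interval covered by the polygon's vertices along the axis.
    dots = [x * axis[0] + y * axis[1] for x, y in poly]
    return min(dots), max(dots)

def intervals_overlap(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def sat_collide(p1, p2):
    # Test every edge normal of both polygons; one axis with disjoint
    # projections proves the shapes are separated.
    for poly in (p1, p2):
        n = len(poly)
        for i in range(n):
            ex = poly[(i + 1) % n][0] - poly[i][0]
            ey = poly[(i + 1) % n][1] - poly[i][1]
            axis = (-ey, ex)  # perpendicular to the edge
            if not intervals_overlap(project(p1, axis), project(p2, axis)):
                return False
    return True

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
overlapping = [(1, 1), (3, 1), (3, 3), (1, 3)]
far_away = [(5, 5), (6, 5), (6, 6), (5, 6)]
print(sat_collide(square, overlapping), sat_collide(square, far_away))  # True False
```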

Подробнее
24-02-2022 дата публикации

SYSTEM AND METHOD FOR ACCELERATED RAY TRACING

Номер: US20220058854A1
Автор: Cerny Mark Evan
Принадлежит:

A graphics processing unit (GPU) includes one or more processor cores adapted to execute a software-implemented shader program, and one or more hardware-implemented ray tracing units (RTU) adapted to traverse an acceleration structure to calculate intersections of rays with bounding volumes and graphics primitives. The RTU implements traversal logic to traverse the acceleration structure, stack management, and other tasks to relieve burden on the shader, communicating intersections to the shader which then calculates whether the intersection hit a transparent or opaque portion of the object intersected. Thus, one or more processing cores within the GPU perform accelerated ray tracing by offloading aspects of processing to the RTU, which traverses the acceleration structure within which the 3D environment is represented. 1. A method for graphics processing , comprising:executing, on a graphics processing unit (GPU), a shader program that performs ray tracing of a 3D environment represented by an acceleration structure;using a hardware-implemented ray tracing unit (RTU) within the GPU that traverses the acceleration structure at the request of the shader program; andusing, at the shader program, results of the acceleration structure traversal, wherein the RTU identifies intersections of rays with elements in the acceleration structure, indicates intersections to the shader program, and the shader program performs hit testing, determining whether a ray passed through a transparent portion of an element or hit a non-transparent portion of the element.2. The method of claim 1 , wherein the acceleration structure traversal by the RTU is asynchronous with respect to the shader program.3. The method of claim 1 , wherein the results of the acceleration structure traversal by the RTU include the detection of intersection between a ray and bounding volumes contained within the acceleration structure.4. 
The method of claim 1 , where the RTU processing includes maintenance of a ...
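The intersection the RTU reports for a bounding volume is commonly a ray-AABB "slab" test; the shader then decides whether the underlying geometry hit is transparent or opaque. A sketch of the slab test (a generic ray-tracing technique, assumed here for illustration rather than taken from the claimed hardware):

```python
def ray_hits_aabb(origin, direction, box_min, box_max):
    # Slab test: intersect the ray with each axis-aligned slab and keep
    # the running [tmin, tmax] interval; the ray hits iff it stays valid.
    tmin, tmax = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if o < lo or o > hi:
                return False      # parallel ray outside this slab
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

print(ray_hits_aabb((0, 0, 0), (1, 1, 1), (1, 1, 1), (2, 2, 2)))  # True
print(ray_hits_aabb((0, 0, 0), (1, 0, 0), (1, 5, 1), (2, 6, 2)))  # False
```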

Подробнее
24-02-2022 дата публикации

UV MAPPING ON 3D OBJECTS WITH THE USE OF ARTIFICIAL INTELLIGENCE

Номер: US20220058859A1
Принадлежит:

Various embodiments set forth systems and techniques for generating seams for a 3D model. The techniques include generating, based on the 3D model, one or more inputs for one or more trained machine learning models; providing the one or more inputs to the one or more trained machine learning models; receiving, from the one or more trained machine learning models, seam prediction data generated based on the one or more inputs; and placing one or more predicted seams on the 3D model based on the seam prediction data. 1. A method for automatically generating seams for a three-dimensional (3D) model, the method comprising: generating, based on the 3D model, one or more representations of the 3D model as inputs for one or more trained machine learning models; generating a set of seam predictions associated with the 3D model by applying the one or more trained machine learning models to the one or more representations of the 3D model, wherein each seam prediction included in the set of seam predictions identifies a different seam along which the 3D model can be cut; and placing one or more seams on the 3D model based on the set of seam predictions. 2. The method of further comprising: dividing the 3D model into a plurality of groups; and for each group of the plurality of groups, generating respective one or more representations of the 3D model as inputs for the one or more trained machine learning models. 3. The method of claim 1, wherein the one or more representations of the 3D model include one or more 2D images, wherein the set of seam predictions indicates, for each 2D image of the one or more 2D images, a respective one or more seam predictions in the 2D image. 4. The method of claim 3, wherein placing the one or more seams on the 3D model includes, for each 2D image of the one or more 2D images, projecting the respective one or more seam predictions in the 2D image onto the 3D model. 5.
The method of claim 1 , wherein the one ...

Подробнее
24-02-2022 дата публикации

Image processing method and apparatus, and computer storage medium

Номер: US20220058888A1
Принадлежит: Shenzhen Sensetime Technology Co Ltd

An image processing method and apparatus, and a computer readable medium are provided. The method includes the following: a first reference surface on which a virtual object is placed is determined based on an image collected by an image collection device; a to-be-placed virtual object and pose information of the image collection device relative to the first reference surface are acquired; a display size of the to-be-placed virtual object is determined based on the pose information; and the to-be-placed virtual object is rendered on the image according to the display size.

Подробнее
07-02-2019 дата публикации

FACILITATION OF CONCURRENT CONSUMPTION OF MEDIA CONTENT BY MULTIPLE USERS USING SUPERIMPOSED ANIMATION

Номер: US20190042178A1
Принадлежит:

Embodiments of apparatus, computer-implemented methods, systems, devices, and computer-readable media are described herein for facilitation of concurrent consumption of media content by a first user of a first computing device and a second user of a second computing device. In various embodiments, facilitation may include superimposition of an animation of the second user over the media content presented on the first computing device, based on captured visual data of the second user received from the second computing device. In various embodiments, the animation may be visually emphasized on determination of the first user's interest in the second user. In various embodiments, facilitation may include conditional alteration of captured visual data of the first user based at least in part on whether the second user has been assigned a trusted status, and transmittal of the altered or unaltered visual data of the first user to the second computing device. 1.-30. (canceled) 31. A computer-implemented method, comprising: presenting, on a first computing device of a first user, a media content concurrently with presentation of the media content on a second computing device of a second user; and superimposing, by the first computing device, of a visual indication of the second user of the second computing device, partially over the media content presented on the first computing device, based on captured visual data of the second user of the second computing device. 32. The computer-implemented method of claim 31, wherein the captured visual data includes audio data or text data based upon speech of the second user. 33. The computer-implemented method of claim 31, further comprising receiving, by the first computing device, captured visual data of the second user, from the second computing device. 34. The computer-implemented method of claim 33, further comprising determining, by the first computing device, an interest or ...

Подробнее
06-02-2020 дата публикации

PROGRAMMABLE RAY TRACING WITH HARDWARE ACCELERATION ON A GRAPHICS PROCESSOR

Номер: US20200043218A1
Принадлежит:

Apparatus and method for programmable ray tracing with hardware acceleration on a graphics processor. For example, one embodiment of a graphics processor comprises shader execution circuitry to execute a plurality of programmable ray tracing shaders. The shader execution circuitry includes a plurality of single instruction multiple data (SIMD) execution units. Sorting circuitry regroups data associated with one or more of the programmable ray tracing shaders to increase occupancy for SIMD operations performed by the SIMD execution units; and fixed-function intersection circuitry coupled to the shader execution circuitry detects intersections between rays and bounding volume hierarchies (BVHs) and/or objects contained therein and provides results indicating the intersections to the sorting circuitry. 1. A graphics processing apparatus comprising: shader execution circuitry to execute a plurality of programmable shaders, the shader execution circuitry including a plurality of single instruction multiple data (SIMD) execution units; and sorting circuitry to regroup data associated with one or more of the programmable shaders to increase occupancy for SIMD operations performed by the SIMD execution units. 2. The graphics processing apparatus of claim 1, wherein the programmable shaders comprise programmable ray tracing shaders, the graphics processing apparatus further comprising: fixed-function intersection circuitry coupled to the shader execution circuitry, the fixed function intersection circuitry to detect intersections between rays and bounding volume hierarchies (BVHs) and/or objects contained therein and to provide results indicating the intersections to the sorting circuitry. 3. The graphics processing apparatus of wherein the sorting circuitry further comprises: a content addressable memory to store a plurality of entries, each entry identified by a particular shader record pointer. 4. The graphics processing apparatus of wherein the sorting circuitry further ...

Подробнее
06-02-2020 дата публикации

Liquid simulation method, liquid interaction method and apparatuses

Номер: US20200043230A1
Автор: Tianxiang Zhang
Принадлежит: Tencent Technology Shenzhen Co Ltd

A liquid simulation method is provided for a graphics processing unit (GPU). The method includes obtaining initial information; determining two-dimensional meshes according to the initial information; mapping a plane of a to-be-simulated three-dimensional liquid into the two-dimensional meshes, and determining corresponding target meshes to which the plane of the to-be-simulated three-dimensional liquid is mapped in the two-dimensional meshes. The method also includes recording, in mesh points of the target meshes, corresponding liquid levels of plane coordinates of the to-be-simulated three-dimensional liquid, and obtaining a corresponding height field of the to-be-simulated three-dimensional liquid in the two-dimensional meshes; and rendering a three-dimensional liquid according to updating of the height field, to obtain a liquid simulation result.
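A height field, as used above, stores one liquid level per 2D mesh point and animates waves by nudging each level toward the mean of its neighbours. A 1D sketch of that update (the neighbour-averaging rule is a standard height-field approximation, assumed here for illustration rather than taken from the patent):

```python
def step_height_field(height, velocity, damping=0.99):
    # Each column accelerates toward the average of its neighbours,
    # which propagates waves across the height field; damping slowly
    # dissipates the energy.
    n = len(height)
    for i in range(n):
        left = height[i - 1] if i > 0 else height[i]
        right = height[i + 1] if i < n - 1 else height[i]
        velocity[i] = (velocity[i] + (left + right) / 2.0 - height[i]) * damping
    return [h + v for h, v in zip(height, velocity)], velocity

h = [0.0, 0.0, 1.0, 0.0, 0.0]   # a single raised column of water
v = [0.0] * 5
h, v = step_height_field(h, v)
print(h)  # the peak drops and spreads symmetrically to its neighbours
```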

Подробнее
18-02-2021 дата публикации

APPARATUS AND METHOD FOR A HIERARCHICAL BEAM TRACER

Номер: US20210049808A1
Принадлежит:

Apparatus and method for a hierarchical beam tracer. For example, one embodiment of an apparatus comprises: a beam generator to generate beam data associated with a beam projected into a graphics scene; a bounding volume hierarchy (BVH) generator to generate BVH data comprising a plurality of hierarchically arranged BVH nodes; a hierarchical beam-based traversal unit to determine whether the beam intersects a current BVH node and, if so, to responsively subdivide the beam into N child beams to test against the current BVH node and/or to traverse further down the BVH hierarchy to select a new BVH node, wherein the hierarchical beam-based traversal unit is to iteratively subdivide successive intersecting child beams and/or to continue to traverse down the BVH hierarchy until a leaf node is reached with which at least one final child beam is determined to intersect; the hierarchical beam-based traversal unit to generate a plurality of rays within the final child beam; and intersection hardware logic to perform intersection testing for any rays intersecting the leaf node, the intersection testing to determine intersections between the rays intersecting the leaf node and primitives bounded by the leaf node. 1. 
An apparatus comprising: a beam generator to generate beam data associated with a beam projected into a graphics scene; a bounding volume hierarchy (BVH) generator to generate BVH data comprising a plurality of hierarchically arranged BVH nodes; a hierarchical beam-based traversal unit to determine whether the beam intersects a current BVH node and, if so, to responsively perform at least one of subdividing the beam into N child beams to test against the current BVH node or to traverse further down the BVH hierarchy to select a new BVH node, wherein the hierarchical beam-based traversal unit is to iteratively subdivide successive intersecting child beams until a leaf node is reached with which at least one final child beam is determined to intersect; the hierarchical ...
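As a rough illustration of the traversal the claim describes, the sketch below works in 2D: a beam is an axis-aligned rectangle of ray footprints, tested against BVH node bounds and recursively split into four child beams until a leaf is reached, where individual rays would then be generated. All names (`BVHNode`, `traverse`, the quad split, the `min_size` stopping rule) are assumptions for the example, not the claimed hardware.

```python
class BVHNode:
    def __init__(self, bounds, children=None, primitives=None):
        self.bounds = bounds          # (xmin, ymin, xmax, ymax)
        self.children = children or []
        self.primitives = primitives or []

def overlaps(a, b):
    """Axis-aligned rectangle overlap test (edges touching counts)."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def split_beam(beam):
    """Subdivide a beam into N=4 child beams."""
    x0, y0, x1, y1 = beam
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    return [(x0, y0, mx, my), (mx, y0, x1, my),
            (x0, my, mx, y1), (mx, my, x1, y1)]

def traverse(beam, node, min_size, hits):
    """Recursively test/subdivide `beam` against the BVH rooted at `node`."""
    if not overlaps(beam, node.bounds):
        return
    if not node.children:
        # Leaf reached: here rays within the final child beam would be
        # generated and intersection-tested against the leaf's primitives.
        hits.append((beam, node.primitives))
        return
    if beam[2] - beam[0] > min_size:          # still coarse: subdivide
        for child_beam in split_beam(beam):
            for child in node.children:
                traverse(child_beam, child, min_size, hits)
    else:                                     # fine enough: descend only
        for child in node.children:
            traverse(beam, child, min_size, hits)
```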

15-02-2018 publication date

Combining user images and computer-generated illustrations to produce personalized animated digital avatars

Number: US20180047200A1
Assignee: JibJab Media Inc

Animated frames may illustrate an animated face that has one or more facial features that change during the animation. Each change may be between a photographed facial feature of a real face and a corresponding drawn facial feature of a drawn face. Various related methods are also disclosed.

14-02-2019 publication date

AUTOMATED PLANNING SYSTEMS FOR PEDICLE SCREW PLACEMENT AND RELATED METHODS

Number: US20190046269A1
Assignee:

Systems, methods and circuits can perform automated pedicle placement planning on 3D image data sets of the spine using global and local coordinate axes systems and ray casting to identify a center of the vertebral foramen and a center of a solid vertebral body for the local coordinate axis system. 1. An automated or semi-automated method of planning for placement of pedicle screws, comprising: providing a three dimensional (3D) image of a target vertebra of a patient; electronically defining a first coordinate axis system using a first axis extending in an anatomical right to left direction across a target vertebra; electronically ray casting the 3D image of the target vertebra in an anterior direction that is anterior to the first axis; electronically identifying a vertebral foramen (VF) based at least in part on the ray casting; electronically calculating a second coordinate axis system aligned with an orientation of the VF; and electronically identifying placement and sizing of at least one pedicle screw using the second coordinate axis system. 2. The method of claim 1, wherein the first and second coordinate systems are Cartesian coordinate systems, wherein the first axis is a first x-axis, a z-axis extends in a superior/inferior direction and a y-axis extends in an anterior/posterior direction. 3. The method of claim 1, wherein the ray casting identifies points on a boundary of bone tissue. 4. The method of claim 1, further comprising displaying the provided 3D image of the target vertebra, wherein the first x-axis is generated based on user input of first and second points, spaced apart in the right to left direction, on a posterior of the displayed target vertebra. 5. The method of claim 4, wherein the identifying the VF is carried out by: i) determining a midpoint between the first and second points from the user input, ii) for points along a line extending in the anterior direction from the midpoint ...
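The anterior ray-casting step in claim 5 can be illustrated with a toy 2D occupancy grid: march from a posterior midpoint in the anterior direction and return the first bone voxel encountered, i.e. a point on the boundary of bone tissue. This sketch is not the claimed method; the grid representation, unit step size, and rounding rule are assumptions of the example.

```python
def cast_ray(bone_mask, start, direction, max_steps=100):
    """Step through a 2D occupancy grid; return the first bone voxel hit.

    `bone_mask` is a list of rows (y-major); truthy cells are bone.
    `start` is the (x, y) midpoint on the posterior boundary and
    `direction` the anterior marching direction, e.g. (0, 1).
    """
    x, y = start
    dx, dy = direction
    for _ in range(max_steps):
        x, y = x + dx, y + dy
        xi, yi = int(round(x)), int(round(y))
        if not (0 <= yi < len(bone_mask) and 0 <= xi < len(bone_mask[0])):
            return None          # ray left the image without hitting bone
        if bone_mask[yi][xi]:
            return (xi, yi)      # first point on the bone boundary
    return None
```

Repeating this for points along the posterior line would collect the boundary samples from which the vertebral foramen center is estimated.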

15-02-2018 publication date

MOBILE TERMINAL AND METHOD OF OPERATING THE SAME

Number: US20180048823A1
Assignee: LG ELECTRONICS INC.

A mobile terminal includes a display unit; and a controller configured to obtain use intention information indicating an intention to use omnidirectional content, extract a partial image included in the omnidirectional content, and display the extracted partial image through the display unit, wherein the extracted partial image is changed according to the use intention information. 1. A mobile terminal comprising: a display; and a controller configured to: extract a partial image included in omnidirectional content based on a user input, the user input including use intention information; and cause the display to display the extracted partial image such that a first partial image is displayed when the use intention information is first use intention information and a second partial image is displayed when the use intention information is second use intention information. 2. The mobile terminal of claim 1, wherein when the use intention information is set according to a profile picture setting request, the controller is further configured to: extract a plurality of faces included in the omnidirectional content; and cause the display to display a plurality of thumbnails corresponding to the extracted plurality of faces. 3. The mobile terminal of claim 2, wherein: the controller is further configured to cause the display to display an image corresponding to one of the plurality of thumbnails in response to selection of the one of the plurality of thumbnails; and the image corresponding to the one of the plurality of thumbnails is a portion of the omnidirectional content. 4. The mobile terminal of claim 3, wherein the controller is further configured to cause the display to display a crop box for editing a face region corresponding to the selected thumbnail. 5. The mobile terminal of claim 1, wherein the controller is further configured to extract a most frequently viewed image as the partial image from the omnidirectional content when the use intention ...

03-03-2022 publication date

Method for Simulating Fluids Interacting with Submerged Porous Materials

Number: US20220068002A1
Assignee: Unity Technologies SF, Weta Digital Ltd

A method for generating one or more visual representations of a porous media submerged in a fluid is provided. The method can be performed using a computing device operated by a computer user or artist. The method includes defining a field comprising fluid parameter values for the fluid, the fluid parameter values comprising fluid velocity values and pore pressures. The method includes generating a plurality of particles that model a plurality of objects of the porous media, the plurality of objects being independently movable with respect to one another, determining values of motion parameters based at least in part on the field when the plurality of particles are submerged in the fluid, buoyancy and drag forces being used to determine relative motion of the plurality of particles and the fluid, and generating the one or more visual representations of the plurality of objects submerged in the fluid based on the values of the motion parameters.
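A minimal sketch of the buoyancy-and-drag coupling the abstract mentions, assuming a linear (Stokes-like) drag law toward the local fluid velocity and a simple displaced-weight buoyancy term; the function name, drag coefficient, and force model are invented for the example and are not the patented method.

```python
def particle_force(particle_vel, fluid_vel, particle_vol,
                   fluid_density, particle_density,
                   drag_coeff=6.0, gravity=-9.81):
    """Return the net [fx, fy, fz] force on one submerged particle.

    Buoyancy (z component) is the weight of displaced fluid minus the
    particle's own weight; drag pulls the particle toward the sampled
    fluid velocity, giving the relative motion of particles and fluid.
    """
    # Net vertical force: (rho_fluid - rho_particle) * V * g
    buoyancy_z = (fluid_density - particle_density) * particle_vol * -gravity
    # Linear drag proportional to the velocity difference.
    force = [drag_coeff * (fv - pv)
             for fv, pv in zip(fluid_vel, particle_vel)]
    force[2] += buoyancy_z
    return force
```

Integrating this force per particle per time step, with the fluid velocity sampled from the field, would move the independently movable objects of the porous medium.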

03-03-2022 publication date

Coherency Gathering for Ray Tracing

Number: US20220068008A1
Assignee:

A system and method for coherency gathering for rays in a ray tracing system. The ray tracing system uses a hierarchical acceleration structure comprising a plurality of nodes including upper level nodes and lower level nodes. For each instance where one of the lower level nodes is a child of one of the upper level nodes, an instance transform is defined, specifying the relationship between a first coordinate system of the upper level node and the second coordinate system for that instance of the lower level node. The system provides an instance transform cache for storing a plurality of these instance transforms while conducting intersection testing. 1. A method of coherency gathering for rays in a ray tracing system, the method comprising: defining a plurality of rays, each ray having associated with it ray information defining the ray in a first coordinate system; defining a hierarchical acceleration structure comprising a plurality of nodes including upper level nodes and lower level nodes, each node of the acceleration structure having geometry information associated with it, wherein the geometry information of the upper level nodes is defined in the first coordinate system and the geometry information of each of the lower level nodes is defined in a second coordinate system different from the first coordinate system, wherein the lower level nodes are instantiated within the acceleration structure in one or more instances, each instance associated with an instance transform specifying the relationship between the first coordinate system and the respective second coordinate system for that instance; the method further comprising: storing the geometry information and the instance transforms in a memory; gathering together a plurality of groups of rays, wherein each group requires intersection testing against a respective node in the hierarchical acceleration structure; selecting one of the groups for intersection testing, wherein the respective node to be tested ...
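A toy version of the instance-transform caching idea: rays are defined in the first (world) coordinate system, each instance of a lower level node stores a transform into its second (local) coordinate system, and a small cache avoids re-fetching transforms from memory while a gathered group of rays is tested. The FIFO eviction policy, 2D scale-and-offset transforms, and all names here are assumptions of this sketch, not the patented design.

```python
class InstanceTransformCache:
    """Caches instance transforms fetched from memory during testing."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = {}   # instance_id -> transform (insertion-ordered)

    def get(self, instance_id, load_transform):
        if instance_id in self.entries:
            return self.entries[instance_id]       # cache hit: no fetch
        if len(self.entries) >= self.capacity:
            # Evict the oldest entry (simple FIFO policy).
            self.entries.pop(next(iter(self.entries)))
        t = load_transform(instance_id)            # fetch from memory
        self.entries[instance_id] = t
        return t

def to_local(ray_origin, transform):
    """Map a world-space ray origin into instance-local coordinates,
    assuming a (scale, offset) transform per axis."""
    (sx, sy), (ox, oy) = transform
    return ((ray_origin[0] - ox) / sx, (ray_origin[1] - oy) / sy)
```

Gathering rays into groups that target the same instance maximizes hits in such a cache, which is the coherency benefit the abstract describes.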

03-03-2022 publication date

Method of adjusting grid spacing of height map for autonomous driving

Number: US20220068017A1
Author: Keon Chang Lee
Assignee: Hyundai Motor Co, Kia Corp

A method of adjusting a grid spacing of a height map for autonomous driving, may include acquiring a 2D image of a region ahead of a vehicle, generating a depth map using depth information on an object present in the 2D image, converting the generated depth map into a 3D point cloud, generating the height map by mapping the 3D point cloud onto a grid having a predetermined size, and adjusting a grid spacing of the height map in consideration of the driving state of the vehicle relative to the object.
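The depth-map-to-height-map pipeline above can be sketched as three small steps, here with an assumed pinhole camera model and a toy speed-based spacing rule; the intrinsics (`fx`, `fy`, `cx`, `cy`), the max-height aggregation, and the spacing policy are illustrative assumptions, not the patented details.

```python
def depth_to_points(depth, fx=1.0, fy=1.0, cx=0.0, cy=0.0):
    """Back-project a per-pixel depth map to a 3D point cloud (pinhole)."""
    pts = []
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            if d > 0:
                pts.append(((u - cx) * d / fx, (v - cy) * d / fy, d))
    return pts

def height_map(points, spacing):
    """Bin (x, y, z) points onto an (x, z) grid, keeping max height y."""
    hmap = {}
    for x, y, z in points:
        cell = (int(x // spacing), int(z // spacing))
        hmap[cell] = max(hmap.get(cell, float('-inf')), y)
    return hmap

def grid_spacing(base_spacing, speed_mps):
    """Example policy: coarsen the grid at higher speed to cut cell count."""
    return base_spacing * (2.0 if speed_mps > 20.0 else 1.0)
```

A real system would recompute `grid_spacing` from the vehicle's driving state relative to the object each frame and rebuild or resample the height map accordingly.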

03-03-2022 publication date

SYSTEMS AND METHODS FOR MODELLING PHYSIOLOGIC FUNCTION USING A COMBINATION OF MODELS OF VARYING DETAIL

Number: US20220068495A1
Assignee:

Computational methods are used to create cardiovascular simulations having desired hemodynamic features. Cardiovascular modeling methods produce descriptions of blood flow and pressure in the heart and vascular networks. Numerical methods optimize and solve nonlinear equations to find parameter values that result in desired hemodynamic characteristics, including related flow and pressure at various locations in the cardiovascular system, movements of soft tissues, and changes for different physiological states. The modeling methods employ simplified models to approximate the behavior of more complex models with the goal of reducing computational expense. The user describes the desired features of the final cardiovascular simulation and provides minimal input, and the system automates the search for the final patient-specific cardiovascular model. 1-19. (canceled) 20. A computer-implemented method for generating a reduced-order model of a cardiovascular system, comprising: determining at least one model objective based on at least one biological or physiological measurement of at least one parameter of an anatomical structure of a patient; receiving patient-specific image data associated with at least a portion of the anatomical structure of the patient; generating, based on the patient-specific image data, a three-dimensional model representing at least a portion of the anatomical structure of the patient, the three-dimensional model including portions representing at least one inlet and at least one outlet of blood flow; generating a reduced order model that includes: one or more boundary condition parameters for one or more of the at least one inlet or the at least one outlet; and one or more lumped parameters associated with at least one property of the anatomical structure; optimizing the values of the one or more boundary condition parameters to satisfy the at least one model objective, using the reduced order model, for a current iteration of the one or ...
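Not the claimed method itself, but the iterative idea behind it can be shown with a one-parameter toy: tune a lumped outlet resistance so a reduced-order model P = Q·R matches a measured target pressure, the model objective. The relaxation update, iteration count, and all names are assumptions of this sketch.

```python
def tune_resistance(target_pressure, flow, r_init=1.0, iters=50, relax=0.5):
    """Fixed-point tuning of a 0D outlet resistance.

    Each iteration evaluates the cheap reduced-order model (P = Q * R),
    measures the mismatch against the target pressure, and nudges the
    boundary-condition parameter R toward agreement.
    """
    r = r_init
    for _ in range(iters):
        pressure = flow * r                    # reduced-order model output
        error = target_pressure - pressure     # objective mismatch
        r += relax * error / flow              # adjust boundary condition
    return r
```

In the full method, the tuned boundary-condition values would then parameterize the expensive 3D simulation, which is the point of using the cheap surrogate in the loop.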

26-02-2015 publication date

INTELLIGENT CROPPING OF IMAGES BASED ON MULTIPLE INTERACTING VARIABLES

Number: US20150055870A1
Assignee: GOOGLE INC.

Methods and systems for intelligently cropping images, including receiving, over a computer network, a source image, and then associating a first identifier tag with a first object in the source image. A cropped image is generated from the source image wherein the cropping is based on the first object. The system and method then notify a first user that the first identifier tag is associated with the first object in the cropped image, wherein the notification includes the cropped image. 1. A computer-implemented method for intelligently cropping images, comprising: receiving, over a computer network, a source image; associating a first identifier tag with a first object in the source image; generating a cropped image from the source image, based on the first object; and notifying a first user that the first identifier tag is associated with the first object in the cropped image, wherein the notification includes the cropped image. 2-24. (canceled) The Internet provides access to a wide range of resources with one of the fastest growing uses being social media. Social media includes web-based and mobile-based technologies that provide for interactive dialogues of user-generated content. Such content includes text, photos, videos, magazines, internet forums, weblogs, social blogs, podcasts, rating, geographic tracking, and social bookmarking. Using social media, a user can post a piece of content, e.g., a photo, and within seconds that content is accessible by a large number of people, in some cases over one-hundred million people. Such access to information is both exhilarating and also daunting. For example, a photo of a person could get posted to a social media site, which results in that person receiving a message that they have been tagged in a photo. The message indicates that a photograph that includes their image has been posted to the social media site, but gives no indication as to the contents of the image.
The photo could contain just the single person or ...

03-03-2022 publication date

METHODS AND DEVICES FOR CAPTURING AN ITEM IMAGE

Number: US20220070388A1
Assignee: Shopify Inc.

Methods and devices for capturing images for use in online retail of product items. A mobile device having a camera may be used when building a product item record having an associated image of the product. The method of capturing a suitable image may include obtaining a product item display page for a product item, the product item display page having a designated portion defined for display of an image of the product item; obtaining a real-time live stream of images from the camera; processing the live stream from the camera to create a processed live stream, and displaying the product item display page with the processed live stream displayed within the designated portion; and storing a processed image for the product item in association with an item record for the product item. 1. A computer-implemented method for capturing product images for online retail using a mobile device having a camera, the method comprising: obtaining a product item display page for a product item, the product item display page having a designated portion defined for display of an image of the product item; obtaining a real-time live stream of images from the camera; processing the live stream from the camera to create a processed live stream; displaying the product item display page with the processed live stream displayed within the designated portion; and storing a processed image for the product item in association with an item record for the product item. 2. The computer-implemented method of claim 1, wherein processing further includes detecting the product item in the live stream. 3. The computer-implemented method of claim 2, wherein the processing of the live stream is based on the detected product item and includes cropping the live stream. 4. The computer-implemented method of claim 3, wherein the cropping is based on the geometry of the designated portion. 5. The computer-implemented method of claim 2, wherein the processing of the live stream is based on the detected product ...
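Claims 3-4 (cropping based on the detected item and on the geometry of the designated portion) can be illustrated with a simple helper that grows a detected bounding box to the designated portion's aspect ratio and clamps it to the frame. The function name, the center-on-box policy, and the plain tuple types are assumptions of this sketch.

```python
def crop_to_aspect(frame_w, frame_h, box, target_w, target_h):
    """Return an (x, y, w, h) crop around `box` matching the aspect
    ratio of the page's designated portion (target_w x target_h)."""
    bx, by, bw, bh = box
    aspect = target_w / target_h
    # Grow the detected box along one axis to reach the target aspect.
    if bw / bh < aspect:
        bw = bh * aspect
    else:
        bh = bw / aspect
    # Center the crop on the detected item, then clamp inside the frame.
    cx, cy = bx + box[2] / 2, by + box[3] / 2
    x = min(max(cx - bw / 2, 0), frame_w - bw)
    y = min(max(cy - bh / 2, 0), frame_h - bh)
    return (x, y, bw, bh)
```

Applying this per frame to the live stream would keep the detected product centered in the designated portion of the product item display page.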

22-02-2018 publication date

SYSTEM AND METHOD FOR REPRESENTING A FIELD OF CAPTURE AS PHYSICAL MEDIA

Number: US20180052446A1
Assignee: Scandy, LLC

The invention is directed to a system and method for representing how a photograph was captured in relation to the field of capture, and mapping this onto a shape in a three-dimensional print. More specifically, a group of images, or a single image captured through a lens with field-of-view distortion, is captured and stored together as a group. The images may be stitched together to form a single image. Once stitched together, a three-dimensional file is created and stored to the system. A server then provides the three-dimensional file to a three-dimensional printer for printing. Once printed, the three-dimensional object is packaged and mailed to the sender. 1. A method of representing a field of capture in the form of a physical media comprising: a. capturing, by a user, a plurality of images, wherein the plurality of images forms the field of capture; b. storing the plurality of images in a database; c. requesting, by a user via a user interface, a server to access the images from the database; d. communicating, by the server, based on the received user request, the requested images; e. receiving, by the server, requested images from the database based on the user's request; f. generating, by the server, output information that includes the requested images being stitched together as a stitched image in a file; g. storing the stitched image in the database; h. determining, by the server, the orientation of the stitched image on a sphere or cylinder, wherein the orientation of the image is based on a user's field of capture when capturing each of the plurality of images; i. generating, by the server, a file relating to a three-dimensional print, wherein the file represents the stitched image as the field of capture; j. storing the file in the database; k. providing, by the server to a printer, said file relating to the user's request; and l. printing, by the printer, said file, received from the server. 2. The method of claim 1, wherein the server utilizes metadata to ...

14-02-2019 publication date

Digital pathological section scanning system

Number: US20190050980A1

The present invention discloses a digital pathological section scanning system and relates to the field of a section scanning technology. The system comprises a scanning end, an image processing end, a remote server, a first client end and a second client end; wherein, the scanning end scans a pathological section to form an original pathological section image and transmits the original pathological section image to the image processing end for processing; the image processing end processes the original pathological section image to form a digital pathological section image and sends the digital pathological section image to the remote server; the first client end transmits medical record information including attending physician information to the remote server; the remote server associates the digital pathologic section image with the attending physician information and saves the digital pathological section image in the storage unit corresponding to the attending physician information.

25-02-2016 publication date

Time-Continuous Collision Detection Using 3D Rasterization

Number: US20160055666A1
Assignee: Individual

We present a technique that utilizes a motion blur (three dimensional) rasterizer to augment the PCS culling technique so that it can be used for continuous collision detection, which to the best of our knowledge has not been done before for motion blur using a graphics processor.
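For contrast with the rasterization-based culling in the abstract, the time-continuous intersection test itself is straightforward for simple primitives: the sketch below solves for the first time t in [0, 1] at which two linearly moving spheres touch, by reducing |p(t)| = r0 + r1 to a quadratic in t. This is a generic textbook illustration, not the paper's PCS or motion-blur rasterization technique.

```python
import math

def first_contact(p0, v0, p1, v1, r0, r1):
    """First t in [0, 1] when two linearly moving spheres touch, else None.

    Positions p0, p1 and velocities v0, v1 are 3-tuples; motion over the
    frame is p(t) = p + v * t. Solves a*t^2 + b*t + c = 0 for the relative
    distance reaching the sum of radii.
    """
    dp = [b - a for a, b in zip(p0, p1)]       # relative position
    dv = [b - a for a, b in zip(v0, v1)]       # relative velocity
    rsum = r0 + r1
    a = sum(c * c for c in dv)
    b = 2 * sum(p * v for p, v in zip(dp, dv))
    c = sum(p * p for p in dp) - rsum * rsum
    if c <= 0:
        return 0.0                             # already overlapping at t=0
    disc = b * b - 4 * a * c
    if a == 0 or disc < 0:
        return None                            # no contact this frame
    t = (-b - math.sqrt(disc)) / (2 * a)       # earlier root = first touch
    return t if 0.0 <= t <= 1.0 else None
```

A culling stage (such as the rasterizer-based PCS described above) would discard pairs cheaply so that exact per-pair tests like this run only on potential collisions.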
