Total found: 31. Displayed: 28.
Publication date: 04-07-2017

Active stereo with satellite device or devices

Number: US0009697424B2

The subject disclosure is directed towards communicating image-related data between a base station and/or one or more satellite computing devices, e.g., tablet computers and/or smartphones. A satellite device captures image data and communicates image-related data (such as the images or depth data processed therefrom) to another device, such as a base station. The receiving device uses the image-related data to enhance depth data (e.g., a depth map) based upon the image data captured from the satellite device, which may be physically closer to something in the scene than the base station, for example. To more accurately capture depth data in various conditions, an active illumination pattern may be projected from the base station or another external projector, whereby satellite units may use the other source's active illumination and thereby need not consume internal power to benefit from active illumination.
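
The enhancement step this abstract describes, using a satellite device's depth to improve the base station's depth map given the satellite's pose, can be sketched as a depth-map warp. A minimal illustration in Python/NumPy, assuming known pinhole intrinsics and a pose (R, t) from satellite to base camera space; the function names and the hole-filling rule are illustrative, not the patent's specification:

```python
import numpy as np

def backproject(depth, K):
    """Lift a HxW depth map to 3D camera-space points using pinhole intrinsics K."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)

def enhance_with_satellite(base_depth, sat_depth, K_base, K_sat, R, t):
    """Warp the satellite depth map into the base view via the satellite pose
    (R, t map satellite camera space to base camera space) and fill holes
    (zero-depth pixels) in the base depth map."""
    pts = backproject(sat_depth, K_sat)
    pts = pts[pts[:, 2] > 0] @ R.T + t      # into base camera space
    z = pts[:, 2]
    u = np.round(K_base[0, 0] * pts[:, 0] / z + K_base[0, 2]).astype(int)
    v = np.round(K_base[1, 1] * pts[:, 1] / z + K_base[1, 2]).astype(int)
    h, w = base_depth.shape
    keep = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out = base_depth.copy()
    holes = out[v[keep], u[keep]] == 0      # only fill where base has no depth
    out[v[keep][holes], u[keep][holes]] = z[keep][holes]
    return out
```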

Publication date: 19-12-2017

Automated camera array calibration

Number: US0009846960B2

The automated camera array calibration technique described herein automates calibration of a camera array. The technique can leverage corresponding depth and single- or multi-spectral intensity data (e.g., RGB (Red Green Blue) data) captured by hybrid capture devices to automatically determine camera geometry. In one embodiment it does this by finding common features in the depth maps between two hybrid capture devices and deriving a rough extrinsic calibration based on shared depth map features. It then uses the features of the intensity (e.g., RGB) data corresponding to the depth maps to refine the rough extrinsic calibration.
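
The first stage described here, a rough extrinsic calibration from features shared between two depth maps, reduces to estimating a rigid transform between matched 3D points. A minimal sketch of the standard least-squares (Kabsch) solution, assuming the feature correspondences have already been found:

```python
import numpy as np

def rough_extrinsics(pts_a, pts_b):
    """Least-squares rigid transform (R, t) with pts_b ~= pts_a @ R.T + t,
    from Nx3 arrays of matched 3D features found in two depth maps (Kabsch)."""
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    H = (pts_a - ca).T @ (pts_b - cb)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t
```

The second stage would then refine this estimate using the corresponding intensity (e.g., RGB) features, typically via nonlinear least squares on a reprojection error.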

Publication date: 27-03-2018

Depth imaging system based on stereo vision and infrared radiation

Number: US9928420B2

The subject disclosure is directed towards a high resolution, high frame rate, robust stereo depth system. The system provides depth data in varying conditions based upon stereo matching of images, including actively illuminated IR images in some implementations. A clean IR or RGB image may be captured and used with any other captured images in some implementations. Clean IR images may be obtained by using a notch filter to filter out the active illumination pattern. IR stereo cameras, a projector, broad spectrum IR LEDs and one or more other cameras may be incorporated into a single device, which may also include image processing components to internally compute depth data in the device for subsequent output.
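
The core computation, stereo matching of actively illuminated IR images into depth, can be illustrated with a standard semi-global matcher and the pinhole relation Z = f·B/d. A sketch using OpenCV; the focal length and baseline are placeholder values, and SGBM stands in for whatever matcher the device actually uses:

```python
import cv2
import numpy as np

# Assumed rig parameters (placeholders): focal length in pixels, baseline in meters.
FOCAL_PX = 580.0
BASELINE_M = 0.075

def depth_from_ir_stereo(left_ir, right_ir):
    """Depth map from a rectified, actively illuminated IR stereo pair via SGBM."""
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)
    disp = sgbm.compute(left_ir, right_ir).astype(np.float32) / 16.0  # SGBM is fixed-point x16
    depth = np.zeros_like(disp)
    valid = disp > 0
    depth[valid] = FOCAL_PX * BASELINE_M / disp[valid]  # Z = f * B / d
    return depth
```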

Publication date: 18-04-2013

GENERATING FREE VIEWPOINT VIDEO USING STEREO IMAGING

Number: US20130095920A1
Assignee: MICROSOFT CORPORATION

Methods and systems for generating free viewpoint video using an active infrared (IR) stereo module are provided. The method includes computing a depth map for a scene using an active IR stereo module. The depth map may be computed by projecting an IR dot pattern onto the scene, capturing stereo images from each of two or more synchronized IR cameras, detecting dots within the stereo images, computing feature descriptors corresponding to the dots in the stereo images, computing a disparity map between the stereo images, and generating the depth map using the disparity map. The method also includes generating a point cloud for the scene using the depth map, generating a mesh of the point cloud, and generating a projective texture map for the scene from the mesh of the point cloud. The method further includes generating the video for the scene using the projective texture map.

Claims (excerpt):
1. A method for generating a video using an active infrared (IR) stereo module, comprising: computing a depth map for a scene using the active IR stereo module, wherein computing the depth map comprises: projecting an IR dot pattern onto the scene; capturing stereo images from each of two or more synchronized IR cameras; detecting a plurality of dots within the stereo images; computing a plurality of feature descriptors corresponding to the plurality of dots in the stereo images; computing a disparity map between the stereo images; and generating a depth map for the scene using the disparity map; generating a point cloud for the scene in three-dimensional space using the depth map; generating a mesh of the point cloud; generating a projective texture map for the scene from the mesh of the point cloud; and generating the video for the scene using the projective texture map.
2. The method of claim 1, wherein the video is a Free Viewpoint Video (FVV).
3. The method of claim 1, comprising: displaying the video on a display device; and enabling space-time navigation by a user during video ...
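
The pipeline runs depth map → point cloud → mesh → projective texture map. The back-projection step is the most mechanical; a minimal NumPy sketch, assuming pinhole intrinsics (fx, fy, cx, cy):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a HxW depth map into an Nx3 point cloud with pinhole intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]               # drop pixels with no depth
```

Meshing (e.g., Poisson reconstruction) and projective texture mapping would then operate on this cloud.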

Publication date: 25-04-2013

GENERATING A DEPTH MAP

Number: US20130100256A1
Assignee: MICROSOFT CORPORATION

Methods and systems for generating a depth map are provided. The method includes projecting an infrared (IR) dot pattern onto a scene. The method also includes capturing stereo images from each of two or more synchronized IR cameras, detecting a number of dots within the stereo images, computing a number of feature descriptors for the dots in the stereo images, and computing a disparity map between the stereo images. The method further includes generating a depth map for the scene using the disparity map.

Claims (excerpt):
1. A method for generating a depth map, comprising: projecting an infrared (IR) dot pattern onto a scene; capturing stereo images from each of two or more synchronized IR cameras; detecting a plurality of dots within the stereo images; computing a plurality of feature descriptors corresponding to the plurality of dots in the stereo images; computing a disparity map between the stereo images; and generating a depth map for the scene using the disparity map.
2. The method of claim 1, further comprising generating a depth map for each of two or more active IR stereo modules, wherein each active IR stereo module comprises an IR projector, two or more synchronized IR cameras, one or more synchronized RGB cameras, or any combinations thereof.
3. The method of claim 2, comprising genlocking the two or more active IR stereo modules using a synchronization signal, wherein genlocking the two or more active IR stereo modules comprises genlocking all cameras within the two or more active IR stereo modules.
4. The method of claim 2, comprising combining the depth maps for each of the two or more active IR stereo modules to create a constructive view of the scene.
5. The method of claim 2, comprising projecting a plurality of IR dot patterns onto the scene from any number of the two or more active IR stereo modules.
6. The method of claim 5, comprising utilizing the plurality of IR dot patterns as one mutually-contributed IR dot pattern.
7. ...
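
Dot detection, the first step after capture, can be approximated as local-maximum detection on the IR image, with small intensity patches as the feature descriptors to be matched across the stereo pair. An illustrative sketch (SciPy assumed; thresholds are placeholders):

```python
import numpy as np
from scipy import ndimage

def detect_dots(ir_image, min_intensity=128, window=5):
    """Return (row, col) coordinates of projected IR dots, found as local
    intensity maxima above a threshold."""
    local_max = ir_image == ndimage.maximum_filter(ir_image, size=window)
    rows, cols = np.nonzero(local_max & (ir_image >= min_intensity))
    return np.stack([rows, cols], axis=1)

def dot_descriptors(ir_image, dots, radius=4):
    """Flattened intensity patches around each dot serve as simple feature
    descriptors for left/right matching."""
    padded = np.pad(ir_image.astype(np.float32), radius, mode="edge")
    return np.array([
        padded[r:r + 2 * radius + 1, c:c + 2 * radius + 1].ravel()
        for r, c in dots
    ])
```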

Publication date: 05-12-2013

AUTOMATED CAMERA ARRAY CALIBRATION

Number: US20130321589A1
Assignee: MICROSOFT CORPORATION

The automated camera array calibration technique described herein automates calibration of a camera array. The technique can leverage corresponding depth and single- or multi-spectral intensity data (e.g., RGB (Red Green Blue) data) captured by hybrid capture devices to automatically determine camera geometry. In one embodiment it does this by finding common features in the depth maps between two hybrid capture devices and deriving a rough extrinsic calibration based on shared depth map features. It then uses the features of the intensity (e.g., RGB) data corresponding to the depth maps to refine the rough extrinsic calibration.

Claims (excerpt):
1. A computer-implemented process for calibrating an array of capture devices, comprising the process actions of: using hybrid capture devices, which capture both intensity data and depth data, to simultaneously capture depth maps and corresponding intensity images of a scene; finding common features in the depth maps from two hybrid capture devices; computing a rough calibration of hybrid capture devices using the shared depth map features.
2. The computer-implemented process of claim 1, further comprising the process actions of: if the hybrid capture devices are not temporally synchronized, separating the moving and non-moving data of the scene in the depth images and intensity images of the scene; and using only the non-moving data of the scene for finding the common features.
3. The computer-implemented process wherein the depth maps are downsampled prior to finding the common features.
4. The computer-implemented process of claim 1, wherein once the rough calibration is found, the intensity data is used to refine the rough calibration.
5. The computer-implemented process wherein the intensity data is downsampled prior to using the RGB data to refine the rough calibration.
6. The computer-implemented process of claim 3, wherein the relationship between the ...

Publication date: 05-12-2013

GLANCING ANGLE EXCLUSION

Number: US20130321590A1
Author: Kirk Adam G.
Assignee: MICROSOFT CORPORATION

The glancing angle exclusion technique described herein selectively limits projective texturing near depth map discontinuities. A depth discontinuity is defined by a jump between a near-depth surface and a far-depth surface. The claimed technique can limit projective texturing on near and far surfaces to a different degree; for example, the technique can limit far-depth projective texturing within a certain distance of a depth discontinuity but not near-depth projective texturing.

Claims (excerpt):
1. A computer-implemented process for creating a synthetic video from images captured from an array of cameras, comprising the process actions of: (a) capturing images of a scene using the array of cameras arranged in three-dimensional (3D) space relative to the scene; (b) estimating camera data and 3D geometric information that describes objects in the captured scene both spatially and temporally; (c) generating a set of geometric proxies which describe objects in the scene as a function of time using the extracted camera and 3D geometric data; (d) determining silhouette boundaries of the geometric proxies in the captured images of the scene; (e) applying projective texture from the captured images to the geometric proxies while masking the projective texture which exceeds the boundaries of the silhouette.
2. The computer-implemented process wherein the silhouette boundaries are determined by using an edge detector on a depth map associated with each captured image of the object.
3. The computer-implemented process wherein the projective texture is masked by a blend mask.
4. The computer-implemented process wherein any large depth pixels that are within a variable number of pixels from an edge shared with a small depth pixel are turned off in the blend mask.
5. The computer-implemented process wherein the blend mask is checked before an object is rendered.
6. The computer-implemented process of claim 1, wherein the projective texture is masked using a floating point map.
7. The computer- ...
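
As a sketch of the claimed rule, the blend mask below disables texturing for far-depth pixels within a set distance of a depth discontinuity while leaving near-depth pixels enabled. The jump threshold and the median-based near/far test are illustrative assumptions, not the patent's definition:

```python
import numpy as np
from scipy import ndimage

def far_side_blend_mask(depth, jump=0.1, exclusion_px=3):
    """Blend mask that disables projective texturing for far-depth pixels
    near a depth discontinuity, while leaving near-depth pixels enabled."""
    # Mark discontinuities: pixels whose depth jumps versus a neighbor.
    gx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    gy = np.abs(np.diff(depth, axis=0, prepend=depth[:1, :]))
    edges = (gx > jump) | (gy > jump)
    # Grow the edge band by the exclusion distance.
    band = ndimage.binary_dilation(edges, iterations=exclusion_px)
    # Call a pixel "far" if it is deeper than the local median (an assumption).
    far = depth > ndimage.median_filter(depth, size=2 * exclusion_px + 1)
    mask = np.ones_like(depth, dtype=bool)   # True = texture allowed
    mask[band & far] = False                 # exclude the far side near edges
    return mask
```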

Publication date: 05-12-2013

VIEW FRUSTUM CULLING FOR FREE VIEWPOINT VIDEO (FVV)

Number: US20130321593A1
Assignee: MICROSOFT CORPORATION

The view frustum culling technique described herein allows Free Viewpoint Video (FVV) or other 3D spatial video rendering at a client by sending only the 3D geometry and texture (e.g., RGB) data necessary for a specific viewpoint or view frustum from a server to the rendering client. The synthetic viewpoint is then rendered by the client by using the received geometry and texture data for the specific viewpoint or view frustum. In some embodiments of the view frustum culling technique, the client has both some texture data and 3D geometric data stored locally if there is sufficient local processing power. Additionally, in some embodiments, additional spatial and temporal data can be sent to the client to support changes in the view frustum by providing additional geometry and texture data that will likely be immediately used if the viewpoint is changed either spatially or temporally.

Claims (excerpt):
1. A computer-implemented process for receiving spatial three-dimensional video, comprising using a client computing device for: receiving only texture data and geometric data for a given view frustum of a spatial three-dimensional video from a server at a client; and rendering the given viewpoint of the spatial three-dimensional video at the client using the downloaded texture and geometric data for the given view frustum.
2. The computer-implemented process wherein the client specifies the given view frustum to the server before the texture data and geometric data are downloaded to the client.
3. The computer-implemented process wherein the client receives texture data and geometric data computed by the server based on a viewpoint received from the client.
4. The computer-implemented process of claim 1, further comprising: checking if texture data or geometric data has been previously downloaded to the client; and not downloading again the texture data or the geometric data which has previously been downloaded to the client.
5. The computer-implemented process wherein additional ...
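
Deciding which geometry falls inside a requested view frustum is the server's core test. A minimal sketch using the standard Gribb–Hartmann plane extraction from a view-projection matrix (OpenGL clip conventions assumed); a real implementation would cull blocks or bounding volumes rather than raw points:

```python
import numpy as np

def frustum_planes(view_proj):
    """Six (a, b, c, d) planes from a 4x4 view-projection matrix
    (Gribb-Hartmann extraction, OpenGL clip conventions)."""
    m = view_proj
    planes = np.stack([
        m[3] + m[0],  # left
        m[3] - m[0],  # right
        m[3] + m[1],  # bottom
        m[3] - m[1],  # top
        m[3] + m[2],  # near
        m[3] - m[2],  # far
    ])
    return planes / np.linalg.norm(planes[:, :3], axis=1, keepdims=True)

def cull_points(points, view_proj):
    """Boolean mask of Nx3 points inside the frustum; a server would send
    only the geometry/texture data whose mask is True."""
    planes = frustum_planes(view_proj)
    homo = np.hstack([points, np.ones((len(points), 1))])
    return np.all(homo @ planes.T >= 0, axis=1)
```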

Publication date: 07-02-2019

System and method for compressing and decompressing time-varying surface data of a 3-dimensional object using a video codec

Number: US20190043257A1
Assignee: Omnivor Inc

A processor-implemented method for compressing time-varying surface data of a 3-dimensional object in a global digital space having frames, using a video encoder that supports a video data compression algorithm, the video encoder being coupled to a transmitter, is provided. The method includes the steps of (i) decomposing the time-varying surface data into at least one surface representation that is encoded in an oriented bounding box, (ii) transforming the oriented bounding box into a canonical camera representation for each frame to obtain canonical coordinates for the at least one surface representation, (iii) converting each of the at least one surface representation into at least one bounding box video pair that includes a grayscale video representing depth, and a color video, and (iv) tiling the at least one bounding box video pair for each frame to produce a tiled bounding box video.
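
A minimal sketch of steps (ii)–(iv) for one surface representation: map points into the oriented bounding box's canonical unit cube, splat them orthographically into a grayscale depth image plus a color image, and tile the pair as one frame. The resolution and splatting rule are illustrative assumptions:

```python
import numpy as np

def to_canonical(points, R, center, extents):
    """Map world-space surface points into the unit cube of an oriented
    bounding box: R holds the box axes as columns, extents the half-sizes."""
    local = (points - center) @ R          # rotate into the box frame
    return local / (2 * extents) + 0.5     # normalize to [0, 1]^3

def splat_depth_color(canon, colors, res=256):
    """Orthographic splat along the box z-axis: grayscale depth + color image."""
    depth = np.ones((res, res), np.float32)          # far = 1.0
    image = np.zeros((res, res, 3), np.uint8)
    u = np.clip((canon[:, 0] * (res - 1)).astype(int), 0, res - 1)
    v = np.clip((canon[:, 1] * (res - 1)).astype(int), 0, res - 1)
    order = np.argsort(-canon[:, 2])                 # draw far-to-near
    depth[v[order], u[order]] = canon[order, 2]
    image[v[order], u[order]] = colors[order]
    return (depth * 255).astype(np.uint8), image

def tile_pair(depth_gray, color):
    """Tile one bounding-box pair side by side, as one frame of the tiled video."""
    return np.concatenate([np.dstack([depth_gray] * 3), color], axis=1)
```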

Publication date: 10-07-2014

Stereo Image Matching

Number: US20140192158A1
Assignee: MICROSOFT CORPORATION

The description relates to stereo image matching to determine depth of a scene as captured by images. More specifically, the described implementations can involve a two-stage approach where the first stage can compute depth at highly accurate but sparse feature locations. The second stage can compute a dense depth map using the first stage as initialization. This improves accuracy and robustness of the dense depth map.

Claims (excerpt):
1. A system comprising: a processor configured to receive corresponding images of a scene from a pair of cameras, the corresponding images including features added to the scene at wavelengths of light not visible to the human eye; the processor configured to implement a sparse component configured to employ a sparse location-based matching algorithm to locate the features in the corresponding images and to determine depths of individual features; and the processor configured to implement a dense component configured to employ a nearest neighbor field (NNF) stereo matching algorithm to the corresponding images utilizing the depths of the individual features to find corresponding pixels in the corresponding images.
2. The system of claim 1, wherein the system includes the pair of cameras.
3. The system of claim 2, wherein the system further includes at least one visible light camera.
4. The system of claim 1, wherein the wavelengths of light not visible to the human eye are infrared (IR) wavelengths and the pair of cameras are infrared (IR) cameras.
5. The system of claim 4, further comprising an IR projector configured to project the features on the scene.
6. The system of claim 5, wherein the IR projector includes a random feature generator, and wherein the features have a width of about 3 to about 5 pixels in the pair of IR cameras.
7. The system of claim 1, wherein the processor, the sparse component, and the dense component are manifest as a system on a chip.
8. The system of claim 1, wherein the processor is manifest as ...
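
The two-stage idea can be illustrated by the sparse stage alone: compute accurate depths at feature locations of a rectified pair, which then initialize the dense matcher. ORB matching below is an illustrative stand-in for the patent's sparse location-based algorithm (OpenCV assumed; focal length and baseline are placeholders):

```python
import cv2
import numpy as np

def sparse_feature_depths(left, right, focal_px=580.0, baseline_m=0.075):
    """Stage one: accurate depth at sparse feature locations of a rectified
    stereo pair, here via ORB matching (an illustrative stand-in)."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_l, des_l = orb.detectAndCompute(left, None)
    kp_r, des_r = orb.detectAndCompute(right, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    seeds = []
    for m in matcher.match(des_l, des_r):
        (xl, yl), (xr, yr) = kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt
        disparity = xl - xr
        if abs(yl - yr) < 1.0 and disparity > 0:   # enforce epipolar consistency
            seeds.append((xl, yl, focal_px * baseline_m / disparity))
    return np.array(seeds)                          # (x, y, depth) initialization
```

Stage two would seed a dense nearest neighbor field (NNF) stereo matcher with these (x, y, depth) estimates.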

Publication date: 30-05-2019

METHODS FOR STREAMING VISIBLE BLOCKS OF VOLUMETRIC VIDEO

Number: US20190166410A1
Assignee: Omnivor, Inc.

A processor-implemented method for streaming visible blocks of volumetric video to a client device during a predefined time period is provided. The method includes (i) receiving at least one block description file from a content server, (ii) processing each block description in the at least one block description file, to determine the visible blocks that are selected from a set of blocks, that are capable of being visible to a viewer of the client device during the predefined time period, based on a 3D position, size, and an orientation of each block in the set of blocks and at least one view parameter of a user of the client device, (iii) transmitting a request for the visible blocks, to the content server, and (iv) receiving the visible blocks as a visible blocks video at the client device.

Claims (excerpt):
1. A processor-implemented method for streaming a set of visible blocks of volumetric video that correspond to a predefined time period from a content server, the method comprising: receiving at least one block description file at a client device, wherein the at least one block description file comprises a set of block descriptions associated with the set of blocks for the predefined time period, wherein for each block in the set of blocks, a block description for each block comprises a 3D position, size, and an orientation of each block; processing each block description in the at least one block description file, at the client device, to determine the visible blocks that are selected from the set of blocks, wherein the visible blocks are a subset of the set of blocks, that are capable of being visible to a viewer of the client device within the predefined time period, wherein the visible blocks are determined based on the 3D position, size, and the orientation of each block in the set of blocks and at least one view parameter of a user of the client device; transmitting a request for the visible blocks, from the client device to the content server; and receiving the visible ...

Publication date: 16-10-2014

ACTIVE STEREO WITH ADAPTIVE SUPPORT WEIGHTS FROM A SEPARATE IMAGE

Number: US20140307047A1

The subject disclosure is directed towards stereo matching based upon active illumination, including using a patch in a non-actively illuminated image to obtain weights that are used in patch similarity determinations in actively illuminated stereo images. To correlate pixels in actively illuminated stereo images, adaptive support weights computations may be used to determine similarity of patches corresponding to the pixels. In order to obtain meaningful adaptive support weights for the adaptive support weights computations, weights are obtained by processing a non-actively illuminated ("clean") image.

Claims (excerpt):
1. A method comprising: processing a plurality of images, including actively illuminated stereo images, and a non-actively illuminated image; determining weights for a patch in the non-actively illuminated image that corresponds to patches in the actively illuminated stereo images, in which each of the patches is based upon a reference pixel in one of the images; and using the support weights to determine a similarity score between the corresponding patches in the actively illuminated stereo images.
2. The method of claim 1, wherein the images are of a scene actively illuminated with infrared (IR) light, and wherein capturing the non-actively illuminated image includes capturing the scene as a visible spectrum image.
3. The method of claim 1, wherein the images are of a scene actively illuminated with visible spectrum light, and wherein capturing the non-actively illuminated image includes capturing the scene as an infrared image.
4. The method of claim 1, wherein the images are of a scene actively illuminated with infrared light in a part of the infrared spectrum, and wherein capturing the non-actively illuminated image includes using a notch filter to capture the scene with the part of the spectrum that contains the active illumination removed.
5. The method wherein capturing the plurality of images comprises capturing one actively illuminated stereo image and the non- ...
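
A sketch of the key trick: compute adaptive support weights (in the style of Yoon–Kweon) from the clean image patch, then use them to weight the cost between the actively illuminated patches. The two gamma parameters are illustrative defaults:

```python
import numpy as np

def support_weights(clean_patch, gamma_c=10.0, gamma_s=7.0):
    """Adaptive support weights from a clean (non-actively-illuminated) patch:
    pixels similar in intensity and close to the center weigh more."""
    k = clean_patch.shape[0]
    c = k // 2
    dc = np.abs(clean_patch.astype(np.float32) - clean_patch[c, c])  # intensity term
    yy, xx = np.mgrid[:k, :k]
    ds = np.hypot(yy - c, xx - c)                                    # spatial term
    return np.exp(-dc / gamma_c - ds / gamma_s)

def weighted_cost(left_patch, right_patch, weights):
    """Similarity score between actively illuminated patches, weighted by the
    support weights taken from the clean image (lower = more similar)."""
    diff = np.abs(left_patch.astype(np.float32) - right_patch.astype(np.float32))
    return float((weights * diff).sum() / weights.sum())
```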

Publication date: 16-10-2014

Multimodal Foreground Background Segmentation

Number: US20140307056A1

The subject disclosure is directed towards a framework that is configured to allow different background-foreground segmentation modalities to contribute towards segmentation. In one aspect, pixels are processed based upon RGB background separation, chroma keying, IR background separation, current depth versus background depth and current depth versus threshold background depth modalities. Each modality may contribute as a factor that the framework combines to determine a probability as to whether a pixel is foreground or background. The probabilities are fed into a global segmentation framework to obtain a segmented image.

Claims (excerpt):
1. A system comprising a foreground background segmentation framework, including a multimodal segmentation algorithm configured to accept contribution factors from different segmentation modalities and process the contribution factors to determine foreground versus background data for each element of an image that is useable to determine whether that element is a foreground or background element.
2. The system wherein at least one element comprises a pixel.
3. The system wherein the foreground versus background data comprises a probability score.
4. The system of claim 1, wherein the different segmentation modalities correspond to any of: a red, green, blue (RGB) background subtraction, chroma keying, infrared (IR) background subtraction, a current computed depth versus previously computed background depth evaluation, or a current depth versus threshold depth evaluation.
5. The system wherein the foreground background segmentation framework is further configured to output the foreground versus background data for each element to a global binary segmentation algorithm.
6. The system wherein the framework is configured to apply a weight for each contribution factor.
7. The system wherein the framework is configured to select a weight set from among a plurality of weight sets to apply the weight for each ...
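
A minimal sketch of the fusion step: each modality yields a per-pixel foreground score, and the framework combines them with per-modality weights into a probability handed to the global segmenter. The weights and threshold below are placeholders:

```python
import numpy as np

def fuse_modalities(scores, weights):
    """Combine per-modality foreground scores (each HxW in [0, 1], e.g. from
    RGB background subtraction, chroma keying, IR subtraction, depth tests)
    into one per-pixel foreground probability via a normalized weighted sum."""
    weights = np.asarray(weights, dtype=np.float32)
    stacked = np.stack(scores).astype(np.float32)          # M x H x W
    return np.tensordot(weights, stacked, axes=1) / weights.sum()

# Illustrative usage with placeholder weights per modality:
# prob = fuse_modalities([rgb_bg, chroma, ir_bg, depth_vs_bg, depth_vs_thresh],
#                        weights=[1.0, 0.5, 1.0, 2.0, 1.5])
# segmented = prob > 0.5   # a stand-in for the global binary segmentation step
```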

Publication date: 16-10-2014

ROBUST STEREO DEPTH SYSTEM

Number: US20140307058A1

The subject disclosure is directed towards a high resolution, high frame rate, robust stereo depth system. The system provides depth data in varying conditions based upon stereo matching of images, including actively illuminated IR images in some implementations. A clean IR or RGB image may be captured and used with any other captured images in some implementations. Clean IR images may be obtained by using a notch filter to filter out the active illumination pattern. IR stereo cameras, a projector, broad spectrum IR LEDs and one or more other cameras may be incorporated into a single device, which may also include image processing components to internally compute depth data in the device for subsequent output.

Claims (excerpt):
1. A system comprising: one or more infrared (IR) cameras configured to capture one or more images of a scene illuminated with structured light, and a projector configured to output a structured light illumination pattern at an IR frequency or frequencies that are capable of being filtered out by a notch filter while being sensed by the one or more IR cameras.
2. The system further comprising an IR camera coupled to a notch filter to capture a clean IR image of the scene.
3. The system further comprising a visible light spectrum camera configured to capture a color image of the scene.
4. The system of claim 1, further comprising an IR camera coupled to a notch filter to capture a clean IR image of the scene, and a visible light spectrum camera configured to capture a color image of the scene, in which the IR camera and visible light spectrum camera are separate cameras, or in which the IR camera and visible light spectrum camera are combined into a single camera.
5. The system further comprising at least one IR light source that outputs at least some light that is not filtered out by the notch filter.
6. The system wherein at least one IR camera is coupled to a narrow bandpass filter.
7. The system wherein at least two IR cameras ...

Publication date: 16-10-2014

EXTRACTING TRUE COLOR FROM A COLOR AND INFRARED SENSOR

Number: US20140307098A1

The subject disclosure is directed towards color correcting for infrared (IR) components that are detected in the R, G, B parts of a sensor photosite. A calibration process determines true R, G, B based upon obtaining or estimating IR components in each photosite, such as by filtering techniques and/or using different IR lighting conditions. A set of tables or curves obtained via offline calibration model the correction data needed for online correction of an image.

Claims (excerpt):
1. A method comprising calibrating a color correction transform that corrects for infrared light in at least one of red, green or blue parts of a photosite, including: capturing, via a sensor comprised of photosites, ground truth color data as raw image data; capturing, via the sensor through a long pass filter, the ground truth color data as long-pass-filtered image data; subtracting the long-pass-filtered image data for at least one of the red, green or blue parts of a photosite of the sensor from the raw image data for each corresponding part of the photosite to obtain true color data values for the photosite; and using data corresponding to the true color data values to produce one or more tables or curves that are accessible during online usage to color correct an image.
2. The method of claim 1, further comprising accessing data in the one or more tables or curves to color correct an online-captured image.
3. The method wherein the one or more tables or curves correspond to one or more three-by-three matrices.
4. The method wherein the one or more tables or curves correspond to one or more three-by-four matrices.
5. The method wherein capturing the ground truth color data comprises capturing an image of a color chart.
6. The method of claim 5, further comprising illuminating the color chart with infrared light.
7. The method of claim 1, further comprising normalizing the true color data values to provide the data corresponding to the true color data values.
8. The method of ...
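
The offline calibration amounts to fitting a linear correction from measured, IR-contaminated photosite responses to ground-truth color. A sketch fitting one of the claimed three-by-four matrices by least squares; the sampling of chart patches into arrays is assumed done elsewhere:

```python
import numpy as np

def fit_color_correction(measured_rgbi, true_rgb):
    """Fit a 3x4 linear map from measured (R, G, B, IR) photosite responses to
    ground-truth RGB, in the spirit of the claimed three-by-four matrices.
    measured_rgbi: Nx4 samples (e.g. from a color chart), true_rgb: Nx3."""
    M, _, _, _ = np.linalg.lstsq(measured_rgbi, true_rgb, rcond=None)
    return M.T                                    # 3x4 correction matrix

def correct_pixel(M, rgbi):
    """Online use: recover true color for one (R, G, B, IR) measurement."""
    return M @ np.asarray(rgbi, dtype=np.float32)
```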

Publication date: 16-10-2014

ACTIVE STEREO WITH SATELLITE DEVICE OR DEVICES

Number: US20140307953A1

The subject disclosure is directed towards communicating image-related data between a base station and/or one or more satellite computing devices, e.g., tablet computers and/or smartphones. A satellite device captures image data and communicates image-related data (such as the images or depth data processed therefrom) to another device, such as a base station. The receiving device uses the image-related data to enhance depth data (e.g., a depth map) based upon the image data captured from the satellite device, which may be physically closer to something in the scene than the base station, for example. To more accurately capture depth data in various conditions, an active illumination pattern may be projected from the base station or another external projector, whereby satellite units may use the other source's active illumination and thereby need not consume internal power to benefit from active illumination.

Claims (excerpt):
1. A method comprising: receiving image-related data from one device at another device; and enhancing a first set of depth data or color data, or both, based at least in part upon the image-related data and pose information of the one device.
2. The method of claim 1, wherein the one device comprises a satellite device and the other device comprises a base station, and further comprising projecting a light pattern from the base station.
3. The method of claim 1, wherein the one device comprises a satellite device, and further comprising capturing at least one image at the other device, and using the at least one image to determine the pose information of the satellite device.
4. The method of claim 1, further comprising capturing at least one image at the other device and using the at least one image to compute the first set of depth data.
5. The method wherein enhancing the first set of depth data comprises replacing at least some of the data in the first set with other depth data corresponding to at least part of the image-related data. ...

Publication date: 13-08-2015

ENVIRONMENT-DEPENDENT ACTIVE ILLUMINATION FOR STEREO MATCHING

Number: US20150229915A1
Assignee: MICROSOFT CORPORATION

The subject disclosure is directed towards controlling the intensity of illumination of a scene or part of a scene, including to conserve illumination power. Quality of depth data in stereo images may be measured with different illumination states; environmental conditions, such as ambient light and natural texture, may affect the quality. The illumination intensity may be controllably varied to obtain sufficient quality while conserving power. The control may be directed to one or more regions of interest corresponding to an entire scene or part of a scene.

Claims (excerpt):
1. A system comprising a controller, the controller coupled to a projector set that projects a light pattern towards a scene, the projector set comprising one or more projection elements, the controller configured to receive data corresponding to environmental conditions and control the projector set to selectively illuminate at least part of the scene based upon the data.
2. The system wherein the data corresponding to the environmental conditions comprise at least part of stereo image data.
3. The system of claim 2, wherein the data corresponding to the environmental conditions comprises quality data, and further comprising an image processing component configured to process the at least part of the stereo images to obtain a quality measure as the data corresponding to the environmental conditions.
4. The system of claim 3, wherein the quality data corresponds to texture, or variance data, or both texture and variance data.
5. The system of claim 4, wherein the image processing component generates a depth map from at least part of the stereo images, including by combining different parts of the stereo image data based upon the quality measure.
6. The system wherein the controller is configured to turn off and turn on at least one of the one or more projection elements.
7. The system of claim 1, wherein the controller is configured to ramp power up or down, or both up and down, to at least ...
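
A sketch of the control loop: measure depth quality from the stereo output and ramp projector power up when quality is poor, down when ambient light or natural texture suffices. The quality metric, target, and step size are illustrative assumptions:

```python
import numpy as np

def quality_measure(disparity):
    """Fraction of pixels with a valid disparity: a simple stand-in for the
    texture/variance quality data named in the claims."""
    return float(np.count_nonzero(disparity > 0)) / disparity.size

def update_projector_power(power, disparity, target=0.9, step=0.05):
    """Ramp illumination power up when stereo quality is poor and down when
    the scene is already well conditioned, conserving illumination power."""
    q = quality_measure(disparity)
    if q < target:
        power = min(1.0, power + step)   # more active illumination needed
    elif q > target + 0.05:
        power = max(0.0, power - step)   # quality has headroom; save power
    return power
```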

Publication date: 03-09-2020

Systems and methods for generating a visibility counts per pixel of a texture atlas associated with a viewer telemetry data

Number: US20200279385A1
Assignee: Omnivor Inc

A processor-implemented method of generating a three-dimensional (3D) volumetric video with an overlay representing visibility counts per pixel of a texture atlas, associated with viewer telemetry data, is provided. The method includes (i) capturing the viewer telemetry data, (ii) determining a visibility of each pixel in the texture atlas associated with a 3D content based on the viewer telemetry data, (iii) generating at least one visibility counts per pixel of the texture atlas based on the visibility of each pixel in the texture atlas, and (iv) generating one of: (a) the 3D volumetric video with the overlay of at least one heat map associated with the viewer telemetry data, using the at least one visibility counts per pixel, or (b) a curated selection of the 3D volumetric content based on the viewer telemetry data, using the visibility counts per pixel.
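
Steps (ii)–(iii) can be sketched as accumulating boolean per-frame visibility masks over the atlas and normalizing the counts into a heat-map overlay. The mask inputs and red-channel rendering are illustrative assumptions:

```python
import numpy as np

def visibility_counts(visibility_masks, atlas_shape):
    """Accumulate per-pixel visibility counts of a texture atlas: each mask is
    a boolean HxW array marking atlas pixels visible to a viewer in one frame
    of telemetry."""
    counts = np.zeros(atlas_shape, dtype=np.int64)
    for mask in visibility_masks:
        counts += mask
    return counts

def heat_map_overlay(counts):
    """Normalize counts to [0, 255] for a red-channel heat-map overlay on the
    volumetric video's texture atlas."""
    peak = counts.max()
    heat = (255.0 * counts / peak).astype(np.uint8) if peak else np.zeros(counts.shape, np.uint8)
    overlay = np.zeros(counts.shape + (3,), dtype=np.uint8)
    overlay[..., 0] = heat                      # hot = frequently viewed pixels
    return overlay
```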

Publication date: 26-09-2019

System and method for compressing and decompressing surface data of a 3-dimensional object using an image codec

Number: US20190295293A1
Assignee: Omnivor Inc

A processor-implemented method for compressing surface data of a 3-dimensional object in a global digital space, using an image encoder that supports an image data compression algorithm, the image encoder being coupled to a transmitter, is provided. The method includes the steps of (i) decomposing the surface data into at least one surface representation that is encoded in an oriented bounding box, (ii) transforming the oriented bounding box into a canonical camera representation to obtain canonical coordinates for the at least one surface representation, (iii) converting each of the at least one surface representation into at least one bounding box image pair that includes a grayscale image representing depth, and a color image, and (iv) tiling the at least one bounding box image pair to produce a tiled bounding box image.

Publication date: 12-12-2019

MULTIMODAL FOREGROUND BACKGROUND SEGMENTATION

Number: US20190379873A1

The subject disclosure is directed towards a framework that is configured to allow different background-foreground segmentation modalities to contribute towards segmentation. In one aspect, pixels are processed based upon RGB background separation, chroma keying, IR background separation, current depth versus background depth and current depth versus threshold background depth modalities. Each modality may contribute as a factor that the framework combines to determine a probability as to whether a pixel is foreground or background. The probabilities are fed into a global segmentation framework to obtain a segmented image.

Claims (excerpt):
1. A system comprising a foreground background segmentation framework, including a multimodal segmentation algorithm configured to accept contribution factors from different segmentation modalities and process the contribution factors to determine foreground versus background data for each element of an image that is useable to determine whether that element is a foreground or background element.
2. The system wherein at least one element comprises a pixel.
3. The system wherein the foreground versus background data comprises a probability score.
4. The system of claim 1, wherein the different segmentation modalities correspond to any of: a red, green, blue (RGB) background subtraction, chroma keying, infrared (IR) background subtraction, a current computed depth versus previously computed background depth evaluation, or a current depth versus threshold depth evaluation.
5. The system wherein the foreground background segmentation framework is further configured to output the foreground versus background data for each element to a global binary segmentation algorithm.
6. The system wherein the framework is configured to apply a weight for each contribution factor.
7. The system wherein the framework is configured to select a weight set from among a plurality of weight sets to apply the weight for each ...

Publication date: 23-04-2019

Extracting true color from a color and infrared sensor

Number: US10268885B2
Assignee: Microsoft Technology Licensing LLC

The subject disclosure is directed towards color correcting for infrared (IR) components that are detected in the R, G, B parts of a sensor photosite. A calibration process determines true R, G, B based upon obtaining or estimating IR components in each photosite, such as by filtering techniques and/or using different IR lighting conditions. A set of tables or curves obtained via offline calibration model the correction data needed for online correction of an image.

Publication date: 18-04-2013

Generating free viewpoint video using stereo imaging

Number: WO2013056188A1
Assignee: MICROSOFT CORPORATION

Methods and systems for generating free viewpoint video using an active infrared (IR) stereo module are provided. The method includes computing a depth map for a scene using an active IR stereo module. The depth map may be computed by projecting an IR dot pattern onto the scene, capturing stereo images from each of two or more synchronized IR cameras, detecting dots within the stereo images, computing feature descriptors corresponding to the dots in the stereo images, computing a disparity map between the stereo images, and generating the depth map using the disparity map. The method also includes generating a point cloud for the scene using the depth map, generating a mesh of the point cloud, and generating a projective texture map for the scene from the mesh of the point cloud. The method further includes generating the video for the scene using the projective texture map.

Publication date: 27-10-2020

Active stereo with satellite device or devices

Number: CA2907895C
Assignee: Microsoft Technology Licensing LLC

The subject disclosure is directed towards communicating image-related data between a base station and/or one or more satellite computing devices, e.g., tablet computers and/or smartphones. A satellite device captures image data and communicates image-related data (such as the images or depth data processed therefrom) to another device, such as a base station. The receiving device uses the image-related data to enhance depth data (e.g., a depth map) based upon the image data captured from the satellite device, which may be physically closer to something in the scene than the base station, for example. To more accurately capture depth data in various conditions, an active illumination pattern may be projected from the base station or another external projector, whereby satellite units may use the other source's active illumination and thereby need not consume internal power to benefit from active illumination.

Publication date: 03-02-2021

Extracting true color from a color and infrared sensor

Number: EP2987320B1
Assignee: Microsoft Technology Licensing LLC

Publication date: 23-10-2014

Active stereo with satellite device or devices

Number: CA2907895A1
Assignee: Microsoft Technology Licensing LLC

The subject disclosure is directed towards communicating image-related data between a base station and/or one or more satellite computing devices, e.g., tablet computers and/or smartphones. A satellite device captures image data and communicates image-related data (such as the images or depth data processed therefrom) to another device, such as a base station. The receiving device uses the image-related data to enhance depth data (e.g., a depth map) based upon the image data captured from the satellite device, which may be physically closer to something in the scene than the base station, for example. To more accurately capture depth data in various conditions, an active illumination pattern may be projected from the base station or another external projector, whereby satellite units may use the other source's active illumination and thereby need not consume internal power to benefit from active illumination.

Publication date: 04-08-2015

Generating a depth map

Number: US9098908B2
Assignee: Microsoft Technology Licensing LLC

Methods and systems for generating a depth map are provided. The method includes projecting an infrared (IR) dot pattern onto a scene. The method also includes capturing stereo images from each of two or more synchronized IR cameras, detecting a number of dots within the stereo images, computing a number of feature descriptors for the dots in the stereo images, and computing a disparity map between the stereo images. The method further includes generating a depth map for the scene using the disparity map.

Publication date: 10-06-2020

System and method for compressing and decompressing time-varying surface data of a 3-dimensional object using a video codec

Number: EP3662660A1
Assignee: Omnivor Inc

A processor-implemented method for compressing time-varying surface data of a 3-dimensional object in a global digital space having frames, using a video encoder that supports a video data compression algorithm, the video encoder being coupled to a transmitter, is provided. The method includes the steps of (i) decomposing the time-varying surface data into at least one surface representation that is encoded in an oriented bounding box, (ii) transforming the oriented bounding box into a canonical camera representation for each frame to obtain canonical coordinates for the at least one surface representation, (iii) converting each of the at least one surface representation into at least one bounding box video pair that includes a grayscale video representing depth, and a color video, and (iv) tiling the at least one bounding box video pair for each frame to produce a tiled bounding box video.
