Total found: 7635. Displayed: 100.
05-01-2012 publication date

Image processing apparatus, method and recording medium

Number: US20120002897A1
Author: Akira Hamada
Assignee: Casio Computer Co Ltd

A filter coefficient calculating unit 81 sets as a filter window a range, which contains plural coordinates including an attention coordinate “p” in an initial alpha map, and uses a pixel value Ip at a coordinate of an original image corresponding to a coordinate “p” in the filter window and a pixel value Iq at a coordinate of the original image corresponding to a coordinate “q” in the filter window to calculate a filter coefficient Kq at the coordinate “q” on a coordinate to coordinate basis in the filter window. That is, a filter coefficient set is created, which consists of filter coefficients calculated on a coordinate to coordinate basis in the filter window. A weighted-average filtering unit 83 uses the filter coefficient set to apply a weighted-average filtering operation on each pixel value in the filter window.
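The coefficient computation described above, where each weight Kq depends on the guide-image values Ip and Iq inside the window, resembles a joint bilateral filter applied to the alpha map. A minimal 1-D sketch under that reading (the Gaussian range kernel, sigma value, and all names are assumptions, not the patent's exact formula):

```python
import math

def filter_coefficients(guide, p, window, sigma=10.0):
    """Compute a weight K_q for each coordinate q in the window from the
    similarity of guide-image values I_p and I_q (Gaussian kernel assumed)."""
    Ip = guide[p]
    return {q: math.exp(-((Ip - guide[q]) ** 2) / (2 * sigma ** 2)) for q in window}

def weighted_average(alpha, coeffs):
    """Apply the coefficient set as a normalized weighted average of alpha values."""
    total = sum(coeffs.values())
    return sum(alpha[q] * k for q, k in coeffs.items()) / total

# Toy 1-D example: guide (original image) values and an initial alpha map,
# both indexed by coordinate. Coordinates 0 and 1 look alike in the guide,
# so the alpha at 0 is pulled toward the alpha at 1 but not toward 2.
guide = {0: 100, 1: 102, 2: 200}
alpha = {0: 1.0, 1: 0.0, 2: 0.0}
coeffs = filter_coefficients(guide, p=0, window=[0, 1, 2])
smoothed = weighted_average(alpha, coeffs)
```

The effect is that alpha-map edges follow intensity edges in the original image: dissimilar guide pixels receive near-zero coefficients.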

09-02-2012 publication date

Apparatus and method for augmented reality

Number: US20120032977A1
Assignee: Bizmodeline Co Ltd

Disclosed is a method for augmented reality. A real world image including a marker is generated, the marker is detected from the real world image, an object image corresponding to the detected marker is combined with the real world image, and the combined image is displayed.

09-02-2012 publication date

Method and apparatus for correlating and navigating between a live image and a prerecorded panoramic image

Number: US20120033032A1
Author: Mikko Tapio Kankainen
Assignee: Nokia Oyj

An approach is provided for correlating and navigating between a live camera image and a prerecorded panoramic image. A mapping and augmented reality application correlates at least one live image with a prerecorded panoramic image, when a first location of a device used to capture the at least one live image substantially matches or falls within a predetermined proximity of a second location of a device used to capture the panoramic prerecorded image. The mapping and augmented reality application causes, at least in part, alternating of the at least one live image and the prerecorded panoramic image in a presentation on a screen of the device capturing the at least one live image.
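The correlation above is gated on the two capture locations substantially matching or falling within a predetermined proximity. A toy sketch of that gate (planar coordinates in metres and the 10 m radius are assumptions for illustration):

```python
import math

def within_proximity(loc_a, loc_b, max_meters=10.0):
    """True when two capture locations substantially match, i.e. fall
    within the predetermined proximity of each other."""
    return math.dist(loc_a, loc_b) <= max_meters

# Two capture points 5 m apart pass the gate; 50 m apart do not.
near = within_proximity((0.0, 0.0), (3.0, 4.0))
far = within_proximity((0.0, 0.0), (30.0, 40.0))
```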

01-03-2012 publication date

System for background subtraction with 3d camera

Number: US20120051631A1
Assignee: University of Illinois

A system for background image subtraction includes a computing device coupled with a 3D video camera, a processor of the device programmed to receive a video feed from the camera containing images of one or more subject that include depth information. The processor, for an image: segments pixels and corresponding depth information into three different regions including foreground (FG), background (BG), and unclear (UC); categorizes UC pixels as FG or BG using a function that considers the color and background history (BGH) information associated with the UC pixels and the color and BGH information associated with pixels near the UC pixels; examines the pixels marked as FG and applies temporal and spatial filters to smooth boundaries of the FG regions; constructs a new image by overlaying the FG regions on top of a new background; displays a video feed of the new image in a display device; and continually maintains the BGH.
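The UC-pixel step above resolves each unclear pixel to FG or BG from its color and the background history. One way that decision could look on scalar intensities (the absolute-difference metric, the tolerance, and the use of a single FG reference value are simplifying assumptions, not the patent's method):

```python
def classify_unclear(pixel_color, bg_history_color, fg_reference_color, tol=30):
    """Classify an 'unclear' (UC) pixel as FG or BG by comparing its color
    against the background-history (BGH) color and a nearby FG color."""
    d_bg = abs(pixel_color - bg_history_color)   # distance to background history
    d_fg = abs(pixel_color - fg_reference_color) # distance to nearby foreground
    return "BG" if d_bg <= min(d_fg, tol) else "FG"

# A pixel close to the remembered background stays BG; one close to
# nearby foreground colors is promoted to FG.
a = classify_unclear(120, bg_history_color=118, fg_reference_color=200)
b = classify_unclear(195, bg_history_color=118, fg_reference_color=200)
```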

31-05-2012 publication date

Storage medium having stored thereon image processing program, image processing apparatus, image processing system, and image processing method

Number: US20120133676A1
Author: Shinji Kitahara
Assignee: Nintendo Co Ltd

At least one virtual object for which a predetermined color is set is placed in a virtual world. In a captured image captured by a real camera, at least one pixel corresponding to the predetermined color is detected, using color information including at least one selected from the group including RGB values, a hue, a saturation, and a brightness of each pixel of the captured image. When the pixel corresponding to the predetermined color has been detected, a predetermined process is performed on the virtual object for which the predetermined color is set. An image of the virtual world where at least the virtual object is placed is displayed on a display device.
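Detecting pixels of the predetermined color from hue, saturation, and brightness, as the abstract describes, can be sketched with the standard RGB-to-HSV conversion (the thresholds are illustrative assumptions):

```python
import colorsys

def matches_color(rgb, target_hue, hue_tol=0.05, min_sat=0.3):
    """Decide whether a pixel matches the predetermined color, using hue and
    saturation derived from its RGB values (0-255 per channel)."""
    h, s, v = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb))
    return abs(h - target_hue) <= hue_tol and s >= min_sat

# Pure red matches a red target hue; a desaturated gray does not.
red_hit = matches_color((255, 0, 0), target_hue=0.0)
gray_hit = matches_color((128, 128, 128), target_hue=0.0)
```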

07-06-2012 publication date

Method and system for testing closed caption content of video assets

Number: US20120143606A1
Author: Hung John Pham
Assignee: AT&T INTELLECTUAL PROPERTY I LP

A method and system for monitoring video assets provided by a multimedia content distribution network includes testing closed captions provided in output video signals. A video and audio portion of a video signal are acquired during a time period that a closed caption occurs. A first text string is extracted from a text portion of a video image, while a second text string is extracted from speech content in the audio portion. A degree of matching between the strings is evaluated based on a threshold to determine when a caption error occurs. Various operations may be performed when the caption error occurs, including logging caption error data and sending notifications of the caption error.
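The match-against-threshold step can be sketched with a generic string-similarity ratio; difflib here is an illustrative stand-in for whatever matching metric the system actually uses, and the 0.8 threshold is an assumption:

```python
from difflib import SequenceMatcher

def caption_error(ocr_text, speech_text, threshold=0.8):
    """Compare the text extracted from the video image with the text
    recognized from the audio; report a caption error when the degree
    of matching falls below the threshold."""
    ratio = SequenceMatcher(None, ocr_text.lower(), speech_text.lower()).ratio()
    return ratio < threshold

ok = caption_error("hello world", "Hello world")          # near-identical strings
bad = caption_error("hello world", "completely different text")
```

When `caption_error` returns True, the system would log the error data and send notifications, as the abstract describes.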

28-06-2012 publication date

Image processing apparatus, image processing method, and recording medium

Number: US20120163712A1
Author: Mitsuyasu Nakajima
Assignee: Casio Computer Co Ltd

Disclosed is an image processing apparatus including: an obtaining section which obtains a subject existing image in which a subject and a background exist; a first specification section which specifies a plurality of image regions in the subject existing image; a comparison section which compares a representative color of each of the image regions with a predetermined color; a generation section which generates an extraction-use background color based on a comparison result of the comparison section; and a second specification section which specifies a subject constituent region constituting the subject and a background constituent region constituting the background in the subject existing image based on the extraction-use background color.

12-07-2012 publication date

Method and apparatus for annotating image in digital camera

Number: US20120179676A1
Author: Tae-Suh Park
Assignee: SAMSUNG ELECTRONICS CO LTD

A method and apparatus for annotating an image of a digital camera are disclosed. The method includes taking a photograph, transmitting a message including an image and an identifier of the photograph to a mobile terminal, receiving a response message including metadata and the identifier, determining an image file to be updated by using the identifier, and updating at least one metadata area of the image file with the metadata of the response message. The apparatus includes a message transmitting/receiving unit which transmits to a mobile device a message including an image and an identifier, and receives a response message from the mobile terminal including metadata and the identifier. An annotation unit determines an image file to be updated based on the response message received from the message transmitting/receiving unit by using the identifier.

09-08-2012 publication date

Signal processing apparatus

Number: US20120201422A1
Author: Nobuyuki Tsukamoto
Assignee: Canon Inc

A signal processing apparatus for displaying an input image in the state in which a part of the image is enlarged displays an enlarged image obtained by enlarging a part of a designated object in the input image, so that the enlarged image is superimposed at a position in accordance with the position of the designated object.

30-08-2012 publication date

Method and apparatus for composition of subtitles

Number: US20120219266A1
Assignee: Thomson Licensing SAS

Embodiments of the invention include a subtitling format encompassing elements of enhanced syntax and semantics to provide improved animation capabilities. The disclosed elements improve subtitle performance without stressing the available subtitle bitrate. This will become essential for authoring content of high-end HDTV subtitles in pre-recorded format, which can be broadcast or stored on high-capacity optical media, e.g. the Blu-ray Disc. Embodiments of the invention include improved authoring possibilities for content production to animate subtitles. For subtitles that are separate from AV material, a method includes using one or more superimposed subtitle layers, and displaying only a selected part of the transferred subtitles at a time. Further, colors of a selected part of the displayed subtitles may be modified, e.g. highlighted.

06-12-2012 publication date

Imaging device, display method, and computer-readable recording medium

Number: US20120307101A1
Assignee: Individual

An imaging device includes an imaging unit that images a subject and that continuously creates image data on the subject; a display unit that chronologically displays live view images associated with the image data created by the imaging unit; a face detector that detects the face of a subject included in the live view image; a trimming unit that creates a face image by cutting out, from the live view images, a face area including the face of the subject detected by the face detector; and a display controller that displays the face image created by the trimming unit at a display position specified in a display area of the display unit.

10-01-2013 publication date

Digital broadcast receiver and method for processing caption thereof

Number: US20130014180A1
Author: Tae Jin Park
Assignee: LG ELECTRONICS INC

A digital cable broadcast receiver and a method for automatically processing caption data of various standards and types is disclosed. The digital broadcast receiver includes: a demultiplexer for dividing a received broadcast stream into video data, audio data, and supplementary information; a controller for determining whether caption data included in the video data is digital caption data or analog caption data on the basis of caption information included in the supplementary information, and outputting a control signal according to a result of the determination; a digital caption decoder for extracting and decoding digital caption data from the video data according to the control signal; and an analog caption decoder for extracting and decoding analog caption data from the video data according to the control signal.

24-01-2013 publication date

Image processing apparatus, image processing method, and storage medium

Number: US20130022291A1
Author: Naoki Sumi
Assignee: Canon Inc

An image processing apparatus specifies, based on a reference image out of a plurality of images and a plurality of comparative images out of the plurality of images, a difference region, in each of the plurality of comparative images, including an object subjected to combination corresponding to a difference from a reference image, determines, based on a plurality of difference regions specified in the plurality of comparative images, an object region corresponding to an object included in the reference image, and combines, based on the determined object region in the reference image and the plurality of difference regions in the plurality of comparative images, with the reference image, the objects subjected to combination included in the plurality of difference regions so that an object corresponding to the object region is included in the reference image with which the plurality of difference regions are combined.

07-02-2013 publication date

System for enhancing video from a mobile camera

Number: US20130033598A1
Assignee: Individual

A system for enhancing video can insert graphics, in perspective, into the video where the positions of the graphics are dependent on sensor measurements of the locations and/or attitudes of objects and the movable camera capturing the video.

07-02-2013 publication date

Method of image preview in a digital image pickup apparatus

Number: US20130033615A1
Author: Philippe Ecrement
Assignee: STMicroelectronics Grenoble 2 SAS

The present disclosure relates to a method of image preview in an image pickup apparatus. One embodiment is directed to a method that includes acquiring from an image sensor an image of a scene observed by an image sensor of the apparatus, generating a preview image obtained by applying to the acquired image a resolution reduction process to adapt it to the resolution of a display screen of a viewfinder of the image pickup apparatus, displaying the preview image on the display screen, generating an image of an area of the scene by extracting an area from the acquired image, and displaying the area image superimposed on the preview image or alternately with the preview image, the area image displayed having a resolution higher than that of the preview image and inferior or equal to that of the acquired image.

21-02-2013 publication date

Script-based video rendering

Number: US20130044823A1
Assignee: Destiny Software Productions Inc

Systems and methods are provided for cross-platform rendering of video content on a user-computing platform that is one type of a plurality of different user-computing platform types. A script is transmitted to the user-computing platform and is interpreted by an application program compiled to operate on any one of the plurality of user-computing platform types. The script is configured to cause the script to be interpreted by the application program operating on the user-computing platform to: decode encoded video data received by the user-computing platform into decoded video data comprising a plurality of frame images; and render the decoded video data by displaying the frame images. Rendering the video data by displaying the frame images comprises alpha-blending at least one pair of frame images together.
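Alpha-blending a pair of frame images, as in the final rendering step, is a per-pixel weighted sum. A minimal sketch on flat 8-bit grayscale lists (the uniform alpha is a simplification; real frames carry per-pixel alpha and color channels):

```python
def alpha_blend(frame_a, frame_b, alpha=0.5):
    """Blend two equally sized frames pixel by pixel:
    out = a * alpha + b * (1 - alpha)."""
    return [round(pa * alpha + pb * (1 - alpha))
            for pa, pb in zip(frame_a, frame_b)]

# Blending a black/white pair with a white frame at alpha 0.5.
blended = alpha_blend([0, 255], [255, 255], alpha=0.5)
```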

28-02-2013 publication date

Multiple-input configuration and playback video enhancement

Number: US20130050581A1
Assignee: Disney Enterprises Inc

A system and method for delaying a first version of a video feed from a video camera according to a first delay to generate a second version of the feed, supplying the first version to an insertion system, wherein the insertion system inserts an indicia into the first version to create an enhanced version with a second delay substantially matching the first delay and supplying the enhanced version and the second version to a production switcher, wherein the enhanced version and the second version supplied to the production switcher are synchronized with one another.
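Matching the insertion system's processing delay amounts to buffering the clean feed by the same number of frames, so both versions reach the production switcher in sync. A sketch of that delay line (the frame-count granularity is an assumption):

```python
from collections import deque

class DelayLine:
    """Delays a stream of frames by a fixed count: each push returns the
    frame submitted delay_frames pushes earlier (None until the line fills)."""
    def __init__(self, delay_frames):
        self.buf = deque([None] * delay_frames)

    def push(self, frame):
        self.buf.append(frame)
        return self.buf.popleft()

# A 2-frame delay: the first frame emerges on the third push.
line = DelayLine(2)
out1 = line.push("f1")
out2 = line.push("f2")
out3 = line.push("f3")
```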

18-04-2013 publication date

METHOD AND APPARATUS FOR DISPLAYING VIDEO IMAGE

Number: US20130093955A1
Author: Wang Pulin
Assignee: Huawei Device Co.,Ltd.

A method for displaying a video image includes acquiring foreground information about a video image to be output, where the foreground information includes information that defines a size of a foreground picture. The method further includes determining an adjustment coefficient for the foreground picture according to the size of the foreground picture, a size and a resolution of a display device, and a preset adjustment rule. The preset adjustment rule indicates that the product of the adjustment coefficient for the foreground picture and a zooming multiple for display on the display device is equal to a fixed constant. The method also includes adjusting the video image to be output according to the adjustment coefficient for the foreground picture, and outputting, to the display device for display, the video image after adjustment.

1. A method for displaying a video image, comprising: acquiring foreground information related to a video image to be output, wherein the foreground information comprises information that defines a size of a foreground picture; determining an adjustment coefficient for the foreground picture according to the size of the foreground picture, a size and a resolution of a display device, and a preset adjustment rule, wherein the preset adjustment rule indicates that the product of the adjustment coefficient for the foreground picture and a zooming multiple for display on the display device is equal to a fixed constant; adjusting the video image to be output according to the adjustment coefficient for the foreground picture; and subsequently outputting the video image to the display device for display.

2. The method according to claim 1, wherein before outputting the video image to the display device, the method further comprises: cutting or filling the video image so that a size of the video image after adjustment is equal to a product of a size of the video image to be output and the fixed constant.

3. The method according to claim 1, ...
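The preset rule stated in the abstract, coefficient × zooming multiple = fixed constant, pins the adjustment coefficient down to a simple division (the constant's actual value is an assumption):

```python
def adjustment_coefficient(zoom_multiple, fixed_constant=1.0):
    """Solve coefficient * zoom_multiple == fixed_constant for the
    coefficient, per the preset adjustment rule."""
    return fixed_constant / zoom_multiple

# A display that zooms 2x gets a 0.5 coefficient, keeping the product fixed.
k = adjustment_coefficient(zoom_multiple=2.0)
```

The effect is that the on-screen size of the foreground picture stays constant regardless of how much the display device scales the video.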

18-04-2013 publication date

Replacement of a Person or Object in an Image

Number: US20130094780A1
Assignee: Hewlett Packard Development Co LP

Disclosed herein are a system and a method that use a background model to determine and to segment target content from an image and replace them with different content to provide a composite image. The background model can be generated based on image data representing images of a predetermined area that does not include traversing content. The background model is compared to image data representing a set of captured images of the predetermined area. Based on the comparison, portions of an image that differs from the background model are determined as the traversing content. A target content model is used to determine the target content in the traversing content. The target content determined in the images is replaced with different content to provide a composite image.
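The comparison step above, marking portions of an image that differ from the background model as traversing content, is per-pixel differencing against the model. A minimal sketch on scalar intensities (the metric and tolerance are assumptions):

```python
def traversing_content(frame, background_model, tol=25):
    """Flag pixels whose absolute difference from the background model
    exceeds the tolerance as traversing content."""
    return [abs(f - b) > tol for f, b in zip(frame, background_model)]

# Pixel 0 matches the model; pixel 1 deviates strongly and is flagged.
mask = traversing_content([10, 200], [12, 30])
```

The flagged mask is then where the target-content model would be applied to pick out the person or object to replace.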

06-06-2013 publication date

Mobile terminal and control method thereof

Number: US20130141551A1
Author: Jihwan Kim
Assignee: LG ELECTRONICS INC

A mobile terminal which reproduces a video including image data and audio data and a control method thereof are provided. The mobile terminal which reproduces a video including image data and audio data includes a display unit configured to display an image corresponding to the image data based on a reproduction command with respect to the video, a subtitle processing unit configured to output subtitles related to the video along with the image to the display unit, and a controller configured to control the subtitle processing unit to change a display format of the subtitles according to sound characteristics of the audio data related to the image.

27-06-2013 publication date

Remote target viewing and control for image-capture device

Number: US20130162844A1
Author: Joseph I. Douek
Assignee: Individual

A system is made up of a remote access device with mechanisms to visualize images and configured to send and receive wireless signals from and to the remote access device. The system also includes an image capture device which has mechanisms to view an image of a target and is configured to send and receive signals from and to the image capture device. The system also includes a terminal coupling the remote access device and the image capture device to allow the image of the target as viewed by the image capture device to appear on the remote access device.

11-07-2013 publication date

Removing a Self Image From a Continuous Presence Video Image

Number: US20130176379A1
Assignee: POLYCOM, INC.

Upon receiving a continuous presence video image, an endpoint of a videoconference may identify its self image and replace the self image with other video data, including an alternate video image from another endpoint or a background color. Embedded markers may be placed in a continuous presence video image corresponding to the endpoint. The embedded markers identify the location of the self image of the endpoint in the continuous presence video image. The embedded markers may be inserted by the endpoint or a multipoint control unit serving the endpoint.

1. A method that removes a self image of a first conferee from a continuous presence video image that will be presented to the first conferee of a continuous presence video conference, comprising: determining a location of the self image of the first conferee in the continuous presence video image; and replacing the self image with other video data in the continuous presence video image that will be presented to the first conferee.

2. The method of claim 1, wherein determining the location of the self image of the first conferee in the continuous presence video image comprises: collecting information from the first conferee by marking a border of the self image of the first conferee in the continuous presence video image.

3. The method of claim 2, wherein marking a border of the self image of the first conferee in the continuous presence video image comprises: placing a cursor associated with a remote control device on the border.

4. The method of claim 1, wherein determining the location of the self image of the first conferee in the continuous presence video image comprises: receiving information from a multipoint control unit that controls the continuous presence video conference.

5. The method of claim 1, wherein replacing the self image comprises: replacing the self image in the continuous presence video image with a background color.

6. The method of claim 1, wherein replacing the self image comprises: replacing the ...

18-07-2013 publication date

VIDEO BACKGROUND INPAINTING

Number: US20130182184A1
Author: He Shan, Senlet Turgay
Assignee:

Several implementations provide inpainting solutions, and particular solutions provide spatial and temporal continuity. One particular implementation accesses first and second pictures that each include a representation of a background. A background value is determined for a pixel in an occluded area of the background in the first picture based on a source region in the first picture. A source region in the second picture is accessed that is related to the source region in the first picture. A background value is determined for a pixel in an occluded area of the background in the second picture using an algorithm that is based on the source region in the second picture. Another particular implementation displays a picture showing an occluded background region. Input is received that selects a fill portion and a source portion. An algorithm fills the fill portion based on the source portion, and displays the resulting picture.

1. A method comprising: accessing a first picture including a first representation of a background, the first representation of the background having an occluded area in the first picture; determining a background value for one or more pixels in the occluded area in the first picture based on a source region in the first picture; accessing a second picture including a second representation of the background, the second representation being different from the first representation and having an occluded area in the second picture; determining a source region in the second picture that is related to the source region in the first picture; and determining a background value for one or more pixels in the occluded area in the second picture using an algorithm that is based on the source region in the second picture.

2. The method of wherein: the first picture comprises a first mosaic formed by transforming one or more pictures from a sequence to a first common reference.

3. The method of wherein: content from the transformed sequence is included into the first ...

25-07-2013 publication date

INTERACTIVE PHOTO BOOTH AND ASSOCIATED SYSTEMS AND METHODS

Number: US20130188063A1
Author: Cameron Brett W.
Assignee: Coinstar, Inc.

The present disclosure is directed to interactive photo booths and associated systems and methods. In one embodiment, for example, an interactive photo booth can include a housing having sidewalls that form an enclosure sized to receive one or more users. The interactive photo booth can also include a backdrop within the enclosure and a camera directed toward the backdrop that takes at least one photo of the users. The interactive photo booth can further include a first user interface in the enclosure and a second user interface on the sidewall of the housing. The first user interface can be configured to receive user selections related to the backdrop, and the second user interface can be configured to edit the photos taken by the camera.

1. A method of taking a photo in an interactive photo booth, the method comprising: providing a backdrop within the interactive photo booth, wherein the backdrop is configured to provide a plurality of virtual backgrounds; capturing at least one photo of a user in front of the backdrop in the interactive photo booth; and receiving, via a user interface at the interactive photo booth, a plurality of user inputs to edit the photo of the user.

2. The method of claim 1, further comprising sending a digital version of the edited photo to the user via text message and/or email.

3. The method of claim 1, further comprising uploading the edited photo to a remote database.

4. The method of claim 1, further comprising: receiving, via the user interface, login information for a user account on a social networking site; and uploading the edited photo to the social networking site.

5. The method of claim 1, further comprising sending a hyperlink to the user via text message and/or email, wherein the hyperlink connects to a site including the edited photo.

6. The method of claim 1, further comprising receiving a user selection for a background scene, wherein the selected background scene is provided on the backdrop via chroma key ...

25-07-2013 publication date

Combining multiple video streams

Number: US20130188094A1
Assignee: Hewlett Packard Development Co LP

Methods, computer-readable media, and systems are provided for combining multiple video streams. One method for combining the multiple video streams includes extracting a sequence of media frames (224-1/224-2) from presenter (222-1) video and from shared digital rich media (222-2) video (340). The media frame (224-1/224-2) content is analyzed (226) to determine a set of space and time varying alpha values (228/342). A compositing operation (230) is performed to produce the combined video frames (232) based on the content analysis (226/344).

01-08-2013 publication date

Scene Background Blurring Including Range Measurement

Number: US20130194375A1

Different distances of two or more objects in a scene being captured in a video conference are determined by determining a sharpest of two or more color channels and calculating distances based on the determining of the sharpest of the two or more color channels. At least one of the objects is identified as a foreground object or a background object, or one or more of each, based on the determining of the different distances. The technique involves blurring or otherwise rendering unclear at least one background object or one or more portions of the scene other than the at least one foreground object, or combinations thereof, also based on the determining of distances.

1. (canceled)

2. A method of displaying a participant during a video conference against a blurred or otherwise unclear background, comprising: using an imaging device including an optic, an image sensor and a processor; determining different distances of two or more objects in a scene being captured in video, including performing an auto-focus sweep of the scene and generating a depth map of the scene based on the auto-focus sweep; increasing a depth of field (DOF) including extending a delimit range of defocus distance over which a mean transfer function (MTF) is greater than 0.15 without decreasing aperture while maintaining a focus on said at least one foreground object at approximately said determined distance; identifying at least one of the objects as a foreground object or a background object, or one or more of each, based on the determining of the different distances; and blurring or otherwise rendering unclear the background object.

3. The method of claim 2, further comprising detecting a face within the scene and designating the face as a foreground object.

4. The method of claim 3, further comprising enhancing an audio or visual parameter of the face, or both.

5. The method of claim 4, further comprising enhancing loudness, audio tone, or sound ...

01-08-2013 publication date

IMAGING DEVICE

Number: US20130194456A1
Author: Abe Mitsuo
Assignee: Panasonic Corporation

An imaging device comprises a combination imaging mode. In combination imaging mode, recording-use image data is produced by capturing and combining a plurality of sets of image data. The imaging device further comprises a controller. The controller selects at least one set of image data from the plurality of sets of image data when it is determined in the combination imaging mode that the plurality of sets of image data are image data that do not satisfy a specific condition. The controller also produces the recording-use image data based on the one or more sets of image data.

1. An imaging device comprising: a combination imaging mode configured to capture and combine a plurality of sets of image data to produce recording-use image data in the combination imaging mode; and a controller configured to select at least one set of image data from among the plurality of sets of image data if it is determined that the plurality of sets of image data do not satisfy a specific condition in the combination imaging mode, and to produce the recording-use image data based on at least one set of image data.

2. The imaging device according to claim 1, wherein: the combination imaging mode is further configured to estimate at least one set of image data; and the controller is further configured to select the at least one set of image data from among the plurality of sets of image data if the failure of the combination processing of the plurality of sets of image data is estimated in the combination imaging mode.

3. The imaging device according to claim 2, wherein: the controller further includes a combination determination component configured to determine whether the combination processing of the plurality of sets of image data succeeds or fails based on the plurality of sets of image data or image data obtained by combining the plurality of sets of image data.

4. The imaging device according to claim 3, wherein: detect at least one edge of the plurality of sets of image ...

08-08-2013 publication date

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM

Number: US20130201366A1
Author: OZAKI Koji
Assignee: SONY CORPORATION

There is provided an image processing apparatus including an extraction unit for extracting a first still portion in an image based on a plurality of still images, and for extracting at least a part of a second still portion corresponding to a portion that is not extracted as the first still portion in an image based on the plurality of still images; and a combining unit for combining the first still portion and the second still portion to generate a combined image. 1. An image processing apparatus comprising: an extraction unit for extracting a first still portion in an image based on a plurality of still images, and for extracting at least a part of a second still portion corresponding to a portion that is not extracted as the first still portion in an image based on the plurality of still images; and a combining unit for combining the first still portion and the second still portion to generate a combined image. 2. The image processing apparatus according to claim 1, wherein the extraction unit extracts a still portion that contains a subject to be extracted as the first still portion. 3. The image processing apparatus according to claim 2, further comprising: a display control unit for causing the combined image to be displayed on a display screen. 4. The image processing apparatus according to claim 3, wherein the extraction unit extracts a new second still portion each time the number of still images to be processed increases, wherein the combining unit, each time a second still portion is newly extracted, combines the first still portion or a previously generated combined image with the newly extracted second still portion to generate a new combined image, and wherein the display control unit, each time a combined image is generated, causes the generated combined image to be displayed on the display screen. 5.
The image processing apparatus according to claim 4, further comprising: a recording processing unit for recording the combined image generated ...
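The idea above can be illustrated with a minimal sketch (not the patent's actual implementation): pixels whose values stay nearly constant across a stack of frames are treated as "still", and successively extracted still portions are composited into one combined image. The threshold value and helper names here are assumptions for the example; images are plain nested lists of grayscale values.

```python
def still_mask(frames, threshold=10):
    """Return True where the pixel's value range across frames is below threshold."""
    h, w = len(frames[0]), len(frames[0][0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [f[y][x] for f in frames]
            mask[y][x] = (max(vals) - min(vals)) < threshold
    return mask

def combine(first_still, second_still, mask_first, mask_second, fill=0):
    """Take first-still pixels where available, else second-still pixels, else fill."""
    h, w = len(first_still), len(first_still[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask_first[y][x]:
                out[y][x] = first_still[y][x]
            elif mask_second[y][x]:
                out[y][x] = second_still[y][x]
    return out
```

In this sketch the "second still portion" would come from a later, larger stack of frames, so regions that were moving early on can still be filled in as they become still.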

12-09-2013 publication date

Content preparation systems and methods for interactive video systems

Number: US20130236160A1
Assignee: Yoostar Entertainment Group Inc

Content preparation systems and methods are disclosed that generate scenes used by an interactive role performance system for inserting a user image as a character in the scene. Original media content from a variety of sources, such as movies, television, and commercials, can provide participants with a wide variety of scenes and roles. In some examples, the content preparation system removes an original character from the selected media content and recreates the background to enable an image of a user to be inserted therein. By recreating the background after removing the character, the user is given greater freedom to perform, as the image of the user can perform anywhere within the scene. Moreover, systems and methods can generate and store metadata associated with the modified media content that facilitates the combining of the modified media content and the user image to replace the removed character image.

03-10-2013 publication date

APPARATUS FOR GENERATING AN IMAGE WITH DEFOCUSED BACKGROUND AND METHOD THEREOF

Number: US20130258138A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

Provided are an apparatus and method for generating an image with a defocused background. According to various aspects, a preview image is used as the basis for extracting a background distribution and a defocused background is generated based on the extracted background distribution. Accordingly, it is not necessary to photograph two or more images to generate a defocused background effect. 1. An apparatus for generating an image with a defocused background , the apparatus comprising:a background distribution extraction unit configured to extract a background distribution based on a preview image corresponding to a photographed image;a defocused image generation unit configured to generate a defocused image for the photographed image; andan image combination unit for combining the defocused image with the photographed image based on the background distribution to generate the image with the defocused background.2. The apparatus of claim 1 , further comprising:a background segmentation unit configured to perform a binarization process on the background distribution to obtain a foreground portion and a background portion,wherein the background distribution indicates a probability distribution of a pixel of the photographed image belonging to the background, andthe image combination unit is configured to combine a background of the defocused image with a foreground of the photographed image based on the foreground portion and the background portion to generate the image with defocused background.3. The apparatus of claim 2 , further comprising:a smooth processing unit configured to perform a smoothing process on an edge of the foreground portion and the background portion to obtain a smooth background distribution,wherein the image combination unit combines the defocused image with the photographed image based on the smooth background distribution to generate the image with defocused background.4. The apparatus of claim 2 , wherein the background segmentation unit ...
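A rough sketch of the blending step described above (assumed details, not the patent's text): a blurred copy of the photographed image is combined with the original, keeping foreground pixels sharp wherever the per-pixel background probability from the preview image falls below a binarization threshold (as in claim 2). For brevity the example works on a single row of grayscale pixels with a 3-tap box blur.

```python
def box_blur_1d(row):
    """Simple 3-tap box blur for a single row (edge pixels copied unchanged)."""
    n = len(row)
    out = row[:]
    for i in range(1, n - 1):
        out[i] = (row[i - 1] + row[i] + row[i + 1]) // 3
    return out

def defocus_background(image_row, background_prob_row, threshold=0.5):
    """Keep likely-foreground pixels sharp; replace likely-background
    pixels with blurred values (binarized probability)."""
    blurred = box_blur_1d(image_row)
    return [b if p >= threshold else s
            for s, b, p in zip(image_row, blurred, background_prob_row)]
```

Because the background distribution comes from the preview, a single photographed image suffices to produce the defocus effect, which is the point of the abstract.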

10-10-2013 publication date

VIDEO COMMUNICATION DEVICE AND METHOD THEREOF

Number: US20130265495A1
Author: Yang Zhibing

The present invention discloses a video communication device and a method thereof. The method comprises: obtaining the current battery energy level of a device; adding battery energy level information to a ready-to-send video; and encoding the ready-to-send video and sending it. The present invention can display the battery energy level of one party's device on the video communication image of the other party's device, so that the other party knows the current battery energy level of the first party's device, the pace of the video communication can be controlled, and the quality of the video communication can be improved. 1. A video communication method comprising the steps of: a. obtaining a current battery energy level of a device, setting a plurality of energy sections and a plurality of energy indicators corresponding to the respective energy sections, determining in which of the energy sections the current battery energy level is situated, and then obtaining the corresponding energy indicator; b. selecting every frame of a ready-to-send video as a background frame and superimposing the corresponding energy indicator on the background frames in sequence, and, when the current battery energy level is lower than a threshold value, selecting frames at an interval of a predetermined number of frames in the ready-to-send video as the background frames and superimposing the energy indicator on those background frames in sequence; and c. executing video encoding on the ready-to-send video and sending it. 2. The method of claim 1, wherein the threshold value is 5% of the total battery energy level and the predetermined number of frames is two frames. 3. A video communication method comprising the steps of: a. obtaining a current battery energy level of a device and a corresponding energy indicator according to the current battery energy level; b. loading the corresponding energy indicator into the ready-to-send video; and c.
executing video encoding on the ...
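The section/indicator mapping and the frame-selection rule of claim 1 can be sketched as follows. The section boundaries and indicator names are assumptions for illustration; levels are fractions in [0, 1]. Per the claim, the indicator is superimposed on every frame normally, but only on every Nth frame once the level drops below the threshold.

```python
def energy_indicator(level, sections=((0.05, "critical"), (0.3, "low"),
                                      (0.7, "medium"), (1.01, "high"))):
    """Return the indicator for the energy section containing level (0..1)."""
    for upper, name in sections:
        if level < upper:
            return name
    return sections[-1][1]

def frames_with_indicator(num_frames, level, threshold=0.05, interval=2):
    """Indices of video frames on which to superimpose the indicator."""
    if level < threshold:
        return list(range(0, num_frames, interval))
    return list(range(num_frames))
```

Skipping frames below the threshold mirrors the claim's intent of reducing work (and hence power draw) precisely when the battery is nearly empty.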

17-10-2013 publication date

METHOD AND APPARATUS FOR CONCEALING PORTIONS OF A VIDEO SCREEN

Number: US20130272678A1
Author: BRYAN David Alan, SO Nicol
Assignee:

A simple, cost-effective, and robust method and system to obstruct crawls, logos, and other annoying and distracting images overlaid on a video signal and displayed on a TV set or monitor is provided. The method and system may detect the presence of the unwanted images and block them automatically, or they may accept manual input from the user via a handheld control device to block or obstruct these images. 1. A system for modifying video images , said system comprising:a processing apparatus receiving a received image having a first portion including undesirable content and a second portion;a control device in communication with said processing apparatus; anda processing element including a detector for detecting said first portion and an image generator generating a third portion having the same size, shape and position as said first portion;wherein said processing apparatus, said blocking element and said control device cooperate to display a processed image composed of said third and said second portions, whereby the processed image does not include said undesirable content.2. The system of wherein said third image is a dynamic image that changes in accordance with a parameter associated with said received image.3. The system of wherein said third portion consists of a preselected image.4. The system of wherein said first portion includes alphanumeric characters.5. The system of wherein said third portion is a static image.6. The system of wherein said first portion is one of a text and a logo.7. The system of wherein said first portion includes a scrolling text.8. The system of wherein said first portion is deleted and replaced by the third portion in said processed image.9. 
In a system in which video images are transmitted to a receiver as input images claim 1 , wherein said receiver is associated with a user interface having a user-activated key and a processor claim 1 , each input image including a first part with undesirable content claim 1 , a method of ...

14-11-2013 publication date

DIGITAL BROADCAST RECEIVER AND METHOD FOR PROCESSING CAPTION THEREOF

Number: US20130300932A1
Author: PARK Tae Jin
Assignee:

A digital cable broadcast receiver and a method for automatically processing caption data of various standards and types, is disclosed. The digital broadcast receiver includes: a demultiplexer for dividing a received broadcast stream into video data, audio data, supplementary information; a controller for determining whether caption data included in the video data is digital caption data or analog caption data on the basis of caption information included in the supplementary information, and outputting a control signal according to a result of the determining; a digital caption decoder for extracting and decoding digital caption data from the video data according to the control signal; and an analog caption decoder for extracting and decoding analog caption data from the video data according to the control signal. 1. A method of transmitting a digital broadcast signal, the method comprising: multiplexing video data, audio data, and supplementary data into an MPEG2 transport stream, the supplementary data including an event information table (EIT) and a program map table (PMT), wherein the EIT or the PMT includes a caption service descriptor, wherein the caption service descriptor includes caption information indicating whether a digital television closed caption service is present in the video data or a line 21 closed caption service is present in the video data in accordance with electronic industry association (EIA) 708, wherein the caption information is set when the digital television closed caption service is present in accordance with EIA 708 and the caption information is clear when the line 21 closed caption service is present in accordance with EIA 708, wherein the caption service descriptor further includes a caption service number that is defined only when the digital television closed caption service in accordance with electronic industry association (EIA) 708 is present; and transmitting the digital broadcast
signal. 2. ...

14-11-2013 publication date

Display control method, recording medium, display control device

Number: US20130302014A1
Author: Kouichi Uchimura
Assignee: Sony Corp

The present technology relates to a display control method, a recording medium, and a display control device with which a subtitle forced display function can be implemented on the basis of TTML (Timed Text Markup Language). TTML data in which predetermined attribute information pertaining to subtitle forced display is described in a tag defining an element of text is used. At a content playback side, control is performed on the basis of the predetermined attribute information in the TTML data, in such a way that characters based on text data designated by the tag in which the attribute information is written are displayed on a display unit regardless of whether a subtitle display setting is ON or OFF. Owing to this kind of configuration, text data serving as a predetermined text element from among text elements (text data serving as subtitles) within the TTML data can be displayed regardless of whether a subtitle display setting is ON or OFF. In other words, a subtitle forced display function can be implemented on the basis of TTML.
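The display decision described above can be sketched as a small check over a TTML text element's attributes. The attribute name "forcedDisplay" and the dictionary representation are assumptions for this example; the patent only says that predetermined attribute information in the text tag forces display regardless of the user's subtitle setting.

```python
def should_display(element_attrs, subtitles_on):
    """Render a TTML text element if the user setting says so, or
    unconditionally if it carries the forced-display attribute."""
    if element_attrs.get("forcedDisplay") == "true":
        return True
    return subtitles_on
```

This keeps ordinary subtitles under user control while letting selected elements (e.g. translated signs) always appear.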

26-12-2013 publication date

Image processing method and image processing apparatus for performing defocus operation according to image alignment related information

Number: US20130342735A1
Assignee: MediaTek Inc

An image processing method includes: receiving a plurality of input images; deriving an image alignment related information from performing an image alignment upon the input images; and generating a processed image by performing a defocus operation upon a selected image selected from the input images according to the image alignment related information. For example, the image processing method may be employed by an electronic device such as a mobile device. Thus, the mobile device may capture two or more images to generate the defocus visual effect, which is similar to professional long-focus lens.

06-01-2022 publication date

SCREEN RECORDING METHOD AND APPARATUS, AND ELECTRONIC DEVICE

Number: US20220006971A1
Author: LIU Haoyu, Zhou Jianxun
Assignee:

A screen recording method includes obtaining operation data of an electronic device and response data generated by the electronic device based on the operation data; obtaining a virtual device image matching the electronic device; and associating the operation data and the response data with the virtual device image according to a pre-determined processing strategy to generate a first video. 1. A screen recording method , comprising:obtaining operation data of an electronic device and response data generated by the electronic device based on the operation data;obtaining a virtual device image matching the electronic device; andassociating the operation data and the response data with the virtual device image according to a pre-determined processing strategy to generate a first video.2. The method according to claim 1 , wherein obtaining the operation data of the electronic device includes:obtaining the operation data of at least one of the electronic device itself, a component of the electronic device, or an associated device of the electronic device, the associated device being a device that causes the electronic device to generate the response data based on the operation data.3. The method according to claim 2 , wherein obtaining the virtual device image matching the electronic device includes:determining whether a device that generates the operation data is the associated device of the electronic device through a source of the operation data;when the associated device generates the operation data, obtaining parameter information of the associated device; andbased on the parameter information, generating the virtual device image of the associated device.4. 
The method according to claim 1 , wherein associating the operation data and the response data with the virtual device image according to the pre-determined processing strategy to generate the first video includes:associating a first sub-video generated from the operation data and a second sub-video generated ...

04-01-2018 publication date

Image capture device with contemporaneous image correction mechanism

Number: US20180005068A1
Assignee:

A hand-held or otherwise portable or spatial or temporal performance-based image capture device includes one or more lenses, an aperture and a main sensor for capturing an original main image. A secondary sensor and optical system are for capturing a reference image that has temporal and spatial overlap with the original image. The device performs an image processing method including capturing the main image with the main sensor and the reference image with the secondary sensor, and utilizing information from the reference image to enhance the main image. The main and secondary sensors are contained together within a housing. 1. A computer-implemented method comprising: receiving a plurality of images of nominally the same scene; determining whether a particular image, of the plurality of images, comprises a region that includes sub-optimal depictions of one or more eyes; in response to determining that the particular image comprises a particular region that includes sub-optimal depictions of one or more eyes: selecting, from the plurality of images, a particular different image that comprises a certain region that corresponds to the particular region in the particular image and does not comprise the sub-optimal depiction of one or more eyes; and generating a combination image based, at least in part, on the particular image and the particular different image; wherein the method is performed by one or more processors of a computing device configured as an optical system. 2. The method of claim 1, wherein one or more images of the plurality of images are obtained using one or more infra-red sensors. 3. The method of claim 1, wherein the determining whether the particular image comprises a region that includes sub-optimal depictions of one or more eyes comprises determining whether the particular image comprises a region that includes a depiction of a blinking eye and, if so, determining a degree to which the blinking eye is shut. 4. The ...

07-01-2021 publication date

Methods and Devices for Electronically Altering Captured Images

Number: US20210004981A1
Assignee:

An electronic device includes a display and an image capture device electronically capturing one or more images of a subject performing an activity. A wireless communication device can electronically transmit the one or more images to a remote electronic device across a network after a Procrustes superimposition operation is performed to compare the subject to a standard. The wireless communication device can electronically receive one or more electronically altered images identifying differences between one or more standard reference locations situated at one or more predefined features of the standard performing the activity and one or more corresponding subject reference locations situated at one or more predefined features of the subject performing the activity. These electronically altered images can be presented on the display of the electronic device to provide corrective feedback to the subject as to how to better perform the activity. 1. A method in an electronic device, the method comprising: identifying, with one or more processors of the electronic device, a plurality of subject reference locations situated at predefined features of a subject depicted performing an activity in one or more electronically captured images; retrieving, with the one or more processors from a memory of the electronic device, one or more electronic images of a standard depicted performing the activity; identifying, with the one or more processors, a plurality of standard reference locations, corresponding to the plurality of subject reference locations on a one-to-one basis, and situated at predefined features of the standard depicted in the one or more electronic images; performing, with the one or more processors, a Procrustes superimposition operation on the one or more electronic images to superimpose a representation of the standard upon the subject in the one or more electronically captured images; comparing, with the one or more processors, each standard reference location of
...

02-01-2020 publication date

ANALYZING 2D MOVEMENT IN COMPARISON WITH 3D AVATAR

Number: US20200005544A1
Author: Kim Sang J.
Assignee:

A processing device receives a two dimensional (2D) video recording of a subject user performing a physical activity and provides a three dimensional (3D) visualization comprising a virtual avatar performing the physical activity. The processing device causes display of the 3D visualization comprising the virtual avatar at a first key point in performing the physical activity, receives first user input to advance the 2D video recording to a first position corresponding to the first key point, and receives second user input comprising a first synchronization command. In response, the processing device generates a first synchronization marker to indicate the first position in the 2D video recording corresponding to the first key point. 1. A method comprising: receiving a two dimensional (2D) video recording of a subject user performing a physical activity; providing a three dimensional (3D) visualization comprising a virtual avatar performing the physical activity; causing display of the 3D visualization comprising the virtual avatar at a first key point in performing the physical activity; receiving first user input to advance the 2D video recording to a first position corresponding to the first key point; receiving second user input comprising a first synchronization command; and generating, by a processing device, a first synchronization marker to indicate the first position in the 2D video recording corresponding to the first key point. 2. The method of claim 1, wherein the 3D visualization is based on 3D motion capture data corresponding to one or more target users performing the physical activity. 3. The method of claim 2, wherein the 3D motion capture data comprises one or more of positional data, rotational data, or acceleration data measured by a plurality of motion capture sensors. 4. The method of claim 2, wherein the one or more target users share one or more attributes with the subject user. 5. The method of claim 4, wherein the one or more ...

02-01-2020 publication date

SYSTEMS AND METHODS FOR PROCESSING DIGITAL VIDEO

Number: US20200005831A1
Assignee:

A computer-implemented method of processing digital video includes, for each of a plurality of selected frames of the digital video: subjecting image data in the frame to scaling to occupy an image region that is smaller than the frame thereby to form at least one non-image region between the image region and the frame boundary; and inserting non-image data into at least one non-image region. A computer-implemented method of processing digital video includes, for each of a plurality of selected frames of the digital video: processing contents occupying one or more predetermined non-image regions of the frame to extract non-image data therefrom; and subjecting an image region of the frame to mapping to expand the image region to a displayable size. Systems and computer-readable media are also disclosed. 1. A computer-implemented method of processing digital video , the method comprising:for each of a plurality of selected frames of the digital video:subjecting image data in the frame to scaling to occupy an image region that is smaller than the frame thereby to form at least one non-image region between the image region and the frame boundary; andinserting non-image data into at least one non-image region formed by the scaling, the inserted non-image data being machine-readable for frame-accurate event-triggering.2. The computer-implemented method of claim 1 , wherein the non-image data comprises:a frame identifier uniquely identifying each of the at least one selected frame.3. The computer-implemented method of claim 1 , wherein the non-image data comprises:at least one instruction for a media player.4. The computer-implemented method of claim 3 , wherein the at least one instruction comprises:an instruction for the media player to execute an event when the media player is displaying the selected frame.5. 
The computer-implemented method of claim 4 , wherein the event comprises:a forced perspective wherein, beginning with the selected frame, the view is forced to a ...
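A toy sketch of claim 2's frame identifier (the encoding scheme here is an assumption, not the patent's): after the image is scaled down to leave a border, the identifier is stored as black/white "bits" in that non-image region, so a media player can read it back frame-accurately to trigger events.

```python
def encode_frame_id(frame_id, width, bits=16):
    """Render frame_id as a row of 0/255 pixel values (MSB first),
    padded with black to the full frame width."""
    row = [255 if (frame_id >> (bits - 1 - i)) & 1 else 0 for i in range(bits)]
    return row + [0] * (width - bits)

def decode_frame_id(row, bits=16):
    """Recover the frame identifier from the non-image border row."""
    return sum(1 << (bits - 1 - i) for i in range(bits) if row[i] >= 128)
```

Thresholding at 128 on decode makes the round trip robust to mild compression noise, which matters if the non-image region survives a lossy codec.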

05-01-2017 publication date

Device, system and method for multi-point focus

Number: US20170006212A1
Assignee: Hon Hai Precision Industry Co Ltd

An electronic device achieving multi-point focus of a scene includes a digital camera, a depth-sensing camera, at least one processor, a storage device, a display device, and a multi-point focus system. The system receives one or more points designated by a user on an image of a scene previewed by the digital camera, and identifies the one or more designated objects to be focused. The distance between the digital camera and each designated object is determined, and the digital camera adjusts its focal length according to each distance. Images of the same scene are captured at each focal length, and the captured images are processed to generate a new image in which all of the designated objects are in focus. The new image is output through the display device.
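The merge step resembles classic focus stacking, which can be sketched as: for each pixel, keep the value from whichever capture has the highest local contrast there. This is an illustrative heuristic, not the patent's method; for brevity it works on 1-D rows and approximates local contrast by the absolute difference to the left neighbour.

```python
def sharpness(row, i):
    """Crude local-contrast measure at index i (in-focus regions have
    higher local contrast than defocused ones)."""
    return abs(row[i] - row[i - 1]) if i > 0 else abs(row[1] - row[0])

def merge_focus(rows):
    """Merge several captures of the same scene row, one per focal length,
    picking the locally sharpest capture per pixel."""
    n = len(rows[0])
    return [max(rows, key=lambda r: sharpness(r, i))[i] for i in range(n)]
```

A real implementation would use a 2-D measure such as local Laplacian energy and smooth the per-pixel selection map to avoid seams.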

07-01-2016 publication date

Method and apparatus for providing virtual processing effects for wide-angle video images

Number: US20160006933A1
Assignee: Sony Corp

A system and method for capturing and presenting immersive video presentations is described. A variety of different implementations are disclosed including multiple stream pay-per-view, sporting event coverage and 3D image modeling from the immersive video presentations.

07-01-2016 publication date

Method and Apparatus for Supporting Image Processing, and Computer-Readable Recording Medium for Executing the Method

Number: US20160006949A1
Author: Ahn Jaihyun, Kim Daesung
Assignee:

A support for image processing is provided, comprising: (a) detecting respective face regions from images consecutively photographed of a first person at predetermined time intervals by an image pickup unit, displaying images of the face regions detected for the first person in a first region of a screen, and providing a user interface for indicating that a specific face image is selected from the face images of the first person displayed in the first region; (b) additionally displaying the specific face image in a second region adjacent to the first region; and (c) displaying a synthesized image using the specific face image as a representative face of the first person when the specific face image displayed in the second region is selected. 1.-33. (canceled) 34. A machine readable medium having instructions stored thereon, which when executed by one or more machines, cause the machines to: detect a face region of a person in a plurality of photographed images; display two or more face images of the person in a region of a screen, wherein the two or more face images correspond to a face region detected in the plurality of photographed images; detect a selection of a face image from among the two or more face images; and display a synthesized image using the selected face image as a face of the person. 35. The machine readable medium according to claim 34, wherein the synthesized image comprises a combination of the selected face image and a base image. 36. The machine readable medium according to claim 34, having instructions stored thereon, which when executed by one or more machines, further cause the machines to display an indicator with the selected face image at the region of the screen. 37. The machine readable medium according to claim 34, wherein the detected face image for the person is displayed at a second region of the screen. 38.
The machine readable medium according to claim 37 , having instructions stored thereon ...

04-01-2018 publication date

Live Teleporting System and Apparatus

Number: US20180007314A1
Assignee:

A method of producing a Pepper's Ghost, includes projecting an image of a subject onto a reflective and transparent screen to create a virtual image of the subject alongside an object, the subject in the virtual image having a colour temperature. The object is illuminated with light having a colour and intensity that results in a colour temperature of the object at least approximately matching the colour temperature of the subject in the virtual image. The subject in the virtual image has a luminance and may be illuminated with light having a colour and intensity that results in a luminance of the object at least approximately matching the luminance of the subject in the virtual image. 1. A method of producing a Pepper's Ghost , the method comprising:projecting an image of a subject onto a reflective and transparent screen to create a virtual image of the subject alongside an object, the subject in the virtual image having a colour temperature; andilluminating the object with light having a colour and intensity that results in a colour temperature of the object at least approximately matching the colour temperature of the subject in the virtual image.2. The method of wherein the subject in the virtual image has a luminance and illuminating the object comprises illuminating the object with light having a colour and intensity that results in a luminance of the object at least approximately matching the luminance of the subject in the virtual image.3. A method of producing a Pepper's Ghost claim 1 , the method comprising:projecting an image of a subject onto a reflective and transparent screen to create a virtual image of the subject alongside an object, the subject in the virtual image having a luminance; andilluminating the object with light having a colour and intensity that results in a luminance of the object at least approximately matching the luminance of the subject in the virtual image.4. The method of wherein the subject in the virtual image has a colour ...

20-01-2022 publication date

Objective-Based Control Of An Autonomous Unmanned Aerial Vehicle

Number: US20220019248A1
Assignee: Skydio, Inc.

A technique is described for controlling an autonomous vehicle such as an unmanned aerial vehicle (UAV) using objective-based inputs. In an embodiment, the underlying functionality of an autonomous navigation system is exposed via an application programming interface (API). In such an embodiment, the UAV can be controlled through specifying a behavioral objective, for example, using a call to the API to set parameters for the behavioral objective. The autonomous navigation system can then incorporate perception inputs such as sensor data from sensors mounted to the UAV and the set parameters using a multi-objective motion planning process to generate a proposed trajectory that most closely satisfies the behavioral objective in view of certain constraints. In some embodiments, developers can utilize the API to build customized applications for utilizing the UAV to capture images. Such applications, also referred to as "skills," can be developed, shared, and executed to control the behavior of an autonomous UAV and to aid in overall system improvement. 1. A method for autonomous control of an unmanned aerial vehicle (UAV) through a physical environment using behavioral objectives defined via a navigation application programming interface (API), the method comprising: receiving, by a computer system, sensor data from a sensor onboard the UAV; receiving, by the computer system, via the API, information indicative of a behavioral objective; and generating, by the computer system, control commands configured to cause the UAV to autonomously maneuver through the physical environment based on the sensor data and the information indicative of the behavioral objective. 2. The method of claim 1, wherein the behavioral objective is any of a navigation objective or an image capture objective. 3. The method of claim 1, wherein the behavioral objective is defined relative to any of the physical environment, the UAV, a physical object located in the physical environment ...

03-01-2019 publication date

METHODS, SYSTEMS, AND PRODUCTS FOR TELEPRESENCE VISUALIZATIONS

Number: US20190007628A1
Author: Oetting John
Assignee:

Methods, systems, and products generate telepresence visualizations for a remote participant to a videoconference. A central server superimposes the remote participant onto images or video of the teleconferencing environment. The central server thus generates an illusion that the remote participant is in the same conferencing environment as other conferees. 1. A method , comprising:receiving, by a server, a location identifier for a videoconferencing environment;retrieving, by the server, a cached point cloud map of sensor data of the videoconferencing environment and a cached image of the videoconferencing environment, the cached point cloud map of sensor data captured by one or more room sensors present in the videoconferencing environment, the cached image of the videoconferencing environment captured by one or more image capture devices within the videoconferencing environment;receiving, by the server, participant video of a remote participant of the video conference, wherein the remote participant is situated in a location that is physically separate from the videoconferencing environment;removing, by the server, a background portion of the participant video, wherein removing the background portion of the participant video results in a foreground portion of the participant video of the remote participant;superimposing the foreground portion of the participant video onto the cached image of the videoconferencing environment using the cached point cloud map of sensor data of the video conferencing environment, wherein the superimposing generates composite video of the remote participant; andtransmitting the composite video for display.2. 
The method of claim 1 , further comprising:receiving, by the server, a differential update point cloud map of sensor data of the videoconferencing environment captured by one or more room sensors present in the videoconferencing environment;wherein superimposing the foreground portion of the participant video onto the image of ...
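The superimposition step (remove the remote participant's background, then composite the surviving foreground over the cached room image) reduces to a per-pixel mask selection. The list-based "pixels" and mask below are toy assumptions, not the server's actual data model:

```python
# Toy compositing sketch: where the mask marks foreground (the participant),
# take the participant's pixel; elsewhere keep the cached room image.
def composite(background, participant, mask):
    return [p if m else b for b, p, m in zip(background, participant, mask)]

room  = ["wall", "wall", "wall", "wall"]    # cached image of the room
frame = ["junk", "head", "torso", "junk"]   # participant video frame
mask  = [False, True, True, False]          # True = foreground after background removal

print(composite(room, frame, mask))  # ['wall', 'head', 'torso', 'wall']
```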

More details
Publication date: 02-01-2020

METHOD AND SYSTEM FOR ENCODING VIDEO WITH OVERLAY

Number: US20200007883A1
Author: Toresson Alexander
Assignee: AXIS AB

Encoding video data comprises receiving an image sequence comprising first and second input image frames, adding an overlay, thereby generating first and second generated image frames, and encoding a video stream containing output image frames with and without overlay. The first input image frame is encoded as an intra-frame to form a first output image frame. The second input image frame is encoded as an inter-frame with reference to the first output image frame to form a second output image frame. The generated image frames are encoded as inter-frames with reference to the first and second output image frames to form first and second overlaid output image frames. A first part of the second generated image frame is encoded with reference to the first overlaid output image frame, and a second part of the second generated image frame is encoded with reference to the second output image frame. 1. A method of encoding video data , comprising:receiving an image sequence comprising a first input image frame and a second input image frame,receiving an overlay to be applied to the image sequence, the overlay comprising a picture element and spatial coordinates for positioning the picture element in the first and second input image frames,adding the picture element to the first and second input image frames in accordance with the spatial coordinates, thereby generating an overlaid image sequence comprising a first generated image frame and a second generated image frame,encoding a video stream containing output image frames without overlay and corresponding output image frames with overlay, wherein:the first input image frame is encoded as an intra-frame to form a first output image frame,the second input image frame is encoded as an inter-frame with reference to the first output image frame to form a second output image frame,the first generated image frame is encoded as an inter-frame with reference to the first output image frame to form a first overlaid output image ...
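The frame reference structure this abstract describes can be written out as plain data. The frame labels (I0, P1, O0, O1) are assumed names for illustration, not identifiers from the application:

```python
# Sketch of the described reference structure: each encoded frame lists the
# frame(s) it predicts from; the intra-frame references nothing.
references = {
    "I0": None,          # first input frame, encoded as an intra-frame
    "P1": "I0",          # second input frame, inter-coded against I0
    "O0": "I0",          # first overlaid frame, inter-coded against I0
    "O1": ("O0", "P1"),  # second overlaid frame: overlay part refers to O0,
                         # the rest refers to P1
}

def ref_chain(frame):
    """Follow single references back to the intra-frame."""
    chain = [frame]
    while isinstance(references[chain[-1]], str):
        chain.append(references[chain[-1]])
    return chain

print(ref_chain("P1"))  # ['P1', 'I0']
```

The point of the split reference for O1 is that the unchanged (non-overlay) part of the picture can reuse motion already coded in P1 instead of being re-encoded.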

More details
Publication date: 09-01-2020

SYSTEM AND METHOD FOR NONINVASIVE MEASUREMENT OF CENTRAL VENOUS PRESSURE

Number: US20200008684A1
Author: Feinberg Jack Leonard
Assignee:

A non-invasive method of calculating the central venous pressure (CVP) of a patient may include analysis of video of the neck region of the patient. Filters, which may include spatial filters and/or temporal filters, may be applied to the video to enhance the visibility of small movements, which may be due to circulatory pulsations of the patient. The video may be modified to highlight such movements, and motion indicative of venous pulsation may be distinctly identified and highlighted. 1. A system for processing video data by a processing device to provide overlay image data enhancing the display of venous pulsations of a patient, the system comprising a processor and a memory, the system configured to: receive a real-time video stream comprising image data of a patient; apply a filter to the video stream to facilitate detection of changes between frames of the image data indicative of circulatory pulsations of the patient; and output a modified video stream substantially contemporaneously with the reception of the video stream, the modified video stream comprising overlay image data indicative of circulatory pulsations of the patient. 2. The system of claim 1, wherein applying a filter to the video stream comprises applying a temporal bandpass filter to the video stream. 3. The system of claim 1, wherein applying a filter to the video stream comprises applying a spatial filter to the video stream prior to applying a temporal filter. 4. The system of claim 1, wherein the system is further configured to receive a cardiac signal, the cardiac signal indicative of cardiac activity of the patient during capture of the video stream, wherein the overlay image data is generated at least in part on the received cardiac signal. 5.
The system of claim 4, wherein the system is further configured to identify at least one venous-pulsation time window and at least one arterial-pulsation time window based on the cardiac signal, wherein the overlay ...
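The temporal bandpass filter of claim 2 can be illustrated on a single pixel's intensity over time. The difference-of-two-exponential-moving-averages filter and its smoothing constants below are an assumption for illustration, not the patent's filter:

```python
# Illustrative temporal bandpass: subtracting a slow EMA from a fast EMA
# suppresses both the DC level and high-frequency noise the fast EMA tracks,
# leaving a mid-band pulsation. Constants are arbitrary assumptions.
def bandpass(signal, fast=0.5, slow=0.1):
    lo = hi = signal[0]
    out = []
    for x in signal:
        hi += fast * (x - hi)   # tracks faster changes
        lo += slow * (x - lo)   # tracks the slow baseline
        out.append(hi - lo)     # mid-band component
    return out

# A constant skin-brightness level plus a small periodic pulsation:
series = [100 + (2 if t % 4 < 2 else -2) for t in range(40)]
filtered = bandpass(series)
# The 100-level baseline is removed while the pulsation survives:
print(round(min(filtered[20:]), 2), round(max(filtered[20:]), 2))
```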

More details
Publication date: 11-01-2018

SEWING MACHINE, STITCHING PATTERN DISPLAY METHOD, AND RECORDING MEDIUM FOR STORING PROGRAM

Number: US20180010276A1
Author: KONGO Takeshi
Assignee:

A sewing machine includes a first image acquisition unit, a second image acquisition unit, and a display unit. In the sewing machine, the first image acquisition unit performs image acquisition from the front face side of a cloth for a stitching pattern formed in the cloth. The second image acquisition unit performs image acquisition from the back face side of the cloth such that the center position of the acquired image matches the center position of the image of the stitching pattern acquired by the first image acquisition unit. The display unit displays, on the same screen, a first stitching pattern video image thus acquired by the first image acquisition unit and a second stitching pattern video image thus acquired by the second image acquisition unit. 1. A sewing machine comprising: a first image acquisition unit that performs image acquisition from a front face side of a cloth for a stitching pattern formed in the cloth; a second image acquisition unit that performs image acquisition from a back face side of the cloth such that a center position of an acquired image matches a center position of an image of the stitching pattern acquired by the first image acquisition unit; and a display unit that displays, on a single screen, a first stitching pattern video image thus acquired by the first image acquisition unit and a second stitching pattern video image thus acquired by the second image acquisition unit. 2. The sewing machine according to claim 1, wherein the display unit displays the first stitching pattern video image and the second stitching pattern video image in the form of a superimposed image. 3. The sewing machine according to claim 1, wherein the display unit displays the first stitching pattern video image and the second stitching pattern video image such that they are side-by-side. 4. The sewing machine according to claim 1, wherein the display unit displays a first mark at a position a predetermined distance away from a center position on the first ...

More details
Publication date: 27-01-2022

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING SYSTEM

Number: US20220030178A1
Assignee: Sony Group Corporation

Provision of a sense of distance by motion parallax and provision of various visual fields are satisfactorily realized. 1. An image processing apparatus comprising: a processing unit configured to generate a display image by superimposing a vehicle interior image on a captured image obtained by capturing an image on a rear side from a vehicle, wherein the processing unit generates the display image on a basis of setting information regarding a reference visual field, and the image processing apparatus further comprises: a setting unit configured to set the reference visual field. 2. The image processing apparatus according to claim 1, wherein, as the reference visual field setting, a display position setting is included. 3. The image processing apparatus according to claim 1, wherein, as the reference visual field setting, a display size setting is included. 4. The image processing apparatus according to claim 1, wherein, as the reference visual field setting, a compression setting of all or part in a horizontal direction is included. 5. The image processing apparatus according to claim 1, wherein, as the reference visual field setting, a compression setting of all or part in a vertical direction is included. 6. The image processing apparatus according to claim 1, wherein the processing unit uses, as a captured image obtained by capturing an image on a rear side from the vehicle, a captured image captured by an image capturing device attached to a rear part of the vehicle and a captured image captured by an image capturing device attached to a side part of the vehicle. 7. The image processing apparatus according to claim 1, further comprising: a selection unit configured to select any reference visual field setting from a plurality of the reference visual field settings, wherein the processing unit generates the display image on a basis of the selected reference visual field setting. 8.
The image processing apparatus according to claim 1, wherein the vehicle interior image includes ...

More details
Publication date: 27-01-2022

MULTILAYER THREE-DIMENSIONAL PRESENTATION

Number: US20220030179A1
Author: Kundu Malay
Assignee:

The present disclosure provides a system and method for creating a multilayer scene using multiple visual input data, and for injecting an image of an actor into the multilayer scene to produce an output video approximating a three-dimensional space which signifies depth by visualizing the actor in front of some layers and behind others. This is very useful for many situations where the actor needs to be on a display with other visual items but in a way that does not overlap or occlude those items. A user interacts with other virtual objects or items in a scene or even with other users visualized in the scene. 1. A computer-implemented method for generating a multilayer scene, the method comprising: receiving a video stream data of an actor, using an imaging unit, wherein the video stream data captures the actor at least partially; isolating the actor from the video stream data, wherein the isolated actor is positioned at an actor layer of the multilayer scene; identifying at least two layers of images from one or more input data, wherein the at least two layers of images are positioned at two different depth positions within the multilayer scene; and displaying the multilayer scene on a display unit, wherein the multilayer scene comprises the actor layer positioned in front of one of the at least two layers of images and behind the other one of the at least two layers of images. 2. The method of claim 1, wherein the step of identifying at least two layers of images includes receiving a multilayer input data as the one or more input data, the multilayer input data including the at least two layers of images. 3. The method of claim 1, wherein the step of identifying at least two layers of images includes receiving a composite input data as the one or more input data, the at least two layers of images being extracted from the composite input data. 4.
The method of claim 1 , wherein the step of identifying at least two layers of images includes identifying ...
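The depth ordering described here (actor layer in front of one image layer, behind another) is essentially a painter's-algorithm composite: paint layers back to front, letting transparent pixels show what lies behind. The layer encoding below is a toy assumption:

```python
# Painter's-algorithm sketch: layers are ordered back-to-front; None pixels
# are transparent, so the isolated actor can sit between two image layers.
def flatten(layers):
    out = [None] * len(layers[0])
    for layer in layers:
        for i, px in enumerate(layer):
            if px is not None:
                out[i] = px
    return out

background = ["sky", "sky", "sky", "sky"]
actor      = [None, "actor", "actor", None]  # isolated actor silhouette
foreground = ["tree", None, None, None]      # layer displayed in front of the actor

print(flatten([background, actor, foreground]))
# → ['tree', 'actor', 'actor', 'sky']
```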

More details
Publication date: 14-01-2016

COMPOSITE IMAGE GENERATION TO REMOVE OBSCURING OBJECTS

Number: US20160012574A1
Author: Fang Jun, LI DAQI
Assignee:

Technologies are generally described for methods and systems effective to generate a composite image. The methods may include receiving first image data that includes object data corresponding to an object and receiving second image data that includes obscuring data. The obscuring data, if displayed on a display, may obscure at least a portion of the object. The methods may also include identifying a first region, in the first image data, that may include the object data. The methods may also include identifying a second region, in the second image data, that may include the obscuring data. The methods may also include replacing at least part of the second region with at least part of the first region to generate the composite image data that may include at least some of the object data. The methods may also include displaying the composite image on a display. 1. A method to generate a composite image, the method comprising, by a first device: receiving, from a second device, first image data that includes object data, wherein the object data corresponds to an object; receiving second image data that includes obscuring data, wherein the obscuring data corresponds to at least a part of the second device, the obscuring data, if displayed on a display, would obscure at least a portion of the object; identifying a first region in the first image data, wherein the first region includes the object data; identifying a second region in the second image data, wherein the second region includes the obscuring data; replacing at least part of the second region in the second image data with at least part of the first region, to generate the composite image data, where the composite image data includes at least some of the object data; and displaying the composite image on a display. 2. The method of claim 1, wherein the first device includes a vehicle and the display is inside the vehicle. 3.
The method of claim 1, wherein the first device includes a first vehicle, and the second ...
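The replace-region step can be shown with one-dimensional "images"; the indices and pixel values below are made up for illustration:

```python
# Toy version of region replacement: the obscured region in the second image
# is overwritten with the corresponding region from the first image, which
# has a clear view of the object.
first  = [0, 7, 7, 0, 0]   # first image: object data (the 7s) is visible
second = [0, 9, 9, 0, 0]   # second image: the 9s are obscuring data

region = slice(1, 3)        # region identified as containing the object
composite = second[:]
composite[region] = first[region]
print(composite)  # [0, 7, 7, 0, 0]
```

In a real implementation the two regions would first have to be registered (aligned) so that the same slice refers to the same scene location in both images.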

More details
Publication date: 14-01-2016

THREE-DIMENSIONAL IMAGE OUTPUT DEVICE AND BACKGROUND IMAGE GENERATION DEVICE

Number: US20160012627A1
Assignee:

A projection (projected image) is drawn by perspective projection of a three-dimensional model with a background image having improved reality. When a sightline of the perspective projection looks down from above, the projected image is drawn into an object drawing area which is a lower part of an image picture. A background layer representing the stratosphere is separately generated by two-dimensionally drawing a background image, in which the stratosphere (hatched area) is opaque, while the remaining area is transparent. The boundary between the opaque portion and the transparent portion forms a curved line that is convex upward to express a curved horizon. The background layer is superimposed in front of the projected image, not behind the projected image, thereby covering an upper edge portion of the projected image including a straight-lined upper edge, so as to provide a curved boundary realizing a curved pseudo horizon in the image picture. 1. A three-dimensional image output device that outputs a three-dimensional image in which an object is drawn three-dimensionally , the three-dimensional image output device comprising:a three-dimensional model storage that stores a three-dimensional model representing a three-dimensional shape of the object;a projecting section that uses the three-dimensional model and generates a three-dimensional object image that expresses the object three-dimensionally;a background layer generating section that generates a background layer, in which a background image of the three-dimensional image is drawn to have a transparent portion and an opaque portion; andan image output controller that superimposes the background layer on a front surface of the three-dimensional object image to generate the three-dimensional image and outputs the three-dimensional image, whereinat least one of a generating condition of the three-dimensional object image and a generating condition of the background layer is adjusted to cause the opaque portion 
...

More details
Publication date: 12-01-2017

PORTABLE VIDEO COMMUNICATION SYSTEM

Number: US20170013197A1
Assignee:

A method and device for adapting a display image on a hand-held portable wireless display and digital capture device. The device includes a camera for capturing a digital video and/or still image of a user, means for adjusting the captured digital image in response to poor image capture angle of said image capture device so as to create a modified captured digital image; and means for transmitting said modified captured digital image over a wireless communication network to a second hand-held portable wireless display and digital capture device. 1. A method comprising:capturing digital video images by a digital capture device having a display feature;determining that at least some of the captured images are of poor quality caused, at least in part, by jitter or unstable motion of the digital capture device by performing at least one or both of an analysis of the digital images being captured and detecting motion of the digital capture device;automatically adjusting one or more of the digital images that were determined to be of poor quality due to jitter or unstable motion of the digital capture device; andpresenting a verification image on the display feature, wherein the verification image is configured to provide visual verification as to what the one or more adjusted digital images look like.2. The method of claim 1 , wherein automatically adjusting the one or more digital images that were determined to be of poor quality caused claim 1 , at least in part claim 1 , by jitter or unstable motion of the digital capture device comprises adjusting an allowed image capture area of the digital image to remove at least a portion of a background of the digital image.3. The method of claim 2 , further comprising storing the removed portion of the background of the digital image.4. 
The method of claim 1 , wherein the visual verification comprises presentation of face location claim 1 , background claim 1 , zoom claim 1 , brightness claim 1 , pointing claim 1 , and privacy ...

More details
Publication date: 09-01-2020

RING SIZE MEASUREMENT SYSTEM AND METHOD FOR DIGITALLY MEASURING RING SIZE

Number: US20200013182A1
Author: Sompura Mehul
Assignee:

A ring size measuring system to digitally measure a ring size of a user's finger including an image capturing device configured to capture a digital image of the user's finger, one or more processors, memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: receiving a digital image of a user's hand using the image capturing device, determining a distance between the image capturing device and the user's hand, and defining at least one-dimension point pair of a selected finger from the received digital image, wherein the one or more processors calculate a distance between the dimension point pair to calculate a diameter of the selected finger. 1. A ring size measuring system to digitally measure a ring size of a user's finger , the system comprising:an image capturing device configured to capture a digital image of the user's finger;one or more processors;memory; andone or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for:receiving a digital image of a user's hand using the image capturing device;determining a distance between the image capturing device and the user's hand; anddefining at least one-dimension point pair of a selected finger from the received digital image,wherein the one or more processors use the determined distance between the image capturing device and the user's hand and a distance between the dimension point pair to calculate a diameter of the selected finger.2. The ring size measuring system of claim 1 , wherein the one or more programs further includes displaying a ring size corresponding to the calculated diameter using a data set including a diameter to ring size conversion chart of the selected finger on a display screen.3. The ring size measuring system of ...
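A simple pinhole-camera relation suggests why the camera-to-hand distance enters the calculation: a feature spanning w pixels at distance d, seen through a lens of focal length f (expressed in pixels), has real width d·w/f. The patent text does not spell out its computation, so the formula and the numbers below are illustrative assumptions:

```python
# Back-of-the-envelope pinhole-camera conversion from pixel width to real
# width (all names and numbers are illustrative, not from the patent).
def real_width_mm(pixel_width, distance_mm, focal_px):
    """Real-world width implied by a pixel span at a known distance."""
    return distance_mm * pixel_width / focal_px

# Hypothetical measurement: finger spans 120 px at 300 mm, focal length 2000 px.
diameter = real_width_mm(120, 300, 2000)
print(diameter)  # 18.0 (mm)
```

A diameter-to-ring-size lookup table, as mentioned in claim 2, would then map 18.0 mm to the corresponding ring size.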

More details
Publication date: 09-01-2020

DYNAMIC CONTENT PROVIDING METHOD AND SYSTEM FOR FACE RECOGNITION CAMERA

Number: US20200013195A1
Assignee: Snow Corporation

A dynamic content providing method performed by a computer-implemented dynamic content providing system including recognizing a facial region in an input image, extracting feature information of the recognized facial region, and dynamically synthesizing an image object of content based on the feature information, the content being synthesizable with the input image may be provided. 1. A dynamic content providing method performed by a computer-implemented dynamic content providing system , the method comprising:recognizing a facial region in an input image;extracting feature information of the recognized facial region; anddynamically synthesizing an image object of content based on the feature information, the content being synthesizable with the input image.2. The method of claim 1 , wherein the extracting comprises extracting the feature information by calculating face ratio data based on the recognized facial region.3. The method of claim 1 , wherein the extracting comprises:calculating face ratio data based on the facial region;comparing the face ratio data to reference ratio data; andextracting the feature information based on a result of the comparing.4. The method of claim 1 , wherein the extracting comprises:in response to at least two facial regions being recognized in the input image,calculating face ratio data with respect to each of the facial regions;comparing the face ratio data between the facial regions; andextracting the feature information with respect to each of the at least two facial regions based on a result of the comparing.5. The method of claim 1 , wherein the dynamically synthesizing comprises:synthesizing the content with the input image; andproviding a different synthesis result with respect to the content based on the feature information in the input image.6. The method of claim 1 , wherein the dynamically synthesizing comprises synthesizing the image object of the content at a position corresponding to the feature information in the ...
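The compare-ratio-to-reference step (claims 2–3) can be sketched as follows; the landmark choice, reference ratio, and tolerance are assumptions for illustration, not Snow's values:

```python
# Illustrative face-ratio feature extraction: compute a ratio from the
# recognized facial region and classify it against a reference ratio.
def face_ratio(eye_distance, face_width):
    return eye_distance / face_width

def feature(ratio, reference=0.42, tolerance=0.03):
    """Classify the measured ratio relative to an assumed reference."""
    if ratio > reference + tolerance:
        return "wide-set"
    if ratio < reference - tolerance:
        return "close-set"
    return "average"

print(feature(face_ratio(50, 100)))  # ratio 0.50 -> "wide-set"
print(feature(face_ratio(42, 100)))  # ratio 0.42 -> "average"
```

The extracted label could then select or deform the image object that gets synthesized onto the face, which is what makes the synthesis result "dynamic" per the claims.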

More details
Publication date: 15-01-2015

Method and Apparatus for Image Content Detection and Image Content Replacement System

Number: US20150015743A1
Author: Rantalainen Erkki
Assignee:

A subject, such as a billboard, has a filtering film to absorb electromagnetic radiation specifically in a first wavelength band. A detector provides a first detector signal relating to the first wavelength band and a second detector signal relating to another, different, second wavelength band, respectively. Suitably, the subject appears with high intensity in one band and with low intensity in the other. A content replacement unit produces a mask signal by identifying regions of contrast between the first and second detector signals as target areas. A content substitution unit selectively replaces the target areas with alternate image content to generate modified video images. The system is useful, for example, to generate multiple live television broadcasts each having differing billboard advertisements. 1. An image content replacement system, comprising: a subject including a filter which absorbs radiation specifically in a first wavelength band; at least one detector unit which observes a scene including the subject to provide a first detector signal relating to the first wavelength band and a second detector signal relating to a second wavelength band which is distinct from the first wavelength band; and a content replacement unit which generates a mask signal derived from regions of contrast between the first and second detector signals, the mask signal identifying one or more target areas within video images observing the subject for replacement with alternate content. 2.
The system of claim 1, wherein the content replacement unit comprises: a video image receiving unit which receives the video images observing the subject; a detector signal processing unit which processes the first and second detector signals, wherein the first detector signal observes the scene in the first wavelength band and the second detector signal observes the scene in the second wavelength band; and a mask signal generating unit which generates ...
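Mask generation from inter-band contrast can be illustrated with toy detector values; the intensities and threshold below are assumptions:

```python
# Toy two-band mask: the filtering film makes the subject dark in the first
# band but leaves it bright in the second, so strong contrast between the
# bands marks the target area for content replacement.
band1 = [200, 20, 25, 210]    # first band: film absorbs over the subject
band2 = [205, 190, 195, 200]  # second band: subject reflects normally

mask = [abs(a - b) > 100 for a, b in zip(band1, band2)]
print(mask)  # [False, True, True, False]
```

Pixels where the mask is True would then be replaced with the alternate advertisement content; the rest of the broadcast frame passes through unchanged.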

More details
Publication date: 14-01-2016

DIGITAL COMPOSITING OF LIVE ACTION AND ANIMATION

Number: US20160014347A1
Author: Van Eynde Stephen
Assignee:

In a new computer implemented method for digital compositing of live action and animation video clips, a digital live action video layer is received, a digital animation layer is generated without a background, a time and location in the digital live action video layer where the digital animation layer must be superimposed is determined, the digital animation layer is superimposed over the live action video layer at the determined time and location, the superimposition is continued over the length of the live action video layer, with location selection adjusted as needed, and a composite digital video is output. 1. A computer implemented method for digital compositing of live action and animation video clips, comprising: receiving a digital live action video layer; generating a digital animation layer without a background; determining a time and location in the digital live action video layer where the digital animation layer must be superimposed; superimposing the digital animation layer over the live action video layer at the determined time and location; continuing the superimposition over the length of the live action video layer, with location selection adjusted as needed; and outputting a composite digital video. This application claims the benefit of U.S. Provisional Application No. 62/023,561, filed Jul. 11, 2014, which is hereby incorporated by reference in its entirety. The present invention relates generally to the field of video technology, and more particularly to video compositing. Compositing (blending) of live action footage with animation is a technique that has been done for years in many ways. The earliest example was in 1919 with the silent film “Out of the Inkwell”. In films such as Disney's “Mary Poppins,” the blending of live action and animation was achieved by a process known as chroma keying. Chroma keying allows for the separation or masking of foreground elements to be placed over new backgrounds.
This typically involves a blue or green screen ...
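A chroma-key mask of the kind this background section describes can be sketched in a few lines; the green-dominance thresholds are illustrative assumptions, not any production keyer's values:

```python
# Minimal chroma-key sketch: a pixel counts as "screen" when green clearly
# dominates red and blue; screen pixels are swapped for the new background.
def is_green_screen(r, g, b):
    return g > 90 and g > 1.5 * r and g > 1.5 * b

def key(frame, background):
    return [bg if is_green_screen(*px) else px
            for px, bg in zip(frame, background)]

frame  = [(10, 200, 10), (180, 160, 150), (20, 220, 30)]  # actor pixel in middle
new_bg = [(0, 0, 255)] * 3                                # replacement background

print(key(frame, new_bg))
# → [(0, 0, 255), (180, 160, 150), (0, 0, 255)]
```

Real keyers additionally soften the mask edge and suppress green spill onto the foreground, which this sketch omits.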

More details
Publication date: 14-01-2016

OVERLAY NON-VIDEO CONTENT ON A MOBILE DEVICE

Number: US20160014350A1
Author: Osman Steven
Assignee: SONY COMPUTER ENTERTAINMENT INC.

Methods, systems, and devices are described for presenting non-video content through a mobile device that uses a video camera to track a video on another screen. In one embodiment, a system includes a video display, such as a TV, that displays video content. A mobile device with an integrated video camera captures video data from the TV and allows a user to select an area in the video in order to hear/feel/smell what is at that location in the video. 1. A system for augmenting a video, the system comprising: a video source configured to provide video content to a video display; and a mobile device with a video camera and display, the mobile device configured to: capture video data including the video content from the video display using the video camera, the video content from the video display having at least one marker, the at least one marker including at least one time code which identifies a temporal position corresponding to at least a portion of time in the video content; track the video content in the captured video data; receive a user selection of a subportion of an image in the captured video data; access overlay content associated with the subportion of the image and synchronized with the temporal position of the video content using the at least one time code; and present the accessed non-video content associated with the subportion of the image to a user at substantially the same time as the video content is captured using the video camera. 2. The system of claim 1, wherein the at least one marker comprises encoded audio. 3. The system of claim 1, wherein the at least one marker comprises an anchored pattern. 4. The system of claim 1, wherein the mobile device is further configured to identify a size of the video content. 5. The system of claim 1, wherein the mobile device is further configured to identify an orientation of the video content. 6.
The system of claim 1 , wherein a type of the overlay content comprises content specified by a ...

More details
Publication date: 14-01-2016

METHOD AND SYSTEM FOR CONTROLLING A DEVICE

Number: US20160014381A1
Assignee: AXIS AB

Described is a method for registering and executing instructions in a video capturing device, and a door station. The method comprises receiving at a video capturing device a signal(s) representing a first input made using an authorized device, generating a graphical representation of the received signal(s), superimposing the graphical representation onto video captured by the video capturing device and streamed to the authorized device, receiving at the video capturing device, after the signal(s) representing an input made at the authorized device have been received and graphical representations have been generated and superimposed onto video captured by the video capturing device and streamed to the authorized device, a concluding signal representing a concluding input made using the authorized device, translating, in response to said concluding input, the received signal(s) into an instruction executable by the image capturing device, and executing the instruction resulting from the translation of the signal(s). 1.
A method for registering and executing instructions in a video capturing device , comprising:receiving at a communication interface of the video capturing device at least one signal representing a first input made using an authorized device,generating at the video capturing device a graphical representation of the at least one received signal, superimposing at the video capturing device the graphical representation onto video captured by the video capturing device and streamed to the authorized device,receiving at the video capturing device, after the at least one signal representing an input made at the authorized device have been received and graphical representations have been generated and superimposed onto video captured by the video capturing device and streamed to the authorized device, a concluding signal representing a concluding input made using the authorized device,translating, in response to said concluding input, the received at least ...

More details
Publication date: 10-01-2019

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND PROGRAM

Number: US20190014288A1
Assignee:

To achieve a configuration enabling identification of the viewing user to whom the displayed user on the display unit is speaking. The configuration includes a display image generation unit that generates dialog pair identification information enabling identification of to which viewing user among the plurality of viewing users the displayed user on the display unit is speaking, and outputs the generated information together with the displayed user, to the display unit. The display image generation unit generates, as the dialog pair identification information, an arrow or an icon, a face surrounding frame or a face side line, a virtual viewpoint background image or the like, directed from the displayed user forming the dialog pair to the viewing user forming the dialog pair, and displays the generated dialog pair identification information together with the displayed user, on the display unit. 1. An information processing apparatus comprising a display image generation unit that generates dialog pair identification information enabling identification of to which viewing user among a plurality of viewing users a displayed user on a display unit is speaking, and that outputs the generated dialog pair identification information together with the displayed user onto the display unit. 2. The information processing apparatus according to claim 1, wherein the information processing apparatus inputs image data of the displayed user via a data transmission/reception unit, and the display image generation unit displays the dialog pair identification information being superimposed on the image data of the displayed user. 3. The information processing apparatus according to claim 1, wherein the display image generation unit inputs dialog pair determination information, discriminates the displayed user and the viewing user forming a dialog pair from each other on the basis of the input dialog pair determination information, and outputs, to the display unit, the dialog pair ...

Publication date: 14-01-2021

METHOD AND APPARATUS FOR CAPTURING VIDEO, ELECTRONIC DEVICE AND COMPUTER-READABLE STORAGE MEDIUM

Number: US20210014431A1
Assignee:

Embodiments of the present disclosure provide a method and apparatus for capturing video, an electronic device and a computer-readable storage medium. The method includes: receiving a video capture trigger operation from a user via a video playing interface for an original video; superimposing a video capture window on the video playing interface, in response to the video capture trigger operation; receiving a video capture operation from the user via the video playing interface; and capturing a user video in response to the video capture operation, and displaying the user video via the video capture window. According to the embodiments of the present disclosure, a user only needs to perform operations related to capturing a user video on the video playing interface, thereby implementing a function of combining videos, and the operation process is simple and fast. The user video can represent the user's feelings, comments, or viewing reactions to the original video. Therefore, the user can conveniently display his or her views on or reactions to the original video, thereby improving the interactive experience of the user. 1. A method for capturing video, comprising: receiving a video capture trigger operation from a user via a video playing interface for an original video; superimposing a video capture window on the video playing interface, in response to the video capture trigger operation; receiving a video capture operation from the user via the video playing interface; and capturing a user video, in response to the video capture operation, and displaying the user video via the video capture window. 2. The method according to claim 1, further comprising: receiving a window movement operation with respect to the video capture window, from the user; and adjusting the video capture window to a corresponding region in the video playing interface, in response to the window movement operation. 3. The method according to claim 2, wherein the adjusting the video capture window comprises: displaying ...

Publication date: 14-01-2021

OVERLAY PROCESSING METHOD IN 360 VIDEO SYSTEM, AND DEVICE THEREOF

Number: US20210014469A1
Assignee: LG ELECTRONICS INC.

A 360 image data processing method performed by a 360 video receiving device, according to the present invention, comprises the steps of: receiving 360 image data; acquiring information and metadata on an encoded picture from the 360 image data; decoding the picture on the basis of the information on the encoded picture; and rendering the decoded picture and an overlay on the basis of the metadata, wherein the metadata includes overlay-related metadata, the overlay is rendered on the basis of the overlay-related metadata, and the overlay-related metadata includes information on a region of the overlay. 1. A 360-degree video data processing method performed by a 360-degree video receiving device, the method comprising: receiving 360-degree video data including encoded pictures; acquiring metadata; decoding pictures; and rendering the decoded pictures and an overlay based on the metadata, wherein: the metadata includes overlay related metadata, the overlay related metadata includes distance information indicating a distance from a center of a unit sphere for representing the 360-degree video, the overlay related metadata includes information of the overlay, the information of the overlay includes a rendering type of overlay in 3D space of the 360-degree video, and the overlay is rendered based on the overlay related metadata. 2. The method of claim 1, wherein: the distance is identical to a radius of the unit sphere. 3. The method of claim 1, wherein: the overlay related metadata includes information on azimuth and elevation that indicates the azimuth and elevation angles of the center of the overlay region respectively. 4. The method of claim 1, wherein: the overlay related metadata includes range information on azimuth and elevation which indicates an azimuth range and an elevation range through a center point of a sphere region. 5. The method of claim 4, wherein: the azimuth range is in the range of 0 to 360*2^16 and the elevation range is in the range of 0 to ...
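The claims above place an overlay by the azimuth and elevation of its region's centre plus a distance from the centre of the unit sphere. A minimal sketch of that spherical-to-Cartesian conversion (the function name and axis convention are my own assumptions, not taken from the patent):

```python
import math

def sphere_point(azimuth_deg, elevation_deg, distance=1.0):
    """Convert azimuth/elevation angles (degrees) and a radial distance
    into a Cartesian point (x, y, z).

    Convention assumed here: azimuth 0 looks down +x, elevation 0 lies
    in the x/y plane, and +z points up."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.cos(az)
    y = distance * math.cos(el) * math.sin(az)
    z = distance * math.sin(el)
    return (x, y, z)
```

With `distance=1.0` the point sits on the unit sphere itself, matching claim 2's case where the distance equals the sphere radius.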

Publication date: 09-01-2020

METHODS AND APPARATUS FOR IMMERSIVE MEDIA CONTENT OVERLAYS

Number: US20200014906A1
Author: Chen Lulin, WANG Xin
Assignee: MEDIATEK SINGAPORE PTE. LTD.

The techniques described herein relate to methods, apparatus, and computer readable media configured to decode video data. Video data includes video content, overlay content, and overlay metadata that is specified separate from the video content and overlay content. The overlay content is determined to be associated with the video content based on the overlay metadata. The overlay content is overlaid onto the video content in the region of the video content. 1. A decoding method for decoding video data, the method comprising: receiving video data comprising: video content; overlay content; and overlay metadata that is specified separate from the video content and overlay content, wherein the overlay metadata specifies a region of the video content; determining the overlay content is associated with the video content based on the overlay metadata; and overlaying the overlay content onto the video content in the region of the video content. 2. The decoding method of claim 1, wherein receiving the video data comprises receiving a timed metadata track comprising the overlay metadata. 3. The decoding method of claim 2, wherein the overlay content comprises first overlay content and second overlay content that is different than the first overlay content; the method comprising: determining, based on the overlay metadata, first overlay content is associated with a first time period and second overlay content is associated with a second time period after the first time period; overlaying the first overlay content on the video content in the region during the first time period; and overlaying the second overlay content on the video content in the region during the second time period. 4. The decoding method of claim 2, wherein: the overlay content comprises first overlay content and second overlay content that is different than the first overlay content; and the overlay metadata does not specify whether to overlay the first overlay content or the second overlay ...
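Claim 3 selects which overlay content is active from time periods carried in a timed metadata track. A sketch of that selection step, using an illustrative `(start, end, content)` tuple list rather than the actual ISOBMFF box layout:

```python
def active_overlay(timeline, t):
    """Return the overlay content whose [start, end) period covers
    time t, or None when no overlay is scheduled.

    `timeline` is a list of (start, end, content) tuples, as might be
    parsed from a timed-metadata track (illustrative structure)."""
    for start, end, content in timeline:
        if start <= t < end:
            return content
    return None

# first overlay for the first period, second overlay afterwards
timeline = [(0.0, 10.0, "overlay-A"), (10.0, 20.0, "overlay-B")]
```

The decoder would call `active_overlay` once per composition time and re-composite only when the returned content changes.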

Publication date: 16-01-2020

Apparatus and method of mapping a virtual environment

Number: US20200016499A1

A method of mapping a virtual environment includes: obtaining a first sequence of video images output by a videogame title; obtaining a corresponding sequence of in-game virtual camera positions at which the video images were created; obtaining a corresponding sequence of depth buffer values for a depth buffer used by the videogame whilst creating the video images; and, for each of a plurality of video images and corresponding depth buffer values of the obtained sequences, obtain mapping points corresponding to a selected predetermined set of depth values corresponding to a predetermined set of positions within a respective video image; where for each pair of depth values and video image positions, a mapping point has a distance from the virtual camera position based upon the depth value, and a position based upon the relative positions of the virtual camera and the respective video image position, thereby obtaining a map dataset of mapping points corresponding to the first sequence of video images.
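Each mapping point above sits along the ray through an image position, at the distance read from the depth buffer, relative to the virtual camera. A deliberately simplified top-down (2-D) sketch of that back-projection; the function name, the linear angle model, and the parameters are assumptions for illustration, not the patent's actual projection math:

```python
import math

def mapping_point(cam_pos, cam_yaw_deg, img_x, img_w, fov_deg, depth):
    """Project one screen column back into a 2-D world plane.

    The ray through image column `img_x` is offset from the camera yaw
    by up to half the field of view; the mapping point lies `depth`
    units along that ray from the camera position (cx, cy)."""
    cx, cy = cam_pos
    # offset in [-fov/2, +fov/2] across the image width
    offset = (img_x / (img_w - 1) - 0.5) * fov_deg
    ang = math.radians(cam_yaw_deg + offset)
    return (cx + depth * math.cos(ang), cy + depth * math.sin(ang))
```

Repeating this for a predetermined set of image positions over many frames accumulates the map dataset the abstract describes.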

Publication date: 03-02-2022

COMPUTING SYSTEM AND A COMPUTER-IMPLEMENTED METHOD FOR SENSING EVENTS FROM GEOSPATIAL DATA

Number: US20220036087A1
Assignee:

A computer-implemented method and computing system for sensing events and optionally and preferably augmenting a video feed with overlay, comprising in some embodiments a data acquisition module, a sensor module, and optionally and preferably an overlay module. By describing the state of an activity with models that capture the semantics of the activity and comparing this description to a library of event patterns, occurrences of events are detected. Detected events are optionally processed by the overlay module to generate video feed augmented with overlay illustrating said events. 1. A computer-implemented method for sensing events during a dynamic activity, the method comprising a data acquisition step and an event sensing step, wherein: a. the data acquisition step comprises the acquisition, by one or more of: video, position-measuring sensors, or digital transfer, of a set of geospatial data including the positions of individuals during a time span thereof; b. the event sensing step comprises a description step and an event detection step, wherein: i. the description step comprises evaluation of a model graph, comprising a collection of models linked by input-output dependency relationships, with at least one model taking as input at least part of the geospatial data, and storage by digital means of the model outputs, which together provide a high-level description of the activity; and ii. the event detection step comprises the matching of the description output with patterns representing event types from a pattern library, outputting an event record whenever a match is found. 2. The computer-implemented method of claim 1, wherein the event detection step further comprises: the model outputs at that timestep are compared to the criteria in the pattern definition using pattern matching criteria comprising one or more inequality relationships (e.g. greater than, less than) defined with reference to model outputs, and in case a match is found, an ...
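The event detection step compares model outputs against inequality criteria from a pattern library and emits an event record on a match. A minimal sketch of that matcher; the `(model_name, op, threshold)` pattern encoding and the example pattern are illustrative assumptions, not the patent's actual format:

```python
import operator

# supported inequality relationships, as named in the claims
OPS = {">": operator.gt, "<": operator.lt, ">=": operator.ge, "<=": operator.le}

def match(pattern, model_outputs):
    """True when every criterion in `pattern` holds for the current
    model outputs; a pattern is a list of (model_name, op, threshold)."""
    return all(OPS[op](model_outputs[name], thr) for name, op, thr in pattern)

def detect_events(library, model_outputs, timestep):
    """Scan the pattern library and emit one event record per match."""
    return [{"event": name, "t": timestep}
            for name, pattern in library.items()
            if match(pattern, model_outputs)]
```

Run once per timestep of the description output, this yields the stream of event records that the overlay module would then illustrate.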

Publication date: 03-02-2022

Calculation device, information processing method, and storage medium

Number: US20220036107A1
Author: Ryuichi Akashi
Assignee: NEC Corp

A calculation device includes: an image input unit that receives, as an input, an image acquired by an image acquisition device that photographs a prescribed area; a visibility evaluation unit that calculates an evaluation value showing the visibility of a detection object in the image, on the basis of the contrast of the image and noise information showing the degree of noise included in the image; a calculation unit that calculates a maximum visually recognizable distance, which is the maximum distance from the image acquisition device to the detection object at which the detection object is visually recognized in the image, based on the evaluation value, a value set as the actual size of the detection object in the image, and the image angle of the image acquisition device; and an output unit that generates and outputs output information based on the maximum visually recognizable distance.
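Under a plain pinhole-camera model, an object of real size S at distance d spans about S·W / (2·d·tan(θ/2)) pixels for image width W and horizontal view angle θ; solving for the distance at which it still covers some minimum pixel count gives a rough upper bound on the maximum visually recognizable distance. This is only a geometric sketch of the patent's inputs (object size and image angle); the patent additionally weights visibility by the contrast/noise evaluation value, which is omitted here:

```python
import math

def max_visible_distance(object_size_m, image_width_px, view_angle_deg,
                         min_pixels):
    """Distance at which an object of the given real size still spans
    `min_pixels` pixels, under a simple pinhole-camera model.
    (Illustrative only; contrast/noise effects are not modelled.)"""
    half_angle = math.radians(view_angle_deg) / 2.0
    return object_size_m * image_width_px / (2.0 * min_pixels * math.tan(half_angle))
```

Note the inverse relationship: demanding twice as many pixels on target halves the computed distance.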

Publication date: 03-02-2022

LESSON SYSTEM, LESSON METHOD, AND PROGRAM

Number: US20220036753A1
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA

To provide a lesson system, a lesson method, and a program by which a high learning effect is achieved. A lesson system includes: an experience video storage unit configured to store an experience video based on an actual experience of an experiencer; a sensor configured to detect a motion of a student; a microphone configured to detect a voice of the student; a reproduction processing unit configured to generate a lesson video in which an image of a teacher is superimposed on the experience video; a display unit configured to display the lesson video to the student; a speaker for outputting audio configured to output audio corresponding to the lesson video; and a progression control unit configured to control progression of the lesson video displayed on the display unit in accordance with the result of detection by at least one the sensor and the microphone. 1. A lesson system comprising:a video acquisition unit configured to acquire at least one experience video from a storage unit, the storage unit being configured to store at least one of an experience video based on an actual experience of an experiencer and a simulated experience video;a motion information acquisition unit configured to acquire sensing information related to a motion of a student;an audio information acquisition unit configured to acquire sensing information related to a voice of the student;a reproduction processing unit configured to generate a lesson video in which an image of a teacher is superimposed on the experience video;a video output unit configured to output an output signal for causing a display unit for displaying a video to display the lesson video to the student;an audio output unit configured to output an output signal for causing a speaker for outputting audio to output audio corresponding to the lesson video; anda progression control unit configured to control progression of the lesson video displayed on the display unit by controlling each of the output of the video output 
...

Publication date: 21-01-2016

Method of Video Enhancement

Number: US20160021317A1
Assignee:

A system and method for enhancing a stream of video images in real-time. A primary stream of video images of a real event is obtained using broadcast video cameras. That primary stream also contains images of a display-object that change in appearance during the broadcast. At the same time, a stream of images of an agent-object is obtained. This may, for instance, be an animated computer-generated figure and may be stored on a local computer hard drive. The animated figure is choreographed and synchronized to be in time with the change of appearance of the display-object. By combining the primary and secondary image streams using match-moving technology, a composite stream is formed in which the agent-object appears to be causally linked to the display-object.

Publication date: 21-01-2016

Method, Apparatus and System For Regenerating Voice Intonation In Automatically Dubbed Videos

Number: US20160021334A1
Author: Dvir Jacob, Rossano Boaz
Assignee:

A system and method for automatically dubbing a video in a first language into a second language, comprising: an audio/video pre-processor configured to provide separate original audio and video files of the same media; a text analysis unit configured to receive a first text file of the video's subtitles in the first language and a second text file of the video's subtitles in the second language, and re-divide them into text sentences; a text-to-speech unit configured to receive the text sentences in the first and second languages from the text analysis unit and produce therefrom first and second standard TTS spoken sentences; a prosody unit configured to receive the first and second spoken sentences, the separated audio file and timing parameters and produce therefrom dubbing recommendations; and a dubbing unit configured to receive the second spoken sentence and the recommendations and produce therefrom an automatically dubbed sentence in the second language. 1. A system for automatically dubbing a video in a first language into a second language, comprising: a Text Analysis Unit configured to receive original subtitles text, timing data and target language selection and translate the subtitle into the target language; a TTS (Text To Speech) Generation Unit configured to generate a standard TTS audio of the translated subtitle text; a Prosody Analysis Unit configured to receive the timing of the TTS translated audio and the timing of the original subtitle and recommend adjustments to the final dubbed subtitle; and a Dubbing Unit configured to implement the recommendations on the TTS translated speech. 2. A system for automatically dubbing a video in a first language into a second language, comprising: an audio/video pre-processor configured to provide separate original audio and video files of the same media; a text analysis unit configured to receive a first text file of the video's subtitles in the first language and a second text file of the video's subtitles in ...
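One concrete timing recommendation the prosody unit could produce is a playback-rate factor that fits the synthesized sentence into the original subtitle's time slot. A sketch under that assumption; the clamp limits are my own illustrative values, not taken from the patent:

```python
def tempo_factor(tts_duration_s, subtitle_duration_s,
                 min_factor=0.7, max_factor=1.4):
    """Playback-rate factor that squeezes or stretches a synthesized
    sentence into the original subtitle's time slot, clamped to a
    range that still sounds natural.  (Illustrative recommendation
    logic; the 0.7/1.4 limits are assumptions.)"""
    factor = tts_duration_s / subtitle_duration_s
    return max(min_factor, min(max_factor, factor))
```

A factor above 1 means the TTS audio must be sped up to fit; below 1, slowed down or padded with silence.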

Publication date: 21-01-2021

Microscope System and Method for Controlling a Microscope System of this Type

Number: US20210018741A1
Assignee:

A microscope system and a method for controlling the microscope system have a microscope with several microscope components electrically adjustable and/or activatable via a control apparatus at least one objective, one illumination device, and a camera generating a digital microscopic image, and having a control and display device generating control signals for controlling at least one of the components and displaying the microscopic image; the control and display device is connected to the control apparatus and comprising a display area; the operating element with several operating fields; one or several operating fields configured such that the control and display device, upon selection of an operating field, controls one or several of the components and/or means for modifying settings in the control and display device; upon selection of the operating element as a whole, it becomes modified, via within the display area, regarding its position, its size, shape. 1. A microscope system comprising:a microscope having several microscope components and a control apparatus for electrically adjusting and/or activating said several microscope components;at least one microscope objective, at least one microscope illumination device, and a microscope camera for generating a digital microscopic image;a control and display device serving to generate control signals for controlling at least one of the several adjustable and/or activatable microscope components and for displaying the digital microscopic image, the control and display device being communicatingly connected to the control apparatus and comprising a display area for displaying at least a portion of the digital microscopic image with overlaid a virtual graphical operating element;the operating element comprising several operating fields;at least one of the several operating fields being configured such that upon selection of one of the several operating fields the control and display device, applies control to the 
...

Publication date: 21-01-2021

AUGMENTED REALITY MICROSCOPE FOR PATHOLOGY WITH OVERLAY OF QUANTITATIVE BIOMARKER DATA

Number: US20210018742A1
Author: STUMPE Martin
Assignee:

A microscope of the type used by a pathologist to view slides containing biological samples such as tissue or blood is provided with the projection of enhancements to the field of view, such as a heatmap, border, or annotations, or quantitative biomarker data, substantially in real time as the slide is moved to new locations or changes in magnification or focus occur. The enhancements assist the pathologist in characterizing or classifying the sample, such as being positive for the presence of cancer cells or pathogens. 1. A method for assisting a user in review of a slide containing a biological sample with a microscope having an eyepiece, comprising the steps of: (a) capturing, with a camera, a digital image of a view of the sample as seen through the eyepiece of the microscope, (b) using a first machine learning pattern recognizer to identify one or more areas of interest in the sample from the image captured by the camera, and a second machine pattern recognizer trained to identify individual cells, and (c) superimposing an enhancement to the view of the sample as seen through the eyepiece of the microscope as an overlay, wherein the enhancement is based upon the identified areas of interest in the sample and further comprises quantitative data associated with the areas of interest, (d) wherein, when the sample is moved relative to the microscope optics or when a magnification or focus of the microscope changes, a new digital image of a new view of the sample is captured by the camera and supplied to the machine learning pattern recognizer, and a new enhancement is superimposed onto the new view of the sample as seen through the eyepiece in substantial real time. 2. The method of claim 1, wherein the one or more areas of interest comprise cells positive for expression of a protein and wherein the quantitative data comprises a percent of the cells in the view as being positive for such protein expression. 3. The method of claim 2, wherein the protein comprises Ki-67 ...
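The quantitative datum in claim 2, the percent of cells in the view positive for protein expression, reduces to a simple ratio once the cell-level recognizer has flagged each cell. A sketch; the boolean-list input shape is an illustrative assumption about the recognizer's output, not the patent's actual interface:

```python
def percent_positive(cells):
    """Percentage of detected cells flagged positive for a marker
    (e.g. Ki-67 expression).  `cells` is a list of booleans, one per
    cell detected in the current field of view (assumed data shape)."""
    if not cells:
        return 0.0
    return 100.0 * sum(cells) / len(cells)
```

The returned percentage is the figure the overlay would project next to the highlighted region of interest.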

Publication date: 17-04-2014

VIDEO PROCESSING DEVICE

Number: US20140104457A1
Author: Kato Akihiro
Assignee: HITACHI KOKUSAI ELECTRIC INC.

Provided is a video processing device with which it is possible to efficaciously carry out a process of superposing a monitor character signal on a video signal which is inputted from a camera and outputting the result separately from the video signal. The video processing device comprises: a character signal emitter which generates a character signal on the basis of the monitor data; and a character superposition unit which superposes the character signal generated by the character signal emitter upon the least significant bits of the color difference signal of the video signal which is inputted from the camera. With respect to downconverting the video signal whereupon the character signal has been superposed, only a downsampling process on the least significant bits of the color difference signal is carried out, and an interpolation filter process is not carried out. 1. A video processing device for superposing a character signal that indicates a state of a camera or a state of a camera control device for controlling an operation of the camera on a video signal inputted from the camera and for outputting a resulting signal separately from the video signal, comprising: a character signal generator that generates a character signal on the basis of the state of the camera or the state of the camera control device; and a character signal superposing unit that superposes the character signal generated by the character signal generator on a least significant bit of a color difference signal of the video signal inputted from the camera. 2. The video processing device according to claim 1, comprising: a character signal visualizing unit that visualizes the character signal superposed on the video signal by converting a color difference signal in the video signal on which the character signal is superposed and a brightness signal of the video portion where a value for indicating a character unit is stored in the least significant bit of the color difference ...
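Superposing the character signal on the least significant bit of the color difference samples is classic LSB embedding: the bit is overwritten, changing each chroma value by at most 1, which is visually negligible. A sketch of the embed/extract pair; the flat list of 8-bit samples is a simplification of the real chroma layout:

```python
def embed_bits(chroma, bits):
    """Overwrite the least-significant bit of each chroma sample with
    one bit of the character signal (simplified sketch of the
    superposition; samples are plain 8-bit integers)."""
    return [(c & ~1) | b for c, b in zip(chroma, bits)]

def extract_bits(chroma):
    """Recover the character signal bits from the chroma LSBs."""
    return [c & 1 for c in chroma]
```

This also explains why the downconversion path may only downsample the LSBs: an interpolation filter would average neighbouring samples and destroy the embedded bit values.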

Publication date: 17-01-2019

UNATTENDED OBJECT MONITORING DEVICE, UNATTENDED OBJECT MONITORING SYSTEM EQUIPPED WITH SAME, AND UNATTENDED OBJECT MONITORING METHOD

Number: US20190019296A1

It is possible to provide a user with progress information of an unattended object after the appearance in a monitoring area. An unattended object monitoring device is configured to include an image acquirer that acquires a captured image of the monitoring area imaged by the imaging device, an object tracker that detects an object appearing in the monitoring area from the captured image and tracks between the captured images for each appearing object, an unattended object detector that detects an appearing object not displaced beyond a predetermined time as an unattended object based on a tracking result for each appearing object, a progress information generator that generates progress information of the unattended object after the appearance in monitoring area based on the tracking result for each appearing object, and a notification image generator that generates notification image by superimposing the progress information on the captured image. 1. An unattended object monitoring device that detects an unattended object left behind in a monitoring area based on a captured image of the monitoring area imaged by an imaging device , the device comprising:a processor; anda memory that stores an instruction,the device further comprising, as a configuration when the processor executes the instruction stored in the memory;an image acquirer that acquires a captured image of the monitoring area imaged by the imaging device;an object tracker that detects an appearing object appearing in the monitoring area from the captured image and tracks between the captured images for each appearing object;an unattended object detector that detects the appearing object not displaced beyond a predetermined time as the unattended object based on a tracking result for each appearing object by the object tracker;a progress information generator that generates progress information of the unattended object after the appearance in the monitoring area based on the tracking result for each ...
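The core detection rule above, an appearing object "not displaced beyond a predetermined time" is flagged as unattended, can be sketched directly over tracker output. The `(t, x, y)` observation format and the thresholds are illustrative assumptions, not the patent's data model:

```python
def find_unattended(tracks, now, dwell_limit_s=60.0, max_shift_px=10.0):
    """Flag tracked objects that have not moved more than
    `max_shift_px` for longer than `dwell_limit_s` seconds.

    `tracks` maps an object id to a list of (t, x, y) observations,
    oldest first (assumed tracker output format).  Returns the ids of
    unattended objects."""
    unattended = []
    for obj_id, obs in tracks.items():
        # observations inside the dwell window
        recent = [(t, x, y) for t, x, y in obs if now - t <= dwell_limit_s]
        if not recent or now - obs[0][0] < dwell_limit_s:
            continue  # object appeared too recently to judge
        xs = [x for _, x, _ in recent]
        ys = [y for _, _, y in recent]
        if max(xs) - min(xs) <= max_shift_px and max(ys) - min(ys) <= max_shift_px:
            unattended.append(obj_id)
    return unattended
```

The progress information generator would then report, per flagged id, how long the object has sat in place since first appearing.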

Publication date: 03-02-2022

VIDEO PROCESSING METHOD, APPARATUS AND STORAGE MEDIUM

Number: US20220038641A1

The present disclosure relates to a video processing method and apparatus, and a storage medium. The method is applied to a terminal and includes: a background frame for a time static special effect is determined from video frames in a video to be processed; for each of the video frames in the video, an image area where a target object is located is acquired from the respective video frame, and the image area is fused into the background frame to generate a special effect frame with the time static special effect. 1. A video processing method, applied to a terminal, comprising: selecting, from video frames in a video to be processed, a background frame for a static special effect; and for each of the video frames in the video, acquiring an image area, where a target object is located, from the respective video frame, and fusing the image area into the background frame, to generate a special effect frame with the static special effect. 2. The method of claim 1, wherein fusing the image area into the background frame to generate the special effect frame with the static special effect comprises: for each of video frames starting from a start frame in the video, fusing the image area, which is acquired from the respective video frame, into the background frame, to generate the special effect frame with the static special effect; or, for each of video frames between a start frame and an end frame in the video, fusing the image area, which is acquired from the respective video frame, into the background frame, to generate the special effect frame with the static special effect. 3. The method of claim 2, further comprising: displaying the video frames in the video when being recorded; in response to a number of video frames in a predetermined storage space not exceeding a storage amount threshold, buffering the video frames starting from the start frame; and fusing the image area, which is acquired from each of the buffered video frames, into the background frame to ...
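The per-frame fusion step, copying the target object's image area onto the fixed background frame, is a masked composite. A pure-Python sketch on single-channel frames; the nested-list frame format and a binary mask standing in for the detected image area are illustrative assumptions:

```python
def fuse(background, frame, mask):
    """Copy the masked subject pixels from `frame` onto the static
    `background`, producing one special-effect frame.

    Frames are lists of pixel rows; `mask` is 1 where the target
    object is located and 0 elsewhere (assumed representation)."""
    return [
        [f if m else b for b, f, m in zip(brow, frow, mrow)]
        for brow, frow, mrow in zip(background, frame, mask)
    ]
```

Applying `fuse` to every frame against the same background yields the "time static" effect: the scene freezes while only the target object keeps moving.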

Publication date: 03-02-2022

METHOD AND DEVICE FOR PROCESSING VIDEO, AND STORAGE MEDIUM

Number: US20220038642A1

A method, apparatus, and a non-transitory computer-readable storage medium for processing a video are provided. A terminal determines a subject region of a video frame in a video and a background region. A target object is located in the subject region. The background region is a region of the video frame other than the subject region. The terminal overlays the subject region in at least one of a first video frame having the target object on at least one of a second video frame having the target object and generates a special effect frame including at least two subject regions in each of which the target object is located. 1. A method for processing a video, the method applied to a terminal, and comprising: determining a subject region and a background region of a video frame in a video, wherein a target object is located in the subject region, and wherein the background region is a region of the video frame other than the subject region; and overlaying the subject region in at least one of a first video frame having the target object on at least one of a second video frame having the target object, and generating a special effect frame that comprises at least two subject regions in each of which the target object is located. 2. The method of claim 1, wherein determining the subject region and the background region of the video frame in the video comprises: identifying the target object in the video frame in the video; and determining the subject region and the background region according to the target object. 3. The method of claim 1, wherein overlaying the subject region in the at least one of the first video frame having the target object on the at least one of the second video frame having the target object, and generating the special effect frame comprises: selecting at least one of a freeze-frame from a video frame of the video, wherein the first video frame comprises the freeze-frame, and wherein the second video frame comprises the video frame after ...

Publication date: 03-02-2022

COMPUTING SYSTEM AND A COMPUTER-IMPLEMENTED METHOD FOR SENSING GAMEPLAY EVENTS AND AUGMENTATION OF VIDEO FEED WITH OVERLAY

Number: US20220038643A1
Assignee:

A computer-implemented method and computing system for sensing gameplay events and optionally and preferably augmenting a video feed with overlay, comprising in some embodiments a data acquisition module, a sensor module, and optionally and preferably an overlay module. By describing the state of gameplay with models that capture the semantics of the game and comparing this description to a library of event patterns using one or more pattern matchers defining different ways of evaluating criteria, occurrences of events are detected. Detected events are processed by the overlay module to generate video feed augmented with overlay illustrating said events. 1. A computer-implemented method, comprising a data acquisition step and an event sensing step, wherein: a. the data acquisition step comprises the acquisition, by one or more of: video, position-measuring sensors, or digital transfer, of a set of sporting event data including the positions of individuals during a time span thereof; b. the event sensing step comprises a description step and an event detection step, wherein: i. the description step comprises evaluation of a model graph, comprising a collection of models linked by input-output dependency relationships, with at least one model taking as input at least part of the sporting event data, and storage by digital means of the model outputs, which together provide a high-level description of the gameplay; and ii. the event detection step comprises matching of the gameplay description with patterns representing event types from a pattern library, outputting an event record whenever a match is found. 2. The computer-implemented method of claim 1, wherein the event detection step further comprises: the model outputs at that timestep are compared to the criteria in the pattern definition using pattern matching criteria comprising one or more inequality relationships (e.g. greater than, less than) defined with reference to model outputs, and in case a ...

Publication date: 03-02-2022

SYSTEMS AND METHODS FOR CUSTOMIZING AND COMPOSITING A VIDEO FEED AT A CLIENT DEVICE

Number: US20220038767A1
Assignee:

An embodiment of a process for providing a customized composite video feed at a client device includes receiving a background video feed from a remote server, receiving (via the communications interface) content associated with one or more user-specific characteristics, and determining one or more data elements based at least in part on the received content. The process includes generating a composite video feed customized to the one or more user-specific characteristics, including by matching at least corresponding portions of the one or more data elements to corresponding portions of the background video feed, and displaying the composite video feed on a display device of the client device. 1. (canceled) 2. A device comprising: a communications interface; and a processor coupled to the communications interface, the processor configured to: receive a background video feed from a remote server, wherein the background video feed is associated with a sporting event; generate a video feed customized to a user based at least in part on (i) the background video feed, and (ii) user-specific data, wherein at least part of the video feed customized to the user is focused on a target object, and the target object is selected based at least in part on the user-specific data; and provide the video feed customized to the user to be displayed on a display device. 3. The device of claim 2, wherein the target object is a particular player that is participating in the sporting event. 4. The device of claim 3, wherein the video feed is customized to the user to display footage of the sporting event focused on actions of the particular player in connection with the sporting event. 5. The device of claim 3, wherein the video feed is customized to the user to highlight the particular player during at least part of the sporting event. 6. The device of claim 2, wherein the target object is a ball. 7. The device of claim 2, wherein the background video feed is received while the ...

Подробнее
18-01-2018 дата публикации

Multi Background Image Capturing Video System

Номер: US20180020167A1
Автор: Hammond Daniel
Принадлежит:

A system which interleaves two or more backgrounds to create two or more different versions of a video, with different backgrounds. One of the videos has an image background, and another of the videos has a chromakey background. The system can separate the two versions of the video which have two different backgrounds.

1. A system of obtaining a video, comprising: an image processing system obtaining a video formed of multiple frames of information, said frames including objects in front of a background, said video having groups of n frames, where n is 2 or greater, where each group of frames includes adjacent-in-time frames which have different backgrounds, with a first frame of the group including a first background behind first objects, and a second frame of the group including a second background different than the first background behind said first objects at a time directly adjacent to a time of the first frame.

2. The system as in claim 1, wherein said first background is an image of a scene, and the background of the scene illuminates the first objects, and said second background is a background with at least one chromakey color.

3. The system as in claim 2, wherein said second background is a single color background.

4. The system as in claim 2, wherein said second background is a multiple color background, including multiple different chromakey colors arranged in a grid, with each adjoining two items of the grid having different colors adjoining one another at all boundaries with the other colors.

5. The system as in claim 2, further comprising an image processor receiving said video and separating said video into a first chromakey background video and a second image background video.

6. The system as in claim 1, further comprising a camera obtaining said video, a background screen that displays an image as a background for said video, and a processor that ...
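The separation step described in claim 5 reduces to de-interleaving: within each group of n adjacent frames the backgrounds cycle, so frame i belongs to background stream i mod n. A minimal sketch, not taken from the patent, with frames represented by stand-in strings:

```python
# Illustrative de-interleaving: groups of n adjacent-in-time frames cycle
# through n different backgrounds, so stream membership is i % n.
def deinterleave(frames, n=2):
    """Split an interleaved capture into n per-background streams."""
    streams = [[] for _ in range(n)]
    for i, frame in enumerate(frames):
        streams[i % n].append(frame)
    return streams

# Even frames carry the image background, odd frames the chromakey one.
combined = ["img_bg_0", "chroma_bg_0", "img_bg_1", "chroma_bg_1"]
image_video, chroma_video = deinterleave(combined, n=2)
```

With n=2 this yields one video lit by the image background and one ready for keying, as the abstract describes.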

Подробнее
18-01-2018 дата публикации

Method for transmitting chroma-keyed videos to mobile phones

Номер: US20180020168A1
Принадлежит: Individual

A system for transmitting a chroma-keyed video between at least two mobile communication devices. The system includes a first mobile communication device for creating the chroma-keyed video through a software Application and transmitting the chroma-keyed video. A Cloud receives the chroma-keyed video from the first mobile communication device. A second mobile communication device receives the chroma-keyed video from the Cloud. The software Application within the second mobile communication device manipulates the chroma-keyed video received from the Cloud.
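Creating and later manipulating a chroma-keyed video rests on a standard keying step: deciding, per pixel, whether it matches the key colour. A toy sketch of that step (the green key, Manhattan distance, and tolerance value are illustrative assumptions, not details from this patent):

```python
# Toy chroma-key masking: alpha 0 where a pixel is close to the key
# colour (replaceable on the receiving device), else alpha 255.
def chroma_key_mask(pixels, key=(0, 255, 0), tol=60):
    mask = []
    for (r, g, b) in pixels:
        dist = abs(r - key[0]) + abs(g - key[1]) + abs(b - key[2])
        mask.append(0 if dist <= tol else 255)
    return mask

pixels = [(0, 250, 5), (200, 30, 40)]   # near-green, then a skin tone
mask = chroma_key_mask(pixels)
```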

Подробнее
21-01-2021 дата публикации

Context-Aware Video Subtitles

Номер: US20210019369A1
Принадлежит: Adobe Inc.

Generation of context-aware subtitles for a video is described. A subtitle context system receives video subtitles, extracts words from the subtitles, and determines a part of speech describing each word's use in the video. The subtitle context system then determines a difficulty score for each of the words based on the length of the word, a frequency with which the word is used in the language of the video, and a language proficiency score for a user viewing the video. To identify which words of the subtitles are likely difficult for the viewing user to understand, the subtitle context system compares the computed difficulty scores to a difficulty score threshold. For each word having a difficulty score that satisfies the threshold, a definition and one or more synonyms are ascertained. During video playback, the definition and synonyms for difficult-to-understand words are displayed concurrently with the word.

1. In a digital medium environment to provide contextual meaning for difficult words in video subtitles, a method implemented by at least one computing device, the method comprising: receiving, by the at least one computing device, subtitles for a video, the subtitles including a plurality of words and a plurality of timestamps that indicate when the plurality of words are to be displayed during playback of the video; identifying, by the at least one computing device, a language proficiency score associated with a user viewing the video, the language proficiency score indicating the user's ability to understand a language in which the subtitles are written; determining, by the at least one computing device, for one of the plurality of words, a usage frequency of the word in the language in which the subtitles are written; ascertaining, by the at least one computing device, a length of the word; calculating, by the at least one computing device, a difficulty score for the word based on the usage frequency of the word, the length of the word, and the language ...
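The claim enumerates the inputs to the difficulty score (word length, usage frequency, viewer proficiency) but not the formula. The sketch below therefore uses an invented, clearly hypothetical weighting purely to show the scoring-and-thresholding flow:

```python
import math

def difficulty_score(word, usage_frequency, proficiency):
    """Toy scoring: longer and rarer words score higher; a more
    proficient viewer lowers the score. The actual weighting is not
    given in the patent; this combination is illustrative only."""
    rarity = -math.log10(usage_frequency)      # rarer -> larger
    return len(word) * 0.5 + rarity * 2.0 - proficiency

def flag_difficult(words, freqs, proficiency, threshold=8.0):
    """Return words whose score exceeds the difficulty threshold."""
    return [w for w in words
            if difficulty_score(w, freqs[w], proficiency) > threshold]

freqs = {"cat": 1e-3, "perspicacious": 1e-8}
hard = flag_difficult(["cat", "perspicacious"], freqs, proficiency=3.0)
```

Only the flagged words would then have definitions and synonyms fetched for display during playback.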

Подробнее
22-01-2015 дата публикации

INTERCHANGEABLE-LENS CAMERA, AND VIEWFINDER DISPLAY METHOD

Номер: US20150022694A1
Принадлежит: FUJIFILM Corporation

The information about an interchangeable lens is acquired, and the size of an image capture range, which is the range corresponding to a capture image in an optical image, is calculated based on the information about the interchangeable lens. When the image capture range is smaller than or equal to a range to be shown by the angular field of the optical image, a first image showing the image capture range in the optical image is displayed on a display device, and when the image capture range is larger than the range to be shown by the angular field of the optical image, a second image different from the first image showing the image capture range in the optical image is displayed on the display device such that the second image is superimposed on at least either of the four corner vicinities and four side vicinities of the optical image. 1. An interchangeable-lens camera comprising:a camera mount on which an interchangeable lens is mounted;an image capture device for acquiring a capture image, based on subject light transmitted through the interchangeable lens;an optical viewfinder for leading a rectangular optical image of a subject to an eyepiece unit, through a path different from the capture image;a display device for displaying an image;an image superimposition device for superimposing the image displayed by the display device on the optical image, and leading the image to the eyepiece unit;an acquisition device for acquiring information about the interchangeable lens mounted on the camera mount;an image capture range determination device for determining an image capture range in the optical image, based on the acquired information about the interchangeable lens, the image capture range being a range corresponding to the capture image; anda display control device for making the display device display a first image, when the image capture range is smaller than or equal to a range to be shown by an angular field of the optical image, and making the display device ...
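The display rule in the abstract is a straightforward comparison between the capture range and the range shown by the optical image's angular field. A sketch of that decision (the tuple layout and angular units are assumptions for illustration):

```python
def viewfinder_mark(capture_range, ovf_range):
    """Both arguments are (width, height) in the same angular units.
    'first_image': the capture range fits inside the optical image, so
    frame lines can be drawn within it.
    'second_image': the capture range exceeds the optical image, so a
    different mark is superimposed near the corners/sides instead."""
    cw, ch = capture_range
    ow, oh = ovf_range
    return "first_image" if (cw <= ow and ch <= oh) else "second_image"

mark = viewfinder_mark((30, 20), (36, 24))
```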

Подробнее
17-01-2019 дата публикации

SYSTEMS AND METHODS FOR FILTERING AND PRESENTING OPTICAL BEACONS OR SIGNALS

Номер: US20190020856A1
Принадлежит:

Systems and methods for optical narrowcasting are provided for transmitting various types of content. Optical narrowcasting content indicative of the presence of additional information along with identifying information may be transmitted. The additional information (which may include meaningful amounts of advertising information, media, or any other content) may also be transmitted as optical narrowcasting content. Elements of an optical narrowcasting system may include optical transmitters and optical receivers which can be configured to be operative at distances ranging from, e.g., 400 meters to 1200 meters. Additionally, the elements can be implemented on a miniaturized scale in conjunction with small, user devices such as smartphones. Moreover, optically narrowcast content may be filtered using at least identification data extracted from optical beacons received from optical transmitters such that only optically narrowcast content of interest is presented on a display and/or stored in a persistent storage.

1. A system, comprising: an optical receiver assembly to receive a plurality of optical beacons, each of the plurality of optical beacons transmitted by a respective optical transmitter assembly; a processor; and a non-transitory computer readable medium having instructions ...: obtaining a filter for filtering the presentation of data received from each of the plurality of optical transmitter assemblies; extracting identification data from each of the plurality of received optical beacons; applying the filter to the extracted identification data from each of the plurality of received optical beacons to determine whether to present data extracted from modulated optical beams received from the optical transmitter assembly that transmitted the optical beacon; and presenting data extracted from modulated optical beams received from optical transmitter assemblies that transmitted an optical beacon including identification data that satisfies the filter.
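The filtering flow in claim 1 — extract identification data from each received beacon, test it against a filter, and present only data from matching sources — can be sketched as below. The dict fields are hypothetical stand-ins for the extracted identification data, not the patent's actual format:

```python
def present_sources(beacons, wanted_categories):
    """Each beacon is modelled as identification data already extracted
    from an optical beacon. Only sources whose identification data
    satisfies the filter have their modulated-beam data presented."""
    return {b["source_id"] for b in beacons
            if b["category"] in wanted_categories}

beacons = [{"source_id": "cafe-1", "category": "restaurant"},
           {"source_id": "bank-7", "category": "finance"}]
shown = present_sources(beacons, {"restaurant"})
```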

Подробнее
16-01-2020 дата публикации

SYSTEMS AND METHODS FOR AUTOMATIC DETECTION AND INSETTING OF DIGITAL STREAMS INTO A 360-DEGREE VIDEO

Номер: US20200021750A1
Принадлежит: FUJI XEROX CO., LTD.

Systems and methods for automatic detection and insetting of digital streams into a 360 video. Various examples of such regions of interest to the user include, without limitation, content displayed on various electronic displays or written on electronic paper (electronic ink), content projected on various surfaces using electronic projectors, content of paper documents appearing in the 360-degree video and/or content written on white (black) boards inside the 360-degree video. For some content, such as whiteboards, paper documents or paintings in a museum, a participant (or curator) could have taken pictures of the regions, again stored digitally somewhere and available for download. These digital streams with content of interest to the user are obtained and then inset onto the 360-degree view generated from the raw 360-degree video feed, giving users the ability to view them at their native high resolution.

1. A system comprising: a. at least one camera for acquiring a 360-degree video of an environment; and b. a processing unit for identifying at least one inset candidate within the acquired 360-degree video and insetting a media into the identified at least one inset candidate.

2. The system of claim 1, wherein the inset candidate is a region within the 360-degree video.

3. The system of claim 2, wherein the region within the 360-degree video is a surface.

4. The system of claim 2, wherein the region within the 360-degree video is a display screen.

5. The system of claim 2, wherein the region within the 360-degree video is a whiteboard.

6. The system of claim 1, wherein the media is an image.

7. The system of claim 1, wherein the media is a video stream.

8. The system of claim 1, wherein a resolution of the media is higher than the resolution of the 360-degree video.

9. The system of claim 1, wherein the inset media is cut based on detected occlusion of the inset candidate.

10. The system of claim 9, wherein the inset media is cut using a mask.

11. The system of ...
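Claims 9-10 cut the inset media with an occlusion mask so that foreground objects in the 360-degree video stay visible over the inset. A 2D-list sketch of masked insetting (pixel values, layout, and the mask convention are illustrative assumptions):

```python
def inset(frame, media, top, left, mask=None):
    """frame and media are 2D lists of pixel values. Copies media into
    frame at (top, left); where mask is 0 (occluded), the original
    frame pixel is kept, mimicking the occlusion cut described above."""
    out = [row[:] for row in frame]
    for i, row in enumerate(media):
        for j, px in enumerate(row):
            if mask is None or mask[i][j]:
                out[top + i][left + j] = px
    return out

frame = [[0] * 4 for _ in range(3)]   # low-res 360 view, all zeros
media = [[9, 9], [9, 9]]              # high-res tile to inset
mask = [[1, 0], [1, 1]]               # top-right pixel is occluded
result = inset(frame, media, top=1, left=1, mask=mask)
```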

Подробнее
16-01-2020 дата публикации

IMAGE PROCESSING APPARATUS AND COMPUTER-READABLE STORAGE MEDIUM

Номер: US20200021751A1
Автор: OTAKA Masaru
Принадлежит:

An image processing apparatus is provided, the image processing apparatus including: a vehicle image information acquiring unit that acquires vehicle image information including at least one captured image captured by a vehicle, and an image-capturing position of the captured image; a road image acquiring unit that acquires at least one road image corresponding to the image-capturing position from a plurality of road images captured by a vehicle; and a superimposed image generating unit that generates a superimposed image in which the road image is superimposed on the captured image.

1. An image processing apparatus comprising: a vehicle image information acquiring unit that acquires vehicle image information including at least one captured image captured by a vehicle, and an image-capturing position of the captured image; a road image acquiring unit that acquires at least one road image corresponding to the image-capturing position from a plurality of road images captured by a vehicle; and a superimposed image generating unit that generates a superimposed image in which the road image is superimposed on the captured image.

2. The image processing apparatus according to claim 1, wherein the vehicle image information acquiring unit acquires first vehicle image information including a first captured image captured by a first vehicle, and an image-capturing position of the first captured image, the superimposed image generating unit generates a first superimposed image in which the road image is superimposed on the first captured image, and the image processing apparatus comprises a superimposed image sending unit that sends, to the first vehicle, the first superimposed image generated by the superimposed image generating unit.

3. The image processing apparatus according to claim 1, wherein the vehicle image information acquiring unit acquires first vehicle image information including a first captured image captured by a first vehicle, and an image-capturing position of the ...
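Acquiring "at least one road image corresponding to the image-capturing position" is, at its simplest, a nearest-position lookup over the stored road images. A 1-D sketch (positions and image IDs are invented for illustration):

```python
def nearest_road_image(capture_pos, road_images):
    """road_images: list of (position, image_id) pairs. Picks the road
    image whose recorded image-capturing position is closest to the
    vehicle's capture position (1-D positions keep the sketch simple;
    a real system would use geographic coordinates)."""
    return min(road_images, key=lambda p: abs(p[0] - capture_pos))[1]

road = [(0.0, "imgA"), (10.0, "imgB"), (25.0, "imgC")]
pick = nearest_road_image(12.0, road)
```

The selected road image would then be handed to the superimposed image generating unit.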

Подробнее
16-01-2020 дата публикации

SKELETON-BASED EFFECTS AND BACKGROUND REPLACEMENT

Номер: US20200021752A1
Принадлежит: Fyusion, Inc.

Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation (MVIDMR) of a person can be generated from live images of a person captured from a hand-held camera. Using the image data from the live images, a skeleton of the person and a boundary between the person and a background can be determined from different viewing angles and across multiple images. Using the skeleton and the boundary data, effects can be added to the person, such as wings. The effects can change from image to image to account for the different viewing angles of the person captured in each image.

1. A method comprising: processing a request, received via an input interface on the mobile device, to generate a multi-view interactive digital media representation (MVIDMR) of an object, including a selection of effects that 1) augment a background surrounding the object, 2) augment the object with a structure or 3) combinations thereof; recording a live video stream including a plurality of frames from a camera of the mobile device as the mobile device moves along a trajectory, wherein an orientation of the camera varies along the trajectory such that the object in the video stream is captured from a plurality of camera views; during recording of the live video stream, selecting first frames from among the plurality of frames to utilize in the MVIDMR, each of the first frames including the object, and generating a first skeleton detection and a first segmentation for each of the first frames to determine where to apply the selection of effects in each of the first frames; after selecting the first frames: performing an image stabilization on the first frames to smooth variations in i) a position of the object, ii) a scale of the object and iii) an orientation of the object that occur between the first frames to generate second frames; generating, using the second ...

Подробнее
16-01-2020 дата публикации

METHOD AND APPARATUS FOR OVERLAY PROCESSING IN 360 VIDEO SYSTEM

Номер: US20200021791A1
Принадлежит: LG ELECTRONICS INC.

Provided is a 360-degree image data processing method performed by a 360-degree video reception apparatus. The method includes receiving 360-degree image data, obtaining information on an encoded picture and metadata from the 360-degree image data, decoding a picture based on the information on the encoded picture, and rendering the decoded picture and an overlay based on the metadata, in which the metadata includes overlay related metadata, the overlay is rendered based on the overlay related metadata, and the overlay related metadata includes group information of the overlay.

1. A 360-degree image data processing method performed by a 360-degree video reception apparatus, the method comprising: receiving 360-degree image data; obtaining information on an encoded picture and metadata from the 360-degree image data; decoding a picture based on the information on the encoded picture; and rendering the decoded picture and an overlay based on the metadata, wherein the metadata includes overlay related metadata, wherein the overlay is rendered based on the overlay related metadata, and wherein the overlay related metadata includes group information of the overlay.

2. The method of claim 1, wherein the group information of the overlay includes information on overlays switchable to each other.

3. The method of claim 2, wherein the information on overlays switchable to each other includes identification information for the overlays indicated by a ref_overlay_IDs field.

4. The method of claim 1, wherein the overlay related metadata includes identification information for an overlay that is currently active.

5. The method of claim 1, wherein the group information of the overlay includes information indicating a main media to be rendered with the overlay, and wherein the decoded picture comprises the main media.

6. The method of claim 5, wherein the information indicating the main media to be rendered with the overlay is indicated by EntityToGroupBox with a grouping_type field.

7. ...

Подробнее
23-01-2020 дата публикации

Digital content processing and generation for a virtual environment

Номер: US20200022632A1
Принадлежит: Limbix Health Inc

An artificial reality system and method provides immersive digital content to a user via a device with limited capabilities. A digital content processor generates a video stream of a real-world environment in a first video resolution regime. The digital content processor identifies static regions across frames of the video stream. The digital content processor applies one or more of a stitching operation, a blending operation, and a layering operation to replace static regions of the video stream with still image pixels. The digital content processor transmits the modified video stream to a display unit of a virtual reality (VR) device at a second video resolution regime of lower resolution than the first video resolution regime.
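Identifying static regions across frames — the step that lets still-image pixels replace video pixels before the stream is sent at lower resolution — amounts to finding positions whose value never changes. A sketch over flattened pixel tuples (real frames would be 2D and the comparison tolerance-based; both simplifications are mine):

```python
def static_mask(frames):
    """frames: list of equal-length pixel tuples, one per video frame.
    A position is static when its value is identical across all frames;
    those positions are candidates for still-image replacement."""
    return [len({f[i] for f in frames}) == 1
            for i in range(len(frames[0]))]

frames = [(5, 1, 7), (5, 2, 7), (5, 3, 7)]   # middle pixel changes
mask = static_mask(frames)
```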

Подробнее
24-01-2019 дата публикации

IMAGE PROCESSING APPARATUS FOR VEHICLE

Номер: US20190023181A1
Автор: Watanabe Kazuya
Принадлежит: AISIN SEIKI KABUSHIKI KAISHA

An image processing apparatus for a vehicle includes an additional-image generation portion generating an additional image to be added to a captured image, an output image generation portion generating an output image including the captured image and the additional image. The additional image includes a first marking image indicating a first line being away from an end portion of a vehicle in a vehicle width direction by a vehicle width of the vehicle or longer, the first line being along a vehicle front and rear direction.

1. An image processing apparatus for a vehicle, the apparatus comprising: an additional-image generation portion generating an additional image to be added to a captured image; an output image generation portion generating an output image including the captured image and the additional image; and the additional image including a first marking image indicating a first line being away from an end portion of a vehicle in a vehicle width direction by a vehicle width of the vehicle or longer, the first line being along a vehicle front and rear direction.

2. The image processing apparatus for a vehicle according to claim 1, comprising: a partition line detection portion detecting a partition line on a road, wherein, in a case where the partition line is not detected by the partition line detection portion, the additional-image generation portion generates the first marking image serving as the additional image.

3. The image processing apparatus for a vehicle according to claim 2, wherein the additional image includes a second marking image indicating a second line arranged between the end portion of the vehicle in the vehicle width direction and the first line, and the second line is along the vehicle front and rear direction.

4. The image processing apparatus for a vehicle according to claim 1, wherein the additional image includes a third marking image indicating a band-shaped region positioned in the vehicle width direction of the vehicle and ...

Подробнее
22-01-2015 дата публикации

DIGITAL BROADCAST RECEIVER AND METHOD FOR PROCESSING CAPTION THEREOF

Номер: US20150026727A1
Автор: PARK Tae Jin
Принадлежит: LG ELECTRONICS INC.

A digital cable broadcast receiver and a method for automatically processing caption data of various standards and types is disclosed. The digital broadcast receiver includes: a demultiplexer for dividing a received broadcast stream into video data, audio data, and supplementary information; a controller for determining whether caption data included in the video data is digital caption data or analog caption data on the basis of caption information included in the supplementary information, and outputting a control signal according to a result of the determining; a digital caption decoder for extracting and decoding digital caption data from the video data according to the control signal; and an analog caption decoder for extracting and decoding analog caption data from the video data according to the control signal.

1. multiplexing video data, audio data, and supplementary data into an MPEG2 transport stream, the supplementary data including an event information table (EIT) and a program map table (PMT), wherein the EIT or the PMT includes a caption service descriptor, wherein the caption service descriptor includes caption information indicating whether a digital television closed caption service is present in the video data or a line 21 closed caption service is present in the video data in accordance with electronic industry association (EIA) 708, wherein the caption service descriptor further includes information indicating the number of closed caption services present in the associated EIT event, wherein the caption service descriptor further includes language information defining a language associated with the closed caption service, wherein the caption service descriptor further includes information indicating the closed caption service is formatted for displays with a 16:9 aspect ratio, wherein the caption service descriptor includes a caption service number that is defined only when the digital television closed caption service in accordance with electronic industry ...
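The controller logic described in the abstract routes caption data to one of two decoders based on the caption information in the supplementary data. A sketch with a simplified descriptor model (the field names below are stand-ins, not the actual caption_service_descriptor bit syntax):

```python
def select_decoders(caption_services):
    """caption_services: list of dicts loosely modelling entries of a
    caption service descriptor. Returns which caption decoders the
    controller must enable: 'digital' for an EIA-708 digital service,
    'analog' for a line-21 service."""
    decoders = set()
    for svc in caption_services:
        decoders.add("digital" if svc["digital_cc"] else "analog")
    return decoders

services = [{"digital_cc": True, "language": "eng"},
            {"digital_cc": False, "language": "spa"}]
enabled = select_decoders(services)
```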

Подробнее
22-01-2015 дата публикации

DIGITAL BROADCAST RECEIVER AND METHOD FOR PROCESSING CAPTION THEREOF

Номер: US20150026731A1
Автор: PARK Tae Jin
Принадлежит: LG ELECTRONICS INC.

A digital cable broadcast receiver and a method for automatically processing caption data of various standards and types is disclosed. The digital broadcast receiver includes: a demultiplexer for dividing a received broadcast stream into video data, audio data, and supplementary information; a controller for determining whether caption data included in the video data is digital caption data or analog caption data on the basis of caption information included in the supplementary information, and outputting a control signal according to a result of the determining; a digital caption decoder for extracting and decoding digital caption data from the video data according to the control signal; and an analog caption decoder for extracting and decoding analog caption data from the video data according to the control signal.

1. multiplexing video data, audio data, and supplementary data into an MPEG2 transport stream, the supplementary data including an event information table (EIT) and a program map table (PMT), wherein the EIT or the PMT includes a caption service descriptor, wherein the caption service descriptor includes caption information indicating whether a digital television closed caption service is present in the video data or a line 21 closed caption service is present in the video data in accordance with electronic industry association (EIA) 708, wherein the caption service descriptor further includes language information defining a language associated with the closed caption service, wherein the caption information is set when the digital television closed caption service is present in accordance with EIA 708 and the caption information is clear when the line 21 closed caption service is present in accordance with EIA 708, and wherein the caption service descriptor includes a caption service number that is defined only when the digital television closed caption service in accordance with electronic industry association (EIA) 708 is present; and transmitting the digital ...

Подробнее
22-01-2015 дата публикации

DIGITAL BROADCAST RECEIVER AND METHOD FOR PROCESSING CAPTION THEREOF

Номер: US20150026732A1
Автор: PARK Tae Jin
Принадлежит: LG ELECTRONICS INC.

A digital cable broadcast receiver and a method for automatically processing caption data of various standards and types is disclosed. The digital broadcast receiver includes: a demultiplexer for dividing a received broadcast stream into video data, audio data, and supplementary information; a controller for determining whether caption data included in the video data is digital caption data or analog caption data on the basis of caption information included in the supplementary information, and outputting a control signal according to a result of the determining; a digital caption decoder for extracting and decoding digital caption data from the video data according to the control signal; and an analog caption decoder for extracting and decoding analog caption data from the video data according to the control signal.

1. multiplexing video data, audio data, and supplementary data into an MPEG2 transport stream, the supplementary data including an event information table (EIT) and a program map table (PMT), wherein the EIT or the PMT includes a caption service descriptor, wherein the caption service descriptor includes caption information indicating whether a digital television closed caption service is present in the video data or a line 21 closed caption service is present in the video data in accordance with electronic industry association (EIA) 708, wherein the caption service descriptor further includes language information defining a language associated with the closed caption service, wherein the caption information is set when the digital television closed caption service is present in accordance with EIA 708 and the caption information is clear when the line 21 closed caption service is present in accordance with EIA 708, wherein the caption service descriptor further includes information indicating the closed caption service is formatted for displays with a 16:9 aspect ratio, and wherein the caption service descriptor includes a caption service number that is defined only ...

Подробнее
22-01-2015 дата публикации

DIGITAL BROADCAST RECEIVER AND METHOD FOR PROCESSING CAPTION THEREOF

Номер: US20150026733A1
Автор: PARK Tae Jin
Принадлежит: LG ELECTRONICS INC.

A digital cable broadcast receiver and a method for automatically processing caption data of various standards and types is disclosed. The digital broadcast receiver includes: a demultiplexer for dividing a received broadcast stream into video data, audio data, and supplementary information; a controller for determining whether caption data included in the video data is digital caption data or analog caption data on the basis of caption information included in the supplementary information, and outputting a control signal according to a result of the determining; a digital caption decoder for extracting and decoding digital caption data from the video data according to the control signal; and an analog caption decoder for extracting and decoding analog caption data from the video data according to the control signal.

1. multiplexing video data, audio data, and supplementary data into an MPEG2 transport stream, the supplementary data including an event information table (EIT) and a program map table (PMT), wherein the EIT or the PMT includes a caption service descriptor, wherein the caption service descriptor includes caption information indicating whether a digital television closed caption service is present in the video data or a line 21 closed caption service is present in the video data in accordance with electronic industry association (EIA) 708, wherein the caption information is set when the digital television closed caption service is present in accordance with EIA 708 and the caption information is clear when the line 21 closed caption service is present in accordance with EIA 708; wherein the caption service descriptor further includes information indicating the closed caption service is formatted for displays with a 16:9 aspect ratio, and wherein the caption service descriptor includes a caption service number that is defined only when the digital television closed caption service in accordance with electronic industry association (EIA) 708 is present; and transmitting ...

Подробнее
26-01-2017 дата публикации

Media composition using aggregate overlay layers

Номер: US20170024916A1
Принадлежит: Microsoft Technology Licensing LLC

Techniques and constructs for media composition using aggregate overlay layers. For instance, a media compositor receives a media source and at least two media overlays, where each of the at least two media overlays are to be composed with the media source. The media compositor then generates an output media composition by adding each of the at least two media overlays to the media source in a single processing step. In some examples, the media compositor adds the at least two media overlays to the media source using a preconfigured compositor. In other examples, the media compositor adds the at least two media overlays to the media source using a custom compositor.
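Adding "each of the at least two media overlays to the media source in a single processing step" can be pictured as merging the overlays into one aggregate layer first, then compositing once, instead of re-rendering the source per overlay. A toy sketch over 1-D pixel lists (the data layout and last-overlay-wins rule are assumptions):

```python
def composite(source, overlays):
    """source: list of background pixels; each overlay: (index, pixel)
    pairs. All overlays are folded into one aggregate layer, then the
    source is traversed exactly once."""
    aggregate = {}
    for layer in overlays:          # later overlays win, as if stacked
        for idx, px in layer:
            aggregate[idx] = px
    return [aggregate.get(i, px) for i, px in enumerate(source)]

source = ["a", "b", "c", "d"]
out = composite(source, [[(1, "X")], [(1, "Y"), (3, "Z")]])
```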

Подробнее
26-01-2017 дата публикации

VIDEO PLAYBACK DEVICE AND METHOD

Номер: US20170025039A1
Принадлежит:

A video playback device includes a processor that executes a procedure. The procedure includes: for each of plural videos, receiving designations of positions in display regions of the videos; and adjusting a placement position in the display regions of the plural videos such that the positions designated for each of the plural videos are aligned at the same position in a vertical direction or a horizontal direction.

1. A non-transitory recording medium storing a video playback program that causes a computer to execute a process, the process comprising: for each of a plurality of videos, receiving designations of positions in display regions of the videos; and adjusting a placement position of the display regions of the plurality of videos such that the positions designated for each of the plurality of videos are aligned at the same position in a vertical direction or a horizontal direction.
2. The non-transitory recording medium of claim 1, wherein, in the process, the designations of the positions in the display regions of the videos are received during playback of each of the plurality of videos.
3. The non-transitory recording medium of claim 1, wherein, in the process, adjusting the placement position includes adjusting a vertical direction position or a horizontal direction position of the plurality of videos, and displaying the plurality of videos in a superimposed state.
4. The non-transitory recording medium of claim 1, wherein, in the process, adjusting the placement position includes displaying the plurality of videos in a superimposed state such that the positions designated for each of the plurality of videos are at the same position in the display regions.
5. The non-transitory recording medium of claim 1, wherein, in the process, adjusting the placement position includes displaying the plurality of videos in a superimposed state such that the positions designated for each ...
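The placement adjustment above can be sketched as a simple offset calculation. All names and the region representation are illustrative assumptions, not taken from the patent:

```python
# Sketch: given a designated point inside each video's display region,
# shift every region so the designated points share the same horizontal
# position (the vertical case is symmetric).

def align_horizontally(regions):
    """regions: list of dicts with 'origin_x' (region placement) and
    'mark_x' (designated point, relative to the region).
    Returns new origins so all marks line up on the rightmost mark."""
    # Absolute x of each designated point.
    marks = [r["origin_x"] + r["mark_x"] for r in regions]
    target = max(marks)  # align on the rightmost mark (arbitrary choice)
    return [r["origin_x"] + (target - m) for r, m in zip(regions, marks)]

videos = [
    {"origin_x": 0, "mark_x": 30},    # mark at absolute x = 30
    {"origin_x": 100, "mark_x": 10},  # mark at absolute x = 110
]
new_origins = align_horizontally(videos)
# After shifting, both designated points sit at the same absolute x.
```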

More
25-01-2018 publication date

VEHICULAR CONTROL SYSTEM WITH TRAILERING ASSIST FUNCTION

Number: US20180025237A1
Assignee:

A vehicular control system includes a camera having an exterior field of view at least rearward of the vehicle and operable to capture image data. A trailer is attached to the vehicle and image data captured by the camera includes image data captured when the vehicle is maneuvered with the trailer at an angle relative to the vehicle. The vehicular control system determines a trailer angle of the trailer and is operable to determine a path of the trailer responsive at least to a steering angle of the vehicle and the determined trailer angle of the trailer. The vehicular control system determines an object present exterior of the vehicle and the vehicular control system distinguishes a drivable surface from a prohibited space, and the vehicular control system plans a driving path for the vehicle that neither impacts the object nor violates the prohibited space.

1. A vehicular control system, said vehicular control system comprising: a camera disposed at a vehicle and having an exterior field of view at least rearward of the vehicle; wherein said camera is operable to capture image data; wherein a trailer is attached to the vehicle; an image processor operable to process captured image data; wherein image data captured by said camera during maneuvering of the vehicle and the trailer includes image data captured by said camera when the vehicle is maneuvered with the trailer at an angle relative to the vehicle; wherein said vehicular control system, responsive at least in part to image processing by said image processor of image data captured by said camera, determines a trailer angle of the trailer; wherein said vehicular control system is operable to determine a path of the trailer responsive at least to a steering angle of the vehicle and the determined trailer angle of the trailer; wherein said vehicular control system determines an object present exterior of the vehicle which ought not to be impacted during maneuvering of the vehicle and the trailer; wherein said vehicular
...
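Predicting a trailer path from steering angle and trailer angle can be illustrated with a standard single-track tow-vehicle/trailer kinematic model. This is a generic textbook sketch under the assumption that the hitch sits at the rear axle; it is not the patented algorithm, and all dimensions are invented:

```python
import math

def simulate_trailer(steer_rad, v, dt, steps,
                     wheelbase=2.8, trailer_len=4.0):
    """Integrate vehicle and trailer headings; return the trailer (hitch)
    angle at each step. steer_rad: front-wheel steering angle; v: speed."""
    veh_heading = 0.0
    trl_heading = 0.0
    angles = []
    for _ in range(steps):
        # Bicycle-model heading rate of the tow vehicle.
        veh_heading += v * math.tan(steer_rad) / wheelbase * dt
        # Trailer swings toward the tow vehicle's heading.
        trl_heading += v / trailer_len * math.sin(veh_heading - trl_heading) * dt
        angles.append(veh_heading - trl_heading)  # trailer angle at hitch
    return angles

# Steady left turn: the trailer angle grows toward a steady-state value.
angles = simulate_trailer(steer_rad=0.2, v=2.0, dt=0.1, steps=50)
```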

More
10-02-2022 publication date

CLASSIFYING AUDIO SCENE USING SYNTHETIC IMAGE FEATURES

Number: US20220044071A1
Assignee: Microsoft Technology Licensing, LLC

A computing system includes an encoder that receives an input image and encodes the input image into real image features, a decoder that decodes the real image features into a reconstructed image, a generator that receives first audio data corresponding to the input image and generates first synthetic image features from the first audio data, and receives second audio data and generates second synthetic image features from the second audio data, a discriminator that receives both the real and synthetic image features and determines whether a target feature is real or synthetic, and a classifier that classifies a scene of the second audio data based on the second synthetic image features.

1. A computing system comprising: a processor having associated memory storing instructions that cause the processor to execute, at runtime: a discriminator configured to determine whether a target feature is real or synthetic; a generator having been trained on an audio-visual pair of image data and first audio data with the discriminator; a classifier having been trained on second audio data; and the generator configured to generate synthetic image features from third audio data; and the classifier configured to classify a scene of the third audio data based on the synthetic image features.
2. The computing system of claim 1, wherein the memory further stores: an encoder configured to receive an input image of a plurality of input images and encode the input image into real image features; and a decoder configured to receive from the encoder the real image features and decode the real image features into a reconstructed image.
3. The computing system of claim 2, wherein, at training time, the generator generated first synthetic image features from the first audio data and generated second synthetic image features from the second audio data, and training the encoder and the decoder to increase a correlation of each of the reconstructed image and the first synthetic image ...
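The runtime path described here — a generator maps audio into the image-feature space, then a classifier trained in that space labels the scene — can be sketched with toy stand-ins. The linear "generator" and nearest-centroid "classifier" below are deliberate simplifications of the trained networks; every number and name is invented for illustration:

```python
# Toy sketch of the runtime path: audio -> synthetic image features -> scene.

def generator(audio_features, weight=2.0, bias=0.5):
    # Stand-in for the trained generator: audio -> synthetic image features.
    return [weight * a + bias for a in audio_features]

def classify(image_features, centroids):
    # Nearest-centroid scene classifier in image-feature space.
    def dist(feat, center):
        return sum((f - c) ** 2 for f, c in zip(feat, center))
    return min(centroids, key=lambda label: dist(image_features, centroids[label]))

# Scene centroids learned (hypothetically) in image-feature space.
centroids = {"street": [10.0, 10.0], "park": [0.0, 0.0]}
audio = [4.0, 5.0]                       # unlabeled audio clip
features = generator(audio)              # synthetic image features
scene = classify(features, centroids)    # label assigned in feature space
```

The point of the design is that the classifier never needs images at runtime; audio alone is lifted into the image-feature space the classifier was trained on.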

More
28-01-2016 publication date

METHOD OF REPLACING OBJECTS IN A VIDEO STREAM AND COMPUTER PROGRAM

Number: US20160028968A1
Author: AFFATICATI Jean-Luc
Assignee:

The invention relates to a method for replacing objects in a video stream. A stereoscopic view of the field is created. It serves to measure the distance from the camera and to determine the foreground, background and occluding objects. The stereoscopic view can be provided by a 3D camera or it can be constructed using the signal coming from a single camera or more. The texture of the objects to be replaced can be static or dynamic. The method does not require any particular equipment to track the camera position and it can be used for live content as well as archived material. The invention takes advantage of the source material to be replaced in the particular case when the object to be replaced is filled electronically.

1. A method for replacing objects in a video stream comprising: receiving one or more images from at least one camera; analyzing the one or more images to extract the camera pose parameters, the camera pose parameters at least including x, y, and z axis coordinates and direction of the camera; creating a stereoscopic view using a depth table for objects viewed by the camera, wherein the depth table defines a distance along the z-axis from a camera lens to each object in a field of view of the camera, the depth table comprising a plurality of pixels having z values, wherein pixels are grouped into objects based on the z values; identifying a foreground object that occludes a background object using the stereoscopic view and the depth table; detecting foreground object contours; creating an occlusion mask using the foreground object contours; calculating a replacement image using the camera pose parameters; and applying the occlusion mask to the replacement image.
2. The method according to claim 1, wherein the stereoscopic view is created based on images received from at least two cameras.
3. The method according to claim 1, wherein extracting the camera pose parameters includes: detecting if a cut between a current image and a previous image ...
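The occlusion-mask step can be sketched directly from the depth table: pixels nearer than the plane of the object being replaced form the mask, and the replacement image is written only where the mask is clear. Shapes, the threshold, and the string "pixels" are all illustrative:

```python
# Sketch: build an occlusion mask from per-pixel depth (z) values, then
# composite the replacement image only where nothing occludes the object.

def occlusion_mask(depth, object_z):
    """1 where a foreground pixel occludes the object plane at object_z."""
    return [[1 if z < object_z else 0 for z in row] for row in depth]

def apply_replacement(frame, replacement, mask):
    return [
        [orig if m else repl
         for orig, repl, m in zip(frame_row, repl_row, mask_row)]
        for frame_row, repl_row, mask_row in zip(frame, replacement, mask)
    ]

depth = [[2.0, 9.0], [9.0, 9.0]]      # one near (occluding) pixel
frame = [["player", "board"], ["board", "board"]]
replacement = [["ad", "ad"], ["ad", "ad"]]
mask = occlusion_mask(depth, object_z=8.0)
out = apply_replacement(frame, replacement, mask)
# The near pixel keeps the original content; the rest is replaced.
```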

More
28-01-2016 publication date

DIGITAL PHOTOGRAPHING APPARATUS AND METHOD OF CONTROLLING THE DIGITAL PHOTOGRAPHING APPARATUS

Number: US20160028969A1
Assignee:

A digital photographing apparatus, computer readable medium, and a method of controlling the digital photographing apparatus, the method including selecting a template image; receiving an image including a subject and detecting the subject from the received image; and displaying the template image with an image of the subject included in a view area. The displaying may include displaying the template image with an image of the subject included in a view area corresponding to a location of the detected subject. The method may include designating a location of the view area on the template image.

1.-24. (canceled)
25. A portable electronic apparatus comprising: memory to store one or more images; and a processor configured to: display a first image via at least one portion of a display area of a display operatively coupled with the processor; display a second image via the display area such that the second image is enclosed by the first image, the second image including an image corresponding to at least one object and obtained using at least one camera operatively coupled with the processor; and move the second image as enclosed by the first image from a first location of the display area to a second location of the display area in response to a movement of the at least one object.
26. The apparatus of claim 25, wherein the processor is configured to: in response to an input, synthesize the first image and the second image to produce a third image.
27. The apparatus of claim 26, wherein the processor is configured to: produce the third image as the second image being located at the second location.
28. The apparatus of claim 26, wherein the processor is configured to: display the third image via the display area.
29. The apparatus of claim 25, wherein the processor is configured to: store the first image in the memory prior to the displaying of the first image; and select the first image from the memory.
30. The apparatus of claim 25, wherein the at least one object ...
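The "second image follows the object" behavior can be sketched as re-centering the enclosed view area on the detected object each frame, clamped so it stays inside the enclosing template image. All sizes and names are invented for the example:

```python
# Sketch: place the enclosed view area so it tracks the object's center,
# without letting it leave the enclosing (template) image.

def place_view_area(obj_center, view_size, frame_size):
    """Top-left corner of the view area, centered on the object and clamped
    to the frame bounds."""
    x = obj_center[0] - view_size[0] // 2
    y = obj_center[1] - view_size[1] // 2
    x = max(0, min(x, frame_size[0] - view_size[0]))
    y = max(0, min(y, frame_size[1] - view_size[1]))
    return (x, y)

frame = (640, 480)        # first (template) image
view = (100, 80)          # enclosed second image
centered = place_view_area((320, 240), view, frame)  # fully inside
clamped = place_view_area((5, 5), view, frame)       # pushed back in-bounds
```

Calling this once per detected-object position yields the movement from a first location to a second location described in claim 25.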

More
25-01-2018 publication date

Systems, Methods, And Devices For Rendering In-Vehicle Media Content Based On Vehicle Sensor Data

Number: US20180027189A1
Assignee:

A system for providing media content in a vehicle includes a content component, a noise component, and a closed captioning component. The content component is configured to receive content from a content provider, wherein the content is configured for rendering by the media system of a vehicle. The noise component is configured to determine a noise level within a cabin of the vehicle. The closed captioning component is configured to, in response to determining that the noise level exceeds a threshold, trigger display of closed captioning for the content.

1. A method comprising: receiving content from a content provider, wherein the content is configured for rendering by a media system of a vehicle; detecting one or more of a current speed of the vehicle, a current engine rotation speed, a presence or number of nearby vehicles, or a current weather condition based on one or more vehicle sensors; determining a noise level within a cabin of the vehicle, wherein determining the noise level comprises determining an estimated cabin noise level based on one or more of the current speed of the vehicle, the current engine rotation speed, the presence or number of nearby vehicles, or the current weather conditions; in response to determining that the noise level exceeds a threshold, triggering display of closed captioning for the content; and rendering the content and closed captioning for the content on the media system of the vehicle.
2. The method of claim 1, wherein triggering display of closed captioning for the content comprises providing a request to the content provider for closed captioning and receiving closed captioning information corresponding to the content.
3. The method of claim 1, further comprising deactivating display of the closed captioning to the content in response to determining that the noise level is below one or more of: a first threshold comprising the threshold; or a second threshold.
4. The method of claim 3, wherein deactivating display of the closed ...
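The noise-estimation and two-threshold (hysteresis) captioning logic can be sketched as below. The coefficients and thresholds are invented placeholders; the point is the structure: estimate cabin noise from sensor data, turn captions on above one threshold, and turn them off below a second, lower one, as in claim 3:

```python
# Sketch: estimate cabin noise from vehicle sensors, toggle captions with
# hysteresis (on above on_db, off below off_db).

def estimate_cabin_noise(speed_kmh, rpm, nearby_vehicles, raining):
    noise_db = 40.0                    # baseline cabin noise (invented)
    noise_db += 0.15 * speed_kmh       # road/wind noise
    noise_db += 0.002 * rpm            # engine rotation noise
    noise_db += 1.0 * nearby_vehicles  # traffic noise
    noise_db += 5.0 if raining else 0.0
    return noise_db

def update_captions(captions_on, noise_db, on_db=70.0, off_db=65.0):
    if not captions_on and noise_db > on_db:
        return True    # trigger closed captioning
    if captions_on and noise_db < off_db:
        return False   # deactivate below the second, lower threshold
    return captions_on

state = False
state = update_captions(state, estimate_cabin_noise(120, 3000, 4, True))
highway = state  # noisy highway driving in rain
state = update_captions(state, estimate_cabin_noise(30, 1200, 0, False))
parked = state   # quiet city driving
```

The gap between `on_db` and `off_db` prevents captions from flickering when the estimated noise hovers near a single threshold.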

More