Total found: 75. Displayed: 68.
Publication date: 03-05-2012

Methods and Systems for Processing a Video for Stabilization and Retargeting

Number: US20120105654A1
Assignee: Google LLC

Methods and systems for processing a video for stabilization and retargeting are described. A recorded video may be stabilized by removing shake introduced in the video, and a video may be retargeted by modifying the video to fit to a different aspect ratio. Constraints can be imposed that require a modified video to contain pixels from the original video and/or to preserve salient regions. In one example, a video may be processed to estimate an original path of a camera that recorded the video, to estimate a new camera path, and to recast the video from the original path to the new camera path. To estimate a new camera path, a virtual crop window can be designated. A difference transformation between the original and new camera path can be applied to the video using the crop window to recast the recorded video from the smooth camera path.
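
The recast step described above lends itself to a compact illustration. The sketch below assumes a translation-only camera path and a plain moving-average smoother, which is a simplification of the constrained formulation the abstract alludes to; `frames`, `path`, and `crop_size` are hypothetical inputs.

```python
import numpy as np

def smooth_path(path, radius=15):
    """Moving-average smoothing of a per-frame camera path.

    path: (N, 2) array of accumulated x/y camera translations, one per frame.
    """
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(path, ((radius, radius), (0, 0)), mode="edge")
    return np.stack([np.convolve(padded[:, d], kernel, mode="valid")
                     for d in range(2)], axis=1)

def recast_with_crop_window(frames, path, crop_size):
    """Re-render each frame through a crop window that follows the smoothed path.

    frames: iterable of (H, W, 3) arrays; path: (N, 2) original camera path;
    crop_size: (crop_height, crop_width) of the virtual crop window.
    The difference between the smoothed and original path is applied as a
    per-frame shift of the crop window (translation-only simplification).
    """
    correction = smooth_path(path) - path
    crop_h, crop_w = crop_size
    stabilized = []
    for frame, (dx, dy) in zip(frames, correction):
        h, w = frame.shape[:2]
        # Centre the crop window, shift it by the correction, clamp to the frame.
        x0 = int(np.clip((w - crop_w) / 2 + dx, 0, w - crop_w))
        y0 = int(np.clip((h - crop_h) / 2 + dy, 0, h - crop_h))
        stabilized.append(frame[y0:y0 + crop_h, x0:x0 + crop_w])
    return stabilized
```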

Publication date: 18-07-2013

Methods and Systems for Processing a Video for Stabilization Using Dynamic Crop

Number: US20130182134A1
Assignee: Google LLC

Methods and systems for processing a video for stabilization are described. A recorded video may be stabilized by removing at least a portion of shake introduced in the video. An original camera path for a camera used to record the video may be determined. A crop window size may be selected and a crop window transform may accordingly be determined. The crop window transform may describe a transform of the original camera path to a modified camera path that is smoother than the original camera path. A smoothness metric indicative of a degree of smoothness of the modified path may be determined. Based on a comparison of the smoothness metric to a predetermined threshold, for example, the crop window transform may be applied to the original video to obtain a stabilized modified video.
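
The comparison of a smoothness metric against a threshold when choosing the crop window size can be illustrated with a small search over candidate crop ratios. The moving-average smoother, the margin clamp, and the mean-squared-second-difference metric below are placeholder assumptions, not the formulation claimed in the patent.

```python
import numpy as np

def moving_average(path, radius=15):
    """Moving-average smoothing of an (N, D) camera path (placeholder smoother)."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(path, ((radius, radius), (0, 0)), mode="edge")
    return np.stack([np.convolve(padded[:, d], kernel, mode="valid")
                     for d in range(path.shape[1])], axis=1)

def smoothness(path):
    """Mean squared second difference of the path; lower is smoother (assumed metric)."""
    return float(np.mean(np.diff(path, n=2, axis=0) ** 2))

def select_crop_transform(path, frame_size, threshold, ratios=(0.95, 0.9, 0.85, 0.8)):
    """Try progressively smaller crop windows until the corrected path is smooth enough.

    path: (N, 2) original camera path (x/y translations per frame);
    frame_size: (height, width); threshold: smoothness value to beat.
    Returns the chosen crop ratio and the per-frame crop window translation.
    """
    h, w = frame_size
    target = moving_average(path)
    for ratio in ratios:
        # Spare margin the crop window leaves free on each side of the frame.
        margin = np.array([(1 - ratio) * w / 2, (1 - ratio) * h / 2])
        corrected = path + np.clip(target - path, -margin, margin)
        if smoothness(corrected) <= threshold:
            return ratio, corrected - path
    return ratios[-1], corrected - path
```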

Publication date: 13-02-2014

Methods and Systems for Video Retargeting Using Motion Saliency

Number: US20140044404A1
Assignee: GOOGLE INC.

Methods and systems for video retargeting and view selection using motion saliency are described. Salient features in multiple videos may be extracted. Each video may be retargeted by modifying the video to preserve the salient features. A crop path may be estimated and applied to a video to retarget each video and generate a modified video preserving the salient features. An action score may be assigned to portions or frames of each modified video to represent motion content in the modified video. Selecting a view from one of the given modified videos may be formulated as an optimization subject to constraints. An objective function for the optimization may include maximizing the action score. This optimization may also be subject to constraints to take into consideration optimal transitioning from a view from a given video to another view from another given video, for example.

1. A method comprising:
receiving a plurality of videos, each video comprising a sequence of frames;
determining salient features in a content of each video, wherein the salient features include features selected based on motion content of the features over the sequence of frames;
determining a camera crop path for each video, wherein the camera crop path comprises a sequence of crop windows, the sequence of crop windows including the salient features;
applying the sequence of crop windows for each video to the sequence of frames of each video to generate a modified video for each video including the salient features of each respective video; and
selecting one of the modified videos, wherein selecting the one of the modified videos comprises performing an optimization over time to select the one of the modified videos, and wherein the optimization is subject to a constraint to limit switching between the modified videos.

2. The method of claim 1, wherein determining the camera crop path for each video comprises determining a crop window of a pre-defined size, wherein the crop window is ...
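
The view-selection optimization in the final claim element can be sketched as a small dynamic program over per-frame action scores with a fixed penalty for switching views; the uniform switch cost and the exact objective are assumptions rather than the claimed formulation.

```python
import numpy as np

def select_views(action_scores, switch_cost):
    """Choose, per frame, which retargeted video to show.

    action_scores: (num_videos, num_frames) array of per-frame action scores.
    switch_cost: penalty applied whenever the selected video changes.
    Maximizes sum(score) - switch_cost * (number of switches) by dynamic programming.
    """
    num_videos, num_frames = action_scores.shape
    best = action_scores[:, 0].astype(float).copy()
    back = np.zeros((num_videos, num_frames), dtype=int)
    for t in range(1, num_frames):
        prev_best = int(best.argmax())
        switch = best[prev_best] - switch_cost
        new_best = np.empty(num_videos)
        for v in range(num_videos):
            if best[v] >= switch:
                new_best[v] = best[v]   # keep showing the same video
                back[v, t] = v
            else:
                new_best[v] = switch    # switch from the best other video
                back[v, t] = prev_best
        best = new_best + action_scores[:, t]
    # Backtrack the optimal sequence of views.
    view = int(best.argmax())
    views = [view]
    for t in range(num_frames - 1, 0, -1):
        view = int(back[view, t])
        views.append(view)
    return views[::-1]
```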

Publication date: 13-03-2014

Methods and Systems for Removal of Rolling Shutter Effects

Number: US20140071299A1
Assignee: GOOGLE INC.

Methods and systems for rolling shutter removal are described. A computing device may be configured to determine, in a frame of a video, distinguishable features. The frame may include sets of pixels captured asynchronously. The computing device may be configured to determine for a pixel representing a feature in the frame, a corresponding pixel representing the feature in a consecutive frame; and determine, for a set of pixels including the pixel in the frame, a projective transform that may represent motion of the camera. The computing device may be configured to determine, for the set of pixels in the frame, a mixture transform based on a combination of the projective transform and respective projective transforms determined for other sets of pixels. Accordingly, the computing device may be configured to estimate a motion path of the camera to account for distortion associated with the asynchronous capturing of the sets of pixels.

1. A method comprising:
determining, by a computing device, in a frame of a video captured by a camera, features with a distinguishable geometric characteristic, wherein the frame includes a plurality of rows of pixels captured sequentially in time;
determining, for a pixel representing a feature of the features in the frame, a corresponding pixel representing the feature in a consecutive frame in the video;
determining, for a set of rows of pixels including the pixel in the frame, a projective transform based on (i) a first position of the camera at which the set of rows of pixels is captured, and (ii) a second position of the camera at which a corresponding set of rows of pixels including the corresponding pixel in the consecutive frame is captured, wherein the projective transform represents motion of the camera from the first position to the second position;
determining, for the set of rows of pixels in the frame, a mixture transform based on a combination of the projective transform and respective projective transforms determined for ...
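
A mixture transform of the kind described, built from the projective transforms of neighbouring row blocks, might be sketched like this; the Gaussian weighting scheme is an assumption, the abstract only says the per-block transforms are combined.

```python
import numpy as np

def mixture_transform(block_homographies, block_index, sigma=1.5):
    """Blend the projective transforms of neighbouring row blocks.

    block_homographies: list of 3x3 homographies, one per block of rows
    (rows of a rolling-shutter frame are captured at different times).
    Returns a single 3x3 transform for `block_index`, built as a normalized,
    Gaussian-weighted combination of the per-block transforms.
    """
    homs = np.asarray(block_homographies, dtype=float)
    idx = np.arange(len(homs))
    weights = np.exp(-0.5 * ((idx - block_index) / sigma) ** 2)
    weights /= weights.sum()
    mixed = np.tensordot(weights, homs, axes=1)   # weighted sum of 3x3 matrices
    return mixed / mixed[2, 2]                    # renormalize the homography
```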

Publication date: 03-01-2019

METHODS, SYSTEMS, AND MEDIA FOR GENERATING A SUMMARIZED VIDEO WITH VIDEO THUMBNAILS

Number: US20190005334A1
Assignee:

Methods, systems, and media for summarizing a video with video thumbnails are provided. In some embodiments, the method comprises: receiving a plurality of video frames corresponding to the video and associated information associated with each of the plurality of video frames; extracting, for each of the plurality of video frames, a plurality of features; generating candidate clips that each includes at least a portion of the received video frames based on the extracted plurality of features and the associated information; calculating, for each candidate clip, a clip score based on the extracted plurality of features from the video frames associated with the candidate clip; calculating, between adjacent candidate clips, a transition score based at least in part on a comparison of video frame features between frames from the adjacent candidate clips; selecting a subset of the candidate clips based at least in part on the clip score and the transition score associated with each of the candidate clips; and automatically generating an animated video thumbnail corresponding to the video that includes a plurality of video frames selected from each of the subset of candidate clips.

1. A method for summarizing videos, the method comprising:
receiving, using a hardware processor, a video content item comprising a plurality of video frames;
extracting, for each of the plurality of video frames of the video content item, a plurality of features;
generating a plurality of candidate clips that each include a portion of the plurality of frames based on the extracted plurality of features indicating that the portion of the plurality of frames includes interesting content;
selecting a first candidate clip and a second candidate clip that are adjacent candidate clips based on a transition score that includes a penalty for containing similar looking frames; and
automatically generating an animated video thumbnail corresponding to the video that includes the first candidate clip and the ...
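
The interplay of clip scores and transition scores when picking clips can be sketched with a simple greedy selection; `clip_score` and `transition_score` are assumed callables, and the patent does not necessarily describe a greedy procedure.

```python
def select_clips(clips, clip_score, transition_score, max_clips=5):
    """Greedy sketch of picking clips for an animated video thumbnail.

    clips: list of candidate clips; clip_score(clip) scores a clip on its own;
    transition_score(a, b) scores how well clip `a` cuts to clip `b` (for
    example, penalizing similar-looking boundary frames).
    """
    if not clips:
        return []
    scores = [clip_score(c) for c in clips]
    remaining = list(range(len(clips)))
    order = [max(remaining, key=lambda i: scores[i])]
    remaining.remove(order[0])
    while remaining and len(order) < max_clips:
        # Next clip: best combination of its own score and how well it
        # follows the clip currently at the end of the selection.
        nxt = max(remaining,
                  key=lambda i: scores[i] + transition_score(clips[order[-1]], clips[i]))
        order.append(nxt)
        remaining.remove(nxt)
    return [clips[i] for i in order]
```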

Publication date: 10-01-2019

IDENTIFYING INTERESTING PORTIONS OF VIDEOS

Number: US20190013047A1
Assignee:

A plurality of videos is analyzed (in real time or after the videos are generated) to identify interesting portions of the videos. The interesting portions are identified based on one or more of the people depicted in the videos, the objects depicted in the videos, the motion of objects and/or people in the videos, and the locations where people depicted in the videos are looking. The interesting portions are combined to generate a content item.

1. A method comprising:
receiving a plurality of videos of an event, wherein each video originates from a camera in a plurality of cameras, wherein operation of the plurality of cameras are synchronized with each other, and wherein each video is associated with a viewpoint of the event; and
determining first saliency scores for portions of a first video of the plurality of videos and second saliency scores for portions of a second video of the plurality of videos, wherein the first saliency scores and second saliency scores are based on:
(i) a motion of one or more objects in corresponding portions of the first video and a motion of one or more objects in corresponding portions of the second video respectively,
(ii) a number of objects depicted in the corresponding portions of the first video and the corresponding portions of the second video respectively, wherein a larger number of objects in a portion of the first video or the second video results in a higher saliency score for the portion of the first video or the second video; and
(iii) at least one of:
a type of event,
rules associated with the type of event,
a schedule associated with the event,
a presence of one or more objects associated with the type of event in the corresponding portions of the first video and the corresponding portions of the second video respectively,
a location where one or more people in an audience are looking, in the corresponding portions of the first video and the corresponding portions of the second video respectively; and ...

Publication date: 10-02-2022

Real-Time Pose Estimation for Unseen Objects

Number: US20220044439A1
Assignee: Google LLC

Example embodiments allow for fast, efficient determination of bounding box vertices or other pose information for objects based on images of a scene that may contain the objects. An artificial neural network or other machine learning algorithm is used to generate, from an input image, a heat map and a number of pairs of displacement maps. The location of a peak within the heat map is then used to extract, from the displacement maps, the two-dimensional displacement, from the location of the peak within the image, of vertices of a bounding box that contains the object. This bounding box can then be used to determine the pose of the object within the scene. The artificial neural network can be configured to generate intermediate segmentation maps, coordinate maps, or other information about the shape of the object so as to improve the estimated bounding box.
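
Decoding bounding-box vertices from the heat map and displacement maps reduces to locating the heat-map peak and reading the per-vertex offsets at that location. A minimal NumPy sketch, with assumed array shapes:

```python
import numpy as np

def decode_bounding_box(heat_map, displacement_maps):
    """Decode 2D vertex locations of a 3D bounding box from network outputs.

    heat_map: (H, W) array with a peak at the object location.
    displacement_maps: (num_vertices, 2, H, W) array; for each vertex, the x/y
    displacement from the peak location to that vertex.
    Returns a (num_vertices, 2) array of (x, y) vertex coordinates in pixels.
    """
    peak_y, peak_x = np.unravel_index(np.argmax(heat_map), heat_map.shape)
    dx = displacement_maps[:, 0, peak_y, peak_x]
    dy = displacement_maps[:, 1, peak_y, peak_x]
    return np.stack([peak_x + dx, peak_y + dy], axis=1)
```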

Publication date: 04-02-2016

GENERATING COMPOSITIONS

Number: US20160034785A1
Assignee: GOOGLE INC.

Implementations generally relate to generating compositional media content. In some implementations, a method includes receiving a plurality of photos from a user, and determining one or more composition types from the photos. The method also includes generating compositions from the selected photos based on the one or more determined composition types. The method also includes providing the one or more generated compositions to the user. 1. (canceled)2. A computer-implemented method to generate a composition , the method comprising:clustering a plurality of photos into one or more photo bursts based on one or more clustering criteria;identifying a particular photo burst, wherein each photo in the particular photo burst includes a face;selecting a first photo from the particular photo burst, wherein the first photo includes a first element that lacks a characteristic, the first element associated with the face;selecting a second photo from the particular photo burst, wherein the second photo includes a second element that has the characteristic, the second element associated with the face; andgenerating the composition based on the first photo and the second photo, wherein the composition includes one or more elements of the first photo excluding the first element, and the second element of the second photo.3. The computer-implemented method of claim 2 , wherein the first element is mouth claim 2 , the method further comprising determining that the first photo includes a non-smiling mouth.4. The computer-implemented method of claim 3 , wherein the second element is mouth claim 3 , the method further comprising determining that the second photo includes a smiling mouth.5. The computer-implemented method of claim 2 , wherein the first element and the second element are eyes claim 2 , and wherein the characteristic is open.6. The computer-implemented method of claim 2 , wherein generating the composition comprises:selecting the first photo as a base image for the ...
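
One clustering criterion mentioned in the related claims is photos taken close together in time (a photo burst). A small sketch of such time-gap clustering, with an assumed 3-second maximum gap:

```python
from datetime import timedelta

def cluster_into_bursts(photos, max_gap=timedelta(seconds=3)):
    """Group photos into bursts: consecutive shots taken within `max_gap`.

    photos: list of (timestamp: datetime, path: str) tuples.
    Returns a list of bursts, each a list of photo tuples.
    """
    ordered = sorted(photos, key=lambda p: p[0])
    bursts, current = [], []
    for photo in ordered:
        # Start a new burst when the gap to the previous photo is too large.
        if current and photo[0] - current[-1][0] > max_gap:
            bursts.append(current)
            current = []
        current.append(photo)
    if current:
        bursts.append(current)
    return bursts
```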

Publication date: 01-05-2014

SYSTEM AND METHOD FOR GROUPING RELATED PHOTOGRAPHS

Number: US20140118390A1
Assignee: GOOGLE INC.

A computer-implemented method, computer program product, and computing system is provided for interacting with images having similar content. In an embodiment, a method may include identifying a plurality of photographs as including a common characteristic. The method may also include generating a flipbook media item including the plurality of photographs. The method may further include associating one or more interactive control features with the flipbook media item. 1. A computer-implemented method comprising:identifying, on a computing device, a plurality of photographs as including a common characteristic;generating, on the computing device, a flipbook media item including the plurality of photographs;associating, on the computing device, a representative photograph with a visual indicator of the flipbook media item; andassociating, on the computing device, one or more interactive control features with the flipbook media item for manually navigating the plurality of photographs.2. A computer-implemented method comprising:identifying, on a computing device, a plurality of photographs as including a common characteristic;generating, on the computing device, a flipbook media item including the plurality of photographs; andassociating, on the computing device, one or more interactive control features with the flipbook media item.3. The computer-implemented method of claim 2 , wherein the common characteristic includes inclusion of the plurality of photographs in a photo burst.4. The computer-implemented method of claim 2 , wherein the common characteristic includes a visual similarity between the plurality of photographs.5. The computer-implemented method of claim 2 , wherein generating the flipbook media item includes associating the plurality of photographs based on one of a time-wise sequence and a spatial alignment sequence.6. The computer-implemented method of claim 2 , wherein generating the flipbook media item includes associating one of the plurality of ...

Publication date: 10-03-2022

Scalable Real-Time Hand Tracking

Number: US20220076433A1
Assignee: Google LLC

Example aspects of the present disclosure are directed to computing systems and methods for hand tracking using a machine-learned system for palm detection and key-point localization of hand landmarks. In particular, example aspects of the present disclosure are directed to a multi-model hand tracking system that performs both palm detection and hand landmark detection. Given a sequence of image frames, for example, the hand tracking system can detect one or more palms depicted in each image frame. For each palm detected within an image frame, the machine-learned system can determine a plurality of hand landmark positions of a hand associated with the palm. The system can perform key-point localization to determine precise three-dimensional coordinates for the hand landmark positions. In this manner, the machine-learned system can accurately track a hand depicted in the sequence of images using the precise three-dimensional coordinates for the hand landmark positions.
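
The two-stage pipeline (palm detection, then landmark localization on the detected region) can be sketched as follows; `palm_detector` and `landmark_model` are placeholders standing in for the machine-learned components, not an actual API.

```python
def track_hands(frames, palm_detector, landmark_model):
    """Two-stage hand tracking: detect palms, then localize hand landmarks.

    palm_detector(frame) -> list of palm bounding boxes (x0, y0, x1, y1) in pixels;
    landmark_model(crop) -> list of (x, y, z) landmark coordinates relative to
    the crop. Both are placeholders for the machine-learned models.
    """
    results = []
    for frame in frames:
        hands = []
        for (x0, y0, x1, y1) in palm_detector(frame):
            crop = frame[y0:y1, x0:x1]
            landmarks = landmark_model(crop)
            # Map landmark x/y back into full-frame coordinates; keep z as-is.
            hands.append([(x0 + lx, y0 + ly, lz) for (lx, ly, lz) in landmarks])
        results.append(hands)
    return results
```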

Publication date: 08-04-2021

Surface geometry object model training and inference

Number: US20210104096A1
Assignee: Google LLC

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network model to predict mesh vertices corresponding to a three-dimensional surface geometry of an object depicted in an image.

Publication date: 25-08-2022

Calibration-Free Instant Motion Tracking for Augmented Reality

Number: US20220270290A1
Assignee: Google LLC

The present disclosure provides systems and methods for calibration-free instant motion tracking useful, for example, for rending virtual content in augmented reality settings. In particular, a computing system can iteratively augment image frames that depict a scene to insert virtual content at an anchor region within the scene, including situations in which the anchor region moves relative to the scene. To do so, the computing system can estimate, for each of a number of sequential image frames: a rotation of an image capture system that captures the image frames; and a translation of the anchor region relative to an image capture system, thereby providing sufficient information to determine where and at what orientation to render the virtual content within the image frame.
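
The abstract boils down to composing, per frame, a device rotation with an anchor translation to obtain a pose for rendering. A sketch under that reading; the 4x4 matrix convention and the `render` callable are assumptions.

```python
import numpy as np

def anchor_pose(camera_rotation, anchor_translation):
    """Compose a 4x4 pose for rendering virtual content at a tracked anchor.

    camera_rotation: 3x3 rotation of the image capture system;
    anchor_translation: (3,) translation of the anchor region relative to the
    camera. Together they decide where and at what orientation to render.
    """
    pose = np.eye(4)
    pose[:3, :3] = camera_rotation
    pose[:3, 3] = anchor_translation
    return pose

def render_augmented_frames(frames, rotations, translations, render):
    """Iterate over frames, rendering virtual content at the per-frame pose.

    render(frame, pose) is a placeholder for an AR rendering engine call.
    """
    return [render(frame, anchor_pose(rot, trans))
            for frame, rot, trans in zip(frames, rotations, translations)]
```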

Publication date: 26-05-2016

Methods and systems for removal of rolling shutter effects

Number: US20160150160A1
Assignee: Google LLC

Methods and systems for rolling shutter removal are described. A computing device may be configured to determine, in a frame of a video, distinguishable features. The frame may include sets of pixels captured asynchronously. The computing device may be configured to determine for a pixel representing a feature in the frame, a corresponding pixel representing the feature in a consecutive frame; and determine, for a set of pixels including the pixel in the frame, a projective transform that may represent motion of the camera. The computing device may be configured to determine, for the set of pixels in the frame, a mixture transform based on a combination of the projective transform and respective projective transforms determined for other sets of pixels. Accordingly, the computing device may be configured to estimate a motion path of the camera to account for distortion associated with the asynchronous capturing of the sets of pixels.

Publication date: 10-06-2021

Scalable Real-Time Hand Tracking

Number: US20210174519A1
Assignee: Google LLC

Example aspects of the present disclosure are directed to computing systems and methods for hand tracking using a machine-learned system for palm detection and key-point localization of hand landmarks. In particular, example aspects of the present disclosure are directed to a multi-model hand tracking system that performs both palm detection and hand landmark detection. Given a sequence of image frames, for example, the hand tracking system can detect one or more palms depicted in each image frame. For each palm detected within an image frame, the machine-learned system can determine a plurality of hand landmark positions of a hand associated with the palm. The system can perform key-point localization to determine precise three-dimensional coordinates for the hand landmark positions. In this manner, the machine-learned system can accurately track a hand depicted in the sequence of images using the precise three-dimensional coordinates for the hand landmark positions.

Publication date: 18-09-2014

CASCADED CAMERA MOTION ESTIMATION, ROLLING SHUTTER DETECTION, AND CAMERA SHAKE DETECTION FOR VIDEO STABILIZATION

Number: US20140267801A1
Assignee:

An easy-to-use online video stabilization system and methods for its use are described. Videos are stabilized after capture, and therefore the stabilization works on all forms of video footage including both legacy video and freshly captured video. In one implementation, the video stabilization system is fully automatic, requiring no input or parameter settings by the user other than the video itself. The video stabilization system uses a cascaded motion model to choose the correction that is applied to different frames of a video. In various implementations, the video stabilization system is capable of detecting and correcting high frequency jitter artifacts, low frequency shake artifacts, rolling shutter artifacts, significant foreground motion, poor lighting, scene cuts, and both long and short videos. 1. A computer implemented method comprising:accessing a video;generating a plurality of tracked features for each of at least two adjacent frames of the video, the tracked features of the adjacent frames indicating an inter-frame motion of the camera;applying a plurality of motion models to the inter-frame motion of the tracked features to estimate a plurality of properties for each of the applied motion models, the motion models each representing a different type of camera motion comprising a different number of degrees of freedom (DOF);determining that one or more of the motion models are valid by comparing the properties of the motion models to corresponding thresholds;generating a camera path between the adjacent frames based on the valid motion models.2. The method of claim 1 , further comprising:generating a stabilized video by applying the camera path to the adjacent frames of the video.3. The method of claim 1 , wherein the camera path is generated independent of data describing the motion of an original camera used to capture the video.4. The method of claim 1 , wherein generating the plurality of tracked features for one of the frames comprises:applying a ...
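
The cascade of motion models with increasing degrees of freedom, each validated against thresholds, can be sketched generically; the estimator list, the property names, and the "stop at the first invalid model" rule below are assumptions rather than the patented procedure.

```python
def choose_motion_model(prev_pts, curr_pts, estimators, thresholds):
    """Cascaded motion model selection, from lowest to highest degrees of freedom.

    estimators: list of (name, fit) pairs ordered by increasing DOF, where
    fit(prev_pts, curr_pts) returns (model, properties) with `properties` a
    dict of diagnostic values; thresholds: dict mapping model name to
    per-property limits. The last model whose properties pass its thresholds
    is kept.
    """
    chosen = None
    for name, fit in estimators:
        model, properties = fit(prev_pts, curr_pts)
        valid = all(properties.get(key, 0.0) <= limit
                    for key, limit in thresholds.get(name, {}).items())
        if not valid:
            # Higher-DOF models are only trusted when the simpler ones hold.
            break
        chosen = (name, model)
    return chosen
```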

Publication date: 06-07-2017

CASCADED CAMERA MOTION ESTIMATION, ROLLING SHUTTER DETECTION, AND CAMERA SHAKE DETECTION FOR VIDEO STABILIZATION

Number: US20170195575A1
Assignee:

An easy-to-use online video stabilization system and methods for its use are described. Videos are stabilized after capture, and therefore the stabilization works on all forms of video footage including both legacy video and freshly captured video. In one implementation, the video stabilization system is fully automatic, requiring no input or parameter settings by the user other than the video itself. The video stabilization system uses a cascaded motion model to choose the correction that is applied to different frames of a video. In various implementations, the video stabilization system is capable of detecting and correcting high frequency jitter artifacts, low frequency shake artifacts, rolling shutter artifacts, significant foreground motion, poor lighting, scene cuts, and both long and short videos. 1. A method , comprising:accessing a video;estimating, for a plurality of frames of the video, values of a plurality of degrees of freedom (DOF) of a similarity motion model, each degree of freedom representing a different camera motion of an original camera used to capture the video, the values of the DOFs representing magnitudes of the different camera motions;generating a spectrogram for each of the DOFs, each spectrogram based on the values of the DOFs over a time window comprising a plurality of adjacent frames of the video;generating a plurality of shake features based on the spectrograms;classifying the video based on the shake features; andstabilizing the video based on the classification.2. The method of claim 1 , wherein the similarity motion model comprises a lateral translation DOF claim 1 , a longitudinal translation DOF claim 1 , a scale change DOF claim 1 , and a rotation change DOF.3. The method of claim 1 , wherein the shake features are invariant with respect to a length of the video.4. The method of claim 1 , wherein the time windows of the spectrograms have overlapping coverage of at least a portion of the frames of a previous and a subsequent ...

Publication date: 20-06-2019

Collage of interesting moments in a video

Number: US20190189161A1
Assignee: Google LLC

A computer-implemented method includes determining interesting moments in a video. The method further includes generating video segments based on the interesting moments, wherein each of the video segments includes at least one of the interesting moments from the video. The method further includes generating a collage from the video segments, where the collage includes at least two windows and wherein each window includes one of the video segments.

Publication date: 28-07-2016

SYSTEM AND METHOD FOR GROUPING RELATED PHOTOGRAPHS

Number: US20160216848A1
Assignee: GOOGLE INC.

A computer-implemented method, computer program product, and computing system is provided for interacting with images having similar content. In an embodiment, a method may include identifying a plurality of photographs as including a common characteristic. The method may also include generating a flipbook media item including the plurality of photographs. The method may further include associating one or more interactive control features with the flipbook media item. 1identifying, on a computing device, a plurality of photographs as including a common characteristic;generating, on the computing device, a flipbook media item including the plurality of photographs;associating, on the computing device, a representative photograph with a visual indicator of the flipbook media item; andassociating, on the computing device, one or more interactive control features with the flipbook media item for manually navigating the plurality of photographs.. A computer-implemented method comprising: This disclosure relates to digital photographs and, more particularly, to interacting with groups of photographs.The use of digital photography has become an important part of daily life for many individuals. Many cellular phones now include cameras and many social networking application facilitate the sharing of digital photos among many individuals and social groups. Not only has digital photography increased the case with which photos may be shared by individuals, but the combination of digital cameras being incorporated into common every-day items, such as cellular phones, and the low relative cost of digital photography, due at least in part to the elimination of film and developing costs, have increased the number of pictures that people take. People may often take pictures of events, items, settings, or the like, that they likely would not have if they had to pay for film and developing of the pictures. Similarly, people may often take many pictures of the same scene or subject ...

Publication date: 04-07-2019

METHODS, SYSTEMS, AND MEDIA FOR GENERATING A SUMMARIZED VIDEO WITH VIDEO THUMBNAILS

Number: US20190205654A1
Assignee:

Methods, systems, and media for summarizing a video with video thumbnails are provided. In some embodiments, the method comprises: receiving a plurality of video frames corresponding to the video and associated information associated with each of the plurality of video frames; extracting, for each of the plurality of video frames, a plurality of features; generating candidate clips that each includes at least a portion of the received video frames based on the extracted plurality of features and the associated information; calculating, for each candidate clip, a clip score based on the extracted plurality of features from the video frames associated with the candidate clip; calculating, between adjacent candidate clips, a transition score based at least in part on a comparison of video frame features between frames from the adjacent candidate clips; selecting a subset of the candidate clips based at least in part on the clip score and the transition score associated with each of the candidate clips; and automatically generating an animated video thumbnail corresponding to the video that includes a plurality of video frames selected from each of the subset of candidate clips. 1. A method for summarizing videos , the method comprising:receiving, using a hardware processor, a plurality of image content;extracting, for each of the plurality of image content, a plurality of features;generating a plurality of candidate content that each includes a portion of the plurality of image content based on the extracted plurality of features indicating that the portion of the plurality of image content includes interesting content;selecting first candidate content and second candidate content that are adjacent candidate content; andautomatically generating an animated video thumbnail that includes the first candidate content and the second candidate content.2. The method of claim 1 , wherein the plurality of image content includes a video content item comprising a plurality of ...

Publication date: 02-07-2020

HYBRID PLACEMENT OF OBJECTS IN AN AUGMENTED REALITY ENVIRONMENT

Number: US20200211288A1
Assignee:

In a general aspect, a method can include receiving data defining an augmented reality (AR) environment including a representation of a physical environment, and changing tracking of an AR object within the AR environment between region-tracking mode and plane-tracking mode. 1. A method , comprising:receiving, by an electronic device, data defining an augmented reality (AR) environment including a representation of a physical environment;identifying at least one of a plane representing at least a portion of a first real-world object within the physical environment or an augmented region representing at least a portion of a second real-world object within the physical environment;receiving an instruction to place an AR object within the AR environment; andin response to receiving the instruction, placing the AR object in the AR environment at least one of based on the plane using a plane-tracking mode or based on the augmented region using a region-tracking mode.2. The method of claim 1 , further comprising:placing the AR object in the AR environment at a first time based on the plane-tracking mode; andswitching, at a second time, to the region-tracking mode.3. The method of claim 1 , further comprising:placing the AR object in the AR environment at a first time based on the region-tracking mode; andswitching, at a second time, to the plane-tracking mode.4. The method of claim 1 , further comprising:maintaining a scale of the physical environment as displayed with a screen of the electronic device while changing a depth of the AR object such that a size of the AR object is changed within the screen of the electronic device.5. The method of claim 1 , further comprising:modifying a depth of the AR object within the AR environment in response to a change in size of the augmented region.6. The method of claim 1 ,wherein the augmented region is a two-dimensional region.7. The method of claim 1 ,wherein the augmented region is a two-dimensional region without a depth.8. ...

Publication date: 18-08-2016

CONDENSER AND METHOD OF CONDENSING VAPOUR

Number: US20160238321A1
Assignee:

A condensing vessel comprises a stationary outer chamber and a rotating portion in the form of an inner rotating drum or chamber, both of which are generally cylindrical and arranged concentrically. A steam inlet is provided to the inner chamber and a water outlet is provided from the outer chamber. Water level sensors are provided on the outer chamber to provide a control signal to a water pump (not shown). The water pump withdraws water to a de-aerator, prior to returning the water to a hot well (not shown). The pump is regulated by a three-way valve which determines how much (if any) water is returned to the chamber and how much is directed to the de-aerator, in dependence upon the water level in the chamber. In use, steam enters the inlet from an exhaust of a steam turbine (not shown). The inner chamber rotates at a speed of several thousand rpm and contains a body of water that rotates with the chamber in the form of a rotating cylindrical wall of water. The first impeller forces the steam down into the chamber, where it condenses into droplets of water and is thrown radially outwards towards the wall. Uncondensed steam cannot exit the inner chamber, as to do so it would first have to pass through the water to be able to enter a gap, labelled G, between the annular plate and the return flange. In effect, the water in this gap acts as a self-regulating, high-pressure water seal.

1. A condenser for condensing a vapour, the condenser comprising a condensing vessel having an inlet for the introduction of vapour and an outlet for the removal of liquid, wherein the condenser further comprises a rotating portion that is arranged in use to create a rotating body of liquid within the vessel.
2. A condenser according to claim 1, wherein the rotating portion comprises a first impeller.
3. A condenser according to claim 1, wherein the rotating portion comprises a rotating inner chamber within a non-rotating outer chamber.
4. A condenser according to claim 3 ...

Publication date: 06-11-2014

Methods and systems for processing a video for stabilization using dynamic crop

Number: US20140327788A1
Assignee: Google LLC

Methods and systems for processing a video for stabilization are described. A recorded video may be stabilized by removing at least a portion of shake introduced in the video. An original camera path for a camera used to record the video may be determined. A crop window size may be selected, a crop window transform may accordingly be determined, and the crop window transform may be applied to the original video to provide a modified video from a viewpoint of the modified motion camera path.

Publication date: 25-07-2019

Graphical image retrieval based on emotional state of a user of a computing device

Number: US20190228031A1
Assignee: Google LLC

A computing device is described that includes a camera configured to capture an image of a user of the computing device, a memory configured to store the image of the user, at least one processor, and at least one module. The at least one module is operable by the at least one processor to obtain, from the memory, an indication of the image of the user of the computing device, determine, based on the image, a first emotion classification tag, and identify, based on the first emotion classification tag, at least one graphical image from a database of pre-classified images that has an emotional classification that is associated with the first emotion classification tag. The at least one module is further operable by the at least one processor to output, for display, the at least one graphical image.

Publication date: 20-11-2014

GENERATING COMPOSITIONS

Number: US20140341482A1
Assignee: GOOGLE INC.

Implementations generally relate to generating compositional media content. In some implementations, a method includes receiving a plurality of photos from a user, and determining one or more composition types from the photos. The method also includes generating compositions from the selected photos based on the one or more determined composition types. The method also includes providing the one or more generated compositions to the user. 1. A method comprising:receiving a plurality of photos from a user;determining one or more composition types from the photos, wherein the one or more composition types include one or more of face compositions, high dynamic range compositions, panorama compositions, and photo booth compositions, and wherein the determining comprises clustering the photos based on one or more clustering criteria;generating one or more compositions from the selected photos based on the one or more determined composition types; andproviding the one or more generated compositions to the user.2. The method of claim 2 , wherein the clustering criteria include photos taken within a predetermined time period.3. A method comprising:receiving a plurality of photos from a user;determining one or more composition types from the photos;generating one or more compositions from the selected photos based on the one or more determined composition types; andproviding the one or more generated compositions to the user.4. The method of claim 3 , wherein the one or more composition types include face compositions.5. The method of claim 3 , wherein the one or more composition types include high dynamic range compositions.6. The method of claim 3 , wherein the one or more composition types include panorama compositions.7. The method of claim 3 , wherein the one or more composition types include photo booth compositions.8. The method of claim 3 , wherein the determining of the one or more composition types comprises clustering the photos based on one or more clustering ...

Publication date: 15-09-2016

CASCADED CAMERA MOTION ESTIMATION, ROLLING SHUTTER DETECTION, AND CAMERA SHAKE DETECTION FOR VIDEO STABILIZATION

Number: US20160269642A1
Assignee:

An easy-to-use online video stabilization system and methods for its use are described. Videos are stabilized after capture, and therefore the stabilization works on all forms of video footage including both legacy video and freshly captured video. In one implementation, the video stabilization system is fully automatic, requiring no input or parameter settings by the user other than the video itself. The video stabilization system uses a cascaded motion model to choose the correction that is applied to different frames of a video. In various implementations, the video stabilization system is capable of detecting and correcting high frequency jitter artifacts, low frequency shake artifacts, rolling shutter artifacts, significant foreground motion, poor lighting, scene cuts, and both long and short videos. 1. A computer-implemented method , comprising:accessing a video;generating a plurality of tracked features for each of at least two adjacent frames of the video, the tracked features of the adjacent frames indicating an inter-frame motion of the camera;applying a homographic model to the inter frame motion to determine a number of tracked features that are inliers matching the homographic model;applying a homographic mixture model to the inter frame motion to determine a number of tracked features that are inliers matching the homographic mixture model;determining that the number of homographic mixture inliers exceeds the number of homographic inliers by a threshold; andgenerating a stabilized video by applying the homographic mixture model to the adjacent frames of the video.2. The method of claim 1 , wherein the homographic model and the homographic mixture model each represent different types of motion having different numbers of degrees of freedom.3. The method of claim 1 , wherein determining that one of the tracked features is a homographic inlier comprises determining that the homographic model tracks the inter-frame motion of one of the tracked features to ...

Publication date: 06-08-2020

Calibration-Free Instant Motion Tracking for Augmented Reality

Number: US20200250852A1
Assignee: Google LLC

The present disclosure provides systems and methods for calibration-free instant motion tracking useful, for example, for rending virtual content in augmented reality settings. In particular, a computing system can iteratively augment image frames that depict a scene to insert virtual content at an anchor region within the scene, including situations in which the anchor region moves relative to the scene. To do so, the computing system can estimate, for each of a number of sequential image frames: a rotation of an image capture system that captures the image frames; and a translation of the anchor region relative to an image capture system, thereby providing sufficient information to determine where and at what orientation to render the virtual content within the image frame.

Publication date: 06-12-2018

VECTOR REPRESENTATION FOR VIDEO SEGMENTATION

Number: US20180350131A1
Assignee:

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for video segmentation. One of the methods includes receiving a digital video; performing hierarchical graph-based video segmentation on at least one frame of the digital video to generate a boundary representation for the at least one frame; generating a vector representation from the boundary representation for the at least one frame of the digital video, wherein generating the vector representation includes generating a polygon composed of at least three vectors, wherein each vector comprises two vertices connected by a line segment, from a boundary in the boundary representation; linking the vector representation to the at least one frame of the digital video; and storing the vector representation with the at least one frame of the digital video. 1. A method comprising:receiving a digital video;performing hierarchical graph-based video segmentation on at least one frame of the digital video to generate a boundary representation for the at least one frame, wherein the boundary representation includes a plurality of boundaries, and wherein each boundary of the boundary representation:encompasses a particular spatio-temporal region, wherein each spatio-temporal region corresponds to a region of the video that exhibits coherence in appearance and motion across time over a plurality of frames of the digital video,wherein the particular spatio-temporal region corresponds to one or more objects in the video frame, andwherein the boundary is at least partially shared with another boundary of a different spatio-temporal region of the video frame;generating a vector representation from the boundary representation for the at least one frame of the digital video, wherein generating the vector representation includes generating a polygon for each boundary of the boundary representation corresponding to an outline of a respective spatio-temporal region, wherein each polygon is ...

Publication date: 28-12-2017

COLLAGE OF INTERESTING MOMENTS IN A VIDEO

Number: US20170372749A1
Assignee: GOOGLE INC.

A computer-implemented method includes determining interesting moments in a video. The method further includes generating video segments based on the interesting moments, wherein each of the video segments includes at least one of the interesting moments from the video. The method further includes generating a collage from the video segments, where the collage includes at least two windows and wherein each window includes one of the video segments. 1. A computer-implemented method to generate a collage , the method comprising:determining interesting moments in a video;generating video segments based on the interesting moments, wherein each of the video segments includes at least one of the interesting moments from the video; andgenerating a collage from the video segments, wherein the collage includes at least two windows and wherein each window includes one of the video segments.2. The method of claim 1 , further comprising:receiving a selection of one of the video segments in the collage; andcausing the video to be displayed at a time position in the video that corresponds to the selection.3. The method of claim 1 , wherein determining the interesting moments in the video includes:identifying audio in the video;identifying a type of audio in the video;generating an interest score for each type of motion in the video; anddetermining the interesting moments based on the interest score for each type of audio in the video.4. The method of claim 1 , wherein at least a first segment of the video segments in the collage is configured to play at a different frame rate than other video segments in the collage.5. The method of claim 1 , wherein:determining the interesting moments in the video includes receiving an identification of at least one of the interesting moments from a user; and determining a beginning and an end of continual motion of at least a first object in the video that appears in the video at one of the interesting moments; and', 'cutting the video into a ...

Publication date: 29-12-2022

AR-Assisted Synthetic Data Generation for Training Machine Learning Models

Number: US20220415030A1
Assignee:

The present disclosure is directed to systems and methods for generating synthetic training data using augmented reality (AR) techniques. For example, images of a scene can be used to generate a three-dimensional mapping of the scene. The three-dimensional mapping may be associated with the images to indicate locations for positioning a virtual object. Using an AR rendering engine, implementations can generate an augmented image that depicts the virtual object within the scene at a chosen position and orientation. The augmented image can then be stored in a machine learning dataset and associated with a label based on aspects of the virtual object.

1. A computer-implemented method for generating training data, the method comprising:
obtaining, by one or more computing devices, a three-dimensional model of a virtual object;
obtaining, by the one or more computing devices, data comprising one or more image frames that depict a scene;
determining, by the one or more computing devices, a position and an orientation for the virtual object within the scene;
generating, by the one or more computing devices and using an augmented reality rendering engine, an augmented image that depicts the virtual object within the scene at the position and the orientation;
storing, by the one or more computing devices, the augmented image as a training image within a machine learning training dataset; and
associating, by the one or more computing devices, a training label with the training image in the machine learning training dataset, wherein the training label at least one of: identifies the virtual object, indicates the position of the virtual object within the scene, or indicates the orientation of the virtual object within the scene.

2. The computer-implemented method of claim 1, further comprising:
training, by the one or more computing devices, a machine-learned model on the machine learning training dataset including the training image and the training label.

3. The computer-implemented method of claim 1, wherein determining the position and orientation comprises: ...
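
A single synthetic training example, as described, pairs an augmented image with a label recording the object identity, position, and orientation. The sketch below assumes a placeholder `renderer` callable that returns an image object with a `save` method, and a JSON-lines dataset index; both are illustrative choices, not part of the disclosure.

```python
import json

def generate_training_example(scene_image_path, virtual_object, position, orientation,
                              renderer, output_image_path, dataset_index_path):
    """Create one synthetic example by rendering a virtual object into a scene.

    renderer(scene_image_path, virtual_object, position, orientation) is a
    placeholder for an AR rendering engine; it is assumed to return an image
    object exposing save(). The label records object, position, and orientation.
    """
    augmented = renderer(scene_image_path, virtual_object, position, orientation)
    augmented.save(output_image_path)
    label = {
        "image": output_image_path,
        "object": virtual_object["name"],
        "position": list(position),
        "orientation": list(orientation),
    }
    # Append one JSON record per training example to the dataset index.
    with open(dataset_index_path, "a") as index:
        index.write(json.dumps(label) + "\n")
    return label
```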

Publication date: 01-05-2014

Grouping related photographs

Number: CA2885504A1
Assignee: Google LLC

A computer-implemented method, computer program product, and computing system is provided for interacting with images having similar content. In an embodiment, a method may include identifying a plurality of photographs as including a common characteristic. The method may also include generating a flipbook media item including the plurality of photographs. The method may further include associating one or more interactive control features with the flipbook media item.

Publication date: 01-11-2022

Motion stills experience

Number: US11487407B1
Assignee: Google LLC

The technology disclosed herein includes a user interface for viewing and combining media items into a video. An example method includes presenting a user interface that displays media items in a first portion of the user interface; receiving user input in the first portion that comprises a selection of a first media item; upon receiving the user input, adding the first media item to a set of selected media items and updating the user interface to comprise a control element and a second portion, wherein the first and second portions are concurrently displayed and are each scrollable along a different axis, and the second portion displays image content of the set and the control element enables a user to initiate the creation of the video based on the set of selected media items; and creating the video based on video content of the set of selected media items.

Publication date: 16-06-2022

Object Pose Estimation and Tracking Using Machine Learning

Number: US20220191542A1
Assignee: Google LLC

A method includes receiving a video comprising images representing an object, and determining, using a machine learning model, based on a first image of the images, and for each respective vertex of vertices of a bounding volume for the object, first two-dimensional ( 2 D) coordinates of the respective vertex. The method also includes tracking, from the first image to a second image of the images, a position of each respective vertex along a plane underlying the bounding volume, and determining, for each respective vertex, second 2 D coordinates of the respective vertex based on the position of the respective vertex along the plane. The method further includes determining, for each respective vertex, (i) first three-dimensional ( 3 D) coordinates of the respective vertex based on the first 2 D coordinates and (ii) second 3 D coordinates of the respective vertex based on the second 2 D coordinates.

Publication date: 24-11-2011

Boiler cleaning apparatus and method

Number: WO2011144946A1
Inventor: Matthias Grundmann
Assignee: Bioflame Limited

Cleaning apparatus for boiler tube end plates in a fire-tube boiler comprises a pair of rotatable shafts (34) located in generally central fire tubes (32). The shafts pass from the exhaust end to the inlet end. At the exhaust end the shafts (34) are mounted on bearings (36) on an exterior wall of the boiler, where motors (38) are arranged to rotate the shafts (34) in an oscillatory fashion back and forth through substantially 180° each. At end (34b) of the shaft, the shaft is coupled to a supply of hot water from a hot well reservoir (22b) supplied independently from the condenser (22). At the inlet end (34a) of the shaft, a spray head (40) is mounted on each shaft so as to direct water (or steam or a combination thereof) supplied through the rotatable shaft (34) onto the tube end plate (32a) via a plurality of nozzles. Between them, the pair of spray heads (40) cover approximately the entire circular surface of the tube end plates (32a) in their respective oscillatory motions. As an alternative the spray heads, or a single spray head, may rotate substantially continuously through 360 degrees.

Publication date: 12-03-2019

Methods, systems, and media for generating a summarized video with video thumbnails

Number: US10229326B2
Assignee: Google LLC

Methods, systems, and media for summarizing a video with video thumbnails are provided. In some embodiments, the method comprises: receiving a plurality of video frames corresponding to the video and associated information associated with each of the plurality of video frames; extracting, for each of the plurality of video frames, a plurality of features; generating candidate clips that each includes at least a portion of the received video frames based on the extracted plurality of features and the associated information; calculating, for each candidate clip, a clip score based on the extracted plurality of features from the video frames associated with the candidate clip; calculating, between adjacent candidate clips, a transition score based at least in part on a comparison of video frame features between frames from the adjacent candidate clips; selecting a subset of the candidate clips based at least in part on the clip score and the transition score associated with each of the candidate clips; and automatically generating an animated video thumbnail corresponding to the video that includes a plurality of video frames selected from each of the subset of candidate clips.

Publication date: 26-07-2006

Vertical fuel processor

Number: GB2422332A
Inventor: Matthias Grundmann
Assignee: BIOFLAME FUELS Ltd, Bioflame Ltd

A fuel processor 10 is provided for processing a carbon-based fuel, the processor comprising a fuel processing chamber 22 which is arranged, in use, substantially upright. In use, fuel to be processed moves downwards through the chamber, and heated processing gas A moves upwards through the chamber. The fuel may be discharged from the processor through a processed fuel outlet 20 with the rate of discharge controlled by a discharge unit. By varying the rate of discharge of the discharge unit, the residence time of the fuel in the processor can be finely controlled. Furthermore, the fuel moves downwards through the processor under the influence of gravity, ensuring that the first product in is the first product out.

Publication date: 14-09-2021

Collage of interesting moments in a video

Number: US11120835B2
Assignee: Google LLC

A computer-implemented method includes determining interesting moments in a video. The method further includes generating video segments based on the interesting moments, wherein each of the video segments includes at least one of the interesting moments from the video. The method further includes generating a collage from the video segments, where the collage includes at least two windows and wherein each window includes one of the video segments.

Publication date: 25-11-2020

Hybrid placement of objects in an augmented reality environment

Number: EP3740849A1
Assignee: Google LLC

In a general aspect, a method can include receiving data defining an augmented reality (AR) environment including a representation of a physical environment, and changing tracking of an AR object within the AR environment between region-tracking mode and plane-tracking mode.

Publication date: 03-05-2012

Methods and systems for processing a video for stabilization and retargeting

Number: WO2012058442A1
Assignee: GOOGLE INC.

Methods and systems for processing a video for stabilization and retargeting are described. A recorded video may be stabilized by removing shake introduced in the video, and a video may be retargeted by modifying the video to fit to a different aspect ratio. Constraints can be imposed that require a modified video to contain pixels from the original video and/or to preserve salient regions. In one example, a video may be processed to estimate an original path of a camera that recorded the video, to estimate a new camera path, and to recast the video from the original path to the new camera path. To estimate a new camera path, a virtual crop window can be designated. A difference transformation between the original and new camera path can be applied to the video using the crop window to recast the recorded video from the smooth camera path.

Publication date: 13-06-2012

Gas-borne particle remover

Number: EP2461886A1
Inventor: Matthias Grundmann
Assignee: Bioflame Ltd

A particle remover for removing particles from a gas flow, including a baffle arranged to intercept the gas flow, such that when particles within the gas flow impinge on the baffle they are transformed to a liquid state.

Publication date: 02-11-2023

Systems and Methods for Object Detection Including Pose and Size Estimation

Number: US20230351724A1
Assignee: Google LLC

The present disclosure is directed to systems and methods for performing object detection and pose estimation in 3D from 2D images. Object detection can be performed by a machine-learned model configured to determine various object properties. Implementations according to the disclosure can use these properties to estimate object pose and size.

Publication date: 15-08-2023

Motion stills experience

Number: US11726637B1
Assignee: Google LLC

The technology disclosed herein includes a user interface for viewing and combining media items into a video. An example method includes presenting a user interface that displays media items in a first portion of the user interface; receiving user input in the first portion that comprises a selection of a first media item; upon receiving the user input, adding the first media item to a set of selected media items and updating the user interface to comprise a control element and a second portion, wherein the first and second portions are concurrently displayed and are each scrollable along a different axis, and the second portion displays image content of the set and the control element enables a user to initiate the creation of the video based on the set of selected media items; and creating the video based on video content of the set of selected media items.

Publication date: 10-10-2023

Scalable real-time hand tracking

Number: US11783496B2
Assignee: Google LLC

Example aspects of the present disclosure are directed to computing systems and methods for hand tracking using a machine-learned system for palm detection and key-point localization of hand landmarks. In particular, example aspects of the present disclosure are directed to a multi-model hand tracking system that performs both palm detection and hand landmark detection. Given a sequence of image frames, for example, the hand tracking system can detect one or more palms depicted in each image frame. For each palm detected within an image frame, the machine-learned system can determine a plurality of hand landmark positions of a hand associated with the palm. The system can perform key-point localization to determine precise three-dimensional coordinates for the hand landmark positions. In this manner, the machine-learned system can accurately track a hand depicted in the sequence of images using the precise three-dimensional coordinates for the hand landmark positions.
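
This two-stage design (palm detector followed by a landmark model that regresses 3D key points) appears to correspond to the open-source MediaPipe Hands solution. Assuming the `mediapipe` Python package is installed, a minimal usage sketch looks roughly like this; the synthetic black frame stands in for a camera frame.

```python
import numpy as np
import mediapipe as mp

# MediaPipe's Hands solution runs a palm detector followed by a hand-landmark
# model that regresses 21 three-dimensional key points per detected hand.
hands = mp.solutions.hands.Hands(
    static_image_mode=False,   # video mode: reuse palm detections across frames
    max_num_hands=2,
    min_detection_confidence=0.5,
    min_tracking_confidence=0.5,
)

# A synthetic RGB frame stands in for camera input here.
frame_rgb = np.zeros((480, 640, 3), dtype=np.uint8)
results = hands.process(frame_rgb)

if results.multi_hand_landmarks:
    for hand in results.multi_hand_landmarks:
        # Each landmark carries normalized x, y plus a relative depth z.
        coords = [(lm.x, lm.y, lm.z) for lm in hand.landmark]
        print(len(coords), "landmarks, first:", coords[0])

hands.close()
```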

Publication date: 30-11-2023

Motion stills experience

Number: US20230384911A1
Assignee: Google LLC

The technology disclosed herein includes a user interface for viewing and combining media items into a video. An example method includes presenting a user interface that displays media items in a first portion of the user interface; receiving user input in the first portion that comprises a selection of a first media item; upon receiving the user input, adding the first media item to a set of selected media items in a second portion of the user interface, and presenting a selectable control element in the second portion of the user interface, wherein the control element enables a user to initiate an operation pertaining to the creation of the video based on the set of selected media items, and creating the video based on video content of the set of selected media items.

Publication date: 04-07-2023

Efficient convolutional neural networks and techniques to reduce associated computational costs

Number: US11694087B2
Assignee: Google LLC

A computing system is disclosed including a convolutional neural network configured to receive an input that describes a facial image and generate a facial object recognition output that describes one or more facial feature locations with respect to the facial image. The convolutional neural network can include a plurality of convolutional blocks. At least one of the convolutional blocks can include one or more separable convolutional layers configured to apply a depthwise convolution and a pointwise convolution during processing of an input to generate an output. The depthwise convolution can be applied with a kernel size that is greater than 3×3. At least one of the convolutional blocks can include a residual shortcut connection from its input to its output.
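
The described block maps naturally onto a few lines of PyTorch. The following is an illustrative sketch (not the patented architecture) with a 5×5 depthwise convolution (i.e. larger than 3×3), a 1×1 pointwise convolution, and a residual shortcut from the block input to its output.

```python
import torch
from torch import nn

class SeparableResidualBlock(nn.Module):
    """Depthwise-separable convolution block with a residual shortcut."""

    def __init__(self, channels: int, kernel_size: int = 5):
        super().__init__()
        # Depthwise: one filter per channel (groups == channels), kernel > 3x3.
        self.depthwise = nn.Conv2d(
            channels, channels, kernel_size,
            padding=kernel_size // 2, groups=channels, bias=False)
        # Pointwise: 1x1 convolution that mixes channels.
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.depthwise(x)
        out = self.pointwise(out)
        out = self.bn(out)
        return self.act(out + x)   # residual shortcut from block input to output

# Toy usage on a 64x64 feature map with 32 channels.
block = SeparableResidualBlock(32)
y = block(torch.randn(1, 32, 64, 64))
print(y.shape)  # torch.Size([1, 32, 64, 64])
```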

Publication date: 10-01-2018

Methods and systems for processing a video for stabilization using dynamic crop

Number: EP2805482B1
Assignee: Google LLC

Publication date: 12-10-2023

Calibration-Free Instant Motion Tracking for Augmented Reality

Number: US20230326073A1
Assignee: Google LLC

The present disclosure provides systems and methods for calibration-free instant motion tracking useful, for example, for rendering virtual content in augmented reality settings. In particular, a computing system can iteratively augment image frames that depict a scene to insert virtual content at an anchor region within the scene, including situations in which the anchor region moves relative to the scene. To do so, the computing system can estimate, for each of a number of sequential image frames: a rotation of an image capture system that captures the image frames; and a translation of the anchor region relative to an image capture system, thereby providing sufficient information to determine where and at what orientation to render the virtual content within the image frame.
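
A heavily simplified, hypothetical sketch of the rotation-plus-translation decomposition: integrate gyroscope rates into a rotation estimate, convert the anchor region's tracked 2D motion into a translation at an assumed depth (the "calibration-free" aspect), and assemble a pose for rendering the virtual content. The focal length, depth and function names are all assumptions, not the patented method.

```python
import numpy as np

def rotation_from_gyro(prev_R, gyro_rates, dt):
    """Integrate angular rates (rad/s) into a rotation matrix via a small-angle
    approximation; stands in for the per-frame camera rotation estimate."""
    wx, wy, wz = np.asarray(gyro_rates) * dt
    omega = np.array([[0, -wz, wy],
                      [wz, 0, -wx],
                      [-wy, wx, 0]])
    return prev_R @ (np.eye(3) + omega)

def anchor_pose(R, anchor_px, prev_anchor_px, prev_t, depth=1.0, focal=500.0):
    """Update the anchor translation from its tracked 2D motion in the image.
    With unknown metric scale, depth is fixed to an arbitrary value."""
    dx, dy = (np.asarray(anchor_px) - np.asarray(prev_anchor_px)) * depth / focal
    t = prev_t + np.array([dx, dy, 0.0])
    T = np.eye(4)                 # 4x4 pose used to render content at the anchor
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Toy usage over two frames.
R = np.eye(3)
t = np.zeros(3)
R = rotation_from_gyro(R, gyro_rates=(0.0, 0.01, 0.0), dt=1 / 30)
pose = anchor_pose(R, anchor_px=(322, 241), prev_anchor_px=(320, 240), prev_t=t)
print(pose)
```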

Publication date: 26-11-2014

Methods and systems for processing a video for stabilization using dynamic crop

Number: EP2805482A1
Assignee: Google LLC

Methods and systems for processing a video for stabilization are described. A recorded video may be stabilized by removing at least a portion of shake introduced in the video. An original camera path for a camera used to record the video may be determined. A crop window size may be selected and a crop window transform may accordingly be determined. The crop window transform may describe a transform of the original camera path to a modified camera path that is smoother than the original camera path. A smoothness metric indicative of a degree of smoothness of the modified path may be determined. Based on a comparison of the smoothness metric to a predetermined threshold, for example, the crop window transform may be applied to the original video to obtain a stabilized modified video.
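
A toy NumPy sketch of the selection loop, using translation-only paths: smooth the original path, score the result with a mean-squared-acceleration smoothness metric, and accept the first crop-window size whose modified path passes a threshold. The coupling between crop ratio and smoothing strength is a stand-in, not the patented crop window transform.

```python
import numpy as np

def smoothness_metric(path):
    """Mean squared second difference of the path: lower means smoother."""
    accel = np.diff(path, n=2, axis=0)
    return float(np.mean(accel ** 2))

def choose_crop_transform(original_path, crop_ratios=(0.9, 0.85, 0.8), threshold=0.5):
    """Try progressively smaller crop windows; a smaller crop leaves more room
    for smoothing. Return the first (ratio, path) whose smoothness passes the threshold."""
    for ratio in crop_ratios:
        # Heavier smoothing needs more cropping headroom; the window length
        # here is a crude stand-in for that coupling.
        window = int(10 / ratio)
        kernel = np.ones(window) / window
        modified = np.column_stack([
            np.convolve(original_path[:, d], kernel, mode="same")
            for d in range(original_path.shape[1])
        ])
        if smoothness_metric(modified) <= threshold:
            return ratio, modified
    return crop_ratios[-1], modified

# Toy usage: a jittery 2D translational camera path.
original_path = np.cumsum(np.random.randn(120, 2), axis=0)
ratio, modified_path = choose_crop_transform(original_path)
print("crop ratio:", ratio, "smoothness:", smoothness_metric(modified_path))
```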

Publication date: 08-08-2023

Calibration-free instant motion tracking for augmented reality

Number: US11721039B2
Assignee: Google LLC

The present disclosure provides systems and methods for calibration-free instant motion tracking useful, for example, for rendering virtual content in augmented reality settings. In particular, a computing system can iteratively augment image frames that depict a scene to insert virtual content at an anchor region within the scene, including situations in which the anchor region moves relative to the scene. To do so, the computing system can estimate, for each of a number of sequential image frames: a rotation of an image capture system that captures the image frames; and a translation of the anchor region relative to an image capture system, thereby providing sufficient information to determine where and at what orientation to render the virtual content within the image frame.

Publication date: 27-05-2021

AR-assisted synthetic data generation for training machine learning models

Number: WO2021101527A1
Assignee: Google LLC

The present disclosure is directed to systems and methods for generating synthetic training data using augmented reality (AR) techniques. For example, images of a scene can be used to generate a three-dimensional mapping of the scene. The three-dimensional mapping may be associated with the images to indicate locations for positioning a virtual object. Using an AR rendering engine, implementations can generate an augmented image depicting the virtual object within the scene at a position and orientation. The augmented image can then be stored in a machine learning dataset and associated with a label based on aspects of the virtual object.
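
An illustrative sketch of the compositing step only, with a pre-rendered RGBA object patch standing in for the AR rendering engine's output: alpha-blend it into the scene image and emit a detection-style label. The label format and names are assumptions.

```python
import numpy as np

def composite_synthetic_example(background, object_rgba, top_left):
    """Alpha-composite a rendered virtual object into a scene image and return
    the augmented image plus a bounding-box label for a detection dataset."""
    y, x = top_left
    h, w = object_rgba.shape[:2]
    out = background.copy()
    alpha = object_rgba[..., 3:4] / 255.0
    region = out[y:y + h, x:x + w].astype(np.float32)
    out[y:y + h, x:x + w] = (alpha * object_rgba[..., :3] +
                             (1.0 - alpha) * region).astype(np.uint8)
    label = {"class": "virtual_object", "bbox_xywh": [x, y, w, h]}
    return out, label

# Toy usage: a grey scene and a solid red "rendered object" with full alpha.
scene = np.full((480, 640, 3), 128, dtype=np.uint8)
obj = np.zeros((60, 80, 4), dtype=np.uint8)
obj[..., 0] = 255      # red
obj[..., 3] = 255      # opaque
image, label = composite_synthetic_example(scene, obj, top_left=(200, 300))
print(label)
```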

Publication date: 05-01-2022

Methods and systems for processing a video for stabilization using dynamic crop

Number: EP3334149B1
Assignee: Google LLC

Publication date: 26-09-2023

Object pose estimation and tracking using machine learning

Number: US11770551B2
Assignee: Google LLC

A method includes receiving a video comprising images representing an object, and determining, using a machine learning model, based on a first image of the images, and for each respective vertex of vertices of a bounding volume for the object, first two-dimensional (2D) coordinates of the respective vertex. The method also includes tracking, from the first image to a second image of the images, a position of each respective vertex along a plane underlying the bounding volume, and determining, for each respective vertex, second 2D coordinates of the respective vertex based on the position of the respective vertex along the plane. The method further includes determining, for each respective vertex, (i) first three-dimensional (3D) coordinates of the respective vertex based on the first 2D coordinates and (ii) second 3D coordinates of the respective vertex based on the second 2D coordinates.
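
A sketch of one ingredient of such a pipeline: lifting a tracked 2D vertex to 3D by intersecting its camera ray with the plane underlying the bounding volume (pinhole intrinsics assumed). The machine-learned vertex prediction and the tracker itself are out of scope here; all numbers are illustrative.

```python
import numpy as np

def lift_to_plane(uv, K, plane_normal, plane_d):
    """Lift a 2D pixel (u, v) to 3D by intersecting its camera ray with the
    plane n.X + d = 0, expressed in the camera frame (pinhole model)."""
    ray = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    t = -plane_d / float(plane_normal @ ray)        # point is t * ray, t > 0 in front of camera
    return t * ray

# Intrinsics for a 640x480 camera and a ground plane 1.5 m below it
# (camera coordinates with y pointing down, so the plane is y = 1.5).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
plane_normal = np.array([0.0, -1.0, 0.0])
plane_d = 1.5

# First-frame 2D vertex and its tracked position in the next frame.
vertex_2d_t0 = (350.0, 300.0)
vertex_2d_t1 = (355.0, 302.0)
p0 = lift_to_plane(vertex_2d_t0, K, plane_normal, plane_d)
p1 = lift_to_plane(vertex_2d_t1, K, plane_normal, plane_d)
print("3D at t0:", p0, "3D at t1:", p1)
```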

Publication date: 21-12-2023

Scalable Real-Time Hand Tracking

Number: US20230410329A1
Assignee: Google LLC

Example aspects of the present disclosure are directed to computing systems and methods for hand tracking using a machine-learned system for palm detection and key-point localization of hand landmarks. In particular, example aspects of the present disclosure are directed to a multi-model hand tracking system that performs both palm detection and hand landmark detection. Given a sequence of image frames, for example, the hand tracking system can detect one or more palms depicted in each image frame. For each palm detected within an image frame, the machine-learned system can determine a plurality of hand landmark positions of a hand associated with the palm. The system can perform key-point localization to determine precise three-dimensional coordinates for the hand landmark positions. In this manner, the machine-learned system can accurately track a hand depicted in the sequence of images using the precise three-dimensional coordinates for the hand landmark positions.

Publication date: 31-08-2023

High-definition video segmentation for web-based video conferencing

Number: WO2023163757A1
Assignee: Google LLC

Systems and methods for image segmentation can include downloading a machine-learned image segmentation model to be utilized while in the web browser. For example, a user can access a web service, which can initiate the download of a software package including the machine-learned image segmentation model. The image segmentation model can then be utilized for segmenting image data obtained with a user computing device.

Publication date: 06-05-2021

Efficient Convolutional Neural Networks and Techniques to Reduce Associated Computational Costs

Number: US20210133508A1
Assignee: Google LLC

A computing system is disclosed including a convolutional neural network configured to receive an input that describes a facial image and generate a facial object recognition output that describes one or more facial feature locations with respect to the facial image. The convolutional neural network can include a plurality of convolutional blocks. At least one of the convolutional blocks can include one or more separable convolutional layers configured to apply a depthwise convolution and a pointwise convolution during processing of an input to generate an output. The depthwise convolution can be applied with a kernel size that is greater than 3×3. At least one of the convolutional blocks can include a residual shortcut connection from its input to its output.

Publication date: 24-02-2015

Spatio-temporal segmentation for video

Number: US8965124B1
Assignee: Google LLC

A video is segmented to produce volumetric video regions. Descriptors are created for the video regions. A region graph is created for the video, where the region graph has weighted edges incident to video regions and the weight of an edge is calculated responsive to the descriptors of the video regions incident to the edge. The region graph is segmented responsive to the weights of the edges incident to the video regions to produce a new region graph having new volumetric video regions comprised of merged video regions of the first region graph. The descriptions of the region graphs are stored in a data storage.
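
A compact union-find sketch of one merge level of such a hierarchy: edge weights come from descriptor distances between adjacent volumetric regions, and edges below a threshold are merged to form the next, coarser region graph. The descriptors, threshold and adjacency here are illustrative stand-ins.

```python
import numpy as np

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def merge_region_graph(descriptors, edges, tau=0.3):
    """Merge regions whose connecting edge weight (descriptor distance) is below
    tau, producing the coarser region labels of the next hierarchy level."""
    weights = [(np.linalg.norm(descriptors[a] - descriptors[b]), a, b) for a, b in edges]
    uf = UnionFind(len(descriptors))
    for w, a, b in sorted(weights):
        if w < tau:
            uf.union(a, b)
    return [uf.find(i) for i in range(len(descriptors))]

# Toy usage: 5 volumetric regions described by histogram-like vectors,
# connected by spatio-temporal adjacency edges.
descriptors = np.array([[0.1, 0.9], [0.12, 0.88], [0.8, 0.2], [0.82, 0.18], [0.5, 0.5]])
edges = [(0, 1), (1, 4), (4, 2), (2, 3)]
print(merge_region_graph(descriptors, edges))   # e.g. [0, 0, 2, 2, 4]
```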

Publication date: 04-09-2008

Residence chamber for products of combustion

Number: WO2008104766A1
Author: Matthias Grundmann
Assignee: Bioflame Limited

A residence chamber (1) for use in treating products of combustion is disclosed. The chamber comprises a first chamber part (10), having a gas inlet (12) and a gas outlet (14), and a baffle (16). The baffle is located in the first chamber part and is arranged so as to cause gas entering the chamber through the inlet to travel initially in a first helical path and subsequently in a second helical path. The second helical path is in a second, opposed axial direction and is inside the first helical path. The gas is arranged to exit the first chamber part through the outlet.

Publication date: 09-12-2009

Residence chamber for products of combustion

Number: EP2129967A1
Author: Matthias Grundmann
Assignee: BIOFLAME FUELS Ltd

A residence chamber (1) for use in treating products of combustion is disclosed. The chamber comprises a first chamber part (10), having a gas inlet (12) and a gas outlet (14), and a baffle (16). The baffle is located in the first chamber part and is arranged so as to cause gas entering the chamber through the inlet to travel initially in a first helical path and subsequently in a second helical path. The second helical path is in a second, opposed axial direction and is inside the first helical path. The gas is arranged to exit the first chamber part through the outlet.

Publication date: 07-11-2024

Cross-platform distillation framework

Number: US20240370717A1
Assignee: Google LLC

A method for a cross-platform distillation framework includes obtaining a plurality of training samples. The method includes generating, using a student neural network model executing on a first processing unit, a first output based on a first training sample. The method also includes generating, using a teacher neural network model executing on a second processing unit, a second output based on the first training sample. The method includes determining, based on the first output and the second output, a first loss. The method further includes adjusting, based on the first loss, one or more parameters of the student neural network model. The method includes repeating the above steps for each training sample of the plurality of training samples.
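
Under the (assumed) mapping of "processing units" onto PyTorch devices, the training loop reads roughly as follows; the tiny models, MSE distillation loss and hyperparameters are placeholders rather than the claimed framework.

```python
import torch
from torch import nn

# Two processing units: fall back to CPU for both if no accelerator is present.
student_device = torch.device("cpu")
teacher_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10)).to(student_device)
teacher = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 10)).to(teacher_device)
teacher.eval()

optimizer = torch.optim.SGD(student.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

training_samples = [torch.randn(8, 16) for _ in range(100)]

for sample in training_samples:
    # First output: the student model on its own processing unit.
    student_out = student(sample.to(student_device))
    # Second output: the teacher model on the other processing unit (no gradients).
    with torch.no_grad():
        teacher_out = teacher(sample.to(teacher_device)).to(student_device)
    # Loss between the two outputs, then a student parameter update.
    loss = loss_fn(student_out, teacher_out)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```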

Publication date: 06-02-2018

Cascaded camera motion estimation, rolling shutter detection, and camera shake detection for video stabilization

Number: US09888180B2
Assignee: Google LLC

An easy-to-use online video stabilization system and methods for its use are described. Videos are stabilized after capture, and therefore the stabilization works on all forms of video footage including both legacy video and freshly captured video. In one implementation, the video stabilization system is fully automatic, requiring no input or parameter settings by the user other than the video itself. The video stabilization system uses a cascaded motion model to choose the correction that is applied to different frames of a video. In various implementations, the video stabilization system is capable of detecting and correcting high frequency jitter artifacts, low frequency shake artifacts, rolling shutter artifacts, significant foreground motion, poor lighting, scene cuts, and both long and short videos.
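
A toy version of a cascaded motion model: fit progressively richer 2D models (translation, then similarity, then affine) to tracked feature pairs and keep the simplest one whose residual is already small. The real system's model set, validity tests and homography stage are richer than this sketch; names and thresholds are assumptions.

```python
import numpy as np

def fit_translation(src, dst):
    t = (dst - src).mean(axis=0)
    return lambda p: p + t

def fit_similarity(src, dst):
    """Closed-form least-squares scale + rotation + translation."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    a, b = src - mu_s, dst - mu_d
    sxx = np.sum(a[:, 0] * b[:, 0] + a[:, 1] * b[:, 1])
    sxy = np.sum(a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0])
    theta = np.arctan2(sxy, sxx)
    scale = np.hypot(sxx, sxy) / np.sum(a ** 2)
    R = scale * np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])
    t = mu_d - R @ mu_s
    return lambda p: p @ R.T + t

def fit_affine(src, dst):
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)      # 3x2 affine matrix
    return lambda p: np.hstack([p, np.ones((len(p), 1))]) @ M

def cascaded_motion_model(src, dst, rms_threshold=1.0):
    """Keep the simplest model that already explains the tracked features well;
    richer models are only used when the simpler ones leave large residuals."""
    for name, fitter in [("translation", fit_translation),
                         ("similarity", fit_similarity),
                         ("affine", fit_affine)]:
        model = fitter(src, dst)
        rms = np.sqrt(np.mean(np.sum((model(src) - dst) ** 2, axis=1)))
        if rms < rms_threshold:
            return name, model
    return name, model

# Toy usage: feature tracks related by a small rotation plus noise.
rng = np.random.default_rng(0)
src = rng.uniform(0, 640, size=(50, 2))
angle = np.deg2rad(2.0)
R = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
dst = src @ R.T + np.array([5.0, -3.0]) + rng.normal(0, 0.3, size=(50, 2))
name, model = cascaded_motion_model(src, dst)
print("selected model:", name)   # similarity on this example
```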

Publication date: 09-05-2017

Tracking and distorting image regions

Number: US09646222B1
Assignee: Google LLC

Systems and methods are disclosed for tracking and distorting regions within a media item. A method includes identifying a region in a first frame of a media item using a first user specified position, calculating based on tracking data an estimated position of the region within a second frame of the media item and an estimated position of the region within a third frame of the media item, adjusting based on user input the estimated position of the region within the second frame to a second user specified position, blending the estimated position within the third frame based on the user specified position of the second frame to generate a blended position within the third frame, and modifying the third frame to distort the region underlying the blended position.
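
A small sketch of the blending step only: user corrections at keyframes are turned into offsets against the tracker's estimates and linearly interpolated across the in-between frames, so a correction in one frame also nudges nearby frames. The data layout and interpolation scheme are assumptions.

```python
import numpy as np

def blend_positions(tracked, corrections):
    """Blend tracker estimates with user-specified keyframe positions by linearly
    interpolating the correction offset between neighbouring keyframes."""
    tracked = np.asarray(tracked, dtype=float)
    frames = sorted(corrections)
    offsets = {f: np.asarray(corrections[f], dtype=float) - tracked[f] for f in frames}
    blended = tracked.copy()
    for i in range(len(tracked)):
        prev = max([f for f in frames if f <= i], default=None)
        nxt = min([f for f in frames if f >= i], default=None)
        if prev is None and nxt is None:
            continue
        if prev is None:
            off = offsets[nxt]
        elif nxt is None or prev == nxt:
            off = offsets[prev]
        else:
            w = (i - prev) / (nxt - prev)
            off = (1 - w) * offsets[prev] + w * offsets[nxt]
        blended[i] = tracked[i] + off
    return blended

# Toy usage: a drifting tracker estimate over 5 frames, corrected at frames 0, 1 and 4.
tracked = [(100, 100), (104, 101), (108, 103), (112, 104), (116, 106)]
corrections = {0: (100, 100), 1: (102, 100), 4: (110, 104)}
print(blend_positions(tracked, corrections))
```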

Publication date: 25-04-2017

Cascaded camera motion estimation, rolling shutter detection, and camera shake detection for video stabilization

Number: US09635261B2
Assignee: Google LLC

An easy-to-use online video stabilization system and methods for its use are described. Videos are stabilized after capture, and therefore the stabilization works on all forms of video footage including both legacy video and freshly captured video. In one implementation, the video stabilization system is fully automatic, requiring no input or parameter settings by the user other than the video itself. The video stabilization system uses a cascaded motion model to choose the correction that is applied to different frames of a video. In various implementations, the video stabilization system is capable of detecting and correcting high frequency jitter artifacts, low frequency shake artifacts, rolling shutter artifacts, significant foreground motion, poor lighting, scene cuts, and both long and short videos.

Publication date: 11-04-2017

Generating compositions

Number: US09619732B2
Assignee: Google LLC

Implementations generally relate to generating compositional media content. In some implementations, a method includes receiving a plurality of photos from a user, and determining one or more composition types from the photos. The method also includes generating compositions from the selected photos based on the one or more determined composition types. The method also includes providing the one or more generated compositions to the user.

Publication date: 21-03-2017

System and method for utilizing motion fields to predict evolution in dynamic scenes

Number: US09600760B2
Assignee: Disney Enterprises Inc

Described herein are methods, systems, apparatuses and products for utilizing motion fields to predict evolution in dynamic scenes. One aspect provides for accessing active object position data including positioning information of a plurality of individual active objects; extracting a plurality of individual active object motions from the active object position data; constructing a motion field using the plurality of individual active object motions; and using the motion field to predict one or more points of convergence at one or more spatial locations that active objects are proceeding towards at a future point in time. Other embodiments are disclosed.
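
A hypothetical sketch of the pipeline: average individual object velocities into a coarse grid motion field, then advect the objects along that field and take the mean end point as a crude estimate of where the action is converging. Grid size, pitch extent and the advection scheme are all illustrative, not the patented method.

```python
import numpy as np

def build_motion_field(positions, velocities, grid_shape=(10, 10), extent=100.0):
    """Average individual object velocities into a coarse grid motion field."""
    field = np.zeros(grid_shape + (2,))
    counts = np.zeros(grid_shape)
    cell = extent / np.array(grid_shape)
    for p, v in zip(positions, velocities):
        i, j = np.minimum((np.array(p) // cell).astype(int), np.array(grid_shape) - 1)
        field[i, j] += v
        counts[i, j] += 1
    nonzero = counts > 0
    field[nonzero] /= counts[nonzero][:, None]
    return field

def predict_convergence(positions, field, steps=20, dt=1.0, extent=100.0):
    """Advect the objects along the motion field and return the mean end point."""
    pts = np.array(positions, dtype=float)
    cell = extent / np.array(field.shape[:2])
    for _ in range(steps):
        idx = np.minimum((pts // cell).astype(int), np.array(field.shape[:2]) - 1)
        pts += field[idx[:, 0], idx[:, 1]] * dt
        pts = np.clip(pts, 0, extent - 1e-6)
    return pts.mean(axis=0)

# Toy usage: players on a 100x100 pitch all moving roughly toward (80, 50).
rng = np.random.default_rng(1)
positions = rng.uniform(10, 90, size=(12, 2))
velocities = (np.array([80.0, 50.0]) - positions) * 0.05
field = build_motion_field(positions, velocities)
print("predicted convergence point:", predict_convergence(positions, field))
```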

Publication date: 24-01-2017

Methods and systems for processing a video for stabilization using dynamic crop

Number: US09554043B2
Assignee: Google LLC

Methods and systems for processing a video for stabilization are described. A recorded video may be stabilized by removing at least a portion of shake introduced in the video. An original camera path for a camera used to record the video may be determined. A crop window size may be selected, a crop window transform may accordingly be determined, and the crop window transform may be applied to the original video to provide a modified video from a viewpoint of the modified motion camera path.

Publication date: 31-05-2016

Methods and systems for removal of rolling shutter effects

Number: US09357129B1
Assignee: Google LLC

Methods and systems for rolling shutter removal are described. A computing device may be configured to determine, in a frame of a video, distinguishable features. The frame may include sets of pixels captured asynchronously. The computing device may be configured to determine for a pixel representing a feature in the frame, a corresponding pixel representing the feature in a consecutive frame; and determine, for a set of pixels including the pixel in the frame, a projective transform that may represent motion of the camera. The computing device may be configured to determine, for the set of pixels in the frame, a mixture transform based on a combination of the projective transform and respective projective transforms determined for other sets of pixels. Accordingly, the computing device may be configured to estimate a motion path of the camera to account for distortion associated with the asynchronous capturing of the sets of pixels.
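
A translation-only stand-in for the mixture-transform idea (the described system uses projective transforms per set of asynchronously captured pixels): estimate one motion per row block, blend the block estimates per row with Gaussian weights, and shift rows back accordingly. All values are illustrative.

```python
import numpy as np

def mixture_transforms(block_shifts, num_rows, sigma=40.0):
    """For every image row, blend the per-block motion estimates with Gaussian
    weights centred on that row (translation-only stand-in for the projective case)."""
    num_blocks = len(block_shifts)
    block_centres = (np.arange(num_blocks) + 0.5) * num_rows / num_blocks
    rows = np.arange(num_rows)[:, None]
    w = np.exp(-((rows - block_centres[None, :]) ** 2) / (2 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)
    return w @ np.asarray(block_shifts)            # (num_rows, 2): one shift per row

def unwarp_rolling_shutter(frame, row_shifts):
    """Shift each row back by its estimated motion to undo rolling-shutter skew."""
    out = np.zeros_like(frame)
    for r, (dx, _) in enumerate(row_shifts):
        out[r] = np.roll(frame[r], -int(round(dx)), axis=0)
    return out

# Toy usage: a 240-row frame and hypothetical per-block motion estimates that
# grow toward the bottom of the frame (later-captured rows move further).
frame = np.zeros((240, 320), dtype=np.uint8)
frame[:, 150:170] = 255
block_shifts = [(2.0, 0.0), (4.0, 0.0), (6.0, 0.0), (8.0, 0.0)]
row_shifts = mixture_transforms(block_shifts, num_rows=frame.shape[0])
corrected = unwarp_rolling_shutter(frame, row_shifts)
```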
