Total found: 6978. Displayed: 199.
Publication date: 10-02-2015

METHOD AND SYSTEM FOR DYNAMIC GENERATION OF THREE-DIMENSIONAL ANIMATION EFFECTS

Number: RU2540786C2

The invention relates to means for generating animation effects on a three-dimensional display. The technical result is the automatic creation of three-dimensional animation effects on an image in real time. In the method, a visual multimedia object is selected for display; regions of interest are detected on the multimedia object and their features are computed; a three-dimensional scene containing the multimedia object is built; a set of three-dimensional visual objects of the animation effect is created in the scene in accordance with the regions of interest and their features; and successive transformations of the three-dimensional objects in the scene space and of the scene itself are performed. 2 independent and 8 dependent claims, 11 figures.

Publication date: 10-01-2009

METHOD FOR SYNTHESIZING DYNAMIC VIRTUAL IMAGES

Number: RU2343543C2
Author: XIONG Pu (CN)

The invention relates to image processing technologies, in particular to a method for synthesizing dynamic virtual images. The technical result is an improved service for the user. The method comprises the following actions: a) the synthesizing server side receives a user request to synthesize a virtual image sent by the user and, in accordance with the request information, obtains the image files of all components for synthesizing the virtual image; b) the corresponding component image files are read one by one according to the layer numbers of each component, and the obtained component image files are transformed into a given format; c) the component formatted at step b) and a pre-read template file are synthesized to form an intermediate image; d) it is determined whether all components have been synthesized; if so, the process proceeds to step e); otherwise it proceeds ...
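Steps a)–d) amount to compositing component images over a template in ascending layer order. A minimal Python sketch under assumed data layouts (sparse `(x, y) -> pixel` dictionaries, `None` as transparency); the function name and formats are illustrative, not taken from the patent:

```python
def synthesize_avatar(components, template):
    """Composite component images over a template image in layer order.
    Each component is (layer_number, image), where image maps (x, y) -> pixel
    and None marks a transparent pixel."""
    canvas = dict(template)                       # start from the template image
    for layer, image in sorted(components, key=lambda c: c[0]):
        for xy, pixel in image.items():
            if pixel is not None:                 # skip transparent pixels
                canvas[xy] = pixel
    return canvas

template = {(0, 0): "bg", (1, 0): "bg"}
components = [(2, {(0, 0): "hat"}),               # higher layer drawn last
              (1, {(0, 0): "face", (1, 0): "face"})]
print(synthesize_avatar(components, template))
# -> {(0, 0): 'hat', (1, 0): 'face'}
```

Sorting by layer number before drawing is what makes the higher-numbered component win where they overlap.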

Publication date: 05-09-2022

METHOD AND SYSTEM FOR AUTOMATED CONSTRUCTION OF A VIRTUAL SCENE FROM THREE-DIMENSIONAL PANORAMAS

Number: RU2779245C1

The invention relates to the field of computing. The technical result is increased accuracy when building a virtual scene from three-dimensional panoramas captured with a 3D camera under insufficient lighting. The technical result is achieved by obtaining, at each capture point, data containing photographic images of the surrounding space and a point cloud characterizing the geometry of the space; determining for each point the illumination level of the photographic images, characterized by a threshold value T; registering the captured data at each point, where the data of each subsequent point is registered to the data of the previous point, and for points whose photographic illumination is below the threshold T the registration to the previous point is performed using the point cloud data alone; and building the virtual scene from the completed registration of the capture points. 2 independent and 2 dependent claims, 6 figures.
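The registration rule can be sketched as a simple chain: each capture point is aligned to the previous one, and points whose photo illumination falls below the threshold T fall back to point-cloud-only alignment. All names (`register_route`, the 0–1 illumination scale, `T = 0.5`) are illustrative assumptions, and the alignment itself is a stub:

```python
def align(cur, prev, use_photos):
    """Stub for pairwise alignment; a real system would solve for a transform."""
    mode = "photo+cloud" if use_photos else "cloud-only"
    return {"pair": (prev["id"], cur["id"]), "mode": mode}

def register_route(scans, threshold_T):
    """Chain registration: each scan is aligned to the previous one; scans whose
    photo illumination is below threshold_T use the point cloud alone."""
    links = []
    for prev, cur in zip(scans, scans[1:]):
        use_photos = cur["illumination"] >= threshold_T
        links.append(align(cur, prev, use_photos))
    return links

scans = [
    {"id": 0, "illumination": 0.9},
    {"id": 1, "illumination": 0.2},   # too dark: geometry only
    {"id": 2, "illumination": 0.8},
]
print([l["mode"] for l in register_route(scans, threshold_T=0.5)])
# -> ['cloud-only', 'photo+cloud']
```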

Publication date: 14-11-2019

TACTILELY CORRELATED GRAPHIC EFFECTS

Number: RU2706182C1

The invention relates to means for forming an animated distortion on a display. The technical result is the provision of animated distortions on a display that move toward and away from a user viewing the screen, forming a visual wave effect. The method includes dividing the display of an electronic device into a plurality of regions defined by vertices, computing time-varying positions for each vertex along the z dimension, and composing a screen for the display that includes the changing positions of each vertex to form an animated distortion on the display. The time-varying positions alternate between increased and decreased values, so the distortion on the display appears to move toward and away from the user viewing the screen. At the computation step, a sinusoidal function of a piston rod whose length changes over time is used, the piston being connected to a piston rod that ...
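The core of the computation step is a time-varying sinusoidal z offset per vertex. A minimal sketch, assuming a radial wave and illustrative constants; the patent's piston-rod formulation is not reproduced here:

```python
import math

def vertex_z(x, y, t, amplitude=1.0, wavelength=4.0, speed=2.0):
    """Time-varying z position for a display vertex: a radial sine wave, so
    regions of the screen appear to move toward and away from the viewer."""
    r = math.hypot(x, y)                  # distance from the wave origin
    return amplitude * math.sin(2 * math.pi * r / wavelength - speed * t)

# z oscillates within [-amplitude, +amplitude] as t advances
zs = [vertex_z(1.0, 0.0, t * 0.1) for t in range(100)]
print(min(zs) >= -1.0 and max(zs) <= 1.0)  # -> True
```

Sampling the same function across a grid of vertices per frame would yield the wave effect the abstract describes.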

Publication date: 19-08-2024

CONTROLLING ELECTRICAL COMPONENTS USING GRAPHICS FILES

Number: RU2825019C1
Assignee: ILLUMINA, INC. (US)

The invention relates to the field of controlling electrical components. The technical result is the ability to use graphics files to control the operation of light-emitting diodes in dynamic patterns, with seamless transitions between patterns at the hardware level and without transitional fades. The method includes accessing a first animation graphics file, a second animation graphics file, a template graphics file and a predefined threshold value of an electrical parameter for an array of electrical components; and performing per-pixel computations on combinations of rows from the first animation graphics file, the second animation graphics file and the template graphics file, where each per-pixel computation includes determining a first scaled pixel value by multiplying a first animation pixel value from the first animation graphics file by a first template pixel value from the template graphics file, and determining a second scaled ...
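The per-pixel computation described above reduces to multiplying each animation pixel by a normalized template pixel. A minimal sketch in Python; the function name `scale_row` and the 0–255 value range are assumptions for illustration, not from the patent:

```python
def scale_row(anim_row, template_row, max_value=255):
    """Scale one row of animation pixel values by a template row.
    Both rows hold integers in 0..max_value; the template acts as a mask."""
    return [
        round(a * t / max_value)          # normalize the template to [0, 1]
        for a, t in zip(anim_row, template_row)
    ]

anim = [255, 128, 0, 64]
template = [255, 255, 0, 128]             # full, full, off, half brightness
print(scale_row(anim, template))  # -> [255, 128, 0, 32]
```

Repeating this for the second animation file and summing the two scaled values would give the blended row the abstract goes on to describe.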

Publication date: 20-07-2014

METHOD AND SYSTEM FOR DYNAMIC GENERATION OF THREE-DIMENSIONAL ANIMATION EFFECTS

Number: RU2013101015A

... 1. A method for dynamically generating three-dimensional animation effects on a three-dimensional display, comprising the following operations: selecting a visual multimedia object for display; detecting regions of interest on the multimedia object and computing their features; building a three-dimensional scene containing the multimedia object; creating in this three-dimensional scene a set of three-dimensional visual objects of the animation effect in accordance with the regions of interest and their features; and performing successive transformations of the three-dimensional objects in the scene space and of the scene itself such that rendering the scene produces a three-dimensional animation effect. 2. The method according to claim 1, characterized in that the visual multimedia object is selected from two-dimensional and three-dimensional images and video sequences. 3. The method according to claim 1, characterized in that detecting regions of interest on the visual multimedia object includes pre-processing of the object, comprising ...

Publication date: 12-09-2001

Image processing using parametric models

Number: GB0002360183A

A method and apparatus are provided for determining a set of appearance parameters representative of the appearance of an object within an image. The system employs a parametric model which relates appearance parameters to corresponding image data of the object as well as a number of matrices, each of which relates a change in the appearance parameters to an image error between image data of the object from the image and image data generated from the appearance parameters and the parametric model. The system uses these matrices to iteratively modify an initial estimate of the appearance parameters until convergence has been reached. The parametric model is preferably obtained through a principal component analysis of shape and texture data extracted from a large number of training images.
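The iterative update the abstract describes, where a precomputed matrix maps an image error to a change in appearance parameters, can be sketched on a toy one-parameter model. The model, the update matrix and the tolerance below are all illustrative assumptions:

```python
def fit_appearance(params, sample_image, generate, update_matrix, iters=20):
    """Iteratively refine appearance parameters: compute the image error
    between the sample and the model-generated image, map it through the
    precomputed update matrix, and adjust the parameters."""
    for _ in range(iters):
        error = [s - g for s, g in zip(sample_image, generate(params))]
        delta = [sum(r * e for r, e in zip(row, error)) for row in update_matrix]
        params = [p + d for p, d in zip(params, delta)]
    return params

# Toy linear "model": one parameter scaling a fixed two-pixel basis.
generate = lambda p: [p[0] * 1.0, p[0] * 2.0]
target = generate([3.0])            # image produced by the true parameter 3.0
R = [[0.1, 0.1]]                    # hand-tuned update matrix for this toy
fitted = fit_appearance([0.0], target, generate, R)
print(abs(fitted[0] - 3.0) < 0.01)  # -> True
```

In the real system the update matrices come from training (e.g. regressing parameter perturbations against image differences), not hand-tuning.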

Publication date: 18-03-2009

Display of fluid body simulation using concentration spheres

Number: GB2452809A

Thin films or sharp edges in a fluid body are expressed in the display of a particle-based fluid-body simulation. The surface construction method is a method applied to a method for rendering calculation results on the screen of a display device using data that are obtained by calculation of a fluid-body simulation based on a particle method executed by a CPU or the like. The method has a first stage of allocating a concentration sphere to a particle that is a calculation object and computing an implicit function curved surface, and computing a plurality of base vertices (V0) for creating a fluid-body surface by the implicit function curved surface; and a second stage that is executed at least one time for adjusting a surface sharpness for each of the plurality of base vertices (V0) for creating the fluid-body surface that is computed in the first stage.
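The first stage's implicit function can be sketched as a sum of smooth kernels ("concentration spheres") around each particle; the fluid surface would then be a level set of this field. The kernel shape and the threshold value are illustrative assumptions:

```python
import math

def field(point, particles, radius=1.0):
    """Implicit field from concentration spheres: each particle contributes a
    smooth kernel that falls to zero at its radius. The surface is the level
    set where the field equals some threshold."""
    total = 0.0
    for p in particles:
        d = math.dist(point, p)
        if d < radius:
            total += (1.0 - (d / radius) ** 2) ** 2   # smooth falloff kernel
    return total

particles = [(0.0, 0.0, 0.0), (0.6, 0.0, 0.0)]
inside = field((0.3, 0.0, 0.0), particles)    # between the two particles
outside = field((3.0, 0.0, 0.0), particles)   # far from both
print(inside > 0.5 > outside)  # -> True
```

Polygonizing that level set (e.g. with marching cubes) would yield the base vertices the abstract mentions; the sharpness-adjustment stage is beyond this sketch.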

Publication date: 11-12-2019

Computer Animation Method and Apparatus

Number: GB0002574460A

A method of creating a computer animation of a model such as a skeletal model, the method comprises: a) providing a plurality of keyframes that describe a pose of a model at a plurality of times; b) identifying a plurality of intermediate frame times in an interval between the plurality of the keyframes; c) for each intermediate frame time, identifying a weighting value associated with the intermediate frame time; and d) for each intermediate frame time, creating an intermediate frame by a combination of interpolation between adjacent keyframes and inverse kinematics (IK); wherein e) the extent to which each of interpolation and inverse kinematics determines the content of each intermediate frame is dependent upon the weighting value associated with that intermediate frame. Typically the model includes a representation of a foot walking on terrain or head tracking. Preferably float-streams having a series of weighted values describe how much IK blending is used within framed animations.
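Steps c)–e) can be sketched as a two-stage blend: interpolate between keyframes, then blend toward an externally supplied IK solution by the per-frame weight. Representing poses as joint-angle dictionaries and the names below are assumptions, not the patent's structures:

```python
def lerp(a, b, u):
    """Linear interpolation between two values."""
    return a + (b - a) * u

def intermediate_pose(key_a, key_b, u, ik_pose, weight):
    """Blend keyframe interpolation with an IK solution by a per-frame weight:
    weight = 0 -> pure interpolation, weight = 1 -> pure IK."""
    interp = {j: lerp(key_a[j], key_b[j], u) for j in key_a}
    return {j: lerp(interp[j], ik_pose[j], weight) for j in interp}

key_a = {"knee": 0.0}
key_b = {"knee": 90.0}
ik = {"knee": 60.0}                               # e.g. foot planted on terrain
print(intermediate_pose(key_a, key_b, 0.5, ik, 0.0))  # -> {'knee': 45.0}
print(intermediate_pose(key_a, key_b, 0.5, ik, 1.0))  # -> {'knee': 60.0}
```

A per-frame weight stream (the "float-streams" of the abstract) would simply supply a different `weight` for each intermediate frame time.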

Publication date: 12-09-2012

Real-time Eulerian water simulation using a restricted tall cell grid

Number: GB0201213354D0

Publication date: 21-10-2015

Computer implemented methods and systems for generating virtual body models for garment fit visualisation

Number: GB0201515814D0

Publication date: 21-10-2015

Computer implemented methods and systems for generating virtual body models for garment fit visualisation

Number: GB0201515817D0

Publication date: 06-12-2017

Generation of three dimensional fashion objects by drawing inside a virtual reality environment

Number: GB0201717379D0

Publication date: 15-05-2011

FOUR-DIMENSIONAL RECONSTRUCTION OF SEVERAL PHASES PERIODIC MOVEMENT EXHIBITING REGIONS

Number: AT0000506662T

Publication date: 29-10-2020

VIRTUAL REALITY SIMULATION

Number: AU2020244454A1
Assignee: AJ PARK

The present disclosure generally relates to virtual reality simulation, and more specifically, in some implementations, to devices, systems, and methods for use in a virtual reality sports simulation. A system for virtual reality simulation may include an accessory (e.g., one or more of a bat, a glove, or a helmet) for interacting with a virtual reality environment. The accessory may provide the user with haptic feedback that emulates sensations that the user would experience when playing a live-action sport to provide the user with a more meaningful and realistic experience when playing a virtual reality game. Further, virtual reality simulations disclosed herein may include incorporating data from a live-action event (e.g., a live-action sporting event) into a virtual reality environment to provide a user with a realistic experience.

Publication date: 20-05-2021

Cartoonify Image Detection Using Machine Learning

Number: AU2021101766A4

Our invention, Cartoonify Image Detection Using Machine Learning, is a face recognition process based on machine learning and an artificial neural network. It includes an independent pixel collection unit that produces pixels with facial feature patterns and chooses a fixed number of self-pixels from the produced pixels. The innovation concerns an eigen-filtering device which filters an input image with the selected pixels. It also includes a face recognition product and a determining device that receives the identification product and produces the final picture recognition result under machine learning. A 3-D avatar that resembles the physical presence of a person captured in one or more input pictures or video frames can be produced autonomously. In the editing world, the invention is often an avatar that can be personalised and used in different apps, including but not restricted to games, social networking and video conferencing. The innovation includes the following steps: cartoon formation process ...

Publication date: 23-01-2020

METHOD IMPLEMENTED BY COMPUTER FOR THE CREATION OF CONTENTS COMPRISING SYNTHESIS IMAGES

Number: CA0003102192A1

The invention concerns a computer-implemented method for creating animation content collaboratively and in a unified real-time process, characterized in that it comprises, on the one hand, steps for producing and distributing computer-generated animation content designed to be carried out through the combined action of a plurality of terminals and a central server, and, on the other hand, steps for managing this animation content adapted to allow the central server to centralize and manage all the data produced during the production steps.

Publication date: 25-09-2007

ANIMATION OF THREE-DIMENSIONAL CHARACTERS ALONG A PATH

Number: CA0002369664C
Assignee: AVID TECHNOLOGY, INC.

A character is represented in a character generator as a set of polygons. The character may be manipulated using three-dimensional animation techniques. A code for a character may be used to access a set of curves defining the outline of the character. This set of curves is transformed into a set of polygons. The set of polygons may be rendered as a three-dimensional object. The set of polygons may be created by converting the curves into sets of connected line segments and then tessellating the polygon defined by the line segments. Animation properties are represented using a normalized scale along a path or over time. Animation may be provided in a manner that is independent of the spatial and temporal resolution of the video to which it is applied. Such animation may be applied to characters defined by a set of polygons. Various three-dimensional spatial transformations, lighting effects and other colorizations may be provided. A user interface for editing a character string may provide ...

Publication date: 17-09-2019

METHOD FOR SCRIPTING INTER-SCENE TRANSITIONS

Number: CA0002669409C
Assignee: EVERYSCAPE, INC.

A method for authoring and displaying a virtual tour of a three-dimensional space which employs transitional effects simulating motion. An authoring tool is provided for interactively defining a series of locations in the space for which two-dimensional images, e.g., panoramas, photographs, etc., are available. A user identifies one or more view directions for a first-person perspective viewer for each location. For pairs of locations in the series, transitional effects are identified to simulate smooth motion between the pair of locations. The authoring tool stores data corresponding to the locations, view directions and transitional effects for playback on a display. When the stored data is accessed, a virtual tour of the space is created that includes transitional effects simulating motion between locations. The virtual tour created can allow a viewer to experience the three-dimensional space in a realistic manner.

Publication date: 18-02-2014

LARGE MESH DEFORMATION USING THE VOLUMETRIC GRAPH LAPLACIAN

Number: CA0002606794C
Assignee: MICROSOFT CORPORATION

Large mesh deformation using the volumetric graph Laplacian is described. In one aspect, information is received from a user, wherein the information indicates how an original mesh is to be deformed. The original mesh is then deformed based on the information and application of a volumetric differential operator to a volumetric graph generated from the original mesh.

Publication date: 01-10-2010

MICROWAVE ABLATION SYSTEM WITH USER-CONTROLLED ABLATION SIZE AND METHOD OF USE

Number: CA0002698862A1

Disclosed is a system and method for enabling user preview and control of the size and shape of an electromagnetic energy field used in a surgical procedure. The disclosed system includes a selectively activatable source of microwave surgical energy in the range of about 900 MHz to about 5 GHz in operable communication with a graphical user interface and a database. The database is populated with data corresponding to the various surgical probes, such as microwave ablation antenna probes, that may include a probe identifier, the probe diameter, operational frequency of the probe, ablation length of the probe, ablation diameter of the probe, a temporal coefficient, a shape metric, and the like. The probe data is graphically presented on the graphical user interface where the surgeon may interactively view and select an appropriate surgical probe. Three-dimensional views of the probe(s) may be presented allowing the surgeon to interactively rotate the displayed image.

Publication date: 30-12-2011

SEAMLESS FRACTURE IN A PRODUCTION PIPELINE

Number: CA0002742427A1

Systems and processes for rendering fractures in an object are provided. In one example, a surface representation of an object may be converted into a volumetric representation of the object. The volumetric representation of the object may be divided into volumetric representations of two or more fragments. The volumetric representations of the two or more fragments may be converted into surface representations of the two or more fragments. Additional information associated with attributes of adjacent fragments may be used to convert the volumetric representations of the two or more fragments into surface representations of the two or more fragments. The surface representations of the two or more fragments may be displayed.

Publication date: 20-06-2013

SYSTEM FOR FILMING A VIDEO MOVIE

Number: CA0002856464A1

A system for filming a video movie in a real space, comprising: a filming camera (9); a sensor (16); a computerized tracking module (27) for determining the location of the filming camera; a control screen (15); and a computerized compositing module (32) for generating on the control screen (15) a composite image of the real image and of a projection of a virtual image generated according to the location data of the filming camera (9).

Publication date: 04-12-2013

System and method for generating a video

Number: CN103428446A

The application relates to a system and a method for generating a video. The method of generating an image enables animating an avatar on a device with limited processing capabilities. The method includes receiving, on a first computing device, a first image; sending, on a data interface of the first computing device, the first image to a server; receiving, on the data interface and from the server, shape data corresponding to an aspect of the first image; and generating, by a processor of the first computing device, a primary output image based at least upon the shape data and avatar data.

Publication date: 29-10-2019

Information processing apparatus and recording medium

Number: CN0110392206A

Publication date: 12-10-2018

Rendering settings chart

Number: CN0104050701B

Publication date: 24-10-2007

Method for synthesizing dynamic virtual images

Number: CN0100345164C
Author: XIONG PU

Publication date: 08-02-2017

METHOD FOR PROVIDING SHOES KIT

Number: KR101704696B1
Author: NA, YONG HWAN
Assignee: NA, YONG HWAN

The present invention relates to a method for providing a shoe kit and a system using the same. According to the present invention, a consumer can purchase customized shoes by purchasing semi-finished shoes together with various pieces of visual information about them. Specifically, once an image of the semi-finished shoes is given to the consumer, the consumer can visually check the shoes and change some parts of them to suit the consumer. Therefore, the consumer can efficiently purchase shoes with a wide range of choice. (S110) Selection information providing step of transmitting a product information list including a finished product image for at least one product matching an item selected by a client. (S120) Semi-finished product image providing step of transmitting a semi-finished product image including at least one among a two-dimensional image and a three-dimensional image for each portion of a corresponding product to the client, when any one product ...

Publication date: 20-03-2020

Auto-generation system of 4D Building Information Modeling animation

Number: KR0102091721B1

Publication date: 02-07-2018

METHOD AND SYSTEM FOR EDITING A SCENE IN THREE-DIMENSIONAL SPACE

Number: KR0101860313B1
Author: 유쉬엔 리 (Yu-Hsien Li)
Assignee: 프래미 인코퍼레이티드 (Framy Inc.)

... The present invention relates to a method and system for editing a three-dimensional scene. It allows a user to operate a smart device to determine an editing position in three-dimensional space in an intuitive way: sensors in the device acquire a position signal in three-dimensional space, and the user touches the screen to edit an object directly, recording the object's position, size, rotation angle or rotation direction. In addition, software records movement and changes within the 3D scene to create a video inside a single 3D scene. During playback, according to the spatial position at which the user operates the smart device, the movement and changes of one or more objects in the 3D scene are reproduced in that space, achieving the goal of letting the user edit a scene in three-dimensional space intuitively.

Publication date: 14-10-2019

Number: KR0102031647B1

Publication date: 31-10-2013

TERMINAL FOR PROVIDING AUGMENTED REALITY

Number: KR0101324336B1

Publication date: 05-12-2017

METHOD, SYSTEM AND COMPUTER-READABLE RECORDING MEDIUM FOR AUTHORING ANIMATION

Number: KR1020170133294A
Author: JEON, JAE WOONG

According to an aspect of the present invention, provided is a method for authoring an animation, which includes the steps of: providing a motion window corresponding to at least one motion performed by a character included in the animation; and determining a direction which the character faces, or a direction in which the at least one motion is performed, when the character performs the at least one motion in the animation. Accordingly, the present invention can provide an animation authoring tool to easily control the interaction between motions. (110) Moving line managing part (120) Motion sequence managing part (130) Motion window managing part (140) Motion attribute managing part (150) Motion interaction managing part (160) Animation managing part (170) Communication part (180) Control part ...

Publication date: 07-10-2020

IN-VEHICLE AVATAR PROCESSING APPARATUS AND METHOD OF CONTROLLING THE SAME

Number: KR1020200114054A

Publication date: 04-01-2007

LARGE MESH DEFORMATION USING THE VOLUMETRIC GRAPH LAPLACIAN

Number: WO2007002453A2

Large mesh deformation using the volumetric graph Laplacian is described. In one aspect, information is received from a user, wherein the information indicates how an original mesh is to be deformed. The original mesh is then deformed based on the information and application of a volumetric differential operator to a volumetric graph generated from the original mesh.

Publication date: 09-03-2006

IMAGING SYSTEM FOR DISPLAYING A STRUCTURE OF TEMPORALLY CHANGING CONFIGURATION

Number: WO2006025005A2

The present invention relates to an imaging system for displaying image data representative of a structure of temporally changing configuration. The imaging system comprises display rendering means for processing data representative of the changing configuration of the structure and rendering a display comprising a reference image of the structure in a preselected configuration and superposing with the reference image a changing image of the structure which changes in configuration with respect to time.

Publication date: 23-10-2014

VISUAL POSITIONING SYSTEM

Number: WO2014170758A2

A visual positioning system for indoor locations with associated content is provided herein. The system has a map creator and a viewer. The map creator maps the indoor location by acquiring plans thereof, detects paths through the location and associates with the paths frames relating to objects and views of the paths. The viewer allows a user to orient in the indoor location by locating the user with respect to a path. The viewer enhances GPS/WIFI/3G data by matching user-captured images with the frames, and then interactively displaying the user data from the mapped paths with respect to user queries.

Publication date: 30-06-2005

A METHOD AND SYSTEM FOR SYSTEM VISUALIZATION

Number: WO2005059696A2

In one embodiment, a computer-implemented method comprises receiving a time period indication selected by a user for a group of objects including a plurality of data points. The plurality of data points are mapped to features selected by the user. Key frames are generated for the group of objects for each time interval of the time period. Relations can be inserted between any pair of objects. The group of objects and relations are rendered using the key frames over the time period to generate an animation. An object position is offset during animation according to an elasticity variable associated with the user-selected relations. Positions in between key frames are interpolated to provide smooth rendering between variable time frames. In an alternate embodiment, the object position is offset during animation according to features of the group of objects selected by the user, with or without the elasticity variable.
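The interpolation with an elasticity offset can be sketched as a position that lags elastically behind the interpolated keyframe target; this is one simple reading of the abstract's "elasticity variable", with illustrative constants and names:

```python
def animate(key_frames, elasticity, steps):
    """Interpolate an object's position between successive key frames, then
    offset it by an elastic lag toward the interpolated target."""
    pos = key_frames[0]
    path = []
    for i in range(len(key_frames) - 1):
        a, b = key_frames[i], key_frames[i + 1]
        for s in range(steps):
            target = a + (b - a) * s / steps    # plain keyframe interpolation
            pos += (target - pos) * elasticity  # elastic offset toward target
            path.append(pos)
    return path

path = animate([0.0, 10.0], elasticity=0.5, steps=5)
print(len(path))         # -> 5
print(path[-1] < 10.0)   # -> True (the elastic lag trails the target here)
```

With `elasticity = 1.0` the offset vanishes and the object tracks the interpolated positions exactly; smaller values make it trail its target, which is the springy feel the abstract attributes to relations between objects.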

Publication date: 28-11-2002

METHOD FOR PROCESSING AN IMAGE SEQUENCE OF A DISTORTABLE 3-D OBJECT TO YIELD INDICATIONS OF THE OBJECT WALL DEFORMATIONS IN THE TIME

Number: WO0002095683A3

An image processing method for processing a sequence of images of a distortable 3-D object, each image being registered at a corresponding instant within the time interval of the sequence, comprising steps to construct and display an image of said 3-D object represented with regions, each region showing a quantified indication relating to its maximal contraction or relaxation within said interval. Each region of the constructed and displayed image is attributed a respective color from a color-coded scale that is a function of the calculated quantified indication relating to the maximum contraction or relaxation of said region. The quantified indications may be the instant when a face or region has had its maximum contraction or relaxation, the phase value corresponding to said maximum, or the delay to attain said maximum.

Publication date: 15-11-2012

EFFICIENT METHOD OF PRODUCING AN ANIMATED SEQUENCE OF IMAGES

Number: WO2012154502A3
Author: ANDERSON, Erik

A computer-based method of generating an animated sequence of images eliminates inefficiencies associated with a lighting process. The method begins with the provision of a frame for the animation sequence. The frame includes at least one asset, which may be a character, background, or other object. The frame is rendered to thereby produce a set of images each based upon a different lighting condition. The frame is then composited during which a subset of the images are selected from the set and then adjusted. Settings such as intensity and color balance are adjusted for each of the subset of images.
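The compositing stage, selecting a subset of the per-light renders and adjusting intensity and color balance before summing, can be sketched per pixel. The pass names, values and clamping behavior below are illustrative assumptions:

```python
def composite(light_passes, selection):
    """Sum a chosen subset of per-light render passes into one pixel, applying
    an intensity and a per-channel color balance to each selected pass."""
    out = [0.0, 0.0, 0.0]
    for name, intensity, balance in selection:
        pixel = light_passes[name]
        for c in range(3):
            out[c] += pixel[c] * intensity * balance[c]
    return [round(min(v, 1.0), 6) for v in out]   # clamp and tidy float noise

passes = {"key": [0.8, 0.7, 0.6], "rim": [0.1, 0.1, 0.3]}
frame = composite(passes, [("key", 1.0, [1.0, 1.0, 1.0]),
                           ("rim", 0.5, [1.0, 1.0, 2.0])])   # cool, dim rim
print(frame)  # -> [0.85, 0.75, 0.9]
```

Because each lighting condition is rendered once up front, relighting only reruns this cheap combination step rather than the renderer, which is the efficiency the abstract claims.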

Publication date: 08-03-2007

SYSTEM AND METHOD FOR COLLECTING AND MODELING OBJECT SIMULATION DATA

Number: WO000002007028090A3

A system and method for collecting and modeling simulated movement data on a graphical display is provided. Simulated movement data from a simulation is accessed from a database. The simulated movement data comprises each location of an object on a graphical display for multiple points in time of the simulation. A three-dimensional representation is associated with the object and the three-dimensional representation is displayed at each location on the graphical display for each point in time of the simulation.

Publication date: 01-11-2007

LAYERING METHOD FOR FEATHER ANIMATION

Number: WO000002007124042A3

A method of animating feather elements includes: specifying initial positions for a skin surface and for feather elements; specifying positions for the skin surface at an animated time; determining a feather-ordering sequence for placing the feather elements on the skin surface; determining positions for skirt elements that provide spatial extensions for the skin surface at the animated time; determining positions for feather-proxy elements that provide spatial extensions for the feather elements at the animated time; and determining positions for the feather elements at the animated time by extracting the feather elements from the feather-proxy elements. The feather-proxy elements are determined from the skirt elements according to the feather-ordering sequence, and the feather-proxy elements satisfy a separation criterion for avoiding intersections between the feather-proxy elements.

Publication date: 24-05-2012

A METHOD OF DISPLAYING READABLE INFORMATION ON A DIGITAL DISPLAY

Number: WO2012066190A4
Author: KOIVUSALO, Esko

This invention is aimed at a method, a programme, by which an animated impression of a three-dimensional information space is created, bringing forth a reading architecture on digital screens in which the presented readable information appears three-dimensionally and dynamically in the reader's field of vision.

Publication date: 07-05-2019

Accessory for virtual reality simulation

Number: US0010279269B2
Assignee: Centurion VR, LLC

The present disclosure generally relates to virtual reality simulation, and more specifically, in some implementations, to devices, systems, and methods for use in a virtual reality sports simulation. A system for virtual reality simulation may include an accessory (e.g., one or more of a bat, a glove, or a helmet) for interacting with a virtual reality environment. The accessory may provide the user with haptic feedback that emulates sensations that the user would experience when playing a live-action sport to provide the user with a more meaningful and realistic experience when playing a virtual reality game. Further, virtual reality simulations disclosed herein may include incorporating data from a live-action event (e.g., a live-action sporting event) into a virtual reality environment to provide a user with a realistic experience.

Publication date: 20-03-2008

Image processing device and information recording medium

Number: US20080068387A1
Assignee: Kabushiki Kaisha Sega

An image processing device for realizing more realistic pictures of explosions in video game devices and the like. Objects displaying such pictures of explosions are formed of spherical polygons (R1, R2, R3, . . . ) and planar polygons (S1, S2, S3, . . . ). Pictures of explosions are realized by alternately arranging these spherical polygons and planar polygons with the lapse of time. Preferably, pictures of explosions are realized by arranging the spherical polygons in layers on the boundary of the planar polygons.

Подробнее
19-11-1996 дата публикации

3-dimensional animation generating apparatus and a method for generating a 3-dimensional animation

Номер: US0005577175A1

A 3-dimensional animation generating apparatus includes an image data storing section, a view point information input section, an image supervising section, an image data selecting section, a view point coordinate transforming section, an output image drawing section, a motion addressing section, a shadow area detecting section, and a shadow area supervising section. The image data storing section stores part image data and background image data in association with 3-dimensional coordinates of vertexes included in the image. The image data selecting section searches the image data by a unit of part. The apparatus further comprises a mechanism which, when information related to an observing view point is provided, determines which part image data exists in the visible area, transforms the visible part image data from the observing view point by view point coordinate transformation, and draws the resultant image data in a common drawing area. In addition, the apparatus further searches and ...

Подробнее
03-10-2002 дата публикации

Method, apparatus, storage medium, program, and program product for generating image data of virtual space

Номер: US20020140696A1
Принадлежит: NAMCO LTD.

A method for generating realistic image data of particle-system objects whose locations change as time passes, with a light computational load and little storage capacity. The method for generating image data of a virtual space viewed from a predetermined view point comprises: providing a particle system object group comprising at least one particle system object in the virtual space, according to a predetermined rule, continuously or intermittently; determining a displacement point in the virtual space; moving the displacement point in a predetermined direction as time passes; and moving the particle system object group on the basis of the displacement point.
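The core idea in this abstract can be sketched very compactly: keep one moving displacement point, and place every particle in the group as a fixed offset from it. The sketch below is illustrative only (class names, the offsets, and the drift direction are invented, not taken from the patent).

```python
# Toy sketch (not the patented implementation): particles keep a fixed
# offset from a single moving displacement point, so the whole group
# drifts in a given direction as time passes.

class ParticleGroup:
    def __init__(self, offsets):
        self.offsets = offsets          # per-particle offsets from the displacement point

    def positions(self, displacement_point):
        dx, dy, dz = displacement_point
        return [(dx + ox, dy + oy, dz + oz) for ox, oy, oz in self.offsets]

def move_point(point, direction, dt):
    return tuple(p + d * dt for p, d in zip(point, direction))

point = (0.0, 0.0, 0.0)
group = ParticleGroup([(0, 0, 0), (1, 0, 0), (0, 1, 0)])
for _ in range(10):                      # advance the displacement point over time
    point = move_point(point, direction=(0.0, 0.0, 2.0), dt=0.1)
print(group.positions(point)[0])         # group has drifted ~2.0 along z
```

Only the single displacement point is animated; the per-particle positions are derived on demand, which is what keeps both computation and storage small.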

Подробнее
15-12-1998 дата публикации

Apparatus and method for geometric morphing

Номер: US5850229A
Автор:
Принадлежит:

A method of geometric morphing between a first object having a first shape and a second object having a second shape. The method includes the steps of generating a first Delaunay complex corresponding to the first shape and a second Delaunay complex corresponding to the second shape and generating a plurality of intermediary Delaunay complexes defined by a continuous family of mixed shapes corresponding to a mixing of the first shape and the second shape. The method further includes the steps of constructing a first skin corresponding to the first Delaunay complex and a second skin corresponding to the second Delaunay complex and constructing a plurality of intermediary skins corresponding to the plurality of intermediary Delaunay complexes. The first skin, second skin and plurality of intermediary skins may be visually displayed on an output device.

Подробнее
02-04-2013 дата публикации

2D imposters for simplifying processing of plural animation objects in computer graphics generation

Номер: US0008411092B2

The technology herein involves use of 2D imposters to achieve seemingly 3D effects with high efficiency where plural objects such as animated characters move together such as when one character follows or carries another character. A common 2D imposter or animated sprite is used to image and animate the plural objects in 2D. When the plural objects are separated in space, each object can be represented using its respective 3D model. However, when the plural objects contact one another, occupy at least part of the same space, or are very close to one another (e.g., as would arise in a situation when the plural objects are moving together in tandem), the animation system switches from using plural respective 3D models to using a common 2D model to represent the plural objects. Such use of a common 2D model can be restricted in some implementations to situations where the user's viewpoint can be restricted to be at least approximately perpendicular to the plane of the 2D model, or the 2D surface ...

Подробнее
26-01-2010 дата публикации

Polynomial encoding of vertex data for use in computer animation of cloth and other materials

Номер: US0007652670B2

An alternative to cloth simulation in which a plurality of different poses for a material are established, and then each component of each vertex position of the material is encoded into a polynomial by using corresponding vertices in the plurality of different poses for the material. The vertices are encoded relative to a neutral bind pose. The polynomial coefficients are calculated offline and then stored. At runtime, the poses are interpolated by using key variables which are input into the polynomials as different states, for example the turning speed of the player wearing the material, which may comprise a cloth jersey. The bind pose vertices are transformed into world space using the character skeleton. A smooth interpolation is achieved, and the polynomials can encode a large number of pose-meshes in a few constants, which reduces the amount of data that must be stored.
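The encode-offline/evaluate-at-runtime scheme described above can be illustrated with a toy one-component example: fit a quadratic exactly through three poses of a single vertex component, keyed by a state variable, then interpolate cheaply at runtime. The three-pose setup, the sample states, and all names are invented for illustration; the patent encodes far more poses per polynomial.

```python
# Toy sketch of the idea: each vertex component becomes a polynomial in
# a key variable (e.g. turning speed), fitted offline through known
# poses and evaluated cheaply at runtime.

def quadratic_coeffs(states, values):
    """Exact quadratic through three (state, value) samples (Lagrange form)."""
    (s0, s1, s2), (v0, v1, v2) = states, values
    c2 = (v0 / ((s0 - s1) * (s0 - s2))
          + v1 / ((s1 - s0) * (s1 - s2))
          + v2 / ((s2 - s0) * (s2 - s1)))
    c1 = (-v0 * (s1 + s2) / ((s0 - s1) * (s0 - s2))
          - v1 * (s0 + s2) / ((s1 - s0) * (s1 - s2))
          - v2 * (s0 + s1) / ((s2 - s0) * (s2 - s1)))
    c0 = (v0 * s1 * s2 / ((s0 - s1) * (s0 - s2))
          + v1 * s0 * s2 / ((s1 - s0) * (s1 - s2))
          + v2 * s0 * s1 / ((s2 - s0) * (s2 - s1)))
    return c0, c1, c2

def evaluate(coeffs, state):
    c0, c1, c2 = coeffs
    return c0 + state * (c1 + state * c2)   # Horner evaluation at runtime

# Offline: x-component of one cloth vertex in three poses, keyed by turning speed.
states = (0.0, 0.5, 1.0)
poses_x = (0.0, 0.3, 1.0)
coeffs = quadratic_coeffs(states, poses_x)

# Runtime: smooth interpolation; three stored constants replace three pose meshes.
print(round(evaluate(coeffs, 0.5), 6))   # reproduces the middle pose: 0.3
```

The storage win claimed in the abstract falls out directly: however many poses are encoded, only the polynomial coefficients per vertex component need to be stored.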

Подробнее
06-02-2014 дата публикации

SCRIPTED STEREO CURVES FOR STEREOSCOPIC COMPUTER ANIMATION

Номер: US20140036036A1
Принадлежит: DreamWorks Animation LLC

A computer-implemented method for determining a user-defined stereo effect for a computer-animated film sequence. A stereo-volume value for a timeline of the film sequence is obtained, wherein the stereo-volume value represents a percentage of parallax at the respective time entry. A stereo-shift value for the timeline is also obtained, wherein the stereo-shift value represents a distance across one of: an area associated with a sensor of a pair of stereoscopic cameras adapted to create the film sequence; and a screen adapted to depict a stereoscopic image of the computer-generated scene. A script-adjusted near-parallax value and a script-adjusted far-parallax value are calculated.

Подробнее
02-11-2021 дата публикации

Illumination effects from luminous inserted content

Номер: US0011164367B2
Принадлежит: Google LLC, GOOGLE LLC

Systems and methods for generating illumination effects for inserted luminous content, which may include augmented reality content that appears to emit light and is inserted into an image of a physical space. The content may include a polygonal mesh, which may be defined in part by a skeleton that has multiple joints. Examples may include generating a bounding box on a surface plane for the inserted content, determining an illumination center point location on the surface plane based on the content, generating an illumination entity based on the bounding box and the illumination center point location, and rendering the illumination entity using illumination values determined based on the illumination center point location. Examples may also include determining illumination contributions values for some of the joints, combining the illumination contribution values to generate illumination values for pixels, and rendering another illumination entity using the illumination values.

Подробнее
02-01-2020 дата публикации

SYSTEMS AND METHODS FOR AUTHORING CROSS-BROWSER HTML 5 MOTION PATH ANIMATION

Номер: US20200005532A1
Принадлежит: GOOGLE LLC

The present disclosure provides systems and methods for implementations of motion paths via pure CSS3 and HTML5, working in all major browsers and requiring no JavaScript. For each motion path degree of freedom (e.g., x translation), the system may insert an additional element into the document object model (DOM) to host its animation. In some implementations, the system may apply an optimization process to fit CSS3 keyframes rules that approximate the ideal motion path trajectory to a predetermined tolerance while minimizing the storage footprint. In addition to supporting CSS3 motion paths, this authoring model retains the ability to supply arbitrary standard CSS3 animations to transform channels, which allows users to, e.g., animate the scale and rotation of an element independent of its progress along a motion path.

1. A method for generating cross-browser compatible animations, comprising: receiving, by a computing device, a web page comprising an element to be animated, the web page including a document object model (DOM) tree having a node corresponding to the element; receiving, by the computing device, a motion path for the element comprising a plurality of degrees of freedom; inserting into the DOM tree, by the computing device, a first parent node having the node corresponding to the element as a child, the first parent node corresponding to a first degree of freedom of the plurality of degrees of freedom; and inserting into the DOM tree, by the computing device, a second parent node having the first parent node as a child, the second parent node corresponding to a second degree of freedom of the plurality of degrees of freedom.
2. The method of claim 1, further comprising iteratively inserting into the DOM tree additional parent nodes for each additional degree of freedom of the plurality of degrees of freedom.
3. The method of claim 1, further comprising: receiving, by the computing device, an input scale factor for the element; inserting a first scale ...
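The wrapper-per-degree-of-freedom structure in these claims can be modelled in a few lines. The sketch below stands in for DOM manipulation with a plain Python class (names and the particular degrees of freedom are illustrative, not from the patent): each degree of freedom gets its own parent node above the animated element, so each wrapper can host one animation channel independently.

```python
# Minimal model of the claimed structure: one wrapper node per motion
# path degree of freedom, inserted above the animated element.

class Node:
    def __init__(self, name, child=None):
        self.name = name
        self.child = child

def wrap_for_motion_path(element, degrees_of_freedom):
    node = element
    for dof in degrees_of_freedom:       # first DOF becomes the innermost wrapper
        node = Node(f"wrapper:{dof}", child=node)
    return node

root = wrap_for_motion_path(Node("img"), ["translate-x", "translate-y", "rotate"])
chain = []
n = root
while n:
    chain.append(n.name)
    n = n.child
print(chain)   # ['wrapper:rotate', 'wrapper:translate-y', 'wrapper:translate-x', 'img']
```

In the CSS3 setting each wrapper would carry its own `@keyframes` animation for a single transform channel, which is what keeps, e.g., rotation independent of progress along the path.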

Подробнее
27-08-2009 дата публикации

MESH TRANSFER FOR SHAPE BLENDING

Номер: US2009213138A1
Принадлежит:

Techniques are disclosed that may assist animators or other artists working with models. Information from a plurality of meshes in a collection may be blended or combined using correspondences between pairs of the meshes. Meshes in the collection may include different topologies and geometries. The combined information can be used to create combinations of data that reflect new topologies, geometries, scalar fields, hair styles, or the like that may be transferred to a mesh of new or existing models.

Подробнее
03-01-2019 дата публикации

Methods and Apparatus for Tracking A Light Source In An Environment Surrounding A Device

Номер: US20190005675A1
Принадлежит:

Methods and apparatus for tracking a light source in an environment surrounding a device. In an exemplary embodiment, a method includes analyzing an image of an environment surrounding a device to detect a light source and calculating a location of the light source relative to the device. The method also includes receiving motion data corresponding to movement of the device, and adjusting the location of the light source based on the motion data. In an exemplary embodiment, an apparatus includes an image sensor that acquires an image of an environment surrounding a device, and a motion tracking element that outputs motion data that corresponds to motion of the device. The apparatus also includes a tracker that analyzes the image to detect a light source, calculates a location of the light source relative to the device, and adjusts the location of the light source based on the motion data.
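The two steps in this abstract, detect a light source in an image and then correct its tracked location with device motion data, can be sketched with a brightest-pixel detector and a simple offset. Everything here (the grayscale frame, the sign convention for motion, the function names) is an invented illustration, not the patent's method.

```python
# Illustrative sketch: locate the brightest pixel of a grayscale frame
# as the light source, then shift that location by device motion so the
# estimate stays valid between image analyses.

def detect_light(image):
    """Return (row, col) of the brightest pixel in a 2D list of intensities."""
    best, where = -1, (0, 0)
    for r, row in enumerate(image):
        for c, v in enumerate(row):
            if v > best:
                best, where = v, (r, c)
    return where

def adjust_for_motion(location, motion):
    """Offset the tracked location by device motion (drow, dcol); sign is an assumption."""
    return (location[0] + motion[0], location[1] + motion[1])

frame = [[10, 20, 10],
         [20, 250, 30],
         [10, 30, 10]]
light = detect_light(frame)          # (1, 1)
light = adjust_for_motion(light, motion=(0, -1))
print(light)                         # device moved right, light appears one column left: (1, 0)
```

The point of the motion correction is that the (cheap) inertial update can run every frame while the (expensive) image analysis runs only occasionally.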

Подробнее
16-12-2003 дата публикации

Image processing device and information recording medium

Номер: US0006664965B1

An image processing device for realizing more realistic pictures of explosions in video game devices and the like. Objects displaying such pictures of explosions are formed of spherical polygons (R1, R2, R3, . . . ) and planar polygons (S1, S2, S3, . . . ). Pictures of explosions are realized by alternately arranging these spherical polygons and planar polygons with the lapse in time. Preferably, pictures of polygons are realized by arranging the spherical polygons in layers on the boundary of the planar polygons.

Подробнее
12-01-2021 дата публикации

Artistic representation of digital data

Номер: US0010891766B1
Принадлежит: Google LLC, GOOGLE LLC

A system and method is provided for generating a modified Cartesian representation of received data. In some aspects, a Cartesian graph may be transformed to form a modified Cartesian representation by connecting a first end and second end of the Cartesian graph. In further aspects, a pattern may be overlaid over the modified Cartesian representation to produce an artistic representation.
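One way to read "connecting a first end and second end of the Cartesian graph" is a polar re-plot: map the x range onto a full turn so the last sample meets the first. The sketch below is a hedged guess at that transformation (radius, amplitude, and normalization are invented parameters).

```python
import math

# Hedged sketch: wrap a Cartesian series onto a ring, with the data
# value modulating the radius, so the two ends of the graph join up.

def to_ring(values, radius=1.0, amplitude=0.5):
    n = len(values)
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    points = []
    for i, v in enumerate(values):
        theta = 2 * math.pi * i / n               # position along the ring
        r = radius + amplitude * (v - lo) / span  # value modulates the radius
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

ring = to_ring([0, 1, 2, 1, 0, 1, 2, 1])
print(len(ring), round(ring[0][0], 3))
```

A decorative pattern could then be overlaid on the resulting closed curve, per the abstract.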

Подробнее
13-08-2015 дата публикации

IMAGE VIEWING APPLICATION AND METHOD FOR ORIENTATIONALLY SENSITIVE DISPLAY DEVICES

Номер: US20150228116A1
Автор: DORIAN AVERBUCH
Принадлежит:

A system and method for presenting three-dimensional image volume data utilizing an orientationally-sensitive display device whereby the image volume is navigable simply by tilting, raising and lowering the display device. Doing so presents an image on the screen that relates to the angle and position of the display device such that the user gets the impression that the device itself is useable as a window into the image volume, especially when the device is placed on or near the source of the image data, such as a patient.

Подробнее
27-11-2014 дата публикации

METHOD AND DEVICE FOR DISPLAYING CHANGED SHAPE OF PAGE

Номер: US20140347369A1
Принадлежит: SAMSUNG ELECTRONICS CO., LTD.

Exemplary embodiments disclose a method and device for displaying a changed shape of a page. The method includes: receiving a user touch input on the page; calculating a virtual touch force which acts on a first node on the page based on the user touch input; calculating a virtual spring force which acts on the first node by at least one virtual spring which is connected to the first node based on the calculated virtual touch force; calculating a virtual rod force which acts on the first node by at least one virtual rod which is connected to the first node based on the calculated virtual touch force; and moving the first node based on the virtual touch force, the virtual spring force and the virtual rod force.
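The force balance described above (touch force plus virtual spring forces acting on a page node) can be illustrated with a one-dimensional Hooke model; the virtual rod force is omitted here but would be summed in the same way. All constants and names are invented for the sketch, not taken from the patent.

```python
# Toy force model in the spirit of the abstract: a touched node feels
# the touch force plus Hooke spring forces from its virtual springs,
# and is displaced by the resulting net force.

def spring_force(node, anchor, rest_length, k):
    """1-D Hooke force pulling the node toward its rest distance from the anchor."""
    stretch = (node - anchor) - rest_length
    return -k * stretch

def move_node(node, touch_force, springs, step=0.01):
    net = touch_force + sum(spring_force(node, a, L, k) for a, L, k in springs)
    return node + step * net

node = 1.0
springs = [(0.0, 1.0, 50.0)]        # anchor at 0, rest length 1, stiffness 50
node = move_node(node, touch_force=5.0, springs=springs)
print(round(node, 4))               # node yields slightly toward the touch: 1.05
```

With the spring initially at its rest length, only the touch force moves the node; as the node displaces, the spring force grows to resist it, which is what produces the page's elastic deformation.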

Подробнее
03-07-2012 дата публикации

Floating transitions

Номер: US0008212809B2

A computer implemented method and apparatus for floating object transitions. In one embodiment, tracking data identifying a location of an avatar in relation to a range of an object in a virtual universe is received. The range comprises a viewable field. In response to the tracking data indicating an occurrence of a trigger condition, a set of flotation rules associated with the trigger condition is identified. An optimal location and orientation of the object is identified for each flotation action in a set of flotation actions associated with the set of flotation rules. The set of flotation actions are initiated to float the object above a surface. The object changes its location and orientation in accordance with the set of flotation actions associated with the set of flotation rules.

Подробнее
25-12-2014 дата публикации

Shopper Helper

Номер: US20140379524A1
Принадлежит:

In one embodiment, a method includes monitoring an action of an individual consumer and maintaining a database stored in a memory personal to the consumer. The database can have an indication of preferences of the consumer and a purchase history of the individual consumer. The database can be based on the monitored action of the consumer. The method can further include providing a suggested product to the consumer based on the maintained database.

Подробнее
03-12-2015 дата публикации

Equivalent Lighting For Mixed 2D and 3D Scenes

Номер: US20150348316A1
Принадлежит: APPLE INC.

Systems, methods and program storage devices are disclosed, which cause one or more processing units to: obtain one or more two-dimensional components and one or more three-dimensional components; convert the pixel color values of the two-dimensional components into luminance values; create height maps over the two-dimensional components using the converted luminance values; calculate a normal vector for each pixel in each of the two-dimensional components; and cause one or more processing units to render three-dimensional lighting effects on the one or more two-dimensional components and one or more three-dimensional components in a mixed scene, wherein the calculated normal vectors are used as the normal maps for the two-dimensional components, the pixel color values are used as the texture maps for the two-dimensional components, and the one or more three-dimensional components are rendered in the scene according to their respective depth values, textures, and/or vertices—along with the one ...
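The pipeline described here (pixel colors → luminance → height map → per-pixel normals) can be sketched under simple assumptions: Rec. 601 luma as the luminance value, and finite differences on the height map for the normals. The particular coefficients and the unnormalized (dx, dy, 1) normal form are assumptions of this sketch, not details from the patent application.

```python
# Sketch of the described pipeline: luminance as height, then
# finite-difference normals for lighting 2D content in a 3D scene.

def luminance(rgb):
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b   # Rec. 601 luma (assumed)

def normals_from_height(height):
    """Unnormalized (-dx, -dy, 1) normals via finite differences (one-sided at borders)."""
    h, w = len(height), len(height[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            dx = height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]
            dy = height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]
            row.append((-dx, -dy, 1.0))
        out.append(row)
    return out

pixels = [[(0, 0, 0), (255, 255, 255)],
          [(0, 0, 0), (255, 255, 255)]]
height = [[luminance(p) for p in row] for row in pixels]
normals = normals_from_height(height)
print(normals[0][0])   # normal tilts away from the bright side
```

With normals in hand, the 2D component can be lit by the same light sources as the true 3D components, which is the "equivalent lighting" the title refers to.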

Подробнее
16-12-2021 дата публикации

AUGMENTED REALITY DISPLAY DEVICE AND PROGRAM RECORDING MEDIUM

Номер: US20210390785A1
Принадлежит: SQUARE ENIX CO., LTD.

Provided is an augmented reality display technology capable of better entertaining a user. An augmented reality display device 10 includes an imaging unit 13, a special effect execution unit 11b, and a display unit 14. The imaging unit 13 acquires a background image of the real world. When a plurality of models forming a specific combination are present in a virtual space, the special effect execution unit 11b executes a special effect corresponding to the combination of the models. The display unit 14 displays the models together with the background image based on the special effect.

Подробнее
17-03-2016 дата публикации

TECHNIQUES AND WORKFLOWS FOR COMPUTER GRAPHICS ANIMATION SYSTEM

Номер: US20160078662A1
Принадлежит:

The disclosed implementations describe techniques and workflows for a computer graphics (CG) animation system. In some implementations, systems and methods are disclosed for representing scene composition and performing underlying computations within a unified generalized expression graph with cycles. Disclosed are natural mechanisms for level-of-detail control, adaptive caching, minimal re-compute, lazy evaluation, predictive computation and progressive refinement. The disclosed implementations provide real-time guarantees for minimum graphics frame rates and support automatic tradeoffs between rendering quality, accuracy and speed. The disclosed implementations also support new workflow paradigms, including layered animation and motion-path manipulation of articulated bodies.

Подробнее
01-01-2013 дата публикации

Methods and apparatus for differentially controlling degrees of freedom of an object

Номер: US0008345004B1
Принадлежит: Pixar, PIXAR, KASS MICHAEL, TREZEVANT WARREN

An input device for controlling an object includes a joystick and a modal switch. A user may use the modal switch to select a subset of degrees of freedom of the object. The joystick may then be used to control a change over time of the selected subset, where the change over time is functionally dependent on both a motion of the joystick and a state of the selected subset. A method for controlling an object via the input device is also provided. The method includes receiving inputs indicating a selection by the modal switch of a subset of degrees of freedom of the object, and a motion of the joystick. A configuration of the selected subset is then caused to be changed based on the motion of the joystick and a state of the selected subset.

Подробнее
26-02-2010 дата публикации

Polynomial encoding of vertex data for use in computer animation of cloth and other materials

Номер: US0025251925B2

An alternative to cloth simulation in which a plurality of different poses for a material are established, and then each component of each vertex position of the material is encoded into a polynomial by using corresponding vertices in the plurality of different poses for the material. The vertices are encoded relative to a neutral bind pose. The polynomial coefficients are calculated offline and then stored. At runtime, the poses are interpolated by using key variables which are input into the polynomials as different states, for example the turning speed of the player wearing the material, which may comprise a cloth jersey. The bind pose vertices are transformed into world space using the character skeleton. A smooth interpolation is achieved, and the polynomials can encode a large number of pose-meshes in a few constants, which reduces the amount of data that must be stored.

Подробнее
09-01-2024 дата публикации

Creating action shot video from multi-view capture data

Номер: US0011869135B2
Принадлежит: Fyusion, Inc.

A three-dimensional representation of a scene captured in an action shot base video may be determined. The three-dimensional representation may identify a camera pose. A representation of an object may be determined from a multi-view representation of the object that includes images of the object and that is navigable in one or more dimensions. An action shot video of the scene that includes a rendering of the object determined based on the representation and the camera pose may be generated.

Подробнее
23-05-2024 дата публикации

AUTOMATIC ARRANGEMENT OF PATTERNS FOR GARMENT SIMULATION USING NEURAL NETWORK MODEL

Номер: US20240169632A1
Принадлежит:

An automatic arrangement method and device may receive pattern information for each pattern including shapes and sizes of patterns constituting a garment. Arrangement points at which the patterns are to be initially arranged on a three-dimensional (3D) avatar are predicted by applying the pattern information for each pattern to a neural network model trained to classify and arrange the patterns based on confidence scores calculated based on the pattern information. The patterns are arranged on the 3D avatar based on the arrangement points.

Подробнее
21-07-2004 дата публикации

Wire harness design aiding apparatus, method and computer readable recording medium storing program

Номер: EP0001439476A2
Принадлежит:

A method of simulating the movement of cloth and a computer-readable medium storing a program which executes the method of simulating the movement of cloth, wherein cloth deformation by compressive force is simulated by an immediate buckling model when the compressive force is applied to two extremities of a deformation unit which models the cloth, cloth deformation by stretching is simulated by a spring model, and hysteresis phenomenon of cloth is simulated by using spring-slips for modeling. The immediate buckling model is based on a model of the present invention in which a deformation unit is not contracted by compressive force and generates an immediate bending deformation. As for the compressive force and stretching, simulation is implemented by separate models, hysteresis phenomenon of cloth is simulated by use of spring-clips for modeling, whereby cloth characteristics can be well reflected and the buckling instability of cloth caused by compressive force can be solved, thereby ...

Подробнее
12-11-2008 дата публикации

Номер: JP0004180065B2
Автор:
Принадлежит:

Подробнее
18-05-2011 дата публикации

Номер: JP0004684238B2
Автор:
Принадлежит:

Подробнее
04-12-2019 дата публикации

Номер: RU2018119471A3
Автор:
Принадлежит:

Подробнее
20-06-2008 дата публикации

METHOD FOR SYNTHESIZING DYNAMIC VIRTUAL PICTURES

Номер: RU2006143545A
Автор: СЮН Пу (CN)
Принадлежит:

... 1. A method for synthesizing dynamic virtual pictures, comprising the following steps: a) the synthesizing server side receives a user request sent by a user and, according to the information of the user request, obtains the image files of all components for synthesizing the virtual picture; b) the component image files are read in turn according to the layer numbers, and the obtained component image files are converted into a specified format; c) the component formatted in step b) is composited with the previously synthesized intermediate picture; d) it is determined whether all components have been composited; if all components have been composited, the synthesized virtual picture is written to the virtual-picture image file; otherwise, the process returns to step b).
2. The method according to claim 1, further comprising, before reading all component image files, a step of reading a template file.
3. The method according to claim 1, wherein step c) comprises ...

Подробнее
10-04-2006 дата публикации

MOVING A VIRTUAL OBJECT IN A VIRTUAL ENVIRONMENT WITHOUT MUTUAL INTERFERENCE BETWEEN ITS ARTICULATED ELEMENTS

Номер: RU2004131049A
Принадлежит:

... 1. A method of moving, by performing a sequence of elementary movements, in a virtual space (13), a virtual articulated object (10) comprising a set of articulated elements (11) interconnected by a set of joints (12), the relative positions of the articulated elements (11) being defined by joint angles according to the degrees of freedom, the method comprising the following steps: computing an interaction distance between a given articulated element (11c) and the other articulated elements (11) of the articulated object (10); determining, from said interaction distance, a first point (P1) belonging to the given articulated element (11c) and a second point (P2) belonging to one of the other articulated elements (11) of the articulated object; determining, from said first and second points, a unique extraction vector; moving the given articulated element (11c) away from the other articulated elements (11) of the articulated object by a movement defined in accordance ...

Подробнее
24-02-2025 дата публикации

Method for simulator-based training and diagnostics of the cognitive-motor function of a human operator

Номер: RU2835317C1

The invention relates to the field of medicine and is intended for simulator-based training and diagnostics of the cognitive-motor function of a human operator. A method is proposed in which a controlled object (CO) of a given size and shape is presented to the test subject on a monitor for test movement, together with a given number of mobile objects (MOs) of a given size and shape, which are moved across the subject's monitor programmatically along straight-line segments at a given speed. The subject identifies the pattern of their movement and moves the CO into places free of MO routes, avoiding collisions with the MOs and with the boundaries of the CO's operating zone. The subject moves the CO across the monitor field using a joystick. The level of cognitive-motor load in each test or training exercise is set by the size and shape of the CO's operating zone, as well as by the size of the CO and the number, speed, size, color, contrast and movement pattern of the MOs. The parameters and characteristics of the operating zone of the controlled ...

Подробнее
14-10-2021 дата публикации

Reproduction device, analysis support system, and reproduction method

Номер: DE112020000512T5
Принадлежит: KOMATSU MFG CO LTD, KOMATSU LTD.

A control device for work machines includes a position-designation receiving unit configured to identify the designation of a position with respect to a state image displayed on a display panel, and a screen control unit configured to perform screen control according to the image displayed at the identified position among the partial images that make up the state image.

Подробнее
19-11-2015 дата публикации

Process visualization device and method for displaying processes of at least one machine or plant

Номер: DE102014209367A1
Принадлежит:

The invention relates to a process visualization device (1) and a method for displaying processes of at least one machine or plant. The device (1) according to the invention comprises an animation unit (2) designed to create a 3D process animation (16) using a 3D model (12) representing the machine or plant (11) and process data (15) of the machine or plant. In order to recognize and display the processes better, the invention provides that the device (1) has a holographic display unit (3) with which the 3D process animation (16) can be displayed as a 3D hologram (17).

Подробнее
12-08-2015 дата публикации

Computer implemented methods and systems for generating virtual body models for garment fit visualisation

Номер: GB0002523030A
Принадлежит:

A virtual body model of a person is created with a small number of measurements and a single photograph and combined with one or more images of garments. The virtual body model represents a realistic representation of the users body and is used for visualizing photo-realistic fit visualizations of garments, hairstyles, make-up, and / or other accessories. The virtual garments are created from layers based on photographs of real garment from multiple angles captured by a video camera. Furthermore the virtual body model is used in multiple embodiments of manual and automatic garment, make-up, and, hairstyle recommendations, such as, from channels, friends, and fashion entities. The virtual body model is sharable for, as example, visualization and comments on looks. Furthermore it is also used for enabling users to buy garments that fit other users, suitable for gifts or similar. The implementation can also be used in peer-to-peer online sales where garments can be bought with the knowledge ...

Подробнее
23-11-2016 дата публикации

3D scene co-ordinate capture & storage

Номер: GB0002538612A
Принадлежит:

Editing and rendering video content across one or more user devices, comprising: one or more computing devices 100; a central server. A first computing device 100: defines a first model of a three dimensional object S104 to be rendered in the video content; assigns S106 a plurality of reference points to the first model; for the plurality of frames of the video content, transformation data representing the position or change of the reference points is determined S108 and recorded in a data file S110. At the first, or further, computing device: there is a user interface for editing the transformation data in the data file, so as to change the shape of the model; the edited data file is transferred to the central server. At the central server: rendering the video content based on the received edited data file, for playback across one or more user devices. Editing the transformation data may comprise annotating the data with text, drawings, audio, images or video data. The rendered video content ...

Подробнее
28-08-2002 дата публикации

Method and apparatus for creating motion illusion

Номер: GB0002372685A
Принадлежит:

A data processing system provides high performance three-dimensional graphics. In one embodiment a processor of the data processing system performs a computer algorithm that creates a motion illusion regarding an object being displayed in a computer graphics scene by drawing multiple images of the object and varying the application of an attribute, such as transparency, color, intensity, reflectivity, fill, texture, size, and/or position including depth, to the images in a manner that provides the object with an illusion of motion between a first position and a second position. Also described are an integrated circuit for implementing the motion illusion algorithm and a computer-readable medium storing a data structure for implementing the motion illusion algorithm.
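The multiple-images-with-varying-attribute trick described above can be sketched by generating ghost copies between two positions with transparency fading along the trail. The copy count, the linear fade, and the function name are invented parameters of this sketch, not details from the patent.

```python
# Minimal sketch of the motion-illusion trick: draw several copies of an
# object between its previous and current position, varying transparency
# along the trail so the object appears to move.

def motion_trail(p0, p1, copies=4):
    """Positions and alpha values for ghost copies from p0 toward p1."""
    trail = []
    for i in range(1, copies + 1):
        t = i / copies
        pos = (p0[0] + t * (p1[0] - p0[0]), p0[1] + t * (p1[1] - p0[1]))
        trail.append((pos, t))          # newest copy (t=1) is fully opaque
    return trail

for pos, alpha in motion_trail((0.0, 0.0), (4.0, 0.0)):
    print(pos, alpha)
```

The same scheme extends to the other attributes the abstract lists: color, intensity, size, or depth can be varied along the trail instead of (or in addition to) transparency.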

Подробнее
15-07-2015 дата публикации

Computer implemented methods and systems for generating virtual body models for garment fit visualisation

Номер: GB0201509162D0
Автор:
Принадлежит:

Подробнее
17-07-2019 дата публикации

Animation production system

Номер: GB0201907769D0
Автор:
Принадлежит:

Подробнее
08-02-2006 дата публикации

A method of generating a trajectory-based game of chance on a gaming machine

Номер: GB0000600005D0
Автор:
Принадлежит:

Подробнее
28-08-2019 дата публикации

Video recording and playback systems and methods

Номер: GB0002571306A
Принадлежит:

Video recording method comprising: recording video image sequence output by a video game; recording sequences of depth buffer values and virtual camera positions; recording in-game events and their respective positions; associating the depth and camera position sequences, and an identifier for the video game, with the video sequence; associating the in-game events and their positions with the video game identifier. Also disclosed: video playback method comprising: obtaining a video sequence, from video game play, and associated video game identifier and sequences of depth values and virtual camera positions; obtaining data indicating a statistically significant in-game event and its position; calculating a location within a video frame corresponding to the event position; augmenting the image frame with a graphical representation of the event. Also disclosed: event analysis method comprising: receiving game identifiers and associated in-game events and their positions from gaming devices ...
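The playback step above maps a recorded in-game event position into a location within a video frame, using the recorded virtual camera position. A minimal sketch of that idea under an assumed pinhole camera model (the `focal`, `width`, `height` parameters and the camera-looking-down-+z convention are illustrative assumptions, not the patent's actual projection):

```python
# Toy pinhole projection: map a world-space event position into pixel
# coordinates of a video frame, given the recorded camera position.
# The camera is assumed to look down +z with no rotation.

def project(event_pos, cam_pos, focal, width, height):
    """Project a world-space point to (u, v) pixel coordinates."""
    x, y, z = (e - c for e, c in zip(event_pos, cam_pos))
    if z <= 0:
        return None  # event is behind the camera: nothing to draw
    u = width / 2 + focal * x / z
    v = height / 2 + focal * y / z
    return (u, v)

# An event 4 units in front of the camera, offset (2, 1) in world space:
pixel = project((2.0, 1.0, 4.0), (0.0, 0.0, 0.0), focal=100.0, width=640, height=360)
```

A marker (the "graphical representation of the event") would then be drawn at the returned pixel location.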

Подробнее
15-12-2004 дата публикации

MATERIAL BUCKLING SIMULATION EQUIPMENT, IN PARTICULAR FOR GARMENT FABRIC

Номер: AT0000282862T
Принадлежит:

Подробнее
15-12-2005 дата публикации

SYSTEM AND METHOD FOR THE DYNAMIC DISPLAY OF THREE-DIMENSIONAL GRAPHIC DATA

Номер: AT0000313088T
Принадлежит:

Подробнее
22-04-1999 дата публикации

Animation control apparatus

Номер: AU0000704512B2
Принадлежит:

Подробнее
19-01-2012 дата публикации

Animating Speech Of An Avatar Representing A Participant In A Mobile Communications With Background Media

Номер: US20120013620A1
Принадлежит: International Business Machines Corp

Animating speech of an avatar representing a participant in a mobile communication, including preparing the avatar for display, including: selecting images to represent the participant, selecting a generic animation template having a mouth, fitting the images with the generic animation template, and texture wrapping the one or more images representing the participant over the generic animation template; selecting background media; displaying the images texture wrapped over the generic animation template with the background media; and animating the images, including: receiving an audio speech signal, identifying a series of phonemes, and, for each phoneme: identifying a next mouth position, altering the mouth position, texture wrapping a portion of the images corresponding to the altered mouth position, displaying the texture wrapped portion, and playing, synchronously with the displayed texture wrapped portion, the portion of the audio speech signal represented by the phoneme.
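The per-phoneme loop described above (each phoneme selects a next mouth position for the avatar) can be sketched with a toy viseme table; the phoneme names and openness values below are illustrative assumptions, not the patent's actual mapping:

```python
# Assumed viseme table: phoneme -> mouth openness (0 = closed, 1 = open).
# A real system would drive the texture-wrapped mouth region per frame.
VISEMES = {"AA": 1.0, "EE": 0.6, "OO": 0.8, "MM": 0.0, "SS": 0.2}

def mouth_positions(phonemes):
    """Map a phoneme sequence to the mouth positions to render."""
    return [VISEMES.get(p, 0.3) for p in phonemes]  # 0.3 = neutral fallback

positions = mouth_positions(["MM", "AA", "SS", "EE"])
```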

Подробнее
05-04-2012 дата публикации

Methods and apparatus for rendering applications and widgets on a mobile device interface in a three-dimensional space

Номер: US20120081356A1
Принадлежит: SPB Software Inc

A system represents each of the available applications, including widgets, with a respective image representation on a display associated with the communications device. The system associates each of the image representations with a respective subset of image representations, or panels, that are organized to assist a user to locate and interact with the image representations. The system arranges the panels in a three dimensional structure on the display. The three dimensional structure is rendered as a plurality of joined adjacent panels. The system allows the user to access an available application within the three dimensional structure by manipulating the three dimensional structure three dimensionally, where the available applications are accessed via the respective panels.

Подробнее
03-05-2012 дата публикации

Image Viewing Application And Method For Orientationally Sensitive Display Devices

Номер: US20120105436A1
Автор: Dorian Averbuch
Принадлежит: SuperDimension Ltd

A system and method for presenting three-dimensional image volume data utilizing an orientationally-sensitive display device whereby the image volume is navigable simply by tilting, raising and lowering the display device. Doing so presents an image on the screen that relates to the angle and position of the display device such that the user gets the impression that the device itself is useable as a window into the image volume, especially when the device is placed on or near the source of the image data, such as a patient.

Подробнее
23-08-2012 дата публикации

System and Method for Using Atomic Agents to Implement Modifications

Номер: US20120214586A1
Принадлежит: Disney Enterprises Inc

Techniques are disclosed for using atomic agents to implement modifications to actors. The atomic agents are self-functioning and may be applied to and removed from an actor in order to modify the behavior and/or appearance of the actor. The default appearance and behavior of the actor is embedded in the program code that defines the actor. One or more atomic agents may be applied to the actor to modify the actor's appearance or behavior without requiring any communication or interaction with the program code that defines the actor. Separate program code defines each atomic agent and the compatibility between the respective atomic agent and other atomic agents.

Подробнее
11-10-2012 дата публикации

Methods and Systems for Representing Complex Animation Using Scripting Capabilities of Rendering Applications

Номер: US20120256928A1
Автор: Alexandru Chiculita
Принадлежит: Adobe Systems Inc

A computerized device implements an animation coding engine to analyze timeline data defining an animation sequence and generate a code package. The code package can represent the animation sequence using markup code that defines a rendered appearance of a plurality of frames and a structured data object also comprised in the code package and defining a parameter used by a scripting language in transitioning between frames. The markup code can also comprise a reference to a visual asset included within a frame. The code package further comprises a cascading style sheet defining an animation primitive as a style to be applied to the asset to reproduce one or more portions of the animation sequence without transitioning between frames.

Подробнее
27-12-2012 дата публикации

Boundary Handling for Particle-Based Simulation

Номер: US20120330628A1
Принадлежит: Siemens Corp

Boundary handling is performed in particle-based simulation. Slab cut ball processing defines the boundary volumes for interaction with particles in particle-based simulation. The slab cut balls are used for collision detection of a solid object with particles. The solid object may be divided into a plurality of independent slab cut balls for efficient collision detection without a bounding volume hierarchy. The division of the solid object may be handled in repeating binary division operations. Processing speed may be further increased by determining the orientation of each slab cut ball based on the enclosed parts of the boundary rather than testing multiple possible orientations.
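As a rough illustration of the bounding volume named above, a slab cut ball can be treated as a ball intersected with the region between two parallel planes; a particle collides with the volume only if it passes both tests. The function signature, axis representation, and values below are assumptions for the sketch, not the paper's actual data layout:

```python
# Toy slab-cut-ball membership test: a point must be inside the ball
# AND between the two slab planes (t_min <= projection onto axis <= t_max).

def in_slab_cut_ball(p, center, radius, axis, t_min, t_max):
    """True if point p lies in the ball cut by the slab along `axis`."""
    dist2 = sum((pi - ci) ** 2 for pi, ci in zip(p, center))
    if dist2 > radius ** 2:
        return False                       # outside the ball entirely
    t = sum(pi * ai for pi, ai in zip(p, axis))  # projection onto slab axis
    return t_min <= t <= t_max

# A point on the "cut off" cap of a unit ball is rejected by the slab test:
inside = in_slab_cut_ball((0.2, 0.0, 0.0), (0, 0, 0), 1.0, (1, 0, 0), -0.5, 0.5)
capped = in_slab_cut_ball((0.9, 0.0, 0.0), (0, 0, 0), 1.0, (1, 0, 0), -0.5, 0.5)
```

The appeal of the shape is exactly this two-step test: a cheap sphere check followed by one dot product.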

Подробнее
03-01-2013 дата публикации

Audio-visual navigation and communication dynamic memory architectures

Номер: US20130007670A1
Автор: Jan Peter Roos
Принадлежит: AQ Media Inc

According to one embodiment, a plurality of spatial publishing objects (SPOs) is provided in a multidimensional space in a user interface. Each of the plurality of spatial publishing objects is associated with digital media data from at least one digital media source. The user interface has a field for the digital media data. A user is provided via the user interface with a user presence that is optionally capable of being represented in the user interface relative to the plurality of spatial publishing objects. The digital media data associated with the at least one spatial publishing object are combined to generate a media output corresponding to the combined digital media data.

Подробнее
21-03-2013 дата публикации

Method and Device for Playing Animation and Method and System for Displaying Animation Background

Номер: US20130069957A1

Embodiments of the present invention provide a method and device for playing an animation, belonging to the field of communication technology. The method includes obtaining a first attribute value of an animation object at the current moment when an audio signal is detected, and determining a second attribute value and a first speed value corresponding to the audio signal; taking the first attribute value and second attribute value respectively as a starting point and end point, and playing the animation object according to the first speed value; and stopping, when the audio signal stops, playing the animation object if the playing of the animation object does not end. The device includes an audio starting animation playing module and an audio ending animation playing module. In embodiments, the playing of the animation is achieved through detecting the audio signal and playing the animation object in combination with the audio signal, which achieves the animation effect and enriches the display effect. 1. A method for playing an animation, comprising: obtaining a first attribute value of an animation object at the current moment when an audio signal is detected, and determining a second attribute value and a first speed value corresponding to the audio signal; taking the first attribute value and second attribute value respectively as a starting point and end point, and playing the animation object according to the first speed value; and stopping, when the audio signal stops, playing the animation object if the playing of the animation object does not end. 2. The method of claim 1, before detecting the audio signal, further comprising: taking a preset third attribute value and a fourth attribute value respectively as the starting point and end point, and playing the animation object in a continuous loop according to a preset second speed value; wherein, when the audio signal is detected, the method further comprises: stopping the animation object in the continuous ...
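The playback rule in the abstract (animate an attribute from a starting value toward an end value at a speed while the audio signal lasts, and stop wherever it is when the signal stops) can be sketched as a per-frame update; all names and values here are illustrative assumptions:

```python
# One frame of the audio-driven animation rule: move `value` toward
# `target` at `speed` while audio is present; freeze when audio stops.

def play_step(value, target, speed, dt, audio_on):
    """Advance one frame; returns the new attribute value."""
    if not audio_on:
        return value                      # audio stopped: animation halts mid-way
    step = speed * dt
    if abs(target - value) <= step:
        return target                     # end point reached
    return value + step if target > value else value - step

v = 0.0
for audio in (True, True, False, True):   # audio drops out for one frame
    v = play_step(v, 1.0, speed=2.0, dt=0.1, audio_on=audio)
```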

Подробнее
25-04-2013 дата публикации

HUMAN BODY AND FACIAL ANIMATION SYSTEMS WITH 3D CAMERA AND METHOD THEREOF

Номер: US20130100140A1
Принадлежит: CYWEE GROUP LIMITED

An animation system integrating face and body tracking for puppet and avatar animation by using a 3D camera is provided. The 3D camera human body and facial animation system includes a 3D camera having an image sensor and a depth sensor with same fixed focal length and image resolution, equal FOV and aligned image center. The system software of the animation system provides on-line tracking and off-line learning functions. An algorithm of object detection for the on-line tracking function includes detecting and assessing a distance of an object; depending upon the distance of the object, the object can be identified as a face, body, or face/hand so as to perform face tracking, body detection, or ‘face and hand gesture’ detection procedures. The animation system can also have a zoom lens, which includes an image sensor with an adjustable focal length f′ and a depth sensor with a fixed focal length f. 1. A human body and facial animation system with 3D camera, comprising: a 3D camera, comprising an image sensor and a depth sensor; and a system software, comprising a user GUI, an animation module and a tracking module; wherein the image sensor and the depth sensor each have a focal length, an image resolution, a field of view (FOV), and an image center; and the system software provides on-line tracking and off-line learning functions. 2. The human body and facial animation system with 3D camera of claim 1, wherein the image sensor and the depth sensor both have a same fixed focal length, a same image resolution, an equal field of view (FOV) and an aligned image center. 3. The human body and facial animation system with 3D camera of claim 2, wherein the system software provides on-line tracking via the user GUI and a command process, and tracking and animation integration; and the system software provides off-line learning via building an avatar model, and tracking parameters learning. 4.
The human body and facial animation ...

Подробнее
30-05-2013 дата публикации

METHOD, SYSTEM AND SOFTWARE PROGRAM FOR SHOOTING AND EDITING A FILM COMPRISING AT LEAST ONE IMAGE OF A 3D COMPUTER-GENERATED ANIMATION

Номер: US20130135315A1
Принадлежит:

Method for shooting and editing a film comprising at least one image of a 3D computer-generated animation created by a cinematographic software according to a mathematical model of elements that are part of the animation and according to a definition of situations and actions occurring for said elements as a function of time, said method being characterized by comprising the following: computing of alternative suggested viewpoints by the cinematographic software for an image of the 3D computer-generated animation corresponding to a particular time point according to said definition; and instructing for displaying on a display interface, all together, images corresponding to said computed alternative suggested viewpoints of the 3D computer-generated animation at that particular time point. 1. A computer-implemented method for computing and proposing one or more virtual camera viewpoints, comprising: computing virtual camera viewpoints of a given set of three-dimensional subjects corresponding to a common time point, where said computation is a function of at least one visual composition property of at least one previously recorded virtual camera viewpoint; presenting said computed virtual camera viewpoints; detecting a selection of at least one of said presented virtual camera viewpoints; and recording of said selected virtual camera viewpoint. 2. The method of claim 1, comprising a step of using said recorded virtual camera viewpoint for a shot of images from said common time point. 3. The method of claim 1, where an image of animation is determined as a function of projecting, onto a two-dimensional image, the geometric representation of said three-dimensional subject(s) as viewed from a given virtual camera viewpoint. 4. The method of claim 1, where a virtual camera viewpoint is described by any one or more of the following properties: position of a virtual camera relative to a coordinate system, the orientation of a virtual camera relative to a ...

Подробнее
06-06-2013 дата публикации

Path and Speed Based Character Control

Номер: US20130141427A1
Принадлежит:

A 3D animation environment that includes an animation object is generated. A movement speed is assigned to the object in the 3D animation environment. An animation path containing at least first and second waypoints is generated. An animation sequence is generated by identifying a first section of the animation path connected to the first waypoint. A first animation of the animation object is generated in which the animation object moves along the first section of the path at the movement speed. A spatial gap in the animation path is identified between the first and second waypoints. A second animation of the animation object is generated in which the animation object moves, by keyframe animation, from the first waypoint to the second waypoint. A third animation of the animation object is generated in which the animation object moves along at least a second portion of the path at the movement speed. 1. A computer program product embodied in a non-transitory computer-readable storage medium and comprising instructions that when executed by a processor perform a method for animating assets, the method comprising: generating a 3D animation environment that includes at least one animation object; assigning, to the animation object, a movement speed for moving the animation object in the 3D animation environment; generating an animation path in the 3D animation environment, the animation path containing at least first and second waypoints; identifying a first section of the animation path connected to the first waypoint; responsive to identifying the first section, generating a first animation of the animation object in which the animation object moves along the first section of the path at the movement speed; identifying a spatial gap in the animation path between the first and second waypoints; responsive to identifying the spatial gap, generating a second animation of the animation object in which the animation object moves, by keyframe animation, from the first
...
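The core rule summarised above, constant-speed motion along path sections but keyframe animation across a spatial gap between waypoints, can be sketched as a traversal-time calculation. The section encoding and the fixed gap duration are assumptions for illustration:

```python
# Time to traverse a mixed path: "path" sections are covered at a
# constant speed, while "gap" sections are crossed by a keyframe
# animation whose duration does not depend on the gap's length.

def traversal_time(sections, speed, gap_keyframe_time):
    """Total traversal time for a list of ("path"|"gap", length) sections."""
    total = 0.0
    for kind, length in sections:
        if kind == "path":
            total += length / speed
        else:                             # gap: keyframed, fixed duration
            total += gap_keyframe_time
    return total

t = traversal_time([("path", 10.0), ("gap", 3.0), ("path", 4.0)],
                   speed=2.0, gap_keyframe_time=1.5)
```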

Подробнее
20-06-2013 дата публикации

System and method for creating motion blur

Номер: US20130155065A1
Автор: Avi I. Bleiweiss
Принадлежит: Advanced Micro Devices Inc

An embedded, programmable motion blur system and method is described. Embodiments include receiving a plurality of vertices in a graphics processing unit (GPU), displacing at least one vertex, receiving a primitive defined by at least one of the displaced vertices, and generating a plurality of primitive samples from the primitive. The receiving of a plurality of vertices, the displacing, the receiving a primitive, and the generating are all performed prior to rendering of the scene. The system includes a central processing unit (CPU), a memory unit coupled to the CPU, and at least one programmable GPU. The GPU includes a vertex shader and a geometry shader programmable to perform geometry amplification and generate a plurality of primitive samples, both of these performed before the scene is rendered.
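A CPU-side toy version of the amplification step described above (one primitive expanded into several time samples before the scene is rendered) might look as follows; the sample count and linear interpolation are assumptions, and a real implementation would run in the geometry shader:

```python
# Expand one primitive into n time samples between its start (t0) and
# end (t1) positions; a rasterizer could then blend the samples
# (e.g. with decreasing opacity) to suggest motion.

def primitive_samples(verts_t0, verts_t1, n):
    """Return n copies of the primitive, linearly interpolated in time."""
    samples = []
    for i in range(n):
        a = i / (n - 1) if n > 1 else 0.0
        samples.append([tuple(p + a * (q - p) for p, q in zip(v0, v1))
                        for v0, v1 in zip(verts_t0, verts_t1)])
    return samples

tri0 = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
tri1 = [(2.0, 0.0), (3.0, 0.0), (2.0, 1.0)]   # triangle moved +2 in x
samples = primitive_samples(tri0, tri1, 3)
```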

Подробнее
11-07-2013 дата публикации

AVATAR EYE CONTROL IN A MULTI-USER ANIMATION ENVIRONMENT

Номер: US20130176306A1
Принадлежит:

In a multi-participant modeled virtual reality environment, avatars are modeled beings that include moveable eyes creating the impression of an apparent gaze direction. Control of eye movement may be performed autonomously using software to select and prioritize targets in a visual field. Sequence and duration of apparent gaze may then be controlled using automatically determined priorities. Optionally, user preferences for object characteristics may be factored into determining priority of apparent gaze. Resulting modeled avatars are rendered on client displays to provide more lifelike and interesting avatar depictions with shifting gaze directions. 1.-23. (canceled) 24. A method for controlling a gaze orientation of an avatar, the method comprising: modeling a digital representation of an avatar including at least one modeled eye and a scene in a computer memory; determining a field of view for the modeled eye of the avatar, wherein the field of view encompasses visual targets in the scene; and directing a gaze orientation of the modeled eye to different selected visual targets in the field of view in an automated sequence of changing gaze directions determined at least in part by respective attractiveness values stored in the computer memory for each of the visual targets. 25. The method of claim 24, further comprising determining the respective attractiveness values based at least in part on predefined user preferences for specified characteristics of prospective visual targets. 26. The method of claim 24, further comprising outputting data configured to cause a client computer to display a rendered view of the scene, avatar and modeled eye. 27. The method of claim 24, further comprising determining the respective attractiveness values based at least in part on motion of the visual targets within the field of view. 28.
The method of claim 24, further comprising determining the respective attractiveness values based at least in part on a level of simulated ...
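The priority idea in the abstract, gaze sequence and duration driven by per-target attractiveness values, can be sketched by allotting dwell time in proportion to score; the scoring and the proportional rule are assumptions for illustration:

```python
# Build a gaze schedule: targets with higher attractiveness values are
# looked at first and for proportionally longer.

def gaze_schedule(targets, total_time):
    """Return (name, dwell_time) pairs, highest attractiveness first."""
    total = sum(score for _, score in targets)
    ranked = sorted(targets, key=lambda t: t[1], reverse=True)
    return [(name, total_time * score / total) for name, score in ranked]

schedule = gaze_schedule([("door", 1.0), ("avatar_b", 3.0), ("window", 1.0)],
                         total_time=10.0)
```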

Подробнее
18-07-2013 дата публикации

IMAGE PROCESSING SYSTEM, IMAGE STORAGE DEVICE, AND MEDICAL IMAGE DIAGNOSTIC APPARATUS

Номер: US20130181978A1
Принадлежит:

An image processing system according to an embodiment includes an image storage device and a playing control device. The image storage device stores four-dimensional data that is a sequential volume data group chronologically acquired and control information for controlling playing of the four-dimensional data. The playing control device acquires the sequential volume data group and the control information from the image storage device and successively plays the sequential volume data group according to the control information. The control information contains identification information that identifies that data is volume data that belongs to the sequential volume data group acquired chronologically and identification information that identifies that volume data that is used as a reference for successive playing from among the sequential volume data group is reference volume data. 1. An image processing system comprising: an image storage device configured to store four-dimensional data and control information for controlling playing of the four-dimensional data, the four-dimensional data being a sequential volume data group that is acquired chronologically; and a playing control device configured to acquire the sequential volume data group and the control information from the image storage device and successively play the sequential volume data group according to the control information, wherein the control information contains identification information that identifies that data is volume data that belongs to the sequential volume data group acquired chronologically and identification information that identifies that volume data that is used as a reference for successive playing from among the sequential volume data group is reference volume data. 2. The image processing system according to claim 1, wherein the control information is described in at least one of a standard tag and a private tag of additional information that is added to the four-dimensional data. 3.
...

Подробнее
25-07-2013 дата публикации

METHOD AND SYSTEM FOR INTERACTIVE SIMULATION OF MATERIALS AND MODELS

Номер: US20130187930A1
Автор: Millman Alan
Принадлежит:

A method and system for drawing, displaying, editing, animating, simulating and interacting with one or more virtual polygonal, spline, volumetric models, three-dimensional visual models or robotic models. The method and system provide flexible simulation, the ability to combine rigid and flexible simulation on plural portions of a model, rendering of haptic forces and force-feedback to a user. 2. The method of claim 1, wherein the connecting step includes: physically and rigidly connecting and co-locating the plurality of hardware components with respect to each other and to the network device and the display on the network device at fixed and specific distances and orientations using a rigid harness; and registering the plurality of hardware components with the application on the network device. 3. The method of claim 1, wherein the connection step includes: connecting dynamically in real-time the plurality of hardware devices with the network device and the display on the network device with the application by dynamically tracking with one or more device trackers using six or more degrees of freedom, specific distances, orientations and temporal locations of the plurality of hardware devices; and thereby allowing automatic and dynamic co-location and registration and re-registration of the plurality of hardware devices in real-time during a simulation session by the application on the network device. 4. The method of claim 1, wherein the defining step includes drawing, editing, animating, and interacting with one or more non-physically-based polygonal models, volumetric models, spline models, non-uniform rational basis spline (NURBS) models or subdivision surface models in a physically-based, realistic manner with haptic feedback via the application on the network device. 5.
The method of claim 4, wherein the polygonal models, volumetric models, or subdivision surface models include physically-based spine, hull, ...

Подробнее
29-08-2013 дата публикации

METHOD OF DISPLAYING READABLE INFORMATION ON A DIGITAL DISPLAY

Номер: US20130222378A1
Автор: Koivusalo Esko
Принадлежит: iCergo Oy

A method and a programme by which an animated impression of a three-dimensional information space is created, bringing forth a reading architecture on digital screens in which the presented readable information appears three-dimensionally and dynamically in the reader's field of vision. 1. A method for presenting readable information on a digital display, comprising: programmatically and animatedly creating an impression where readable information appears three-dimensionally and dynamically on the digital display, in which method an object comprising a word of readable information to be perceived while reading moves in depth such that a last letter, sign or symbol of the object appears in a field of vision of a reader as smallest and as last in a row of letters perceived in depth. 2. A method according to claim 1, comprising adjusting the readable information so that it appears in the field of vision according to a profile of the reader. 3. A method according to claim 2, comprising adjusting the readable information based on an extent, preciseness and speed of in-depth perception, and on a horizontal and vertical preciseness and speed of perception, and on a horizontal and vertical range of the field of vision, and also on a speed of eye movements. 4. A method according to claim 1, wherein the readable information comprises fonts, letters, signs and symbols connected with multi-dimensional reading, and wherein the object perceived while reading consists of letters, signs and symbols intertwined, also in depth, inside each other in such a way that the reader perceives them without moving his/her glance. 5. A method according to claim 1, wherein the digital display comprises one or more of mobile phones, television sets, monitors, billboards, traffic signs, and large roadside signs. 6. A method according to claim 1, comprising controlling the ...

Подробнее
29-08-2013 дата публикации

ANIMATING A MOVING OBJECT IN A THREE-DIMENSIONAL COORDINATE SYSTEM

Номер: US20130222380A1

A method for modeling and animating an object trajectory in three-dimensional (3D) space. The trajectory includes at least one course which represents a 3D model mesh. A course includes at least one segment which is a display unit of the 3D model mesh. A segment includes two 3D points. Multiple vertices are generated for a first 3D point of the segment to specify a plane such that a normal vector of the specified plane is parallel to a vector directed from the first 3D point of the segment to a second 3D point of the segment. The generated vertices are added to the 3D model mesh so that the generated vertices can be subsequently displayed as an extension of the 3D model mesh. 1. A method for animating a moving object in a three-dimensional coordinate system, wherein the coordinate system comprises a set of points and a set of vectors, wherein a first vector of the set of vectors extends from a first vector base point of the set of points to a first vector tip point of the set of points, wherein the moving object comprises an object point of the set of points, wherein an animated motion of the object point is described by a trajectory curve, wherein the trajectory curve comprises a first trajectory point of the set of points and a second trajectory point of the set of points, wherein the first trajectory point is associated with a three-dimensional wireframe mesh that comprises a first vertex point of the set of points, a second vertex point of the set of points, and a third vertex point of the set of points, and wherein the first, second, and third vertex points are noncollinear, the method comprising: a processor of a computer system determining a direction of the animated motion along the trajectory curve between the first trajectory point and the second trajectory point; the processor revising the mesh by performing a sequence of mathematical vector operations upon the first trajectory point and upon the second trajectory point, and wherein the revising ...
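The vertex-generation step described above (emit vertices in a plane through the first segment point whose normal is parallel to the segment vector) can be sketched as a ring construction around the segment direction; the ring radius and vertex count are assumptions for illustration:

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(a):
    n = math.sqrt(sum(c * c for c in a))
    return tuple(c / n for c in a)

def ring_vertices(p0, p1, radius=1.0, count=4):
    """Vertices of a ring through p0 whose plane normal is parallel to p1 - p0."""
    d = normalize(tuple(b - a for a, b in zip(p0, p1)))   # unit segment direction
    # pick a helper vector not parallel to d, build two in-plane axes
    up = (0.0, 0.0, 1.0) if abs(d[2]) < 0.9 else (1.0, 0.0, 0.0)
    u = normalize(cross(d, up))
    v = cross(d, u)
    return [tuple(p + radius * (math.cos(a) * ui + math.sin(a) * vi)
                  for p, ui, vi in zip(p0, u, v))
            for a in (2 * math.pi * i / count for i in range(count))]

# Segment along +z: the generated ring lies in the z = 0 plane through p0.
verts = ring_vertices((0.0, 0.0, 0.0), (0.0, 0.0, 5.0))
```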

Подробнее
05-09-2013 дата публикации

RENDERING APPARATUS AND METHOD

Номер: US20130229410A1
Принадлежит: SAMSUNG ELECTRONICS CO., LTD.

A rendering apparatus and method are provided. A plurality of nodes of interface data, which are connected hierarchically and indicate a plurality of selectable items, are analyzed, and the interface data is rendered based on a result of the analysis. Consequently, a creator of interface data to be rendered can expect a time-to-market reduction when creating interface data described in a standardized format. 1. A rendering apparatus comprising: an analysis unit to analyze a plurality of nodes of interface data that are described using the plurality of nodes connected hierarchically and indicate a plurality of selectable items; and a rendering unit to render the interface data based on a result of the analysis, wherein the analysis unit analyzes a node corresponding to a latest rendering result and a result of manipulation with respect to the rendering apparatus, and wherein the analysis unit determines a state corresponding to the latest rendering result and a result of manipulation with respect to the rendering apparatus and analyzes the node corresponding to the determined state, and wherein the analysis unit checks if the determined state is a predetermined state and analyzes the node corresponding to the determined state in response to a result of the check. 2. The rendering apparatus of claim 1, further comprising a node update unit incorporating state transition tables (STTs) of at least one of the nodes that is subordinate to a predetermined one of the nodes, into the predetermined node. 3. The rendering apparatus of claim 1, wherein a root node from among the plurality of nodes is a bindable node. 4. The rendering apparatus of claim 1, wherein the analysis unit determines a node to be activated among the at least one node that is subordinate to the predetermined node based on bind information of the predetermined node and analyzes the determined node. 5.
The rendering apparatus of claim 1, wherein when a root node is ...

Подробнее
12-09-2013 дата публикации

Method of facial image reproduction and related device

Номер: US20130236102A1
Принадлежит: CyberLink Corp

To modify a facial feature region in a video bitstream, the video bitstream is received and a feature region is extracted from the video bitstream. An audio characteristic, such as frequency, rhythm, or tempo is retrieved from an audio bitstream, and the feature region is modified according to the audio characteristic to generate a modified image. The modified image is outputted.

Подробнее
26-09-2013 дата публикации

System and method for generating bilinear spatiotemporal basis models

Номер: US20130249905A1
Принадлежит: Disney Enterprises Inc

Techniques are disclosed for generating a bilinear spatiotemporal basis model. A method includes the steps of predefining a trajectory basis for the bilinear spatiotemporal basis model, receiving three-dimensional spatiotemporal data for a training sequence, estimating a shape basis for the bilinear spatiotemporal basis model using the three-dimensional spatiotemporal data, and computing coefficients for the bilinear spatiotemporal basis model using the trajectory basis and the shape basis.
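The factorisation behind the bilinear model above can be sketched with toy matrices: spatiotemporal data M (frames × points) is represented as M ≈ T C Sᵀ, with an orthonormal trajectory basis T and a shape basis S, so the coefficients follow as C = Tᵀ M S. The 2×2 bases below are illustrative stand-ins, not the paper's DCT/PCA bases:

```python
import math

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

r = math.sqrt(0.5)
T = [[r, -r], [r, r]]                       # predefined orthonormal trajectory basis
S = [[1.0, 0.0], [0.0, 1.0]]                # estimated shape basis (toy: identity)
M = [[2.0, 3.0], [4.0, 5.0]]                # spatiotemporal data (frames x points)
C = matmul(matmul(transpose(T), M), S)      # bilinear coefficients C = T^T M S
M_rec = matmul(matmul(T, C), transpose(S))  # reconstruction T C S^T recovers M
```

With orthonormal bases the reconstruction is exact; with truncated bases it becomes a low-rank approximation of M.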

10-10-2013 publication date

USER INTERFACE FOR CONTROLLING THREE-DIMENSIONAL ANIMATION OF AN OBJECT

Number: US20130265316A1
Assignee:

A user can control the animation of an object via an interface that includes a control area and a user-manipulable control element. The control area includes an ellipse. The user-manipulable control element includes a three-dimensional arrow with a straight body, a three-dimensional arrow with a curved body, or a sphere. In one embodiment, the interface includes a virtual trackball that is used to manipulate the user-manipulable control element. 1. A method comprising: displaying a user interface comprising: a control area comprising an ellipse and a slider; a representation of a three-dimensional vector having a variable orientation that specifies the direction, wherein the vector representation is located mostly within the ellipse; a first user-manipulable control element that comprises a virtual trackball, wherein the first user-manipulable control element is located within the ellipse; and a second user-manipulable control element that specifies the speed, wherein the second user-manipulable control element is located along the slider; receiving a first input via the first user-manipulable control element, the first input comprising mousing down within the virtual trackball and continuing to mouse down for a period of time; responsive to receiving the first input, displaying during the period of time a spherical mesh that represents the virtual trackball; receiving a second input via the first user-manipulable control element, the second input comprising spinning the virtual trackball to change the orientation of the vector representation while continuing to mouse down within the virtual trackball; receiving a third input via the first user-manipulable control element, the third input comprising mousing up; responsive to receiving the third input, removing the spherical mesh from display; receiving input via the second user-manipulable control element, the input comprising dragging the second user-manipulable control element to set the speed; and animating the object based on the
...

07-11-2013 publication date

Coordinated 2-Dimensional and 3-Dimensional Graphics Processing

Number: US20130293537A1
Assignee: CISCO TECHNOLOGY INC.

A data processing system for graphics processing, including a scene manager. The scene manager includes a scene loader for receiving a description of 3-dimensional objects in a scene from a 3-dimensional modelling tool. The description includes first 2-dimensional frame data. The scene manager has a texture image modification unit for receiving second 2-dimensional frame data from a 2-dimensional rendering engine, and for replacing the first frame data by the second frame data. The scene manager has an animation scheduler for scheduling and monitoring an animation of the scene. The system includes a display manager operative to invoke the scene manager to render output frames in a display buffer, and a scene renderer configured for applying the 2-dimensional frame data to the 3-dimensional objects to produce textured 3-dimensional objects in the display buffer and outputting the textured objects in the animation. Related apparatus and methods are also described. 1.
A data processing system for graphics processing , comprising:a scene manager, comprising:a scene loader operative to receive a scene description obtained after processing a design in a 3-dimensional authoring tool, the scene description comprising: a description of 3 dimensional objects in the scene, first 2-dimensional frame data for the 3-dimensional objects; and guidance information for replacing the first 2-dimensional frame data;a texture image modification unit, operative to receive second 2-dimensional frame data rendered in a 2-dimensional frame buffer by a 2-dimensional rendering engine, the second 2-dimensional frame data being obtained after processing a design in a 2-dimensional authoring tool, and to replace the first 2-dimensional frame data by the second 2-dimensional frame data by associating coordinates of the second 2-dimensional frame data in the frame buffer to the 3-dimensional objects according to the guidance information; andan animation ...

14-11-2013 publication date

IMAGE GENERATING DEVICE, IMAGE GENERATING METHOD, AND INFORMATION STORAGE MEDIUM

Number: US20130300749A1
Author: Harada Hidehisa
Assignee: SONY COMPUTER ENTERTAINMENT INC.

Provided is an information storage medium having stored thereon a program for causing a computer to execute processing for: acquiring a tentative time interval between a frame for generating an image and a previous frame; acquiring, when a condition associated with action data indicating a posture of an object in accordance with time is satisfied, the posture of the object at a time point at which a time interval shorter than the tentative time interval has elapsed since the previous frame based on the action data; and rendering the image indicating the acquired posture of the object. 1. A computer-readable non-transitory storage medium having stored thereon a program for causing a computer to execute processing for: acquiring a tentative time interval between a frame for generating an image and a previous frame; acquiring, when a condition associated with action data indicating a posture of an object in accordance with time is satisfied, the posture of the object at a time point at which a time interval shorter than the tentative time interval has elapsed since the previous frame based on the action data; and rendering the image indicating the acquired posture of the object. 2. The computer-readable non-transitory storage medium having stored thereon a program according to claim 1, wherein the processing for acquiring the posture of the object comprises acquiring, when the tentative time interval comprises a time point associated with the action data, the posture of the object at the associated time point. 3.
The computer-readable non-transitory storage medium having stored thereon a program according to claim 2, wherein: the action data indicates the posture of the object in accordance with time during a given period; and the processing for acquiring the posture of the object comprises acquiring, when the tentative time interval comprises a time point of a head of the given period within the action data, the posture of the object at the time point of ...
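The timing rule in the abstract can be sketched simply: if the tentative interval since the previous frame would skip past a time point that the action data's condition cares about, sample the posture at that earlier time point instead. The names below are invented for illustration, not taken from the patent:

```python
def sample_time(prev_time, tentative_dt, action_time_points):
    """Return the time at which to evaluate the posture for the next frame.

    If the tentative interval contains a time point associated with the
    action data, use the earliest such point (a shorter interval than the
    tentative one); otherwise use the full tentative interval.
    """
    tentative = prev_time + tentative_dt
    hits = [t for t in action_time_points if prev_time < t <= tentative]
    return min(hits) if hits else tentative
```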

14-11-2013 publication date

System and Method of Streaming 3-D Wireframe Animations

Number: US20130300824A1
Assignee: AT&T Corp, AT&T Intellectual Property II LP

Optimal resilience to errors in packetized streaming 3-D wireframe animation is achieved by partitioning the stream into layers and applying unequal error correction coding to each layer independently to maintain the same overall bitrate. The unequal error protection scheme for each of the layers combined with error concealment at the receiver achieves graceful degradation of streamed animation at higher packet loss rates than approaches that do not account for subjective parameters such as visual smoothness.
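The scheme in the abstract partitions the stream into layers and protects them unequally while keeping the overall bitrate fixed. A toy sketch of splitting a fixed parity budget across layers; the weights and budget figures are invented for illustration, and a real system would apply an actual FEC code (e.g. Reed-Solomon) per layer:

```python
def allocate_parity(total_parity_bits, weights):
    """Split a fixed parity budget across layers, more to important layers.

    Keeping the total constant models the patent's constraint that the
    overall bitrate is unchanged; only the protection per layer varies.
    """
    total_w = sum(weights)
    alloc = [total_parity_bits * w // total_w for w in weights]
    alloc[0] += total_parity_bits - sum(alloc)  # absorb rounding drift
    return alloc

# base layer (most visually important) gets the largest share
layer_parity = allocate_parity(6000, weights=[3, 2, 1])
```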

05-12-2013 publication date

METHOD AND SYSTEM FOR UTILIZING PRE-EXISTING IMAGE LAYERS OF A TWO-DIMENSIONAL IMAGE TO CREATE A STEREOSCOPIC IMAGE

Number: US20130321408A1
Assignee: Disney Enterprises, Inc.

Implementations of the present invention involve methods and systems for converting a 2-D multimedia image to a 3-D multimedia image by utilizing a plurality of layers of the 2-D image. The layers may comprise one or more portions of the 2-D image and may be digitized and stored in a computer-readable database. The layers may be reproduced as a corresponding left eye and right eye version of the layer, including a pixel offset corresponding to a desired 3-D effect for each layer of the image. The combined left eye layers and right eye layers may form the composite right eye and composite left eye images for a single 3-D multimedia image. Further, this process may be applied to each frame of an animated feature film to convert the film from 2-D to 3-D. 1. A method for utilizing stored two dimensional frames to generate a stereoscopic frame comprising: receiving a two dimensional frame comprising one or more layers including a portion of the two dimensional frame; determining a pixel offset for one or more of the layers, the pixel offset corresponding to a perceived depth for the layer in a stereoscopic frame; generating a left eye layer and a right eye layer for the one or more layers; and positioning the left eye layer relative to the right eye layer for the one or more layers based on the determined pixel offset to perceptually position the left eye layer and the right eye layer in the stereoscopic frame. 2. The method of further comprising: combining the one or more generated left eye layers into a composite left eye frame; and combining the one or more generated right eye layers into a composite right eye frame, wherein the composite left eye frame and the composite right eye frame comprise the stereoscopic frame. 3. The method of wherein the positioning operation comprises: shifting one or more pixels of the left eye layer by the determined pixel offset. 4.
The method of wherein the positioning operation comprises:shifting one or more pixels of the right eye layer by the ...
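The layer-offset idea above can be sketched in a few lines: each layer is shifted horizontally in opposite directions for the two eyes by half its pixel offset, and the shifted layers are later composited into the left-eye and right-eye frames. Character grids stand in for image layers here; the offsets and layer contents are invented for the demo:

```python
def shift_row(row, dx, fill="."):
    """Shift one row of a layer horizontally by dx pixels, padding with fill."""
    if dx > 0:
        return [fill] * dx + row[:-dx]
    if dx < 0:
        return row[-dx:] + [fill] * (-dx)
    return list(row)

def make_stereo(layer, offset):
    """Produce left-eye and right-eye versions of a layer from its pixel offset."""
    left = [shift_row(r, offset // 2) for r in layer]
    right = [shift_row(r, -(offset // 2)) for r in layer]
    return left, right
```

A larger offset yields a larger horizontal disparity between the eye images, which the viewer perceives as the layer sitting nearer to (or farther from) the screen plane.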

16-01-2014 publication date

System and method for implementation of three dimensional (3D) technologies

Number: US20140015832A1
Assignee: Individual

The system and method of the present invention provide video reconstruction to reconstruct animated three-dimensional scenes from a number of videos received from cameras that observe the scene from different positions and different angles. Multiple video footages of objects are taken from different positions. The video footages are then filtered to remove noise (results of light reflection and shadows). The system then restores the 3D model of the object and the texture of the object. The system also restores positions of dynamic cameras. Finally, the system maps the texture to the 3D model.

06-02-2014 publication date

TECHNIQUES FOR SMOOTHING SCRIPTED STEREO CURVES FOR STEREOSCOPIC COMPUTER ANIMATION

Number: US20140035903A1
Assignee: DreamWorks Animation LLC

A computer-implemented method for smoothing a stereo parameter for a computer-animated film sequence. A timeline for the film sequence is obtained, the timeline comprising a plurality of time entries. A stereo parameter distribution is obtained, wherein the stereo parameter distribution comprises one stereo parameter value for at least two time entries of the plurality of time entries, and wherein the stereo parameter value corresponds to a stereo setting associated with a pair of stereoscopic cameras configured to produce a stereoscopic image of the computer-animated film sequence. Depending on a statistical measurement of the stereo parameter distribution, either a static scene parameter is calculated, or a set of smoothed parameter values is calculated. 1. A computer-implemented method for smoothing a stereo parameter for a computer-animated film sequence, the method comprising: obtaining a timeline for the film sequence, the timeline comprising a plurality of time entries; obtaining a stereo parameter distribution, wherein the stereo parameter distribution comprises one stereo parameter value for at least two time entries of the plurality of time entries, and wherein the stereo parameter value corresponds to a stereo setting associated with a pair of stereoscopic cameras configured to produce a stereoscopic image of the computer-animated film sequence; calculating a statistical measurement of the stereo parameter distribution; if the statistical measurement is less than a threshold value: calculating a static scene parameter based on the stereo parameter values of the parameter distribution, and storing, in a computer memory, the static scene parameter as the stereo parameter value for each of the at least two time entries; and calculating a set of smoothed parameter values based on the parameter distribution and a smoothing function, and storing, in the computer memory, the set of smoothed parameter values as the stereo parameter value for each of the at ...
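The branch the abstract describes (a single static value when the distribution barely varies, a smoothed curve otherwise) can be sketched as follows. The standard-deviation measurement and the 3-tap moving-average smoother are assumptions for the sketch, not the patent's exact choices:

```python
def static_or_smooth(values, threshold):
    """Collapse a near-constant parameter curve to one static value,
    otherwise return a smoothed version of the curve."""
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    if std < threshold:
        return [mean] * len(values)          # static scene parameter
    # simple 3-tap moving average as the smoothing function
    out = []
    for i in range(len(values)):
        window = values[max(0, i - 1):i + 2]
        out.append(sum(window) / len(window))
    return out
```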

06-02-2014 publication date

CONSTRAINT EVALUATION IN DIRECTED ACYCLIC GRAPHS

Number: US20140035908A1
Assignee: DreamWorks Animation LLC

Systems and processes are described below relating to evaluating a dependency graph to render three-dimensional (3D) graphics using constraints. Two virtual 3D objects are accessed in a virtual 3D space. A constraint relationship request is received, which identifies the first object as a parent and the second object as a child. The technique verifies whether the graphs of the objects are compatible for being constrained to one another. The first object is evaluated to determine its translation, rotation, and scale. The second object is similarly evaluated based on the translation, rotation, and scale of the first object. An image is rendered depicting at least a portion of the first virtual 3D object and at least a portion of the second virtual 3D object. 1. A method for evaluating a constraint system for use in rendering three-dimensional (3D) graphics , the method comprising:accessing a first directed acyclic graph (DAG) representing a first virtual 3D object;accessing a second DAG representing a second virtual 3D object;receiving a constraint relationship request, the constraint relationship request identifying the first DAG as a constraint parent and the second DAG as a constraint child;verifying whether the first DAG is compatible for constraining to the second DAG;connecting, in response to the first DAG and second DAG being compatible for constraining, an output of the first DAG to an input of the second DAG;evaluating the first DAG to determine a first output value at the output of the first DAG;evaluating, in response to determining that the first output value affects the evaluation of the second DAG, the second DAG based on the first output value received at the input of the second DAG from the output of the first DAG; andrendering, after determining that the evaluation of the first DAG and second DAG is complete, an image depicting at least a portion of the first virtual 3D object and at least a portion of the second virtual 3D object.2. 
The method of ...

06-02-2014 publication date

Techniques for producing baseline stereo parameters for stereoscopic computer animation

Number: US20140035918A1
Assignee: DreamWorks Animation LLC

Bounded-parallax constraints are determined for the placement of a pair of stereoscopic cameras within a computer-generated scene. A minimum scene depth is calculated based on the distance from the pair of cameras to a nearest point of interest in the computer-generated scene. A near-parallax value is also calculated based on the focal length and the minimum scene depth. Calculating the near-parallax value includes selecting a baseline stereo-setting entry from a set of stereo-setting entries, each stereo-setting entry of the set of baseline stereo-setting entries includes a recommended scene depth, a recommended focal length, and a recommended near-parallax value. For the selected baseline stereo-setting entry: the recommended scene depth corresponds to the minimum scene depth, and the recommended focal length corresponds to the focal length. The near-parallax value and far-parallax value are stored as the bounded-parallax constraints for the placement of the pair of stereoscopic cameras.
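Selecting a baseline stereo-setting entry, as described above, amounts to a lookup over entries of (recommended scene depth, recommended focal length, recommended near-parallax). A sketch using a nearest-match rule over an invented table; the table values and the distance metric are illustrative assumptions, not the patent's data:

```python
# (recommended scene depth, recommended focal length, recommended near-parallax)
STEREO_SETTINGS = [
    (1.0, 35.0, 12.0),
    (5.0, 35.0, 6.0),
    (5.0, 50.0, 8.0),
    (20.0, 50.0, 3.0),
]

def near_parallax(min_scene_depth, focal_length):
    """Pick the entry whose recommended depth and focal length best match
    the measured minimum scene depth and camera focal length."""
    best = min(
        STEREO_SETTINGS,
        key=lambda e: (e[0] - min_scene_depth) ** 2 + (e[1] - focal_length) ** 2,
    )
    return best[2]
```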

06-02-2014 publication date

OBJECT DISPLAY DEVICE

Number: US20140035933A1
Author: SAOTOME Hiroaki
Assignee:

A polygon of a skirt is set beforehand in association with a skirt bone such that the polygon is at an angle of β to the skirt bone. The movement of the skirt bone is controlled according to the movement of a thigh bone. When a character walks or runs, if the thigh bone is inclined at an angle of α, each of the skirt bone and the polygon is inclined at an angle of α in a direction perpendicular to the ground surface. Therefore, since the thigh polygon and the skirt polygon do not intersect each other, it is possible to prevent a thigh portion of the character from penetrating the skirt unnaturally. 1. An object display device which displays a first object and a second object and in which positions of polygons are set in association with each of a position of a first skeleton of the first object and a position of a second skeleton of the second object, the object display device comprising: a first skeleton activator that activates the first skeleton; a second skeleton activator that activates the second skeleton according to an activation of the first skeleton activated by the first skeleton activator; a position calculator that calculates a position of the polygon based on the position of the first skeleton and the position of the second skeleton; and an object displayer that displays the first object and the second object on a display screen by drawing the polygon at the position calculated by the position calculator. 2. The object display device according to claim 1, wherein the first skeleton and the second skeleton are set to be substantially parallel to each other, a position of a first polygon associated with the position of the first skeleton is set to be at a position that is substantially parallel to the first skeleton, and a position of a second polygon associated with the position of the second skeleton is set to be at a position that is at a predetermined angle to the second skeleton. 3.
The object display device according to claim 1 , further comprising: an activation method ...

13-03-2014 publication date

IMAGE ENHANCEMENT APPARATUS

Number: US20140071137A1
Assignee: Nokia Corporation

A method comprising: analysing at least two images to determine at least one object mutual to the at least two images, the object having a periodicity of motion; generating an animated image based on the at least two images, wherein the at least one object is animated; determining at least one audio signal associated with the at least one object; and combining the at least one audio signal with the animated image to generate an audio enabled animated image. 1. An apparatus comprising at least one processor and at least one memory including computer code for one or more programs , the at least one memory and the computer code configured to with the at least one processor cause the apparatus at least to:analyse at least two images to determine at least one object mutual to the at least two images, the object having a periodicity of motion;generate an animated image based on the at least two images, wherein the at least one object is animated;determine at least one audio signal associated with the at least one object; andcombine the at least one audio signal with the animated image to generate an audio enabled animated image.2. The apparatus as claimed in claim 1 , wherein determining the at least one audio signal associated with the at least one object causes the apparatus to:receive the at least one audio signal; andfilter the at least one audio signal.3. The apparatus as claimed in claim 2 , wherein receiving the at least one audio signal causes the apparatus to:receive at least a portion of the at least one audio signal from at least one microphone at substantially the same time as the captured at least two images.4. 
The apparatus as claimed in claim 2 , wherein filtering the at least one audio signal causes the apparatus to:determine at least one foreground sound source of the at least one audio signal; andfilter the at least one audio signal to remove the at least one foreground sound source from the at least one audio signal to generate an ambience audio signal ...

20-03-2014 publication date

IMAGE ENHANCEMENT APPARATUS AND METHOD

Number: US20140078398A1
Assignee: Nokia Corporation

A method comprising: generating at least two frames from a video, wherein the at least two frames are configured to provide an animated image; determining at least one object based on the at least two frames, the at least one object having a periodicity of motion with respect to the at least two frames; determining at least one audio signal component for associating with the animated image based on a signal characteristic of at least one audio signal; and combining the at least one object and the at least one audio signal component wherein the animated image is substantially synchronised with the at least one signal component based on the signal characteristic. 1. An apparatus comprising at least one processor and at least one memory including computer code for one or more programs , the at least one memory and the computer code configured to with the at least one processor cause the apparatus to at least:generate at least two frames, wherein the at least two frames are configured to provide an animated image;determine at least one object based on the at least two frames, the at least one object having a motion in the animated image;determine at least one audio signal for associating with the animated image based on a signal characteristic of the at least one audio signal; andcombine the animated image and the at least one audio signal wherein the animated image is substantially synchronised with the at least one audio signal.2. 
The apparatus as claimed in claim 1 , wherein determining the at least one audio signal for associating with the animated image causes the apparatus at least to:analyse the at least one audio signal to determine the signal characteristic;determine at least one audio clip from the at least one audio signal based on the signal characteristic;analyse the at least one audio clip to determine energy distributions between two successive beat instants of the at least one audio clip; andselect at least one of the at least one audio clip based on the ...

27-03-2014 publication date

SEAMLESS FRACTURE IN A PRODUCTION PIPELINE

Number: US20140085312A1
Assignee: DreamWorks Animation LLC

Systems and processes for rendering fractures in an object are provided. In one example, a surface representation of an object may be converted into a volumetric representation of the object. The volumetric representation of the object may be divided into volumetric representations of two or more fragments. The volumetric representations of the two or more fragments may be converted into surface representations of the two or more fragments. Additional information associated with attributes of adjacent fragments may be used to convert the volumetric representations of the two or more fragments into surface representations of the two or more fragments. The surface representations of the two or more fragments may be displayed. 1.-8. (canceled) 9. A computer-enabled method for rendering a first fragment of a reconstructed object, the reconstructed object comprising a plurality of fragments, wherein the reconstructed object is for animating an object to be fractured into the plurality of fragments, the method comprising: evaluating at least one characteristic of a surface representation of the first fragment using data associated with a second fragment of the plurality of fragments; and causing a display of the surface representation of the first fragment using the evaluated at least one characteristic. 10. The computer-enabled method of claim 9, wherein the display does not include a display of the data associated with the second fragment. 11. The computer-enabled method of claim 9, wherein: the surface representation of the first fragment comprises a plurality of surface tiles of the first fragment and a value associated with each surface tile of the plurality of surface tiles of the first fragment; and the data associated with the second fragment comprises a plurality of surface tiles of the second fragment and a value associated with each surface tile of the plurality of surface tiles of the second fragment. 12.
The computer-enabled method of claim 11 , wherein evaluating at ...

10-04-2014 publication date

Transitioning Between Top-Down Maps and Local Navigation of Reconstructed 3-D Scenes

Number: US20140098107A1
Assignee: MICROSOFT CORPORATION

Technologies are described herein for transitioning between a top-down map display of a reconstructed structure within a 3-D scene and an associated local-navigation display. An application transitions between the top-down map display and the local-navigation display by animating a view in a display window over a period of time while interpolating camera parameters from values representing a starting camera view to values representing an ending camera view. In one embodiment, the starting camera view is the top-down map display view and the ending camera view is the camera view associated with a target photograph. In another embodiment, the starting camera view is the camera view associated with a currently-viewed photograph in the local-navigation display and the ending camera view is the top-down map display. 1. A computer-readable storage medium having computer-executable instructions stored thereupon which, when executed by one or more computers, cause the one or more computers to: generate a 3-D point cloud representing one of a structure or a scene computed from a collection of photographs by: locating recognizable features that appear in at least two or more photographs in the collection of photographs, and calculating positions of the recognizable features in space using one of a location, a perspective or a visibility or obscurity of the recognizable features in the collection of photographs; display a top-down map display by projecting the positions of the recognizable features from the 3-D point cloud as points into an x-y plane; receive user input of a selected point in the x-y plane of the top-down map display; and provide an animated transition from the top-down map display to a local-navigation display showing a photograph from the collection of photographs, the photograph corresponding to the selected point displayed in the x-y plane of the top-down display. 2.
The computer-readable storage medium of claim 1, wherein the animated transition is ...
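The animated transition described above interpolates camera parameters over time between the starting and ending views. A linear-interpolation sketch; the parameter set (position, field of view) and the simple lerp are illustrative, and a production system would interpolate camera rotations more carefully (e.g. with quaternion slerp):

```python
def lerp(a, b, t):
    """Component-wise linear interpolation between two parameter tuples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def transition_frames(start, end, steps):
    """Interpolate every camera parameter from the start view to the end view."""
    frames = []
    for i in range(steps + 1):
        t = i / steps
        frames.append({k: lerp(start[k], end[k], t) for k in start})
    return frames

# invented example views: overhead map camera -> camera of a target photograph
top_down = {"position": (0.0, 100.0, 0.0), "fov": (90.0,)}
photo_view = {"position": (10.0, 2.0, 4.0), "fov": (45.0,)}
```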

07-01-2021 publication date

Remotely Controlling A System Using Video

Number: US20210000557A1
Assignee:

Systems and methods for remotely controlling a system using video are provided. A method in accordance with the present disclosure includes detecting a video signal of an auxiliary system at a video input, wherein the video signal includes images encoded with control information. The method also includes determining that the images included in the video signal include the control information. The method further includes extracting the control information from the images. Additionally, the method includes modifying operations of the system based on the control information. 1. A system comprising: a video input; a processor; and a computer-readable data storage device storing program instructions that, when executed by the processor, control the system to: detect a video signal of an auxiliary system at the video input, the video signal including one or more images encoded with control information for the system; determine that one or more images included in the video signal include the control information; extract the control information from the one or more images; and modify one or more operations of the system based on the control information. 2. The system of claim 1, wherein the control information controls a behavior of the system. 3. The system of claim 1, wherein the control information modifies a user interface of the system. 4. The system of claim 1, wherein the video input comprises a unidirectional video input configured to solely receive video signals or audiovisual signals. 5. The system of claim 1, wherein the control information is solely transmitted within the one or more images. 6. The system of claim 1, wherein the control information is embedded in the one or more images using steganography. 7. The system of claim 1, wherein the program instructions further control the system to modify a predetermined function of the system in response to detecting the video signal. 8.
The system of claim 7, wherein modifying the predetermined function comprises disabling ...
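Claim 6 mentions steganography as one way to embed the control information in the images. A common scheme (an assumption here, not necessarily the patent's) hides each control bit in the least significant bit of successive pixel values, so the carrier frame is visually unchanged while the receiver can still recover the bits:

```python
def embed(pixels, message_bits):
    """Hide message bits in the least significant bits of pixel values."""
    stego = [(p & ~1) | b for p, b in zip(pixels, message_bits)]
    return stego + pixels[len(message_bits):]

def extract(pixels, num_bits):
    """Recover the hidden bits from the first num_bits pixel values."""
    return [p & 1 for p in pixels[:num_bits]]
```

Each pixel value changes by at most 1, which is why the control information can ride inside an ordinary-looking video frame.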

07-01-2016 publication date

AUDIO MEDIA MOOD VISUALIZATION

Number: US20160004500A1
Assignee:

An audio media visualization method and system. The method includes receiving, by a computing processor, mood description data describing different human emotions/moods. The computer processor receives an audio file comprising audio data and generates a mood descriptor file comprising portions of the audio data associated with specified descriptions of the mood description data. The computer processor receives a mood tag library file comprising mood tags mapped to animated and/or still objects representing various emotions/moods and associates each animated and/or still object with an associated description. The computer processor synchronizes the animated and/or still objects with the portions of said audio data and presents the animated and/or still objects synchronized with the portions of said audio data. 1. A method comprising: generating, by a computer processor of a computing apparatus, a mood descriptor file comprising portions of an audio file comprising audio data presented by an author, wherein said audio data is associated with specified descriptions of mood description data describing different human emotions/moods; receiving, by said computer processor, a mood tag library file comprising mood tags describing and mapped to mood based annotations comprising animated video images representing various emotions/moods; associating, by said computer processor based on said mood tags, each animated video image of said animated video images with an associated description of said specified descriptions; synchronizing, by said computer processor based on results of said associating, said animated video images with said portions of said audio data associated with said specified descriptions; first presenting, by said computer processor to a listener, said animated video images synchronized with said portions of said audio data associated with said specified descriptions; second presenting, by said computer processor to said listener at various intervals during said first ...

07-01-2016 publication date

Method and system for generating motion sequence of animation, and computer-readable recording medium

Number: US20160005207A1
Author: Jae Woong Jeon
Assignee: Anipen Inc

One aspect of the present invention provides a method for generating a motion sequence of an animation, the method comprising the steps of: generating a line of movement indicating a path along which a character moves, with reference to a first user manipulation inputted with respect to a reference plane; specifying the line of movement, a section included in the line of movement and a point on the line of movement with reference to a second user manipulation inputted with respect to the reference plane; and generating a motion sequence which enables the character to carry out assigned motions by assigning a motion to the line of movement, the section or the point with reference to a third user manipulation inputted with respect to the reference plane when the character is located at the line of movement, the section or the point to which the motion is assigned.

07-01-2021 publication date

LIVE CUBE PREVIEW ANIMATION

Number: US20210005002A1
Assignee:

Rendering potential collisions between virtual objects and physical objects if animations are implemented. A method includes receiving user input selecting a virtual object to be animated. The method further includes receiving user input selecting an animation path for the virtual object. The method further includes receiving user input placing the virtual object to be animated and the animation path in an environment including physical objects. The method further includes, prior to animating the virtual object, displaying the virtual object and the animation path in a fashion that shows the interaction of the virtual object with one or more physical objects in the environment. 1. A computer system comprising: one or more processors; and one or more computer-readable media having stored thereon instructions that are executable by the one or more processors to configure the computer system to render potential collisions between virtual objects and physical objects if animations are implemented, including instructions that are executable to configure the computer system to perform at least the following: receiving user input selecting a virtual object to be animated; receiving user input selecting an animation path for the virtual object; receiving user input placing the virtual object to be animated and the animation path in an environment including physical objects; and prior to animating the virtual object, displaying the virtual object and the animation path in a fashion that shows the interaction of the virtual object with one or more physical objects in the environment. 2.
The computer system of claim 1, wherein the one or more computer-readable media further have stored thereon instructions that are executable by the one or more processors to configure the computer system to perform at least the following: detecting a potential collision between the virtual object and a physical object if the animation were performed; and as a result, highlighting the ...

07-01-2021 publication date

CREATION OF A SIMULATION SCENE FROM A SPECIFIED VIEW POINT

Number: US20210005026A1
Author: LESBORDES Rémi

A creation of simulation scenes from a specified view point is provided. The method consists in obtaining digital photographic images of the scene from the specified view point, detecting objects in the digital photographic images, extracting masks of the objects, associating a distance to the digital photographic image, and associating a lower distance to the objects. The scene thus created provides a photorealistic scene wherein 3D objects can be inserted. According to the distances of the 3D objects, they can be displayed behind or beyond the masks, but always behind the digital photographic images that define the background of the scene. 1. A method comprising: obtaining at least one digital photographic image of a view of a 3D real space; obtaining a position and an orientation of the digital photographic image relative to a specified view point; extracting from the at least one digital photographic image at least one mask representing at least one object having a specified position in the 3D real space in the at least one digital photographic image; associating to the mask an object distance between the object and the specified view point; associating to the digital photographic image a distance higher than the object distance; and creating a digital simulation scene comprising the digital photographic image and the mask. 2. The method of claim 1, wherein obtaining the orientation of the digital photographic image relative to the specified view point comprises: retrieving at least geographical coordinates of at least one fixed element from a database; detecting at least one position of the fixed element of said set in the digital photographic image; and detecting an orientation of the digital photographic image according to: geographical coordinates of said fixed element; geographical coordinates of the specified view point; and said position of said fixed element in the digital photographic image. 3. The method of claim 1, wherein obtaining the orientation of the digital ...

04-01-2018 publication date

Method and apparatus for generating graphic images

Number: US20180005428A1
Assignee: Individual

Methods and apparatuses to generate various graphic features, such as fur and hair, by modeling features using non-linear contours and positioning a number of intermediate shells to achieve a realistic appearance. These enhancements result in reduced processing time. The intermediate shells may be generated by interpolation of the base and final shells. These processes may be used to build a variety of features, and are particularly suited for grass, hair, fur and so forth.
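A minimal sketch of the interpolation step mentioned above: intermediate shells are generated by linearly interpolating vertex positions between a base shell (the skin surface) and a final shell (the fur tips). Function and variable names are illustrative assumptions, not from the patent.

```python
# Hypothetical sketch: build intermediate fur/hair shells by interpolating
# each vertex between a base shell and a final shell.

def interpolate_shells(base, final, num_intermediate):
    """base and final are equal-length lists of (x, y, z) vertex tuples.
    Returns [base, intermediate shells..., final]."""
    shells = [list(base)]
    for i in range(1, num_intermediate + 1):
        t = i / (num_intermediate + 1)  # fraction of the way to the final shell
        shells.append([
            tuple(b + t * (f - b) for b, f in zip(bv, fv))
            for bv, fv in zip(base, final)
        ])
    shells.append(list(final))
    return shells
```

With three intermediate shells between a base at z = 0 and a final shell at z = 1, the middle shell sits at z = 0.5, giving evenly spaced layers that a renderer can texture to suggest fur depth.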

02-01-2020 publication date

SYSTEMS AND METHODS FOR GENERATING DYNAMIC REAL-TIME HIGH-QUALITY LIGHTING FOR DIGITAL ANIMATION

Number: US20200005525A1

Systems, methods, and non-transitory computer-readable media can receive a first set of static lighting information associated with a first static lighting setup and a second set of static lighting information associated with a second static lighting setup. The first set of static lighting information and the second set of static lighting information are associated with a scene to be rendered. A first set of global illumination information is precomputed based on the first set of static lighting information. A second set of global illumination information is precomputed based on the second set of static lighting information. The first and second sets of global illumination information are blended to derive a blended set of global illumination information. The scene is rendered in a real-time application based on the blended set of global illumination information. 1. A computer-implemented method comprising:receiving, by a computing system, a first set of static lighting information associated with a first static lighting setup and a second set of static lighting information associated with a second static lighting setup, wherein the first set of static lighting information and the second set of static lighting information are associated with a scene to be rendered;precomputing, by the computing system, a first set of global illumination information based on the first set of static lighting information;precomputing, by the computing system, a second set of global illumination information based on the second set of static lighting information;blending, by the computing system, the first set of global illumination information and the second set of global illumination information to derive a blended set of global illumination information; andrendering, by the computing system, the scene based on the blended set of global illumination information.2. 
The computer-implemented method of claim 1, wherein the first set of global illumination information comprises a first set ...
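An illustrative sketch of the blending step described above: two precomputed global illumination data sets (represented here as flat lists of per-texel RGB irradiance tuples) are linearly blended by a weight, and the blended set would then drive real-time rendering. The names and data layout are assumptions, not the patent's API.

```python
# Hypothetical sketch: blend two precomputed GI sets.
# weight = 0.0 gives gi_a unchanged; weight = 1.0 gives gi_b.

def blend_global_illumination(gi_a, gi_b, weight):
    return [
        tuple((1.0 - weight) * ca + weight * cb for ca, cb in zip(ta, tb))
        for ta, tb in zip(gi_a, gi_b)
    ]

day = [(1.0, 0.9, 0.8)]    # GI precomputed for a daytime lighting setup
night = [(0.2, 0.1, 0.4)]  # GI precomputed for a nighttime lighting setup
dusk = blend_global_illumination(day, night, 0.5)
```

Because both sets are precomputed offline for static lighting setups, the per-frame cost at runtime is only this per-texel interpolation, which is what makes dynamic high-quality lighting feasible in a real-time application.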

03-01-2019 publication date

EXTENDED REALITY CONTROLLER AND VISUALIZER

Number: US20190005733A1

A method comprises: capturing images of a movable object in a scene and tracking movement of the object in the scene based on the images, to produce movement parameters that define the movement; generating for display an extended reality (XR) visualization of the physical object in the scene and changing the XR visualization responsive to changing ones of the movement parameters, such that the XR visualization visually reflects the tracked movement; displaying the XR visualization; and converting the movement parameters to control messages configured to control one or more of sound and light, and transmitting the control messages. 1. A method comprising: capturing images of a movable object in a scene and tracking movement of the object in the scene based on the images, to produce movement parameters that define the movement; generating for display an extended reality (XR) visualization of the physical object in the scene and changing the XR visualization responsive to changing ones of the movement parameters, such that the XR visualization visually reflects the tracked movement; displaying the XR visualization; and converting the movement parameters to control messages configured to control one or more of sound and light, and transmitting the control messages. 2. The method of claim 1, wherein: the generating the XR visualization and the changing the XR visualization includes generating an animated overlay representative of the object and changing visual features of the animated overlay responsive to the changing ones of the movement parameters. 3. The method of claim 2, wherein: the changing the visual features includes changing between different sizes, shapes, or colors of the animated overlay responsive to the changing ones of the movement parameters. 4.

The method of claim 1, wherein: the converting includes converting the movement parameters to sound control messages configured to control sound; and the transmitting includes transmitting the sound control messages to a sound ...

03-01-2019 publication date

Method and Apparatus for Generating Dynamic Real-Time 3D Environment Projections

Number: US20190007672A1
Author: Bobby Gene Burrough
Assignee: Individual

Methods and apparatus for generating dynamic real-time environment projections. In an exemplary embodiment, a method for generating a dynamic real-time 3D environment projection includes acquiring a real-time 2D image of an environment, and projecting the real-time 2D image of the environment onto a 3D shape to generate a 3D environment projection. In an exemplary embodiment, an apparatus that generates a dynamic real-time 3D environment projection includes an image receiver that acquires a real-time 2D image of an environment, and a projector that projects the real-time 2D image of the environment onto a 3D shape to generate a 3D environment projection.
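One common way to project a 2D environment image onto a 3D shape is to map each vertex of the shape to texture coordinates of the image. The sphere-with-equirectangular-image case sketched here is an assumption for illustration; the patent covers 3D shapes generally, and the function name is mine.

```python
import math

# Hypothetical sketch: map a unit-sphere vertex (x, y, z) to (u, v)
# coordinates in [0, 1] of an equirectangular 2D environment image, so the
# real-time 2D image can be projected onto the sphere each frame.

def spherical_uv(vertex):
    x, y, z = vertex
    u = 0.5 + math.atan2(z, x) / (2.0 * math.pi)           # longitude
    v = 0.5 - math.asin(max(-1.0, min(1.0, y))) / math.pi  # latitude
    return (u, v)
```

Because the mapping depends only on the shape's geometry, the UVs can be computed once; updating the projection in real time then amounts to swapping in the latest captured 2D image as the texture.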

14-01-2021 publication date

INFORMATION PROCESSING PROGRAM, INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING METHOD

Number: US20210008452A1

An information processing system for controlling movements of a character in a virtual three-dimensional space, comprising a movement control unit that controls the movements of the character, and a switching determination unit that determines switching of the movement of the character by the movement control unit between a three-dimensional movement in the virtual three-dimensional space and a movement in a predetermined surface provided in the virtual three-dimensional space. The movement control unit determines a speed of the character after the switching on the basis of a speed of the character before the switching when the switching is performed between the three-dimensional movement in the virtual three-dimensional space and the movement in the predetermined surface. 1. A non-transitory computer-readable storage medium having stored therein an information processing program for controlling movements of an object in a virtual three-dimensional space, the information processing program, when executed by at least one processor, causes the at least one processor to provide execution comprising: controlling the movements of the object; determining switching of the movement of the object between a three-dimensional movement in the virtual three-dimensional space and a movement along a plain surface provided in the virtual three-dimensional space; and determining a speed of the object after the switching on the basis of a speed of the object before the switching when the switching between the three-dimensional movement in the virtual three-dimensional space and the movement along the plain surface is performed, wherein when the object switches from the movement along the plain surface to the three-dimensional movement, the speed component along the plain surface before the switching is used as the speed component with respect to the plain surface of the three-dimensional movement after the switching. 2. The non-transitory computer- ...
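The speed-carryover rule described in the claim (keeping the speed component along the surface across the switch) can be sketched as a vector projection. The function name and the unit-normal plane representation are my assumptions, not the patent's.

```python
# Hypothetical sketch: when switching from free 3D movement to movement
# along a plane, keep only the velocity component lying in the plane.

def velocity_on_plane(velocity, unit_normal):
    """Project a 3D velocity onto the plane with the given unit normal."""
    dot = sum(v * n for v, n in zip(velocity, unit_normal))
    return tuple(v - dot * n for v, n in zip(velocity, unit_normal))
```

For a horizontal plane (normal (0, 1, 0)), a falling character with velocity (3, -2, 1) continues along the surface at (3, 0, 1): the in-plane speed is preserved and only the component into the surface is discarded, which avoids a visible jolt at the switch.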

14-01-2021 publication date

Virtual Puppeteering Using a Portable Device

Number: US20210008461A1

A virtual puppeteering system includes a portable device including a camera, a display, a hardware processor, and a system memory storing an object animation software code. The hardware processor is configured to execute the object animation software code to, using the camera, generate an image in response to receiving an activation input, using the display, display the image, and receive a selection input selecting an object shown in the image. The hardware processor is further configured to execute the object animation software code to determine a distance separating the selected object from the portable device, receive an animation input, identify, based on the selected object and the received animation input, a movement for animating the selected object, generate an animation of the selected object using the determined distance and the identified movement, and render the animation of the selected object.

08-01-2015 publication date

Virtual golf simulation apparatus and method for supporting generation of virtual green

Number: US20150011279A1
Assignee: Golfzon Co Ltd

Disclosed herein are a virtual golf simulation apparatus and method. The virtual golf simulation apparatus includes an image processing unit, a manipulation unit, and a green setting unit. The image processing unit provides an image of a basic set green on which a user will perform putting. The manipulation unit provides an interface that enables the user to set the lie of the basic set green. The green setting unit generates a user-set green by processing received setting information via the manipulation unit. The image processing unit provides the image of the user-set green.

27-01-2022 publication date

METHOD FOR FORMING WALLS TO ALIGN 3D OBJECTS IN 2D ENVIRONMENT

Number: US20220027524A1
Author: Jovanovic Milos

Example systems and methods for virtual visualization of a three-dimensional (3D) model of an object in a two-dimensional (2D) environment. The method may include capturing the 2D environment and adding scale and perspective to the 2D environment. Further, a user may select intersection points on a ground plane of the 2D environment to form walls, thereby converting the 2D environment into a 3D space. The user may further add 3D models of objects on the wall plane such that the objects may remain flush with the wall plane. 1. A method for visualizing a three-dimensional model of an object in a two-dimensional environment, the method comprising: receiving, with a processor via a user interface, from a user, a ground plane input comprising a plurality of ground plane points selected by the user to define a ground plane corresponding to a horizontal plane of the two-dimensional environment; automatically generating, with the processor, and displaying, via a display unit, a three-dimensional environment for the two-dimensional environment based on the ground plane input; automatically generating, with the processor, and displaying, via the display unit, a wall plane, representing a vertical plane of the two-dimensional environment orthogonal to the horizontal plane, in the three-dimensional environment positioned at at least two wall-floor intersection points selected by the user; and superimposing, with the processor, and displaying, via the display unit, the three-dimensional model of the object on the three-dimensional environment for the two-dimensional environment based on the ground plane input and the wall-floor intersection points. 2.
The method of claim 1 , further comprising:receiving, with the processor via the user interface, from the user, input comprising a selection of a wall-hidden surface intersection point on the two-dimensional environment, the wall-hidden surface intersection point indicating a second plane behind the wall plane;automatically generating, ...

12-01-2017 publication date

SYSTEM AND METHOD OF STREAMING 3-D WIREFRAME ANIMATIONS

Number: US20170011533A1

Optimal resilience to errors in packetized streaming 3-D wireframe animation is achieved by partitioning the stream into layers and applying unequal error correction coding to each layer independently to maintain the same overall bitrate. The unequal error protection scheme for each of the layers combined with error concealment at the receiver achieves graceful degradation of streamed animation at higher packet loss rates than approaches that do not account for subjective parameters such as visual smoothness. 1. A method comprising:partitioning a three-dimensional wireframe mesh corresponding to a video scene according to (1) objects within the three-dimensional wireframe mesh and (2) motion of the objects, to yield partitions;computing a visual smoothness value for each partition in the partitions;organizing the partitions into layers based on the visual smoothness value for the each partition; andapplying unequal error protection to the layers, wherein the unequal error protection applied to each layer is based on a bitrate value for each respective layer.2. The method of claim 1 , wherein each layer of the layers comprises one of a node and a group of nodes within the three-dimensional wireframe mesh.3. The method of claim 1 , further comprising encoding a particular layer in the layers to be resilient to packet errors.4. The method of claim 1 , wherein a number of nodes within a portion of the layers is associated with an output bit rate of the portion.5. The method of claim 1 , further comprising producing a three-dimensional packetized streaming signal representative of a scene comprising animation associated with the layers.6. The method of claim 1 , further comprising partitioning the layers according to a visual importance of each respective layer in the layers.7. The method of claim 1 , wherein the unequal error protection comprises optimizing a distribution of a bit budget allocation amongst the layers.8. 
A system comprising:a processor; and partitioning ...
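The organizing step above (ranking partitions by visual smoothness and grouping them into layers, so that more important layers can receive stronger error protection within the fixed overall bitrate) can be sketched as follows. The dictionary keys and the contiguous ranked split are my assumptions for illustration.

```python
# Hypothetical sketch: sort mesh partitions by visual smoothness and split
# the ranked list into layers. Smoother (more visually important) partitions
# land in earlier layers, which would then get stronger error correction.

def organize_into_layers(partitions, num_layers):
    ranked = sorted(partitions, key=lambda p: p["smoothness"], reverse=True)
    size = -(-len(ranked) // num_layers)  # ceiling division
    return [ranked[i * size:(i + 1) * size] for i in range(num_layers)]

parts = [{"id": 0, "smoothness": 0.2}, {"id": 1, "smoothness": 0.9},
         {"id": 2, "smoothness": 0.5}, {"id": 3, "smoothness": 0.7}]
layers = organize_into_layers(parts, 2)
```

Layer 0 here holds the two smoothest partitions; under unequal error protection it would be coded with more redundancy, while layer 1's loss is easier to conceal at the receiver.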

12-01-2017 publication date

METHOD FOR CREATING ANIMATED ADVERTISEMENTS USING PARALLAX SCROLLING

Number: US20170011541A1
Author: NAOR Shahar

A method of creating animated content, which includes the steps of generating an element to be displayed on a mobile device, touch screen or desktop screen. The element includes details of at least one graphical resource together with the settings associated with the graphical resource. The method further includes the steps of adding scrolling and parallax-animation functionality to the generated element and generating computer code to create a parallax-animated display of the created content on the device. 1. A method of creating animated content, the method comprising the steps of: generating an element to be displayed on a device, said element comprising details of at least one graphical resource together with the settings associated with said at least one graphical resource; adding scrolling and parallax animation functionality to said generated element; and generating computer code thereby to create a parallax animated display of the created content on said device. 2. The method of claim 1, further comprising the step of: tracking and analyzing said parallax animated display. 3. The method of claim 1, wherein the step of generating comprises the step of: translating said at least one graphical resource together with the settings associated with said at least one graphical resource into a readable JavaScript® file. 4. The method of claim 1, wherein said device comprises one of a group of devices including mobile devices, touch screens and desktop computer screens. 5. The method of claim 1, wherein said content comprises one of a group including HTML content recommendations, advertisements, user guides and publications. 6. The method of claim 1, wherein said at least one graphical resource is generated automatically or manually. 7.
The method of claim 1 , wherein said at least one graphical resource is configured to move at different speeds in relation to the content surrounding said at least one graphical resource within said element inside an HTML ...

11-01-2018 publication date

Audio-Visual Navigation and Communication

Number: US20180011621A1
Author: Roos Jan Peter

Communicating information through a user platform by representing, on a user platform visual display, spatial publishing objects as entities at locations within a three-dimensional spatial publishing object space. Each spatial publishing object associated with information, and each presenting a subset of the associated information. Establishing a user presence at a location within the spatial publishing object space. The user presence, in conjunction with a user point-of-view, being navigable by the user in at least a two-dimensional sub-space of the spatial publishing object space. 1. A computer-implemented method of communicating comprising the steps of:providing, by a user interface software module stored on a client computing device and in communication with an application software module stored on a server computing device, a three-dimensional user interface on a display of the client computing device, the three-dimensional user interface comprising a plurality of spatial publishing objects retrieved in response to a query and based on properties and permission settings of the plurality of spatial publishing objects, the plurality of spatial publishing objects stored in one or more remote or local databases or collected through permission-enabled and tagged access via a distributed network, and associated with one or more spatial publishing object communication spaces;receiving, at the server computing device from the client computing device, a selection to associate a first spatial publishing object from the plurality of spatial publishing objects with a second spatial publishing object communication space associated with a second spatial publishing object;saving, at the server computing device, a current state of the first and second spatial publishing objects and the associated communication spaces;updating a content of the second spatial publishing object with an input comprising one or more of navigation input, audio input, video input, mixed media input, 
...

11-01-2018 publication date

DATA STRUCTURE FOR COMPUTER GRAPHICS, INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD AND INFORMATION PROCESSING SYSTEM

Number: US20180012389A1
Assignee: DENTSU INC.

The present invention is designed to allow easy synchronization of the movement of a computer graphics (CG) model with sound data. The data structure according to an embodiment of the present invention presents a data structure that relates to a computer graphics (CG) model, including first time-series information for designating the coordinates of the components of the CG model on a per beat basis, and the first time-series information is used on a computer to process the CG model. 1. A non-transitory computer-readable storage medium configured to store a data structure related to a computer graphics (CG) model , wherein:the data structure includes first time-series information for specifying coordinates of components of the CG model, in chronological order, on a per beat basis;the data structure conforms to SMF (Standard MIDI File) format; andthe first time-series information is used on a computer to process the CG model.2. (canceled)3. The storage medium according to claim 1 , wherein a channel of the SMF format corresponds to one or a plurality of pieces of the first time-series information.4. The storage medium according to claim 1 , wherein a note number and a velocity of a note-on signal in the SMF format are used on the computer as information about the coordinates of the components of the CG model.5. The storage medium according to claim 1 , wherein a delta time in the SMF format is used on the computer as information about a duration of a beat where coordinates of a component of the CG model change.6. The storage medium according to claim 1 , wherein the CG model is constituted by a skeleton claim 1 , which is comprised of two joint points and bones connecting between these.7. The storage medium according to claim 6 , wherein the first time-series information includes information about a relative angle of a bone of the CG model.8. The storage medium according to claim 6 , wherein the first time-series information includes information about a relative ...
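Synchronizing a CG model to beats via SMF data rests on the standard MIDI timing arithmetic: a delta time in ticks becomes wall-clock seconds through the file's resolution (pulses per quarter note) and the current tempo. The function name below is illustrative, not from the patent; the conversion itself is the SMF standard's.

```python
# Convert an SMF delta time (in ticks) to seconds.
# ppq: pulses per quarter note (the file's time division).
# tempo_us_per_beat: microseconds per quarter note from the Set Tempo event.

def delta_ticks_to_seconds(delta_ticks, tempo_us_per_beat, ppq):
    return delta_ticks * tempo_us_per_beat / (ppq * 1_000_000)
```

At 120 BPM (500000 microseconds per beat) and 480 PPQ, a delta time of 480 ticks is exactly one beat, i.e. half a second; coordinates keyed per beat in the time-series information can thus be scheduled against the audio clock.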

11-01-2018 publication date

ELECTRONIC DEVICE, STORAGE MEDIUM, PROGRAM, AND DISPLAYING METHOD

Number: US20180012393A1
Author: HOSOYA Kunio

An electronic device is provided which displays an object (body) on a flexible display screen in accordance with a three-dimensional shape of the display screen by utilizing the flexibility of the display screen. An electronic device including a display portion which includes a flexible display device displaying an object on a display screen; a detection portion detecting positional data of a given part of the display screen; and an arithmetic portion calculating a three-dimensional shape of the display screen on the basis of the positional data and computing motion of the object to make the object move according to a given law in accordance with the calculated three-dimensional shape of the display screen. 1. (canceled)2. A displaying method of an electronic device comprising the steps of:displaying an object on a display screen of the electronic device, the display screen having flexibility;obtaining positional data of the object on the display screen and a field on which the object is configured to move;calculating a three-dimensional shape of the display screen based on the positional data;checking whether or not there is touch input on the object; anddetermining move of the object on the field based on the touch input and the calculation result.3. The displaying method according to claim 2 , further comprising a step of comparing the calculated three-dimensional shape with an initial state.4. The displaying method according to claim 2 , wherein the display screen is curved to have a steep step and the object is displayed to fall down and roll due to the steep step.5. The displaying method according to claim 2 , further comprising a step of simulating the move of the object.6. The displaying method according to claim 2 , wherein the electronic device comprises a transistor comprising an oxide semiconductor.7. 
The displaying method according to claim 2 , further comprising a step of generating sound to alert when a degree of curve of the display screen exceeds a ...

11-01-2018 publication date

METHOD FOR DEPICTING AN OBJECT

Number: US20180012394A1

The invention relates to technologies for visualizing a three-dimensional (3D) image. According to the claimed method, a 3D model is generated, images of an object are produced, the 3D model is visualized, the 3D model together with a reference pattern and also coordinates of texturing portions corresponding to polygons of the 3D model are stored in a depiction device, at least one frame of the image of the object is produced, the object in the frame is identified on the basis of the reference pattern, a matrix of conversion of photo image coordinates into dedicated coordinates is generated, and elements of the 3D model are coloured in the colours of the corresponding elements of the image by generating a texture of the image sensing area using the coordinate conversion matrix and data interpolation, with subsequent designation of the texture of the 3D model. 1.-16. (canceled) 17. A method of displaying a virtual object on a computing device comprising a memory, a camera, and a display, the memory being adapted to store at least one reference image and at least one 3D model, wherein each reference image is associated with one 3D model, the method comprising: acquiring an image from the camera; recognizing the virtual object on the acquired image based upon a reference image; forming a 3D model associated with the reference image; forming a transformation matrix for juxtaposing coordinates of the acquired image with coordinates of the 3D model; juxtaposing coordinates of texturized sections of the acquired image to corresponding sections of the 3D model; painting the sections of the 3D model using colors and textures of the corresponding sections of the acquired image; and displaying the 3D model over a video stream using augmented reality tools and/or computer vision algorithms. 18. The method of claim 17, wherein the 3D model is represented by polygons. 19.

The method of claim 18, wherein the transformation matrix is adapted to juxtapose coordinates of the texturized ...

11-01-2018 publication date

IMMERSIVE CONTENT FRAMING

Number: US20180012397A1
Author: Carothers Trevor

A virtual view of a scene may be generated through the use of various systems and methods. In one exemplary method, from a tiled array of cameras, image data may be received. The image data may depict a capture volume comprising a scene volume in which a scene is located. A viewing volume may be defined. A virtual occluder may be positioned at least partially within the capture volume such that a virtual window of the virtual occluder is between the viewing volume and the scene. A virtual viewpoint within the viewing volume may be selected. A virtual view may be generated to depict the scene from the virtual viewpoint. 1. A method for generating a virtual view of a scene , the method comprising:from a tiled array of cameras, receiving image data depicting a capture volume comprising a scene volume having a scene;at a processor, defining a scene volume within the capture volume, the scene volume having a scene;at the processor, defining a viewing volume;at the processor, positioning a virtual occluder at least partially within the capture volume such that a virtual window of the virtual occluder is between the viewing volume and the scene;at an input device, receiving input selecting a virtual viewpoint within the viewing volume; andat the processor, generating a virtual view depicting the scene from the virtual viewpoint.2. The method of claim 1 , wherein the tiled array of cameras comprises a plurality of cameras arranged in a planar array.3. The method of claim 1 , wherein the tiled array of cameras comprises a plurality of cameras arranged in a semispherical array claim 1 , with each of the cameras oriented toward a center of the semispherical array.4. The method of claim 1 , wherein the tiled array of cameras comprises a plurality of cameras arranged in a semispherical array claim 1 , with each of the cameras oriented away from a center of the semispherical array.5. 
The method of claim 1 , wherein the virtual window is positioned after selection of the virtual ...

11-01-2018 publication date

METHODS AND SYSTEMS OF PERFORMING EYE RECONSTRUCTION USING A PARAMETRIC MODEL

Number: US20180012418A1

Systems and techniques for reconstructing one or more eyes using a parametric eye model are provided. The systems and techniques may include obtaining one or more input images that include at least one eye. The systems and techniques may further include obtaining a parametric eye model including an eyeball model and an iris model. The systems and techniques may further include determining parameters of the parametric eye model from the one or more input images. The parameters can be determined to fit the parametric eye model to the at least one eye in the one or more input images. The parameters include a control map used by the iris model to synthesize an iris of the at least one eye. The systems and techniques may further include reconstructing the at least one eye using the parametric eye model with the determined parameters. 1. A computer-implemented method of reconstructing one or more eyes , comprising:obtaining one or more input images, the one or more input images including at least one eye;obtaining a parametric eye model, the parametric eye model including an eyeball model and an iris model;determining parameters of the parametric eye model from the one or more input images, the parameters being determined to fit the parametric eye model to the at least one eye in the one or more input images, wherein the parameters include a control map used by the iris model to synthesize an iris of the at least one eye; andreconstructing the at least one eye using the parametric eye model with the determined parameters.2. The method of claim 1 , wherein the one or more input images include a three-dimensional face scan of at least a portion of a face including the at least one eye claim 1 , the three-dimensional face scan being from a multi-view scanner.3. The method of claim 2 , wherein the parameters include a shape parameter corresponding to a shape of an eyeball of the at least one eye claim 2 , and wherein determining the shape parameter includes fitting the ...

16-01-2020 publication date

Apparatus and method of mapping a virtual environment

Number: US20200016499A1

A method of mapping a virtual environment includes: obtaining a first sequence of video images output by a videogame title; obtaining a corresponding sequence of in-game virtual camera positions at which the video images were created; obtaining a corresponding sequence of depth buffer values for a depth buffer used by the videogame whilst creating the video images; and, for each of a plurality of video images and corresponding depth buffer values of the obtained sequences, obtain mapping points corresponding to a selected predetermined set of depth values corresponding to a predetermined set of positions within a respective video image; where for each pair of depth values and video image positions, a mapping point has a distance from the virtual camera position based upon the depth value, and a position based upon the relative positions of the virtual camera and the respective video image position, thereby obtaining a map dataset of mapping points corresponding to the first sequence of video images.
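The core geometric step above (a mapping point placed at a depth-derived distance from the virtual camera, along the direction through the chosen image position) can be sketched as a ray evaluation. All names here are mine, and the depth value is assumed to already be a linear distance.

```python
import math

# Hypothetical sketch: place one mapping point at `depth` units from the
# in-game virtual camera along the ray through a selected image position.

def mapping_point(camera_pos, ray_dir, depth):
    norm = math.sqrt(sum(d * d for d in ray_dir))
    return tuple(c + depth * d / norm for c, d in zip(camera_pos, ray_dir))
```

Running this over the predetermined set of image positions in every sampled frame, with each frame's camera pose and depth-buffer values, accumulates the map dataset of mapping points the method describes.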

21-01-2016 publication date

Armature and Character Template for Motion Animation Sequence Generation

Number: US20160019708A1

Embodiments of the invention are directed to an animation kit including a template page with at least one template design, an armature that moves between at least a first position and a second position, and an animation application that generates an animated segment corresponding to the template design and at least one pose of the armature. In further embodiments, a method for generating an animated segment is provided. In another embodiment, a system for generating an animated sequence includes a template design and an application that receives an image of the template design and animates at least one three-dimensional image corresponding to the captured template design. 1. A system for generating an animated sequence comprising: an animation-generating application executed by a computing device having a processor and a memory, the animation-generating application comprising (1) an image receiving component configured to receive an image captured by a camera of the mobile device, (2) a template content component configured to apply a received image of colored two-dimensional template image content corresponding to a user-selected character, (3) a transition component for generating one or more transition poses corresponding to two or more received character poses, and (4) a sequence compilation component; and one or more coloring pages having template content configured to receive one or more colored markings printed thereon, the template content comprising at least one view of a two-dimensional character image. 2. The system of claim 1, wherein the image receiving component scans the template content to read the two-dimensional character image and the one or more colored markings. 3. The system of claim 2, wherein the sequence compilation component is configured to apply the one or more colored markings to generate an animated character based on the two-dimensional character image. 4. The system of claim 1, further comprising an accessory component ...

21-01-2016 publication date

Method and device for inserting a 3D graphics animation in a 3D stereo content

Number: US20160019724A1
Assignee: Thomson Licensing SAS

The invention concerns a method and a device for inserting 3D graphic animation in a 3D image, each 3D graphic element of the graphic animation being defined in size and in depth for insertion in a determined insertion zone of said 3D image. The method comprises the steps of determining, for the graphic element to be inserted, a depth range with a maximum allowed depth value; replacing the out-of-range depth values by the maximum allowed depth value when depth values of the graphic element are out of range; and compensating the depth difference between the depth values of the graphic element and the maximum allowed depth value by reducing the graphic element in size proportionally to the reduction of depth for the graphic element.
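The clamp-and-compensate rule described above can be sketched as follows. The abstract states only that size is reduced "proportionally to the reduction of depth", so the linear depth ratio used here is an assumption:

```python
def clamp_and_scale(depth, size, max_depth):
    """Clamp a graphic element's depth to the allowed maximum and
    shrink it in proportion to the depth reduction (a perspective-style
    compensation; the exact proportionality is an assumption)."""
    if depth <= max_depth:
        return depth, size          # in range: nothing to compensate
    scale = max_depth / depth       # reduction factor applied to depth
    return max_depth, size * scale  # apply the same factor to size

d, s = clamp_and_scale(depth=10.0, size=4.0, max_depth=5.0)
```

Halving the depth here also halves the element's size, so the clamped element still reads as lying "further away" in the stereo scene.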

03-02-2022 publication date

Animation production system

Number: US20220036615A1
Assignee: Anicast RM Inc

To enable animations to be shot in a virtual space, an animation production method comprises: placing first and second objects and a virtual camera in a virtual space; controlling an action of the first object in response to an operation from a first user; controlling an action of the second object in response to an operation from a second user; and controlling the camera in response to an operation from the first or second user to shoot the first and second objects.

03-02-2022 publication date

ANIMATION PRODUCTION SYSTEM

Number: US20220036616A1

To take animations in a virtual space, an animation production system comprising: a virtual camera located in a virtual space; a user input detection unit that detects an input of a user from at least one of a head mounted display and a controller which the user has mounted; and an operation panel that starts the animation production function displayed in response to the input. 1. An animation production system comprising: a virtual camera located in a virtual space; a user input detection unit that detects an input of a user from at least one of a head mounted display and a controller which the user has mounted; and an operation panel that starts an animation production function displayed in response to the input, the operation panel placed within the virtual space. 2. The animation production system according to claim 1, wherein the operation panel comprises: an information display unit; and a lever for scrolling information displayed on the information display unit in response to an operation performed while gripping. 3. The animation production system according to claim 1, the animation production system further comprising: an operation processing unit that displays the operation panel in a position visible to the user in response to the input. 4. The animation production system according to claim 3, wherein the position is where no other object is placed in the virtual space. The present invention relates to an animation production system. Virtual cameras are arranged in a virtual space (see Patent Document 1). [PTL 1] Patent Application Publication No. 2017-146651. No attempt was made to capture animations in the virtual space. The present invention has been made in view of such a background, and is intended to provide a technology capable of capturing animations in a virtual space. The principal invention for solving the above-described problem is an animation production system comprising: a virtual camera located in a virtual space; a user input detection unit that detects ...

03-02-2022 publication date

AUDIO-SPEECH DRIVEN ANIMATED TALKING FACE GENERATION USING A CASCADED GENERATIVE ADVERSARIAL NETWORK

Number: US20220036617A1
Assignee: TATA CONSULTANCY SERVICES LIMITED

Conventional state-of-the-art methods are limited in their ability to generate realistic animation from audio on unknown faces and cannot be easily generalized to different facial characteristics and voice accents. Further, these methods fail to produce realistic facial animation for subjects whose facial characteristics differ significantly from the distribution the network has seen during training. Embodiments of the present disclosure provide systems and methods that generate an audio-speech-driven animated talking face using a cascaded generative adversarial network (CGAN), wherein a first GAN is used to transfer lip motion from a canonical face to a person-specific face. A second GAN-based texture generator network is conditioned on person-specific landmarks to generate a high-fidelity face corresponding to the motion. The texture generator GAN is made more flexible using meta-learning so that it adapts to an unknown subject's traits and face orientation during inference. Finally, eye blinks are induced in the generated animation face. 1. A processor-implemented method for generating an audio-speech-driven animated talking face using a cascaded generative adversarial network, the method comprising: obtaining, via one or more hardware processors, an audio speech and a set of identity images (SI) of a target individual; extracting, via the one or more hardware processors, one or more DeepSpeech features of the target individual from the audio speech; generating, using the extracted DeepSpeech features, via a first generative adversarial network (FGAN) of a cascaded GAN executed by the one or more hardware processors, a speech-induced motion (SIM) on a sparse representation (SR) of a neutral mean face, wherein the SR of the SIM comprises a plurality of facial landmark points with one or more finer deformations of lips; generating, via the one or more hardware processors, a plurality of eye blink movements from random noise input learnt from a video dataset, wherein the ...

03-02-2022 publication date

THREE-DIMENSIONAL EXPRESSION BASE GENERATION METHOD AND APPARATUS, SPEECH INTERACTION METHOD AND APPARATUS, AND MEDIUM

Number: US20220036636A1
Authors: BAO Linchao, LIN Xiangkai

This application provides a three-dimensional (3D) expression base generation method performed by a computer device. The method includes: obtaining image pairs of a target object in n types of head postures, each image pair including a color feature image and a depth image in a head posture; constructing a 3D human face model of the target object according to the n image pairs; and generating a set of expression bases of the target object according to the 3D human face model of the target object. According to this application, based on a reconstructed 3D human face model, a set of expression bases of a target object is further generated, so that more diversified product functions may be expanded based on the set of expression bases. 1. A computer-implemented method performed by a computer device, the method comprising: obtaining n sets of image pairs of a target object in n types of head postures, the n sets of image pairs comprising color feature images and depth images in the n types of head postures, an i-th head posture corresponding to an i-th set of image pairs, n being a positive integer, 0 < i ≤ n ...

17-01-2019 publication date

METHODS AND SYSTEMS FOR DISPLAYING UI ELEMENTS IN MIXED REALITY ENVIRONMENTS

Number: US20190018498A1

A method for improving a display of a user interface element in a mixed reality environment is disclosed. A request to display the user interface element is received. The request includes display instructions, angle threshold data, distance threshold data, and velocity threshold data. Display operations are continuously performed while sensor data is continuously received from a mixed reality user interface device. The display operations include displaying the user interface element according to the display instructions, and, based on the sensor data indicating a distance between the user interface element and the mixed reality user interface device in the mixed reality environment has exceeded a distance threshold or based on the sensor data indicating an angle of view of the mixed reality user interface device has exceeded an angle threshold with respect to the user interface element in the mixed reality environment, hiding the user interface element. 1. A system comprising:one or more computer processors;one or more computer memories;a set of instructions incorporated into the one or more computer memories, the set of instructions configuring the one or more computer processors to perform operations for improving a display of a user interface element in a mixed reality environment, the operations comprising:receiving a request to display the user interface element, the request including display instructions, angle threshold data, distance threshold data, and velocity threshold data; based on the sensor data indicating that a motion of the mixed reality user interface device is below the velocity threshold, displaying the user interface element according to the display instructions; and', 'based on the sensor data indicating a distance between the user interface element and the mixed reality user interface device in the mixed reality environment has exceeded a distance threshold or based on the sensor data indicating an angle of view of the mixed reality user ...
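The display operations described above reduce to a small per-tick state machine: hide the element when the distance or the angle of view exceeds its threshold, and (re)display it only while device motion is below the velocity threshold. The precedence between the rules below is an assumed reading of the abstract, and all names are illustrative:

```python
def update_visibility(visible, distance, angle, velocity,
                      dist_threshold, angle_threshold, vel_threshold):
    """One display-operation tick for the mixed-reality UI element.

    Hides the element when the device has moved too far away or the
    angle of view has drifted past the threshold; shows it again only
    while the device's motion is below the velocity threshold.
    """
    if distance > dist_threshold or angle > angle_threshold:
        return False          # hide the element
    if velocity < vel_threshold:
        return True           # (re)display per the display instructions
    return visible            # otherwise keep the previous state

shown = update_visibility(False, distance=1.0, angle=10.0, velocity=0.01,
                          dist_threshold=3.0, angle_threshold=30.0,
                          vel_threshold=0.05)
```

Calling this on every sensor update reproduces the "continuously performed display operations" the abstract describes.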

17-01-2019 publication date

Navigation Application with Novel Declutter Mode

Number: US20190018547A1
Assignee: Apple Inc.

Some embodiments provide a navigation application with a novel declutter navigation mode. In some embodiments, the navigation application has a declutter control that when selected, directs the navigation application to simplify a navigation presentation by removing or de-emphasizing non-essential items that are displayed in the navigation presentation. In some embodiments, the declutter control is a mode-selecting control that allows the navigation presentation to toggle between normal first navigation presentation and a simplified second navigation presentation, which below is also referred to as a decluttered navigation presentation. During normal mode operation, the navigation presentation of some embodiments provides (1) a representation of the navigated route, (2) representations of the roads along the navigated route, (3) representation of major and minor roads that intersect or are near the navigated route, and (4) representations of buildings and other objects in the navigated scene. However, in the declutter mode, the navigation presentation of some embodiments provides a representation of the navigated route, while providing a de-emphasized presentation of the roads that intersect the navigated route or are near the navigated route. In some embodiments, the presentation shows the major roads that are not on the route with more emphasis than minor roads not on the route. Also, in some embodiments, the presentation fades out the minor roads not on the route more quickly than fading out the major roads not on the route. 1. (canceled)2. 
A method of providing a navigation presentation , the method being implemented by a device navigating a route to a destination , the method comprising:providing a simplified navigation presentation for navigating to the destination, the simplified navigation presentation being generated by removing at least one geometry from a default navigation presentation for navigating to the destination;identifying a point of interest in ...

17-01-2019 publication date

Interactive Cinemagrams

Number: US20190019320A1
Assignee: SAMSUNG ELECTRONICS CO LTD

A method, apparatus, and computer readable medium for interactive cinemagrams. The method includes displaying a still frame of a cinemagram on a display of an electronic device, the cinemagram having an animated portion. The method also includes after the displaying, identifying occurrence of a triggering event based on an input from one or more sensors of the electronic device. Additionally, the method includes initiating animation of the animated portion of the cinemagram in response to identifying the occurrence of the triggering event. The method may also include generating the image as a cinemagram by identifying a reference frame from a plurality of frames and an object in the reference frame, segmenting the object from the reference frame, tracking the object across multiple of the frames, determining whether a portion of the reference frame lacks pixel information during motion of the object, and identifying pixel information to add to the portion.

17-01-2019 publication date

SKINNING A CLUSTER BASED SIMULATION WITH A VISUAL MESH USING INTERPOLATED ORIENTATION AND POSITION

Number: US20190019345A1

Embodiments of the present invention provide a method for simulating deformable solids undergoing large plastic deformation and topological changes using shape matching. Positional information for particles and orientation information from clusters is used to simulate deformable solids represented by particles. Each visual vertex stores references to particles that influence the vertex, and stores the local position of the particles. A two-step method interpolates orientation from clusters to particles, and uses the orientation and position of particles to skin the visual mesh vertices. This results in a fast method that can reproduce rotation and does not require the visual mesh vertex to be located within a convex hull of particles. 1. A computer-implemented method for skinning a visual mesh in a cluster based computer graphics simulation, said method comprising: storing a list of particles in memory that influence a visual vertex of the visual mesh, wherein the list of particles comprises local positions of the particles; determining a cluster of the particles that satisfies a shape matching constraint, wherein the cluster of particles comprises orientation information; interpolating orientations of the particles based on the orientation information; interpolating a position of the visual vertex based on the local positions of the particles, a local position of the visual vertex, and the orientations of the particles; and skinning the visual mesh to the particles using the position of the visual vertex. 2. A method as described in claim 1, wherein the position of the visual vertex is computed using the formula y = Σ_l w_l(Q_l z_l + x_l), wherein Q_l represents an average quaternion of particle l, l represents a particle index, z_l represents the local position of the visual vertex, and w_l represents a weight of the visual vertex. 3. A method as described in claim 1, wherein the interpolating orientations of the particles based on the ...
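Claim 2's skinning formula, y = Σ_l w_l(Q_l z_l + x_l), can be evaluated directly once each influencing particle's orientation and position are known. The sketch below works in 2D with a rotation angle standing in for the averaged quaternion Q_l (a simplification; the patent works with quaternions in 3D):

```python
import math

def skin_vertex(particles, weights):
    """Evaluate y = sum_l w_l * (Q_l z_l + x_l) for one visual vertex.

    2D sketch of the claim's skinning formula: each particle l carries
    a rotation angle theta (standing in for the averaged quaternion
    Q_l), a world position x_l, and the vertex's local position z_l in
    the particle's frame; the weights w_l blend the per-particle
    estimates of the vertex position.
    """
    y = [0.0, 0.0]
    for (theta, x, z), w in zip(particles, weights):
        c, s = math.cos(theta), math.sin(theta)
        rotated = (c * z[0] - s * z[1], s * z[0] + c * z[1])  # Q_l z_l
        y[0] += w * (rotated[0] + x[0])
        y[1] += w * (rotated[1] + x[1])
    return tuple(y)

# Two unrotated particles with equal weights: the skinned position is
# the average of the two per-particle estimates x_l + z_l.
v = skin_vertex([(0.0, (0.0, 0.0), (1.0, 0.0)),
                 (0.0, (2.0, 0.0), (1.0, 0.0))],
                weights=[0.5, 0.5])
```

Because each particle contributes its own estimate of the vertex position, the vertex need not lie inside the convex hull of the particles, matching the claim's stated advantage.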

21-01-2021 publication date

BLENDSHAPE COMPRESSION SYSTEM

Number: US20210019916A1

The systems and methods described herein can pre-process a blendshape matrix via a global clusterization process and a local clusterization process. The pre-processing can cause the blendshape matrix to be divided into multiple blocks. The techniques can further apply a matrix compression technique to each block of the blendshape matrix to generate a compression result. The matrix compression technique can comprise a matrix approximation step, an accuracy verification step, and a recursive compression step. The compression result for each block may be combined to generate a compressed blendshape matrix for rendering a virtual entity. 1-20. (canceled) 21. A computer-implemented method for rendering a three-dimensional virtual entity during runtime execution of a game application, comprising: under control of one or more hardware computing devices configured with specific computer-executable instructions, the specific computer-executable instructions stored in an electronic hardware memory: receiving user input associated with the virtual entity in a virtual environment of the game application, wherein the virtual entity comprises a plurality of blendshapes; determining movement of a virtual entity based, at least in part, on the user input; identifying a first pose of a virtual entity for rendering within the virtual environment based on the determined movement; identifying a compressed blendshape matrix associated with rendering the virtual entity; accessing the compressed blendshape matrix, wherein the compressed blendshape matrix includes approximations of an uncompressed blendshape matrix; determining movements of a set of the plurality of blendshapes of the virtual entity based on the compressed blendshape matrix; and rendering the first pose based at least in part on the determined movements of the set of blendshapes. 22. The computer-implemented method of claim 21, wherein a mesh of the virtual entity includes the plurality of blendshapes ...
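The approximate/verify/recurse loop named in the abstract (matrix approximation, accuracy verification, recursive compression) can be sketched on a tiny matrix. The mean-row approximation below is only a stand-in for whatever approximation the actual system uses; names and the tolerance scheme are illustrative:

```python
def compress_block(block, tol):
    """Recursively compress one blendshape sub-matrix (list of rows).

    Sketch of the approximate/verify/recurse loop: try a cheap
    approximation (here, every row replaced by the block's mean row),
    verify the worst-case error, and split the block in half and
    recurse when the approximation is not accurate enough.
    """
    mean = [sum(col) / len(block) for col in zip(*block)]
    err = max(abs(v - m) for row in block for v, m in zip(row, mean))
    if err <= tol or len(block) == 1:
        return [("mean", mean, len(block))]  # one stored row per block
    half = len(block) // 2
    return compress_block(block[:half], tol) + compress_block(block[half:], tol)

# Two identical rows compress together; the outlier row gets its own block.
result = compress_block([[1.0, 2.0], [1.0, 2.0], [5.0, 6.0]], tol=0.1)
```

Blocks whose rows are nearly identical collapse to a single stored row, while dissimilar regions are recursively subdivided, which is the structural idea behind combining per-block compression results.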

21-01-2021 publication date

ILLUMINATION EFFECTS FROM LUMINOUS INSERTED CONTENT

Number: US20210019935A1

Systems and methods for generating illumination effects for inserted luminous content, which may include augmented reality content that appears to emit light and is inserted into an image of a physical space. The content may include a polygonal mesh, which may be defined in part by a skeleton that has multiple joints. Examples may include generating a bounding box on a surface plane for the inserted content, determining an illumination center point location on the surface plane based on the content, generating an illumination entity based on the bounding box and the illumination center point location, and rendering the illumination entity using illumination values determined based on the illumination center point location. Examples may also include determining illumination contribution values for some of the joints, combining the illumination contribution values to generate illumination values for pixels, and rendering another illumination entity using the illumination values. 1. A method comprising: determining a location within an image to insert content; generating a bounding box on a surface plane for the inserted content; determining an illumination center point location on the surface plane based on the inserted content; generating an illumination entity based on the bounding box and the illumination center point location; and rendering the illumination entity using illumination values determined at least in part based on the illumination center point location. 2. The method of claim 1, wherein the content includes luminous content and the illumination entity is generated to visually represent light emitted by the luminous content. 3. The method of claim 1, wherein the content includes a skeletal animation model having a plurality of skeletal joints. 4. The method of claim 3, wherein the generating a bounding box on the surface plane for the inserted content includes generating the bounding box based on the plurality of skeletal joints. 5.
The method of claim 3 , ...
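A minimal sketch of rendering illumination values from an illumination center point: each pixel of the illumination entity gets a value that falls off with its distance from the center. The linear falloff and the radius parameter are assumptions, since the abstract does not specify the falloff curve:

```python
import math

def illumination_value(pixel, center, radius):
    """Illumination for one pixel of the illumination entity.

    Sketch only: the value falls off linearly from 1.0 at the
    illumination center point to 0.0 at the edge of the bounding
    region of radius `radius`.
    """
    d = math.dist(pixel, center)
    return max(0.0, 1.0 - d / radius)

v_center = illumination_value((0.0, 0.0), (0.0, 0.0), radius=4.0)
v_edge = illumination_value((4.0, 0.0), (0.0, 0.0), radius=4.0)
```

The second illumination entity the abstract mentions would instead sum per-joint contribution values at each pixel before rendering.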

28-01-2016 publication date

CLOUD BASED OPERATING SYSTEM AND BROWSER WITH CUBE INTERFACE

Number: US20160026359A1

A user interface and/or browser is provided having a tilted cube hexagonal structure. The cube is rotatable and has advertisements and theme-based window panes on the back and the front of the panes. The cube is rotated in such a fashion that the tilted orientation is maintained constant but the panes revolve about a tilted axis much as the Earth revolves about its axis. Computer software coordinates the responses of keyboard, mouse, and other devices as they interact with the cube itself so as to provide a stimulating 3D graphical user interface. The system is applicable to mobile devices such as smart phones and similar devices. 1. A three dimensional browser interface comprising: a rotatable object having a pane with a category theme. 2. The three dimensional browser interface of claim 1, further comprising: a link within the pane. 3. The three dimensional browser interface of claim 1, further comprising: an icon within the pane. 4. The three dimensional browser interface of claim 3, further comprising: an interactive link associated with the icon. 5. The three dimensional browser interface of claim 1, wherein the rotatable object is a polygon. 6. The three dimensional browser interface of claim 1, further comprising: a plurality of panes connected together wherein each pane is connected to only 2 adjacent panes along edges thereof in a regular polygon structure. 7. A method of creating a graphical user interface GUI comprising the steps of: creating a 3D model of the GUI; generating a 3D animation layout of the GUI; creating a 3D rendering of the GUI; and combining the rendering of the GUI with a user interface routine. 8. The method of creating a graphical user interface GUI of claim 7, wherein the user interface routine is a mouse interaction routine. 9. The method of creating a graphical user interface GUI of claim 7, wherein the user interface routine is a keyboard interaction routine. 10.
The method of creating a graphical user interface GUI of claim 7 , wherein ...

28-01-2016 publication date

Methods for Capturing Images of a Control Object and Tracking to Control Interfacing with Video Game Objects

Number: US20160027188A1
Author: Marks Richard L.

Methods for real time motion capture for controlling an object in a video game are provided. One method includes defining a model of a control object and identifying a marker on the control object. The method also includes capturing movement associated with the control object with a video capture device. Then, interpreting the movement associated with the control object to change a position of the model based on data captured through the video capture device, wherein the data captured includes the marker. The method includes moving the video game object presented on the display screen in substantial real-time according to the change of position of the model. 1. A method for real-time motion capture for control of a video game object during game play , comprising:defining a model of a control object;identifying a marker on the control object;capturing movement associated with the control object with a video capture device; andinterpreting the movement associated with the control object to change a position of the model based on data captured through the video capture device, the data captured including the marker; andmoving the video game object presented on the display screen in substantial real-time according to the change of position of the model.2. The method of claim 1 , wherein the control object is a hand of a person claim 1 , the hand being tracked over time to capture changes claim 1 , such that changes in the hand being tracked enable manipulations of the video game object claim 1 , wherein the manipulations include the moving.3. The method of claim 1 , wherein the method operation of capturing movement associated with the control object includes claim 1 ,capturing movement associated with an object being controlled by the control object.4. The method of claim 1 , further comprising:continuing to capture movement associated with the control object, interpret the movement associated with the control object to change a position of the model and control ...

28-01-2016 publication date

ANIMATED AUDIOVISUAL EXPERIENCES DRIVEN BY SCRIPTS

Number: US20160027198A1

In an embodiment, a computerized method comprises receiving a meta-language file comprising a conversion of a script file in a natural language format, the script file including a plurality of natural language statements; interpreting, by a first computing device, the meta-language file to execute at least a first portion of the meta-language file; dynamically generating and displaying, on the first computing device, one or more visually animated graphical elements in accordance with the execution of at least the first portion of the meta-language file. 1. A computerized method , comprising:receiving a meta-language file comprising a conversion of a script file in a natural language format, the script file including a plurality of natural language statements;interpreting, by a first computing device, the meta-language file for execution of at least a first portion of the meta-language file;dynamically generating and displaying, on the first computing device, one or more visually animated graphical elements in accordance with the execution of the at least a first portion of the meta-language file;in response to a user interactive action taken on the one or more visually animated graphical elements of the at least a first portion of the meta-language file, by a first user on the first computing device, setting at least one parameter based on the user interactive action;wherein setting the at least one parameter causes a third meta-language file associated with the script file to particularly execute for a second user that is different from execution of the third meta-language file for the second user when the at least one parameter is not set;receiving a second meta-language file comprising a conversion of a second script file in a natural language format, the second script file including a plurality of natural language statements, the second script file separate and different from the script file;interpreting the second meta-language file for particular execution of 
...

10-02-2022 publication date

BEHAVIOR DATA PROCESSING SYSTEM

Number: US20220043505A1
Author: CHUNG WEI-KAI

A behavior data processing system includes a sensor, an interaction detector, and a VR head mounted device. The sensor is worn by a user for sensing the positional variation track of the user to generate a motion behavior signal. The interaction detector is disposed on a body model. The interaction detector, according to the interaction relationship between the user and the body model, identifies an interaction behavior signal. The VR head mounted device, according to the motion behavior signal sensed by the sensor and the interaction behavior signal detected by the interaction detector, accurately simulates the interaction relationship between the user and the virtual character, so as to increase the realism of the simulation. 1. A data processing system, comprising: a sensor worn by a user for sensing a variation track of positions of the user for generating a motion behavior signal; the sensor comprising a motion detection module and a first communication module; the motion detection module identifying the variation track of the positions of the user according to variations selected from a group consisting of a pose, direction, angle, movement, and speed of the user, and accordingly generating the motion behavior signal; the first communication module being applied for transmitting the motion behavior signal; an interaction detector disposed on a body model for detecting a plurality of motions of the user contacting the body model and identifying an interaction behavior signal according to the motions; the interaction detector comprising an interaction detection module and a second communication module; the interaction detection module, according to a pressurization position, variations of pressure, temperature, and moisture generated by the user touching the body model, identifying the interaction behavior signal; the second communication module being applied for transmitting the interaction behavior signal; and a VR head mounted device coupled
...

25-01-2018 publication date

Rigging for non-rigid structures

Number: US20180025525A1
Assignee: PIXAR

Techniques for animating a non-rigid object in a computer graphics environment. A three-dimensional (3D) curve rigging element representing the non-rigid object is defined, the 3D curve rigging element comprising a plurality of knot primitives. One or more defined values are received for an animation control attribute of a first knot primitive. One or more values are generated, for a second animation control attribute for a second knot primitive, based on the plurality of animation control attributes of a neighboring knot primitive. An animation is then rendered using the 3D curve rigging element. More specifically, one or more defined values for the first attribute of the first knot primitive and the generated value for the second attributes of the second knot primitive are used to generate the animation. The rendered animation is output for display.

10-02-2022 publication date

ANIMATION PRODUCTION SYSTEM

Number: US20220044462A1

To take animations in a virtual space, an animation production method comprising: a step of placing a virtual camera in a virtual space; a step of placing one or more objects in the virtual space; a user input detection unit that detects an input of a user from at least one of a head mounted display and a controller which the user has mounted; a step of accepting at least one choice of the object in response to the input; and a step of removing the object from the virtual space in response to the input. 1. An animation production method comprising: a step of placing a virtual camera in a virtual space; a step of placing one or more objects in the virtual space; a user input detection unit that detects an input of a user from at least one of a head mounted display and a controller which the user has mounted; a step of accepting at least one choice of the object in response to the input; and a step of removing the object from the virtual space in response to the input. This is a continuation application of U.S. patent application Ser. No. 17/008,387 filed Aug. 31, 2020, which claims the priority benefit of Japan Patent Application, Serial No. JP2020-128309, filed Jul. 29, 2020, the disclosure of which is incorporated herein by reference. The present invention relates to an animation production system. Virtual cameras are arranged in a virtual space (see Patent Document 1). No attempt was made to capture animations in the virtual space. The present invention has been made in view of such a background, and is intended to provide a technology capable of capturing animations in a virtual space. The principal invention for solving the above-described problem is an animation production method comprising: placing a virtual camera in a virtual space; placing one or more objects in the virtual space; a user input detector for detecting the user input from at least one of a head mount display and a controller mounted by a user; receiving at least one choice of the object in response to the input; ...

Подробнее
10-02-2022 дата публикации

SPEECH-DRIVEN ANIMATION METHOD AND APPARATUS BASED ON ARTIFICIAL INTELLIGENCE

Номер: US20220044463A1
Принадлежит:

Embodiments of this application disclose a speech-driven animation method and apparatus based on artificial intelligence (AI). The method includes obtaining a first speech, the first speech comprising a plurality of speech frames; determining linguistics information corresponding to a speech frame in the first speech, the linguistics information being used for identifying a distribution possibility that the speech frame in the first speech pertains to phonemes; determining an expression parameter corresponding to the speech frame in the first speech according to the linguistics information; and enabling, according to the expression parameter, an animation character to make an expression corresponding to the first speech. 1. A speech-driven animation method, performed by an audio and video processing device, the method comprising: obtaining a first speech, the first speech comprising a plurality of speech frames; determining linguistics information corresponding to a speech frame in the first speech, the linguistics information being used for identifying a distribution possibility that the speech frame in the first speech pertains to phonemes; determining an expression parameter corresponding to the speech frame in the first speech according to the linguistics information; and enabling, according to the expression parameter, an animation character to make an expression corresponding to the first speech. 2. The method according to claim 1, wherein a target speech frame is a speech frame in the first speech, and the determining an expression parameter corresponding to the speech frame in the first speech according to the linguistics information comprises: determining a speech frame set in which the target speech frame is located, the speech frame set comprising the target speech frame and speech frames preceding and succeeding the target speech frame; and determining an expression parameter corresponding to the target speech frame according to linguistics ...
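The windowed determination in claim 2 — collecting the target frame plus the frames preceding and succeeding it, then deriving an expression parameter from their linguistics information — can be illustrated with a toy sketch. The function names and the mean-pooling of phoneme posteriors are assumptions for illustration; the patent does not specify this pooling rule.

```python
def expression_window(frames, t, half=2):
    """Collect the speech-frame set around target index t (clamped at the edges)."""
    lo, hi = max(0, t - half), min(len(frames), t + half + 1)
    return frames[lo:hi]

def expression_parameter(linguistics):
    """Toy stand-in: average the per-frame phoneme posteriors over the window."""
    n = len(linguistics)
    dim = len(linguistics[0])
    return [sum(f[k] for f in linguistics) / n for k in range(dim)]

# Each row is a (toy) phoneme posterior distribution for one speech frame.
posteriors = [[0.9, 0.1], [0.7, 0.3], [0.5, 0.5], [0.3, 0.7], [0.1, 0.9]]
window = expression_window(posteriors, 2, half=1)  # frames 1..3
print(expression_parameter(window))
```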

Подробнее
10-02-2022 дата публикации

Textured mesh building

Номер: US20220044479A1
Принадлежит: Snap Inc

Systems and methods are provided for receiving a two-dimensional (2D) image comprising a 2D object; identifying a contour of the 2D object; generating a three-dimensional (3D) mesh based on the contour of the 2D object; and applying a texture of the 2D object to the 3D mesh to output a 3D object representing the 2D object.
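The contour-to-mesh and texture-mapping steps can be sketched in miniature: triangulate a (convex) contour polygon with a triangle fan, and derive per-vertex UV coordinates by normalizing the contour into the unit square so the source 2D image can be applied as a texture. Both helpers are illustrative assumptions; the actual mesh generation is not specified in the abstract.

```python
def fan_triangulate(contour):
    """Fan triangulation of a convex contour: triangles (v0, vi, vi+1)."""
    return [(0, i, i + 1) for i in range(1, len(contour) - 1)]

def uv_from_contour(contour):
    """Map each contour vertex into [0,1]^2 so the 2D image can texture the mesh."""
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    w = (max(xs) - min(xs)) or 1
    h = (max(ys) - min(ys)) or 1
    return [((x - min(xs)) / w, (y - min(ys)) / h) for x, y in contour]

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(fan_triangulate(square))   # → [(0, 1, 2), (0, 2, 3)]
print(uv_from_contour(square))   # → [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

A production system would handle concave contours (e.g. ear-clipping) and add depth to the mesh; the fan makes the idea concrete for the convex case.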

Подробнее
29-01-2015 дата публикации

MOTION CONTROL OF ACTIVE DEFORMABLE OBJECTS

Номер: US20150029198A1
Принадлежит: PIXAR

Techniques are proposed for animating a deformable object. A geometric mesh comprising a plurality of vertices is retrieved, where the geometric mesh is related to a first rest state configuration corresponding to the deformable object. A motion goal associated with the deformable object is then retrieved. The motion goal is translated into a function of one or more state variables associated with the deformable object. A second rest state configuration corresponding to the deformable object is computed by adjusting the position of at least one vertex in the plurality of vertices based at least in part on the function. 1. A method of animating a deformable object, the method comprising: retrieving a geometric mesh comprising a plurality of vertices related to a first rest state configuration corresponding to the deformable object; retrieving a motion goal associated with the deformable object; translating the motion goal into a function of one or more state variables associated with the deformable object; and computing a second rest state configuration corresponding to the deformable object by adjusting the position of at least one vertex in the plurality of vertices based at least in part on the function. 2. The method of claim 1, further comprising: generating a regularizing potential comprising a smoothing function associated with the second rest state configuration; and applying the regularizing potential to the second rest state configuration to create a third rest state configuration corresponding to the deformable object. 3. The method of claim 1, further comprising deforming the geometric mesh based on the second rest state configuration. 4. The method of claim 1, wherein the at least one vertex is internal to the deformable object. 5. The method of claim 1, wherein the geometric mesh encloses the deformable object such that each vertex in the plurality of vertices is external to the deformable object. 6. The method of claim 1, wherein the position of the at ...
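The core computation — adjusting rest-state vertex positions so that a function of the object's state variables approaches a motion goal — can be sketched with a deliberately simple state variable (the mesh centroid) and a fixed-point iteration. Everything here is an illustrative assumption; PIXAR's actual solver is not described in the abstract.

```python
def adjust_rest_state(vertices, goal_fn, step=0.5, iters=20):
    """Nudge rest-state vertices so the mesh centroid tracks a motion goal.

    goal_fn maps the current centroid (a toy 'state variable') to a target
    centroid; each iteration translates every vertex a fraction of the gap."""
    vs = [list(v) for v in vertices]
    for _ in range(iters):
        cx = sum(v[0] for v in vs) / len(vs)
        cy = sum(v[1] for v in vs) / len(vs)
        tx, ty = goal_fn((cx, cy))
        for v in vs:  # move every rest vertex a fraction toward the goal
            v[0] += step * (tx - cx)
            v[1] += step * (ty - cy)
    return vs

unit_square = [(0, 0), (1, 0), (1, 1), (0, 1)]
adjusted = adjust_rest_state(unit_square, lambda c: (1.0, 0.0))
print(adjusted)
```

The geometric error shrinks by the factor `1 - step` per iteration, so after 20 iterations the centroid is within about 1e-6 of the goal.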

Подробнее
23-01-2020 дата публикации

AUGMENTED REALITY (AR) BASED FAULT DETECTION AND MAINTENANCE

Номер: US20200026257A1
Принадлежит: Accenture Global Solutions Limited

An AR based fault detection and maintenance system analyzes real-time video streams from a remote user device to identify a specific context level at which a user is to handle an equipment and provides instructions corresponding to the specific context level. The instructions enable generating AR simulations that guide the user in executing specific operations including repairs on faulty components of the equipment. The received video stream is initially analyzed to identify a particular equipment which is to be handled by the user. Fault prediction procedures are executed to identify faults associated with the equipment. The instructions to handle the faults are transmitted to the user device as AR simulations that provide step-by-step simulations that enable the user to execute operations as directed by the instructions. 1. An Augmented Reality (AR)-based fault detection and maintenance system comprising: at least one processor; a non-transitory computer readable medium storing machine-readable instructions that cause the at least one processor to: receive real-time video feed from a remote user device, the real-time video feed transmitting video of a facility including equipment; identify from the real-time video feed, using a trained AI-based object identifier, a faulty equipment to be worked on by a user associated with the user device; further obtain, using the trained AI-based object identifier, an input image from the real-time video feed, the input image including a component to be repaired within the faulty equipment; classify the input image into one of a plurality of fault classes using an AI-based fault identifier; detect a fault associated with the component in the input image using historical data specific to the equipment and further based on weights associated with attributes of the component; determine serially one of a plurality of context levels at which the fault has been detected based at least on the real-time video feed; and enable providing a ...
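The claim's fault detection "based on weights associated with attributes of the component" can be pictured as a weighted attribute score compared against a cutoff. The attribute names, weights, and the 0.6 cutoff are all invented for illustration.

```python
def fault_score(attributes, weights):
    """Weighted sum of component attributes; scores above a cutoff flag a fault."""
    return sum(attributes[k] * weights.get(k, 0.0) for k in attributes)

# Hypothetical normalized attributes and weights for one component.
attrs = {"temperature": 0.9, "vibration": 0.7, "age": 0.2}
weights = {"temperature": 0.5, "vibration": 0.4, "age": 0.1}
score = fault_score(attrs, weights)
print(round(score, 2), score > 0.6)  # 0.75, flagged as faulty
```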

Подробнее
24-01-2019 дата публикации

CONVERSION OF 2D DIAGRAMS TO 3D RICH IMMERSIVE CONTENT

Номер: US20190026931A1
Принадлежит:

Implementations are directed to methods, systems, apparatus, and computer programs for generation of a three-dimensional (3D) animation by receiving a user input defining a two-dimensional (2D) representation of a plurality of elements, processing, by the one or more processors, the 2D representation to classify the plurality of elements in symbolic elements and action elements, generating, by the one or more processors, based on the symbolic elements, the action elements, and a set of rules a 3D animation corresponding to the 2D representation, and transmitting, by the one or more processors, the 3D animation to an extended reality device for display. 1. A computer-implemented method for generation of a three-dimensional (3D) animation, the method being executed by one or more processors and comprising: receiving a user input defining a two-dimensional (2D) representation of a plurality of elements; processing, by the one or more processors, the 2D representation to classify each of the plurality of elements as one of (i) a symbolic element for which a visual representation is to be generated in the 3D animation, or (ii) an action element that represents a trajectory of a corresponding visual representation that is to be animated in the 3D animation, and for which no visual representation is to be generated in the 3D animation; generating, by the one or more processors, based on the symbolic elements, the action elements, and a set of rules, a 3D animation corresponding to the 2D representation, wherein the 3D animation includes one or more visual representations corresponding to one or more of the symbolic elements, animated according to a corresponding trajectory represented by one or more of the action elements, and does not include a visual representation corresponding to the one or more of the action elements themselves; and transmitting, by the one or more processors, the 3D animation to an extended reality device for display. 2. The method of claim 1, further ...
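The symbolic/action split and the "animate along the action element's trajectory" rule can be sketched as follows; the element schema (`kind`, `start`, `end`) and the linear trajectory sampling are illustrative assumptions.

```python
def classify(elements):
    """Split 2D diagram elements into symbolic (rendered) and action (trajectory-only)."""
    symbolic = [e for e in elements if e["kind"] in {"box", "icon", "label"}]
    actions = [e for e in elements if e["kind"] == "arrow"]
    return symbolic, actions

def animate(symbol, arrow, steps=3):
    """Sample positions along the arrow's straight-line trajectory for the symbol.
    The arrow itself never appears in the output animation."""
    (x0, y0), (x1, y1) = arrow["start"], arrow["end"]
    return [(x0 + (x1 - x0) * t / steps, y0 + (y1 - y0) * t / steps)
            for t in range(steps + 1)]

elems = [{"kind": "box", "name": "truck"},
         {"kind": "arrow", "start": (0, 0), "end": (3, 0)}]
sym, act = classify(elems)
print(animate(sym[0], act[0]))  # → [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
```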

Подробнее
24-01-2019 дата публикации

DEVICE, PROGRAM, AND INFORMATION PROCESSING METHOD

Номер: US20190026932A1
Принадлежит: DENTSU INC.

The present invention is designed so that, even when a CG model moves with a tempo, it is possible to realize natural movement. The device according to one aspect of the present invention has a control section that exerts control so that specific data, which corresponds to a case where a predetermined parameter has a specific value, is generated using a plurality of pieces of data corresponding to respective cases where the predetermined parameter has different values, and a playback section that reproduces a predetermined computer graphics (CG) model based on the specific data. 1. A device comprising: a control section that exerts control so that specific data, which corresponds to a case where a predetermined parameter has a specific value, is generated using a plurality of pieces of data corresponding to respective cases where the predetermined parameter has different values; and a playback section that reproduces a predetermined computer graphics (CG) model based on the specific data. 2. The device according to claim 1, wherein the predetermined parameter is at least one of BPM (Beats Per Minute) and a parameter that indicates a characteristic of motion. 3. The device according to claim 1, wherein the control section generates the specific data using, among the plurality of pieces of data, data that is generated so that, in first data, which corresponds to a case where the predetermined parameter has a first value, the first value matches the specific value, and data that is generated so that, in second data, which corresponds to a case where the predetermined parameter has a second value, the second value matches the specific value. 4. The device according to claim 1, wherein the plurality of pieces of data are included in channels of the same data structure conforming to SMF (Standard MIDI File) format. 5. The device according to claim 1, wherein the predetermined parameter is ...
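Generating "specific data" for a parameter value (e.g. a target BPM) from pieces of data authored at different parameter values can be pictured as a blend: weight the two source motion curves by where the target tempo sits between the two source tempos. The linear blend is an illustrative assumption, not the patent's stated formula.

```python
def blend_for_bpm(data_a, bpm_a, data_b, bpm_b, bpm_target):
    """Blend two motion curves authored at different BPMs into one for the target BPM.
    The weight is the target's position between the two source tempos."""
    w = (bpm_target - bpm_a) / (bpm_b - bpm_a)
    return [(1 - w) * a + w * b for a, b in zip(data_a, data_b)]

slow = [0.0, 1.0, 0.0]   # e.g. joint-angle keyframes authored at 60 BPM
fast = [0.0, 2.0, 0.0]   # the same motion authored at 120 BPM
print(blend_for_bpm(slow, 60, fast, 120, 90))  # → [0.0, 1.5, 0.0]
```

This mirrors claim 3's idea of aligning first and second data to the specific value before combining them; a real implementation would also time-warp each curve to the target tempo.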

Подробнее
25-01-2018 дата публикации

EMOTIONAL REACTION SHARING

Номер: US20180027307A1
Принадлежит:

One or more computing devices, systems, and/or methods for emotional reaction sharing are provided. For example, a client device captures video of a user viewing content, such as a live stream video. Landmark points, corresponding to facial features of the user, are identified and provided to a user reaction distribution service that evaluates the landmark points to identify a facial expression of the user, such as a crying facial expression. The facial expression, such as landmark points that can be applied to a three-dimensional model of an avatar to recreate the facial expression, is provided to client devices of users viewing the content, such as a second client device. The second client device applies the landmark points of the facial expression to a bone structure mapping and a muscle movement mapping to create an expressive avatar having the facial expression for display to a second user. 1. A method of emotional reaction sharing, the method involving a computing device comprising a processor, and the method comprising: executing, on the processor, instructions that cause the computing device to perform operations, the operations comprising: responsive to determining that a user is viewing content through a client device, initializing a camera of the client device to capture one or more frames of video of the user; evaluating a first frame of the video to identify a set of facial features of the user; generating a set of landmark points, within the first frame, representing the set of facial features; and sending the set of landmark points to a user reaction distribution service for identifying a facial expression of the user, based upon the set of landmark points, for display through a second client device to a second user. 2. The method of claim 1, wherein the set of landmark points comprise coordinates of between about 4 landmark points to about 240 landmark points, a landmark point specifying a location of a facial feature. 3. The ...
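Identifying a facial expression from landmark coordinates can be illustrated with a deliberately tiny geometric rule: if the mouth corners sit above the mouth center (in image coordinates, where y grows downward), call it a smile. The landmark names, the single-feature rule, and the threshold are all invented for illustration.

```python
def classify_expression(landmarks):
    """Toy classifier: mouth corners lifted above the mouth center suggest a smile."""
    left = landmarks["mouth_left"]
    right = landmarks["mouth_right"]
    center = landmarks["mouth_center"]
    # Image y grows downward, so a larger center-y means the corners are lifted.
    lift = center[1] - (left[1] + right[1]) / 2
    return "smile" if lift > 2 else "neutral"

print(classify_expression({"mouth_left": (30, 60), "mouth_right": (70, 60),
                           "mouth_center": (50, 66)}))  # → smile
```

A real service would feed the full landmark set (the claim mentions roughly 4 to 240 points) into a trained model rather than a hand-written rule.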

Подробнее
28-01-2021 дата публикации

Systems and Methods for Animation Generation

Номер: US20210027511A1
Принадлежит: LoomAi, Inc.

Systems and methods for animating from audio in accordance with embodiments of the invention are illustrated. One embodiment includes a method for generating animation from audio. The method includes steps for receiving input audio data, generating an embedding for the input audio data, and generating several predictions for several tasks from the generated embedding. The several predictions include at least one of blendshape weights, event detection, and/or voice activity detection. The method includes steps for generating a final prediction from the several predictions, where the final prediction includes a set of blendshape weights, and generating an output based on the generated final prediction. 1. A method for generating animation from audio, the method comprising: receiving input audio data; generating an embedding for the input audio data; generating a plurality of predictions for a plurality of tasks from the generated embedding, wherein the plurality of predictions comprises at least one of blendshape weights, event detection, and voice activity detection; generating a final prediction from the plurality of predictions, wherein the final prediction comprises a set of blendshape weights; and generating an output based on the generated final prediction. 2. The method of claim 1, wherein the input audio data comprises mel-frequency cepstral coefficient (MFCC) features. 3. The method of claim 2, wherein generating the embedding comprises utilizing at least one of a recurrent neural network and a convolutional neural network to generate the embedding based on the MFCC features. 4. The method of claim 1, wherein generating the plurality of predictions comprises utilizing a multi-branch decoder, wherein the multi-branch decoder comprises a plurality of Long Short Term Memory networks (LSTMs) that generate predictions for the plurality of tasks based on the generated embedding. 5. The method of claim 1, wherein generating the plurality of predictions ...
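One plausible way the multi-task predictions combine into a final prediction is to gate the blendshape branch by the voice-activity branch: when no speech is detected, relax the face toward neutral. This gating rule is an assumption for illustration, not LoomAi's disclosed fusion method.

```python
def final_prediction(blendshapes, voice_activity):
    """Gate predicted blendshape weights by voice-activity detection:
    with no detected speech, drive the face toward neutral (all-zero weights)."""
    return [w if voice_activity else 0.0 for w in blendshapes]

print(final_prediction([0.4, 0.1, 0.7], voice_activity=True))   # → [0.4, 0.1, 0.7]
print(final_prediction([0.4, 0.1, 0.7], voice_activity=False))  # → [0.0, 0.0, 0.0]
```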

Подробнее
28-01-2021 дата публикации

PACK TILE

Номер: US20210027521A1
Автор: Kato Toshiaki
Принадлежит: DreamWorks Animation LLC

A method of facilitating an interactive rendering of a computer image at a remote computer includes: at a first time, obtaining first information of the image, including pixel information of the image at the first time; and, at a second time after the first time, obtaining second information of the image including pixel information of the image at the second time. Delta pixel information is generated by comparing the pixel information of the first information with the pixel information of the second information, to include one or more portions of the pixel information of the second information updated since the first information was obtained, and to exclude one or more portions of the pixel information of the second information unchanged since the first information was obtained. The method further includes: transmitting the delta pixel information in a lossless format to a front-end client to enable reconstruction of the second information. 1. A method of facilitating an interactive rendering of a computer image at a remote computer, the method comprising: at a first time, obtaining first information of the computer image, the first information comprising pixel information of the computer image at the first time; at a second time after the first time, obtaining second information of the computer image, the second information comprising pixel information of the computer image at the second time; generating delta pixel information by comparing the pixel information of the first information with the pixel information of the second information, wherein the delta pixel information is generated to include one or more portions of the pixel information of the second information that are updated since the first information was obtained, and wherein the delta pixel information is generated to exclude one or more portions of the pixel information of the second information that are unchanged since the first information was obtained; and transmitting the delta pixel information in a
...
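The delta-generation and reconstruction steps reduce to a classic frame-differencing pattern: keep only the pixels that changed, send them losslessly, and patch them over the previous frame on the client. Flat pixel lists and an index-to-value dict are illustrative simplifications of the tile-based scheme.

```python
def delta(first, second):
    """Keep only pixels that changed between two frames, as {index: new_value}."""
    return {i: b for i, (a, b) in enumerate(zip(first, second)) if a != b}

def reconstruct(first, delta_pixels):
    """Client side: apply the delta to the previously received frame."""
    out = list(first)
    for i, v in delta_pixels.items():
        out[i] = v
    return out

frame1 = [10, 20, 30, 40]
frame2 = [10, 25, 30, 41]
d = delta(frame1, frame2)
print(d)                       # → {1: 25, 3: 41}
print(reconstruct(frame1, d))  # → [10, 25, 30, 41]
```

Unchanged pixels are excluded exactly as the claim requires, so the payload size tracks how much of the image actually changed between the two times.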

Подробнее
02-02-2017 дата публикации

SYSTEM AND METHOD FOR DIGITAL DELIVERY OF REVEAL VIDEOS FOR ONLINE GIFTING

Номер: US20170032437A1
Принадлежит:

An electronic gift (e-gift) giving system includes a first computing device that receives from a second computing device of a giver, e-gift information associated with an e-gift to be given to a recipient and reveal video information from the second computing device. From this information, the first computing device generates an interactive reveal video that, when displayed on a recipient computing device of the recipient, receives one or more user interface input actions and generates one or more tactile feedback actions to be performed by the second computing device in response to the user interface input action. 1. An electronic gift (e-gift) gifting system comprising: a first computing device comprising at least one processor; and at least one memory for storing an application executed on the at least one processor to: receive, by the first computing device, e-gift information associated with an e-gift to be given to a recipient from a second computing device, wherein the e-gift is available for purchase from a merchant; generate, by the first computing device, a reveal presentation template for display; receive, by the first computing device, multimedia information from the second computing device, wherein the multimedia information includes user-supplied content related to the recipient and wherein the user-supplied content includes alpha-numeric text, photographs, audio content, video content, pre-recorded video content, animated content, or combinations thereof; generate, by the first computing device, a personalized interactive reveal presentation comprising the user-supplied content composited into one or more editable fields of the reveal presentation template and the personalized interactive reveal presentation comprising one or more other animations, one or more other HTML5 animations, one or more other image sequences, one or more other videos, or combinations thereof, wherein the personalized interactive reveal presentation receives a user interface input
...

Подробнее
02-02-2017 дата публикации

Methods and Systems for Providing a Preloader Animation for Image Viewers

Номер: US20170032568A1
Принадлежит:

Methods and systems for providing a preloader animation for image viewers are provided. An example method includes receiving an image of an object, determining an edge gradient value for pixels of the image, and selecting pixels representative of the object that have a respective edge gradient value above a threshold. The example method also includes determining a model of the object including an approximate outline of the object and structures internal to the outline that are oriented based on the selected pixels being coupling points between the structures, and providing instructions to display the model in an incremental manner so as to render given structures of the model over time. 1. A method comprising: downloading a three-dimensional (3D) object data model of an object; determining, by a computing device, a model of the object including an approximate outline of the object and structures internal to the outline; and providing instructions to display the model in an incremental manner during the downloading of the 3D object data model so as to render given structures of the model over time during the downloading of the 3D object data model, wherein a duration of display of the model is about an amount of time to download the 3D object data model and a portion of the model displayed is indicative of a progress of the download of the 3D object data model. 2. The method of claim 1, wherein the structures internal to the outline include triangles, and wherein determining the model of the object comprises determining a Delaunay triangulation of the object. 3. The method of claim 1, wherein the structures internal to the outline include one or more of circles, cylinders, or polygons. 4. The method of claim 1, wherein the instructions to display the model include one or more of a time to display given structures, a time to fill the given structures, a time to reveal a representation of the object, and a time to fade ...
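The edge-gradient selection step — computing a per-pixel gradient value and keeping pixels above a threshold as candidate coupling points — can be sketched with simple forward differences. The forward-difference operator and the threshold value are illustrative assumptions (a real pipeline might use Sobel filters).

```python
def edge_gradient(img):
    """Approximate per-pixel gradient magnitude with forward differences (|gx| + |gy|)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]
            gy = img[y + 1][x] - img[y][x]
            out[y][x] = abs(gx) + abs(gy)
    return out

def select_pixels(img, threshold):
    """Pixels whose edge gradient exceeds the threshold: candidate coupling points."""
    g = edge_gradient(img)
    return [(x, y) for y, row in enumerate(g) for x, v in enumerate(row) if v > threshold]

img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
print(select_pixels(img, 5))  # → [(1, 0), (0, 1), (1, 1)]
```

The selected points would then seed the Delaunay triangulation mentioned in claim 2.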

Подробнее
04-02-2016 дата публикации

THREE DIMENSIONAL ANIMATION OF A PAST EVENT

Номер: US20160035120A1
Автор: Delplace Jean-Charles
Принадлежит:

Methods and systems are disclosed for rendering a three dimensional animation of a past event. A request is received, at a processor, for re-creation of a past event associated with a job site. Input data is accessed, at the processor, regarding the past event associated with the job site, wherein the input data pertains to past movements and lifts of at least one lifting device associated with the job site. A three dimensional (3D) animation is generated, at the processor, of the past event involving the past movements of the at least one lifting device. The 3D animation is displayed, on a display, depicting the past event, wherein the displaying comprises playback controls for controlling the displaying. 1. A method for rendering a three dimensional animation of a past event, said method comprising: receiving a request, at a processor, for re-creation of a past event associated with a job site; accessing input data, at said processor, regarding said past event associated with said job site wherein said input data pertains to past movements and lifts of at least one lifting device associated with said job site; generating a three dimensional (3D) animation, at said processor, of said past event involving said past movements of said at least one lifting device; and displaying said 3D animation, on a display, depicting said past event wherein said displaying comprises playback controls for controlling said displaying. 2. The method as recited in wherein said generating said 3D animation is a first instance of rendering said 3D animation of said past event. 3. The method as recited in wherein said generating said 3D animation accesses a previously rendered 3D animation of said past event and said displaying is a replay of said previously rendered 3D animation of said past event. 4. The method as recited in wherein said 3D animation comprises a plurality of lifting devices associated with said job site and at least one partially constructed building. 5.
The method as recited in ...

Подробнее
04-02-2016 дата публикации

METHOD, SYSTEM AND APPARATUS FOR PROVIDING VISUAL FEEDBACK OF A MAP VIEW CHANGE

Номер: US20160035121A1
Принадлежит: Apple Inc.

Methods, systems and apparatus are described to provide visual feedback of a change in map view. Various embodiments may display a map view of a map in a two-dimensional map view mode. Embodiments may obtain input indicating a change to a three-dimensional map view mode. Input may be obtained through the utilization of touch, auditory, or other well-known input technologies. Some embodiments may allow the input to request a specific display position to display. In response to the input indicating a change to a three-dimensional map view mode, embodiments may then display an animation that moves a virtual camera for the map display to different virtual camera positions to illustrate that the map view mode is changed to a three-dimensional map view mode. 1. A method, comprising: performing, by a computing device: displaying a map view of a map in a map view mode in a map display, wherein said map view is displayed in a two-dimensional map view mode; obtaining input indicating a change to a three-dimensional map view mode for the map view; and in response to the input, displaying an animation, wherein said animation moves a virtual camera for the map display, wherein said displaying comprises moving the virtual camera to render three-dimensional data for the map at different virtual camera positions to illustrate that the map view mode has changed to the three-dimensional map view mode. 2. The method of claim 1, wherein said input comprises directions specifying a virtual camera position for the map display, wherein said virtual camera ends the animation to render three-dimensional data for the map at the specified virtual camera position. 3. The method of claim 1, wherein said moving the virtual camera to render three-dimensional data for the map at different virtual camera positions comprises moving the virtual camera back and forth along a circular path to a plurality of virtual camera positions. 4. The method of claim 1, further comprising ...
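The camera animation from a top-down (2D) view to a tilted (3D) perspective can be sketched as an eased interpolation of the virtual camera's tilt angle over the animation frames. The smoothstep easing curve and the 45° target tilt are illustrative assumptions.

```python
def camera_animation(start_tilt, end_tilt, frames):
    """Ease the virtual camera's tilt from top-down (2D) to a 3D perspective.
    Smoothstep easing makes the transition read as deliberate camera motion."""
    out = []
    for f in range(frames + 1):
        t = f / frames
        s = t * t * (3 - 2 * t)  # smoothstep: zero slope at both endpoints
        out.append(start_tilt + s * (end_tilt - start_tilt))
    return out

tilts = camera_animation(0.0, 45.0, 4)
print([round(v, 2) for v in tilts])  # → [0.0, 7.03, 22.5, 37.97, 45.0]
```

A full implementation would animate position and heading along a path (claim 3's back-and-forth circular motion) as well, but tilt alone shows the 2D-to-3D feedback idea.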

Подробнее
01-05-2014 дата публикации

COMPUTER SYSTEM AND ASSEMBLY ANIMATION GENERATION METHOD

Номер: US20140118358A1
Автор: ENOMOTO Atsuko
Принадлежит: Hitachi, Ltd.

Provided is a technology for automatically generating a camera pose enabling the viewing of an operation of an object component in a work instruction animation. A primary inertia axis of an assembled item is calculated from inertia tensor information of a plurality of components constituting the assembled item. Adjacency relationship information indicating an adjacency relationship between the plurality of components is acquired. Based on the adjacency relationship information of the plurality of components, an assembly sequence and an assembly motion vector indicating an assembled direction of the plurality of components are generated such that each of the plurality of components does not interfere with a proximate component. Further, a plurality of camera eye sights, each having a camera axis about the primary inertia axis and providing an operator's vision candidate during the generation of the assembly animation, are arranged. 1. A computer system for generating an assembly animation showing an operation for assembling an assembled item based on an assembly sequence indicating a sequence of assembly of the assembled item, the computer system comprising: a processor for generating the assembly animation; a memory for storing information about the generated assembly animation; and a display for displaying the assembly animation in response to control by the processor, wherein the processor: acquires inertia tensor information of a plurality of components of the assembled item, and calculates a primary inertia axis of the assembled item from the inertia tensor information; acquires adjacency relationship information indicating an adjacency relationship between the plurality of components; generates, based on the adjacency relationship information between the plurality of components, the assembly sequence and an assembly motion vector indicating an assembled direction of the plurality of components such that each of the plurality of components does not interfere with a ...
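Computing a primary inertia axis from an inertia tensor is an eigenvector problem: the principal axes are the eigenvectors of the symmetric 3×3 tensor. As a dependency-free sketch, power iteration recovers the eigenvector of the largest moment; which eigenvector the patent designates as "primary" is not specified here, so treat the choice as an assumption.

```python
def primary_inertia_axis(tensor, iters=50):
    """Dominant eigenvector of a symmetric 3x3 inertia tensor via power iteration."""
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        w = [sum(tensor[r][c] * v[c] for c in range(3)) for r in range(3)]
        n = sum(x * x for x in w) ** 0.5
        v = [x / n for x in w]  # renormalize each step
    return v

# Diagonal tensor: the axis with the largest moment (z here) dominates.
tensor = [[1.0, 0.0, 0.0],
          [0.0, 2.0, 0.0],
          [0.0, 0.0, 5.0]]
print([round(abs(x), 3) for x in primary_inertia_axis(tensor)])  # → [0.0, 0.0, 1.0]
```

The camera eye sights would then be arranged on a circle whose axis is this vector.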

Подробнее
01-02-2018 дата публикации

AUDIO-BASED CARICATURE EXAGGERATION

Номер: US20180033181A1
Принадлежит:

A method that uses at least one hardware processor for receiving a three-dimensional model of an object, receiving an audio sequence embodied as a digital file that comprises a musical composition, generating a video frame sequence, and synthesizing the audio sequence and the video frame sequence into an audiovisual clip. The three-dimensional model is embodied as a digital file that comprises a representation of the object. The generating step comprises computing a caricature of the object by applying a computerized caricaturization algorithm to the three-dimensional model. The computing comprises scaling gradient fields of surface coordinates of the three-dimensional model by a function of a Gaussian curvature of the surface, and finding a regular surface whose gradient fields fit the scaled gradient fields. The computing is with a different exaggeration factor for each of multiple ones of the video frames, and the different exaggeration factor is based on one or more parameters of the musical composition of the audio sequence. 1. A method comprising using at least one hardware processor for: receiving a three-dimensional model of an object, wherein the three-dimensional model is embodied as a digital file that comprises a representation of the object; receiving an audio sequence embodied as a digital file that comprises a musical composition; generating a video frame sequence, wherein the generating comprises computing a caricature of the object by applying a computerized caricaturization algorithm to the three-dimensional model, wherein the computing comprises scaling gradient fields of surface coordinates of the three-dimensional model by a function of a Gaussian curvature of the surface, and finding a regular surface whose gradient fields fit the scaled gradient fields, wherein (a) the computing is with a different exaggeration factor for each of multiple ones of the video frames, and (b) the different exaggeration factor is based on one or more parameters of the ...
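The per-frame exaggeration driven by the music can be pictured as a simple mapping from a musical parameter (here, per-frame amplitude) to a scaling factor applied to the surface gradient fields. The base/gain mapping and the uniform scaling are stand-ins for illustration; the patent scales by a function of Gaussian curvature, which is omitted here.

```python
def exaggeration_factors(amplitudes, base=1.0, gain=0.5):
    """Map per-frame musical amplitude (0..1) to a caricature exaggeration factor."""
    return [base + gain * a for a in amplitudes]

def scale_gradients(gradients, factor):
    """Uniformly scale surface gradient fields: a simplified stand-in for the
    curvature-weighted scaling in the caricaturization algorithm."""
    return [[g * factor for g in grad] for grad in gradients]

amps = [0.0, 0.5, 1.0]           # e.g. beat-aligned loudness per video frame
print(exaggeration_factors(amps))  # → [1.0, 1.25, 1.5]
```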

Подробнее
01-02-2018 дата публикации

GRAPHICS PROCESSING SYSTEMS

Номер: US20180033191A1
Принадлежит: ARM LIMITED

In a graphics processing system, a bounding volume () representative of the volume of all or part of a scene to be rendered is defined. Then, when rendering an at least partially transparent object () that is within the bounding volume () in the scene, a rendering pass for part or all of the object () is performed in which the object () is rendered as if it were an opaque object. In the rendering pass, for at least one sampling position () on a surface of the object (), the colour to be used to represent the part of the refracted scene that will be visible through the object () at the sampling position () is determined by using a view vector () from a viewpoint position () for the scene to determine a refracted view vector () for the sampling position (), determining the position on the bounding volume () intersected by the refracted view vector (), using the intersection position () to determine a vector () to be used to sample a graphics texture that represents the colour of the surface of the bounding volume () in the scene, and using the determined vector () to sample the graphics texture to determine a colour for the sampling position () to be used to represent the part of the refracted scene that will be visible through the object () at the sampling position () and any other relevant information encoded in one or more channels of the texture. 1. 
A method of operating a graphics processing system when rendering a scene for output, in which a bounding volume representative of the volume of all or part of the scene to be rendered is defined; the method comprising: when rendering an at least partially transparent object that is within the bounding volume in the scene: performing a rendering pass for some or all of the object in which the object is rendered as if it were an opaque object; and in the rendering pass: using a view vector from a viewpoint position for the scene to determine a refracted view vector for the sampling position; determining the position on ...
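The two geometric steps named in the claim, computing a refracted view vector and intersecting it with the bounding volume, can be sketched as follows. This is a minimal illustration using Snell's law and the slab method for an axis-aligned box; the patent does not prescribe these exact routines, and the vectors chosen are assumptions for the example.

```python
import numpy as np

def refract(view, normal, eta):
    """Snell-law refraction of a unit view vector at a surface with unit
    normal; eta = n1/n2. Returns None on total internal reflection."""
    cos_i = -np.dot(normal, view)
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None
    return eta * view + (eta * cos_i - np.sqrt(k)) * normal

def exit_point_aabb(origin, direction, box_min, box_max):
    """Slab-method intersection of a ray with an axis-aligned bounding
    volume; returns the far (exit) intersection point."""
    with np.errstate(divide="ignore"):    # axis-aligned rays divide by 0
        inv = 1.0 / direction
    t1 = (box_min - origin) * inv
    t2 = (box_max - origin) * inv
    t_far = np.min(np.maximum(t1, t2))
    return origin + t_far * direction

view = np.array([0.0, 0.0, -1.0])        # looking straight down -z
normal = np.array([0.0, 0.0, 1.0])
refr = refract(view, normal, 1.0 / 1.5)  # entering a denser medium
hit = exit_point_aabb(np.zeros(3), refr,
                      np.array([-2.0, -2.0, -2.0]), np.array([2.0, 2.0, 2.0]))
```

The intersection point `hit` would then be turned into a sampling vector for the cube-map-style texture that stores the bounding volume's surface colour.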

Подробнее
17-02-2022 дата публикации

System and method for triggering animated paralingual behavior from dialogue

Номер: US20220051463A1
Принадлежит: Jali Inc

A system and method for triggering animated paralingual behavior from dialogue. The method including: receiving a corpus of dialogue including a plurality of samples, each sample including dialogue with aligned sequences of phonemes or sub-phonemes; extracting properties of measured quantities for each sample; generating a statistical profile by statistically classifying the extracted properties as a function of each sequence of phonemes or sub-phonemes; receiving a stream of phonemes or sub-phonemes and triggering paralingual behavior when the properties of any of the phonemes or sub-phonemes deviate from the statistical profile beyond a predetermined threshold; outputting the triggered paralingual behavior for animation.
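The "statistical profile" and "deviation beyond a predetermined threshold" steps of the abstract amount to per-phoneme outlier detection. A minimal sketch, assuming pitch as the measured quantity and a z-score threshold (the corpus values and names are invented for illustration):

```python
import statistics

def build_profile(corpus):
    """Per-phoneme statistical profile: mean and standard deviation of a
    measured quantity (e.g. pitch) over an annotated dialogue corpus."""
    return {phoneme: (statistics.mean(values), statistics.stdev(values))
            for phoneme, values in corpus.items()}

def trigger_paralingual(stream, profile, threshold=2.0):
    """Flag phonemes whose measurement deviates from the profile by more
    than `threshold` standard deviations."""
    triggered = []
    for phoneme, value in stream:
        mean, stdev = profile[phoneme]
        if stdev > 0 and abs(value - mean) / stdev > threshold:
            triggered.append(phoneme)
    return triggered

corpus = {"aa": [100, 102, 98, 101], "eh": [150, 149, 151, 150]}
profile = build_profile(corpus)
events = trigger_paralingual([("aa", 100), ("eh", 175)], profile)
```

Here the "eh" measurement sits far outside its corpus distribution and would trigger a paralingual animation, while the in-profile "aa" would not.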

Подробнее
30-01-2020 дата публикации

Systems and methods for multisensory semiotic communications

Номер: US20200034025A1
Принадлежит: Individual

The present disclosure relates generally to systems and methods for receiving visual and communication inputs, and generating, modifying, and outputting multisensory semiotic communications. The multisensory semiotic communications can include an avatar, a dynamic image, an expressed phrase, and a visual text. The multisensory semiotic communications can be modified based on one or more customization selections. The customization selections can include a gender selection, an age selection, an emotion selection, a race selection, a location selection, a nationality selection, and a language selection.

Подробнее
31-01-2019 дата публикации

Systems and methods for real-time complex character animations and interactivity

Номер: US20190035129A1
Принадлежит: Baobab Studios Inc

Systems, methods, and non-transitory computer-readable media can identify a virtual deformable geometric model to be animated in a real-time immersive environment. The virtual deformable geometric model comprises a virtual model mesh comprising a plurality of vertices, a plurality of edges, and a plurality of faces. The virtual model mesh is iteratively refined in one or more iterations to generate a refined mesh. Each iteration of the one or more iterations increases the number of vertices, the number of edges, and/or the number of faces. The refined mesh is presented during real-time animation of the virtual deformable geometric model within the real-time immersive environment.
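The iterative refinement described, where each iteration increases the vertex, edge, and face counts, can be sketched with the simplest such scheme: 1-to-4 triangle subdivision by edge midpoints. This is a stand-in for whatever refinement the disclosure actually uses (no smoothing step is applied here):

```python
def refine(vertices, faces):
    """One refinement iteration: split every triangle into four by
    inserting edge midpoints, sharing midpoints between adjacent faces."""
    vertices = list(vertices)
    midpoint_cache = {}

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_cache:
            (x1, y1, z1), (x2, y2, z2) = vertices[i], vertices[j]
            vertices.append(((x1 + x2) / 2, (y1 + y2) / 2, (z1 + z2) / 2))
            midpoint_cache[key] = len(vertices) - 1
        return midpoint_cache[key]

    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return vertices, new_faces

# one iteration on a single triangle: 3 -> 6 vertices, 1 -> 4 faces
v, f = refine([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
```

Applying `refine` repeatedly quadruples the face count each iteration, matching the claim's "each iteration increases the number of vertices, edges, and/or faces."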

Подробнее
31-01-2019 дата публикации

Systems and methods for real-time complex character animations and interactivity

Номер: US20190035130A1
Принадлежит: Baobab Studios Inc

Systems, methods, and non-transitory computer-readable media can receive virtual model information associated with a virtual deformable geometric model. The virtual model information comprises a complex rig comprising a plurality of transforms and a first plurality of vertices defined by a default model, and a simplified rig comprising a second plurality of transforms and a second plurality of vertices. The second plurality of vertices correspond to the first plurality of vertices defined by the default model. The simplified rig and the complex rig are deformed based on an animation to be applied to the virtual deformable geometric model. A set of offset data is calculated. The set of offset data comprises, for each vertex in the first plurality of vertices, an offset between the vertex and a corresponding vertex in the second plurality of vertices.
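The offset-data idea in this abstract, precompute per-vertex differences between a complex rig's pose and a cheaper simplified rig's pose, then add them back at runtime, can be sketched directly (the pose values are invented for illustration):

```python
def compute_offsets(complex_vertices, simplified_vertices):
    """Per-vertex offset between the complex-rig pose and the
    corresponding simplified-rig pose."""
    return [tuple(c - s for c, s in zip(cv, sv))
            for cv, sv in zip(complex_vertices, simplified_vertices)]

def apply_offsets(simplified_vertices, offsets):
    """Approximate the complex-rig pose cheaply at runtime by adding the
    precomputed offsets to the simplified-rig deformation."""
    return [tuple(s + o for s, o in zip(sv, ov))
            for sv, ov in zip(simplified_vertices, offsets)]

complex_pose = [(1.0, 2.0, 0.5), (0.0, 1.0, 1.0)]
simple_pose = [(0.9, 2.1, 0.5), (0.1, 0.8, 1.0)]
offsets = compute_offsets(complex_pose, simple_pose)
restored = apply_offsets(simple_pose, offsets)
```

For the pose the offsets were baked from, the reconstruction is exact; for nearby animated poses it is an approximation whose cost is one vector add per vertex instead of a full complex-rig evaluation.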

Подробнее
31-01-2019 дата публикации

GENERATING EFFICIENT, STYLIZED MESH DEFORMATIONS USING A PLURALITY OF INPUT MESHES

Номер: US20190035151A1
Автор: Wampler Kevin
Принадлежит:

The present disclosure includes methods and systems for manipulating digital models based on user input. In particular, disclosed systems and methods can generate modified meshes in real time based on a plurality of input meshes and user manipulation of one or more control points. For example, one or more embodiments of the disclosed systems and methods generate modified meshes from a plurality of input meshes based on a combined shape-space, deformation interpolation measure. Moreover, in one or more embodiments, the disclosed systems and methods utilize an as-rigid-as-possible-deformation measure to combine input meshes into a modified mesh. Further, the disclosed systems and methods can variably combine input shapes over different portions of a modified mesh, providing increased expressiveness while reducing artifacts and increasing computing efficiency. 1. A computer-implemented method for generating an enhanced digital mesh through different combinations of a set of input digital meshes using combined shape-space energy interpolation measures, the method comprising: receiving an input to manipulate a digital model defined by a mesh of vertices, the input comprising an indication of a movement of a control point of the digital model to a new location; identifying a plurality of input meshes of the digital model, each input mesh comprising the mesh of vertices in a different configuration; and generating a modified mesh based on the plurality of input meshes and the movement of the control point to the new location by: generating a first portion of the modified mesh utilizing a first combination of the plurality of input meshes based on a first combined shape-space energy interpolation measure; and generating a second portion of the modified mesh utilizing a second combination of the plurality of input meshes based on a second combined shape-space energy interpolation measure, wherein the first combination is different than the second combination. 2.
The method of ...
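The core idea of combining input meshes differently over different portions of the output can be illustrated with a plain per-vertex linear blend. This is only a stand-in: the disclosure uses combined shape-space energy interpolation measures, not the naive weighting shown here, and the meshes and weights are invented for the example.

```python
def blend_meshes(input_meshes, weights_per_vertex):
    """Blend several input meshes (same vertex count, different
    configurations) into one modified mesh, with a different combination
    weight per vertex."""
    blended = []
    for v in range(len(input_meshes[0])):
        weights = weights_per_vertex[v]
        total = sum(weights)
        blended.append(tuple(
            sum(w * mesh[v][axis] for w, mesh in zip(weights, input_meshes)) / total
            for axis in range(3)))
    return blended

rest = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
raised = [(0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]
# first vertex fully follows `raised`; second vertex blends 50/50
mesh = blend_meshes([rest, raised], [(0.0, 1.0), (0.5, 0.5)])
```

Varying the weights smoothly across the mesh is what lets one region follow one input shape while a neighbouring region follows another, which is the expressiveness the abstract claims.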

Подробнее
04-02-2021 дата публикации

Augmented reality system capable of manipulating an augmented reality object

Номер: US20210034870A1
Автор: Tae Jin HA
Принадлежит: Virnect Inc

An augmented reality system according to the present invention comprises a mobile terminal which, in displaying a 3D virtual image on a display, displays a dotted guide along the boundary of characters displayed on the display and when handwriting is detected along the dotted guide, recognizes the characters and displays a virtual object corresponding to the content of the characters, wherein, if the virtual object is touched, a pre-configured motion of the virtual object corresponding to the touched area is reproduced.

Подробнее
04-02-2021 дата публикации

ANIMATED MONITOR AND CONTROL FOR AUTOMATED BAGGAGE HANDLING SYSTEM AND RELATED METHODS

Номер: US20210035345A1
Принадлежит:

A system may include a plurality of continuous conveyors configured for moving objects throughout a facility, and a plurality of sensors positioned at different locations within the facility for collecting telemetry data associated with the travel of the objects along the conveyors. The system may also include a computing device configured to store the telemetry data from the sensors in a database, generate an animation of the conveyors and the objects traveling along the conveyors within the facility based upon the telemetry data stored in the database, and play the animation on a display via a graphical user interface (GUI). 1. A system comprising: a plurality of continuous conveyors configured for moving objects throughout a facility; a plurality of sensors positioned at different locations within the facility for collecting telemetry data associated with the travel of the objects along the conveyors; and a computing device configured to: store the telemetry data from the sensors in a database, generate an animation of the conveyors and the objects traveling along the conveyors within the facility based upon the telemetry data stored in the database, and play the animation on a display via a graphical user interface (GUI). 2. The system of claim 1 wherein the computing device is configured to generate and display the animation to simulate real-time movement of the objects traveling along the conveyors. 3. The system of claim 1 wherein the computing device is further configured to detect errors based upon the sensor telemetry data, and generate the animation for corresponding locations within the facility at which the errors occur responsive to the error detections. 4. The system of claim 1 wherein the computing device is configured to generate the animation from user-selectable virtual camera views. 5. The system of claim 1 wherein a plurality of closed circuit television (CCTV) cameras are positioned within the facility; and wherein the computing device is further configured to ...
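Generating an animation from stored telemetry reduces, at its simplest, to interpolating an object's position between timestamped sensor readings for each frame. A minimal sketch (the reading format and values are assumptions, not from the patent):

```python
def position_at(telemetry, t):
    """Linearly interpolate an object's position along a conveyor at
    animation time t from timestamped readings (time, distance_on_belt)."""
    telemetry = sorted(telemetry)
    if t <= telemetry[0][0]:
        return telemetry[0][1]
    for (t0, p0), (t1, p1) in zip(telemetry, telemetry[1:]):
        if t0 <= t <= t1:
            alpha = (t - t0) / (t1 - t0)
            return p0 + alpha * (p1 - p0)
    return telemetry[-1][1]   # hold last known position

# sensors saw a bag at 0 m at t=0 s and at 10 m at t=5 s
frames = [position_at([(0.0, 0.0), (5.0, 10.0)], t) for t in (0.0, 2.5, 5.0)]
```

Evaluating this per animation frame, instead of only at sensor events, is what makes the playback look like continuous real-time movement of the bags.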

Подробнее
04-02-2021 дата публикации

Multi-Plane Model Animation Interaction Method, Apparatus And Device For Augmented Reality, And Storage Medium

Номер: US20210035346A1
Автор: CHEN Yi, Liu Ang
Принадлежит:

A multi-plane model animation interaction method, apparatus, and device for augmented reality, and a storage medium are provided. The method includes: acquiring a video image of a real environment; recognizing multiple real planes in the real environment by computing the video image; arranging a virtual object corresponding to the model on one of the multiple real planes; and generating, based on the multiple recognized real planes, an animation track of the virtual object among the multiple real planes. In this method, the animation track of the virtual object is generated based on the real planes recognized from the real environment, such that animation effects of the virtual object can be associated with the real environment, thereby enhancing real sensory experience of users. 1. A multi-plane model animation interaction method for augmented reality , comprising:acquiring a video image of a real environment;recognizing a plurality of real planes in the real environment by computing the video image;arranging a virtual object corresponding to the model on one of the plurality of real planes; andgenerating, based on the plurality of recognized real planes, an animation track of the virtual object among the plurality of real planes.2. The multi-plane model animation interaction method for augmented reality according to claim 1 , wherein the recognizing the plurality of real planes in the real environment by computing the video image comprises:recognizing all planes in the video image at one time; orsuccessively recognizing planes in the video image; orrecognizing required planes based on an animation requirement of the virtual object.3. The multi-plane model animation interaction method for augmented reality according to claim 1 , wherein the recognizing the plurality of real planes in the real environment by computing the video image comprises:detecting a plane pose and a camera pose in a world coordinate system by using an SLAM algorithm.4. 
The multi-plane model ...
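The "animation track among the plurality of real planes" can be sketched as a piecewise path that visits each recognized plane's anchor point, with a vertical arc so the virtual object appears to hop between planes. This is an invented stand-in for the disclosed track generation; the anchor points and hop shape are assumptions.

```python
def animation_track(planes, hops=8):
    """Piecewise-linear hop track across recognized planes: interpolate
    between consecutive plane anchor points, adding a parabolic lift so
    the virtual object arcs from plane to plane."""
    track = []
    for (x0, y0, z0), (x1, y1, z1) in zip(planes, planes[1:]):
        for i in range(hops):
            a = i / hops
            lift = 4 * a * (1 - a)   # 0 at both ends, peak mid-hop
            track.append((x0 + a * (x1 - x0),
                          y0 + a * (y1 - y0) + lift,
                          z0 + a * (z1 - z0)))
    track.append(planes[-1])
    return track

planes = [(0.0, 0.0, 0.0), (2.0, 0.5, 0.0), (4.0, 0.0, 1.0)]
track = animation_track(planes)
```

Because the track is derived from planes detected in the real environment, the hops land on real surfaces, which is the association between animation effects and the environment the abstract emphasizes.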

Подробнее
04-02-2021 дата публикации

INFORMATION PROCESSING APPARATUS AND CONTROL METHOD OF DISPLAY APPARATUS

Номер: US20210035363A1
Автор: SUGAYA Satoshi
Принадлежит:

An information processing apparatus includes a display controller configured to output display information to be displayed on a display apparatus, and an analysis unit. The display apparatus includes an operation display area displaying an operation of a robotic system based on robot control data and an information display area displaying information related to an operation parameter of the robotic system in a time-series manner based on the robot control data. The analysis unit is configured to analyze the operation parameter to specify a warning event. The display controller displays the warning event specified by the analysis unit in the operation display area and the information display area in association with each other. 1. An information processing apparatus comprising: a display controller configured to output display information to be displayed on a display apparatus, the display apparatus having an operation display area displaying an operation of a robotic system which is simulatively operated within a virtual environment based on robot control data and an information display area displaying information related to an operation parameter of the robotic system in a time-series manner based on the robot control data; and an analysis unit configured to analyze the operation parameter to specify a warning event, wherein the display controller displays the warning event specified by the analysis unit in the operation display area and the information display area in association with each other. 2.-8. (canceled) The present invention relates to an information processing apparatus outputting display information to a display apparatus and a control method of the display apparatus. When work is conducted using a robot, obstacles such as tools, adjacent robots, and walls generally exist around the robot. Therefore, there is a possibility that the robot causes trouble such as interference, collision, or contact with such obstacles.
It is not easy to ...
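The analysis unit's job, scanning a time-series operation parameter and specifying warning events that both display areas then highlight, can be sketched as a simple limit check. The parameter name and limit value are assumptions for illustration; the patent does not specify how events are detected.

```python
def find_warning_events(samples, limit):
    """Return the time indices at which a time-series operation
    parameter (e.g. simulated joint torque) exceeds a warning limit, so
    the display controller can highlight those moments in both the
    operation display area and the information display area."""
    return [i for i, value in enumerate(samples) if abs(value) > limit]

torque = [0.2, 0.4, 1.3, 0.5, 1.1, 0.3]
events = find_warning_events(torque, limit=1.0)
```

Each returned index ties a point on the time-series graph to a frame of the simulated robot motion, which is the "in association with each other" display the claim describes.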

Подробнее