Total found: 4807. Displayed: 100.
09-02-2012 publication date

Device, method for displaying a change from a first picture to a second picture on a display, and computer program product

Number: US20120036483A1
Assignee: INFINEON TECHNOLOGIES AG

A device is described having a memory storing data specifying a change animation between pictures to be displayed successively on the display, a setting circuit configured to store a setting specifying that a change animation between pictures to be displayed successively on the display is to be carried out in accordance with the specification of the change animation given by the data, a display controller configured to control a display to display a first picture, a detector configured to detect an event which triggers that a second picture is to be displayed on the display, a determination circuit configured to read the setting and to determine, based on the setting, a change animation between the first picture and the second picture, wherein the display controller is configured to control the display to display the change animation, and, after the change animation, to display the second picture.
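The flow described above (a stored setting determines which change animation runs between two successively displayed pictures) can be sketched as follows. This is an illustrative reconstruction, not Infineon's implementation; all names (`DisplayController`, `ANIMATIONS`) are assumptions.

```python
# Data specifying change animations between pictures (illustrative).
ANIMATIONS = {"fade": "fade", "slide": "slide"}

class DisplayController:
    """Sketch: setting circuit + determination circuit + display controller."""

    def __init__(self, setting="fade"):
        self.setting = setting   # stored setting selecting the change animation
        self.shown = []          # record of everything sent to the display

    def show(self, picture):
        self.shown.append(("picture", picture))

    def on_event(self, second_picture):
        # Determination circuit: read the setting, pick the change animation.
        animation = ANIMATIONS[self.setting]
        self.shown.append(("animation", animation))
        # After the change animation, display the second picture.
        self.show(second_picture)

ctrl = DisplayController(setting="slide")
ctrl.show("pic1")
ctrl.on_event("pic2")   # plays "slide", then shows pic2
```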

29-03-2012 publication date

System and method for motion editing multiple synchronized characters

Number: US20120075349A1
Author: Jehee Lee, Manmyung Kim
Assignee: SNU R&DB FOUNDATION

Disclosed are a system and a method for motion editing multiple synchronized characters. The motion editing system comprises: a Laplacian motion editor which edits a spatial route of inputted character data according to user conditions, and processes the distortion of the interaction time; and a discrete motion editor which applies a discrete transformation while the character data is processed.

07-06-2012 publication date

Controlling runtime execution from a host to conserve resources

Number: US20120139929A1
Assignee: Microsoft Corp

A runtime management system is described herein that allows a hosting layer to dynamically control an underlying runtime to selectively turn on and off various subsystems of the runtime to save power and extend battery life of devices on which the system operates. The hosting layer has information about usage of the runtime that is not available within the runtime, and can do a more effective job of disabling parts of the runtime that will not be needed without negatively affecting application performance or device responsiveness. The runtime management system includes a protocol of communication between arbitrary hosts and underlying platforms to expose a set of options to allow the host to selectively turn parts of a runtime on and off depending on varying environmental pressures. Thus, the runtime management system provides more effective use of potentially scarce power resources available on mobile platforms.
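The host-to-runtime protocol described above can be sketched minimally: the hosting layer, which knows how the application is being used, selectively switches runtime subsystems on and off. The subsystem names and class shapes are assumptions for illustration, not Microsoft's API.

```python
class Runtime:
    """Sketch of a runtime exposing on/off switches for its subsystems."""

    def __init__(self):
        self.subsystems = {"jit": True, "gc": True, "timers": True}

    def set_enabled(self, name, enabled):
        self.subsystems[name] = enabled

class Host:
    """Hosting layer: has usage knowledge the runtime itself lacks."""

    def __init__(self, runtime):
        self.runtime = runtime

    def app_backgrounded(self):
        # The app is invisible, so timer callbacks and JIT work can be paused
        # without hurting responsiveness; GC stays on.
        self.runtime.set_enabled("timers", False)
        self.runtime.set_enabled("jit", False)

    def app_foregrounded(self):
        self.runtime.set_enabled("timers", True)
        self.runtime.set_enabled("jit", True)

rt = Runtime()
host = Host(rt)
host.app_backgrounded()   # timers and jit off, gc still on
```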

14-06-2012 publication date

Animation control apparatus, animation control method, and non-transitory computer readable recording medium

Number: US20120147013A1
Assignee: Panasonic Corp

An animation control apparatus has: an interpolation component information creating unit (14) that interpolates first and second component information included respectively in first and second keyframe information acquired by an animation controller (13), to create interpolation component information expressing an interpolation screen component; a rendering time period computing unit (16) that calculates a rendering time period required in a rendering process for displaying the interpolation screen component on a display unit (2); a rendering determination unit (15) that determines, based on the rendering time period, whether or not the rendering process for displaying the interpolation screen component on the display unit (2) is completed by a second rendering start time included in the second keyframe information; and a display controller (18) that waits without performing the rendering process for displaying the interpolation screen component on the display unit (2), when determination is made that the rendering process is not completed by the second rendering start time, and then starts a rendering process for displaying a screen component expressed by the second component information on the display unit (2), when the second rendering start time is reached.
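The core decision in this apparatus reduces to a timing check: render the interpolated in-between frame only if it can finish before the second keyframe's rendering start time; otherwise skip it and wait. A minimal sketch, with the function name and time units (seconds) as assumptions:

```python
def plan_frame(now, second_start, rendering_time):
    """Decide whether to render the interpolated frame.

    now            -- current time
    second_start   -- rendering start time of the second keyframe
    rendering_time -- estimated time needed to render the interpolated frame
    """
    if now + rendering_time <= second_start:
        return "interpolated"       # the in-between frame fits before keyframe 2
    return "wait_then_second"       # skip it; render keyframe 2 on time instead

# Plenty of time: render the interpolated frame.
assert plan_frame(now=0.0, second_start=0.1, rendering_time=0.05) == "interpolated"
# Too late: wait and render the second keyframe instead.
assert plan_frame(now=0.08, second_start=0.1, rendering_time=0.05) == "wait_then_second"
```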

20-09-2012 publication date

Animation rendering device, animation rendering program, and animation rendering method

Number: US20120236007A1
Assignee: Panasonic Corp

An interpreter 11 outputs a drawing request to an animation controller upon interpreting an animation drawing instruction defined with use of a predetermined script variable. The animation controller 12 animation-displays a GUI by executing a program module described in a native language. Upon receiving the drawing request, the animation controller 12 converts the script variable into a native variable in the form of the native language, and animation-displays the GUI while sequentially updating the native variable.

28-03-2013 publication date

Page Switching Method And Device

Number: US20130076758A1

A page switching method and device. The method includes: displaying a current message page; when detecting a touch operation, drawing a page-turning animation according to the touch operation and playing the page-turning animation; and when the touch operation stops, displaying an adjacent message page.

1. A page switching method, comprising: displaying a current message page; when detecting a touch operation, drawing a page-turning animation according to the touch operation, and playing the page-turning animation; and when the touch operation stops, displaying an adjacent message page.
2. The method according to claim 1, wherein drawing the page-turning animation according to the touch operation comprises: determining whether the sliding speed of the touch operation is larger than a preset threshold; when the sliding speed of the touch operation is larger than the preset threshold, drawing a fast page-turning animation to imitate a fast book page-turning; and when the sliding speed of the touch operation is equal to or smaller than the preset threshold, drawing an in-page-turning animation to imitate a slow book page-turning.
3. The method according to claim 2, wherein drawing the fast page-turning animation to imitate the fast book page-turning comprises: obtaining a horizontal motion trace of a touch point from one side of a view area to the other side of the view area, according to the touch point location and sliding direction of the touch operation; and drawing the animation from a starting point of the motion trace to an ending point of the motion trace.
4. The method according to claim 2, wherein drawing the in-page-turning animation to imitate the slow book page-turning comprises: computing a first area to display a designated part of the current message page, a second area to display a designated part of an adjacent message page, and a third area to display a page reverse side, according to the touch point location of the touch operation and paper-folding characteristics; ...
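The branching claimed in claim 2 is a simple threshold test on sliding speed. A minimal sketch, with the threshold value and units (px/s) as assumptions:

```python
FAST_THRESHOLD = 1200.0  # px/s, hypothetical preset threshold

def pick_page_turn_animation(sliding_speed):
    """Choose the page-turn animation from the touch operation's sliding speed."""
    if sliding_speed > FAST_THRESHOLD:
        return "fast-page-turn"   # imitates quickly flipping a book page
    return "in-page-turn"         # imitates slowly turning a page

assert pick_page_turn_animation(2000.0) == "fast-page-turn"
# Speed equal to the threshold takes the slow branch, per claim 2.
assert pick_page_turn_animation(1200.0) == "in-page-turn"
```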

28-03-2013 publication date

MULTI-LAYERED SLIDE TRANSITIONS

Number: US20130076759A1
Assignee: MICROSOFT CORPORATION

Architecture that enhances the visual experience of a slide presentation by animating slide content as "actors" in the same background "scene". This is provided by multi-layered transitions between slides, where a slide is first separated into "layers" (e.g., with a level of transparency). Each layer can then be transitioned independently. All layers are composited together to accomplish the end effect. The layers can comprise one or more content layers and a background layer. The background layer can further be separated into a background graphics layer and a background fill layer. The transition phase can include a transition effect such as a fade, a wipe, a dissolve effect, and other desired effects. To preserve the continuity and uniformity of presenting the content on the same background scene, a transition effect is not applied to the background layer.

1. A computer-implemented slide processing system, comprising: a transition component that performs, during a transition phase from the first slide to a next slide: displaying a first content within a first layer of a first slide; displaying a second content within a second layer of the first slide; receiving a first transition effect for the first content; receiving a second transition effect for the second content, the second transition effect different than the first transition effect; applying the first transition effect to the first content within the first layer; and applying the second transition effect to the second content within the second layer; and a processor configured to execute computer-executable instructions associated with the transition component.
2. The system of claim 1, wherein the first layer is at a different depth than the second layer.
3. The system of claim 1, wherein the first transition effect comprises display of animated motion of the first content.
4. The system of claim 1, wherein the second transition effect comprises display of a fade effect, a ...
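The layered-transition idea above can be sketched as per-layer effect application followed by bottom-up compositing, with the background layer exempt from any effect. Layer names, effect names, and the string-based "compositing" are all illustrative assumptions:

```python
def transition_slide(layers, effects):
    """Apply per-layer transition effects, then composite bottom-up.

    layers  -- list of (name, content) pairs, bottom layer first
    effects -- mapping from layer name to transition effect name
    """
    out = []
    for name, content in layers:
        if name == "background":
            out.append(content)                  # background gets no transition
        else:
            out.append(f"{effects[name]}({content})")
    # Composite: topmost layer drawn over the ones beneath it.
    return " over ".join(reversed(out))

layers = [("background", "scene"), ("title", "Hello"), ("body", "Text")]
effects = {"title": "fade", "body": "wipe"}
result = transition_slide(layers, effects)
# result == "wipe(Text) over fade(Hello) over scene"
```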

28-03-2013 publication date

METHOD OF INTERACTION OF VIRTUAL FACIAL GESTURES WITH MESSAGE

Number: US20130080147A1
Assignee:

Claimed is a method of interaction of virtual facial gestures with a message wherein, when a voice message (hereinafter VM1) that is being or has been pronounced by a displayed person is fully or partially replaced by another voice message (hereinafter VM2), then instead of displaying a part of the face of the specified person, or a part of the face of the specified person together with at least one object and/or at least one part of at least one object wholly or partially located on the face of the specified person, virtual facial gestures corresponding to the facial gestures of pronouncing VM2 are displayed.

1. A method of interaction of virtual facial gestures with a message wherein, when a voice message (hereinafter referred to as VM1) that is being or has been pronounced by a displayed person is fully or partially replaced by another voice message (hereinafter referred to as VM2), instead of displaying a part of the face of the specified person, or a part of the face of the specified person and at least one object and/or at least one part of at least one object wholly or partially located on the face of the specified person, virtual facial gestures corresponding to the facial gestures of pronouncing VM2 are displayed.
2. The method according to claim 1, wherein VM2 is a translation of VM1 from one spoken language to another spoken language.
3. The method according to claim 1, wherein VM2 is pronounced after pronouncing VM1, or VM2 is pronounced partially during and partially after the pronunciation of VM1, or VM2 is pronounced during the pronunciation of VM1.
5. The method according to claim 1, wherein, at pronouncing VM1 or VM2: a) the person who is pronouncing or has pronounced VM1 is additionally displayed, and/or b) the person who is pronouncing or has pronounced VM1 is additionally displayed on another display, whereupon to the additionally displayed person and/or to the person displayed on the other display virtual ...

04-04-2013 publication date

METHOD OF RENDERING A SET OF CORRELATED EVENTS AND COMPUTERIZED SYSTEM THEREOF

Number: US20130083036A1
Assignee: Hall of Hands Limited

An automated rendering system for creating a screenplay or a transcript is provided that includes an audio/visual (A/V) content compositor and renderer for composing A/V content made up of clips and animations, and at least one of: background music, still images, or commentary phrases. A transcript builder is provided to build a transcript. The transcript builder utilizes data in various forms, including user situational inputs, predefined rules and scripts, game action text, logical determinations and intelligent assumptions, to generate a transcript to produce the A/V content of the screenplay or the transcript. A method is also provided for rendering an event that includes receiving data with a request from a user to generate an audio/visual (A/V) presentation based on the event using the system. Ancillary data input is provided as a set of rules that influence or customize the outcome of the screenplay.

1. An automated rendering system for creating a screenplay or a transcript comprising: an audio/visual (A/V) content compositor and renderer for composing A/V content made up of clips or animations, and at least one of: background music, still images, or commentary phrases; and a transcript builder to build a transcript, said transcript builder utilizing data in various forms including user situational inputs, predefined rules and scripts, action text, logical determinations and intelligent assumptions to generate said transcript to produce said A/V content of the screenplay or the transcript.
2. The system of wherein said A/V content is formatted as a movie for streaming over a computer network.
3. The system of claim 2, wherein said movie comprises a sequence of said clips, the sequence defined by both an event history and high level story logic provided by data that forms a transcript of the event.
4. The system of wherein said high level story logic includes event or simulation specific decisions and rules.
5. The system of wherein said clips ...

18-04-2013 publication date

System For Creating A Visual Animation Of Objects

Number: US20130093775A1
Assignee:

A system for creating visual animation of objects which can be experienced by a passenger located within a moving vehicle is provided. The system includes: a plurality of objects placed along a movement path of the vehicle; a plurality of sensors assigned to the plurality of objects and arranged along the movement path such that the vehicle actuates the sensors when moving along the movement path; and a plurality of highlighting devices coupled to the plurality of sensors and controlled by the sensors such that, in accordance with sensor actuations triggered by the movement of the vehicle, a) only one of the plurality of objects is highlighted by the highlighting devices to the passenger at one time, and b) the objects are highlighted to the passenger in such a sequence that the passenger visually experiences an animation of the objects.

1. A system for creating a visual animation of objects which can be experienced by a passenger located within a moving vehicle, the system comprising: a plurality of objects placed along a movement path of the vehicle; a plurality of sensors assigned to the plurality of objects and arranged along the movement path such that the vehicle actuates the sensors when moving along the movement path; and a plurality of highlighting devices coupled to the plurality of sensors and configured such that, in accordance with sensor actuations triggered by the movement of the vehicle, a) only one of the plurality of objects is highlighted by the highlighting devices to the passenger at one time, and b) the objects are highlighted to the passenger in such a sequence that the passenger visually experiences an animation of the objects.
2. The system according to claim 1, wherein the sensors are light sensors, infrared sensors, pressure sensors or acoustic sensors.
3. The system according to claim 2, wherein, to each of the objects, a first sensor is respectively ...

25-04-2013 publication date

System and method of producing an animated performance utilizing multiple cameras

Number: US20130100141A1
Assignee: Jim Henson Co

A real-time method for producing an animated performance is disclosed. The real-time method involves receiving animation data, the animation data used to animate a computer generated character. The animation data may comprise motion capture data, or puppetry data, or a combination thereof. A computer generated animated character is rendered in real-time with receiving the animation data. A body movement of the computer generated character may be based on the motion capture data, and a head and a facial movement are based on the puppetry data. A first view of the computer generated animated character is created from a first reference point. A second view of the computer generated animated character is created from a second reference point that is distinct from the first reference point. One or more of the first and second views of the computer generated animated character are displayed in real-time with receiving the animation data.

02-05-2013 publication date

Layering animation properties in higher level animations

Number: US20130106866A1
Assignee: Microsoft Corp

Embodiments are directed to rendering animations in a multi-layered animation system and to rendering an element with an animation that uses multiple levels of animation properties. In one scenario, a computer system establishes an operating system (OS)-specified animation value for at least one property of a user interface (UI) element that is to be animated. The computer system receives a user-specified animation value for at least one property of the UI element that is to be animated and determines, based on the UI element property, how to combine the OS-specified animation value and the user-specified animation value. The computer system then combines the OS-specified animation value and the user-specified value for the UI element in the determined manner and renders the animation for the element using the combined animation values.
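The combination step described above, where the per-property rule decides how OS-specified and user-specified animation values merge, can be sketched as follows. The property names and the specific rules (additive offsets, user-override opacity) are assumptions for illustration, not the patent's actual rule table:

```python
# Hypothetical per-property combination rules.
COMBINE_RULES = {"offset": "add", "opacity": "override"}

def combine(prop, os_value, user_value):
    """Combine OS-specified and user-specified animation values for a property."""
    rule = COMBINE_RULES.get(prop, "override")
    if rule == "add":
        return os_value + user_value   # values layer additively
    return user_value                  # user-specified value wins

assert combine("offset", 10.0, 5.0) == 15.0   # offsets stack
assert combine("opacity", 0.5, 0.8) == 0.8    # user opacity overrides
```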

02-05-2013 publication date

METHOD AND APPARATUS FOR GENERATING AN AVATAR

Number: US20130106867A1
Assignee:

Disclosed herein are a method and an apparatus for creating an avatar. The method for creating an avatar according to an exemplary embodiment of the present invention includes: receiving information on an appearance of an object to be created into an avatar; generating avatar appearance type metadata using the information on the appearance; and creating the avatar using the avatar appearance type metadata, wherein the avatar appearance type metadata include at least one of skin information, hair information, nails information, and teeth information. The exemplary embodiments of the present invention can create an avatar that is easy to differentiate from other avatars while more closely approximating reality, by adding more detailed data to the data for the existing appearance when generating the avatar.

1. A method for creating an avatar, comprising: receiving information on an appearance of an object to be created into an avatar; generating avatar appearance type metadata using the information on the appearance; and creating the avatar using the avatar appearance type metadata, wherein the avatar appearance type metadata include at least one of skin information, hair information, nails information, and teeth information.
2. The method of claim 1, wherein the skin information includes at least one of face skin information and body skin information.
3. The method of claim 2, wherein the face skin information includes at least one of skin pigment information, skin ruddiness information, skin rainbow color information, facial definition information, rosy complexion information, freckles information, wrinkles information, and face skin type information.
4. The method of claim 2, wherein the body skin information includes at least one of skin pigment information, body freckles information, and wrinkles information.
5. The method of claim 1, wherein the hair information ...

02-05-2013 publication date

Aliasing of live elements in a user interface

Number: US20130106885A1
Assignee: Microsoft Corp

Embodiments are directed to maintaining layout properties when aliasing a live element and to independently inheriting animation properties using aliases. In one scenario, a computer system generates aliases for a live element displayed in a user interface (UI). The aliases represent the live element in a UI layout which includes live element properties that are inherited hierarchically. The computer system removes the live element from the UI such that the live element is no longer visible on the UI, and integrates the generated aliases into the UI layout. The aliases inherit properties of the UI layout depending on where in the layout the alias was attached. The computer system then initiates an animation for the live element using the aliases which are integrated into the UI layout according to the properties inherited from the position of the aliases in the layout.

09-05-2013 publication date

User Interface for Controlling Animation of an Object

Number: US20130113807A1
Assignee: Apple Inc.

A user can control the animation of an object via an interface that includes a control area and a user-manipulable control element. In one embodiment, the control area includes an ellipse, and the user-manipulable control element includes an arrow. In another embodiment, the control area includes an ellipse, and the user-manipulable control element includes two points on the circumference of the ellipse. In another embodiment, the control area includes a first rectangle, and the user-manipulable control element includes a second rectangle. In yet another embodiment, the user-manipulable control element includes two triangular regions, and the control area includes an area separating the two regions.

1. A computer-implemented method for animating an object, wherein animating the object comprises creating one or more duplicates of the object and animating the one or more duplicates according to a range in which the one or more duplicates move, the method comprising: presenting a user interface comprising: a control area comprising an ellipse; and a first user-manipulable control element located on the circumference of the ellipse, wherein the first user-manipulable control element comprises a first point and a second point, the first point and the second point together specify a sector of the ellipse, a size of the sector specifies a size of the range, and a position of the sector specifies a location of the range; receiving user input via the first user-manipulable control element, the input comprising dragging the first point in order to set the sector's size and the sector's position; and animating the object based on the received input.
2. The method of claim 1, wherein animating the object further comprises animating the one or more duplicates according to a speed with which the one or more duplicates move, and wherein the user interface further comprises a second user-manipulable control element located ...

16-05-2013 publication date

Animation creation and management in presentation application programs

Number: US20130120400A1
Assignee: Microsoft Corp

An animation timeline is analyzed to determine one or more discrete states. Each discrete state includes one or more animation effects. The discrete states represent scenes of a slide in a slide presentation. The concept of scenes allows a user to view a timeline of scenes, open a scene, and directly manipulate objects in the scene to author animations. The animations can include motion path animation effects, which can be directly manipulated utilizing a motion path tweening method. To aid in direct manipulation of a motion path of an object, a ghost version of the object can be shown to communicate to a user the position of the object after a motion path animation effect that includes the motion path is performed. The ghost version may also be used to show a start position when a start point is manipulated.
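The "ghost version" preview above amounts to evaluating the motion path at its end parameter before the animation runs. A minimal sketch, assuming simple linear interpolation along a two-point path (the patent's actual tweening method is not specified here):

```python
def tween(path, t):
    """Position at parameter t in [0, 1] along a motion path.

    path -- list of (x, y) points; this sketch handles the single-segment case
    """
    if t <= 0.0:
        return path[0]
    if t >= 1.0:
        return path[-1]
    (x0, y0), (x1, y1) = path[0], path[-1]
    return (x0 + (x1 - x0) * t, y0 + (y1 - y0) * t)

# The ghost is drawn at the path's end position (t = 1.0) before playback.
ghost_position = tween([(0, 0), (100, 50)], 1.0)
assert ghost_position == (100, 50)
assert tween([(0, 0), (100, 50)], 0.5) == (50.0, 25.0)
```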

16-05-2013 publication date

Animation creation and management in presentation application programs

Number: US20130120403A1
Assignee: Microsoft Corp

An animation timeline is analyzed to determine one or more discrete states. Each discrete state includes one or more animation effects. The discrete states represent scenes of a slide in a slide presentation. The concept of scenes allows a user to view a timeline of scenes, open a scene, and directly manipulate objects in the scene to author animations. The animations can include motion path animation effects, which can be directly manipulated utilizing a motion path tweening method. To aid in direct manipulation of a motion path of an object, a ghost version of the object can be shown to communicate to a user the position of the object after a motion path animation effect that includes the motion path is performed. The ghost version may also be used to show a start position when a start point is manipulated.

16-05-2013 publication date

Animation creation and management in presentation application programs

Number: US20130120405A1
Assignee: Microsoft Corp

An animation timeline is analyzed to determine one or more discrete states. Each discrete state includes one or more animation effects. The discrete states represent scenes of a slide in a slide presentation. The concept of scenes allows a user to view a timeline of scenes, open a scene, and directly manipulate objects in the scene to author animations. The animations can include motion path animation effects, which can be directly manipulated utilizing a motion path tweening method. To aid in direct manipulation of a motion path of an object, a ghost version of the object can be shown to communicate to a user the position of the object after a motion path animation effect that includes the motion path is performed. The ghost version may also be used to show a start position when a start point is manipulated.

06-06-2013 publication date

OPERATION SEQUENCE DISPLAY METHOD AND OPERATION SEQUENCE DISPLAY SYSTEM

Number: US20130141440A1
Assignee: HONDA MOTOR CO., LTD.

Disclosed are an operation sequence display method and an operation sequence display system in which operation scenes for attaching or removing one or a plurality of components are displayed by switching between scenes. In at least one operation scene, the attachment or removal target components are displayed differently from other components by changing gray scales using a single color; marking displays emphasizing the operation portions of the target components, or the moving directions of the target components on the screen, are blinked at a constant interval; after the marking displays are blinked, the operations on the operation portions or the movements of the target components are displayed by animation; and displays regarding the operations on the operation portions or the movements of the target components are performed at a constant rhythm.

1. An operation sequence display method for sequentially displaying a series of operation procedures to remove, install, or remove and install a plurality of target components included in an apparatus, on a monitor controlled by a computer, comprising: displaying, on the monitor, at least one operation scene showing installation or removal of one or more of the target components; displaying, in the operation scene, the target components and other components differently in monochromatic shades; blinking, at predetermined intervals in the operation scene, a displayed marking highlighting a position of an operation spot on the one or more target components or a direction along which to move the one or more target components or the operation spot; and, after the displayed marking is blinked, displaying, in the operation scene, an animation showing an operation on the operation spot or a movement of the one or more target components or the operation spots; wherein the operation on the operation spots or the movement of the one or more target components or the operation spot is displayed in a constant rhythm.
2. (canceled) ...

27-06-2013 publication date

Creating Animations

Number: US20130162653A1
Assignee: MICROSOFT CORPORATION

Animation creation is described, for example, to enable children to create, record and play back stories. In an embodiment, one or more children are able to create animation components such as characters and backgrounds using a multi-touch panel display together with an image capture device. For example, a graphical user interface is provided at the multi-touch panel display to enable the animation components to be edited. In an example, children narrate a story whilst manipulating animation components using the multi-touch display panel, and the sound and visual display are recorded. In embodiments, image analysis is carried out automatically and used to autonomously modify story components during a narration. In examples, various types of handheld view-finding frames are provided for use with the image capture device. In embodiments, saved stories can be restored from memory and retold from any point with different manipulations and narration.

1. An animation creation apparatus comprising: a processor; an image capture device in communication with the processor and arranged to capture images of animation components; a memory arranged to store the animation components; and a hand-held view-finding frame arranged to support the image capture device using a fixing arranged to detachably fix the image capture device to the frame, the frame comprising an identifier; wherein the image capture device is arranged to associate the identifier with the images of the animation components.
2. An apparatus as claimed in wherein the memory is arranged to store animation components selected from any of: images of objects, images of environments, sequences of images of objects, or sequences of images of environments.
3. An apparatus as claimed in further comprising: a multi-touch panel display controlled by the processor; a microphone in communication with the processor and arranged to capture sound during an animation narration; and a user interface engine arranged ...

27-06-2013 publication date

System and method for hiding latency in computer software

Number: US20130162654A1
Author: Andrew Borovsky
Assignee: Individual

A system and method hides latency in the display of a subsequent user interface by animating the exit of the current user interface and animating the entrance of the subsequent user interface, causing continuity in the display of the two user interfaces. During either or both animations, information used to produce the user interface, animation of the entrance of the subsequent user interface, or both may be retrieved or processed or other actions may be performed.
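The latency-hiding scheme above overlaps the load of the next user interface with the exit animation of the current one, so the perceived stall is only the load time not covered by the animation. A minimal timing sketch (all durations hypothetical):

```python
def schedule(exit_ms, load_ms, enter_ms):
    """Return (stall, total) when the load overlaps the exit animation.

    stall -- milliseconds the screen sits idle (load time exceeding the exit)
    total -- total milliseconds from exit start to entrance end
    """
    stall = max(0, load_ms - exit_ms)   # load time the exit animation fails to cover
    total = exit_ms + stall + enter_ms
    return stall, total

# Load shorter than the exit animation: latency is fully hidden.
stall, total = schedule(exit_ms=300, load_ms=250, enter_ms=300)
assert stall == 0
assert total == 600
```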

18-07-2013 publication date

Five-Dimensional Occlusion Queries

Number: US20130181991A1
Assignee: Intel Corp

A standard occlusion query (OQ) may be generalized to five dimensions, which can be used for occlusion culling of motion-blurred and defocused geometry. As such, the occlusion query concept is generalized so that it can be used within 5D rasterization, which is used for rendering motion blur and depth of field. For 5D rasterization, occlusion culling may be done with OQs as well, which can also be applied to solve other rendering-related problems.

01-08-2013 publication date

Gaming Machine Transitions

Number: US20130196758A1
Author: Drazen Lenger
Assignee: Individual

A graphics system for changing images on a gaming machine display, having a transition library of transition types, a graphics engine and a control means. The graphics engine applies a selected transition type from the transition library to at least one of at least two images for determining the way in which a substitution of one of the images by the other of the images occurs and initialises transition data for effecting an incremental substitution of the one image by the other image. The control means modifies the transition data such that, when the selected transition type is being effected, an incremental substitution of at least a part of the one image by the other image occurs serially until the one image has been substituted by the other image on the gaming machine display.
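The serial, incremental substitution described above can be sketched as a row-wise wipe; a minimal sketch assuming images are lists of pixel rows (a stand-in for the real frame buffer):

```python
def wipe_transition(img_a, img_b, steps):
    """Yield intermediate frames in which rows of img_a are serially
    replaced by rows of img_b until img_b has fully substituted img_a."""
    assert len(img_a) == len(img_b), "images must have the same height"
    rows = len(img_a)
    for step in range(1, steps + 1):
        boundary = round(rows * step / steps)  # rows substituted so far
        yield img_b[:boundary] + img_a[boundary:]
```

A transition library in the sense of the patent would hold several such generators (wipe, fade, dissolve), with the control means stepping the chosen one until substitution completes.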

08-08-2013 publication date

METHOD AND APPARATUS FOR PLAYING AN ANIMATION IN A MOBILE TERMINAL

Number: US20130201194A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

A method and apparatus are provided for playing an animation in a mobile terminal. The method includes displaying content; determining an object of an animation from the content; determining whether an interaction event occurs while displaying the content; and playing an animation of the determined object, when the interaction event occurs. 1. A method of playing an animation in a mobile terminal , the method comprising:displaying content;determining an object of an animation from the content;determining whether an interaction event occurs while displaying the content; andplaying an animation of the determined object, when the interaction event occurs.2. The method of claim 1 , wherein playing the animation of the determined object comprises:confirming attribute information of the determined object;playing a dynamic animation in which a state of the determined object is changed, when the determined object is dynamically set; andplaying a static animation in which the state of the determined object is not changed when the determined object is statically set.3. The method of claim 2 , wherein playing the dynamic animation comprises:playing a limited motion animation in which at least one of a location and a direction of the determined object is restrictively changed according to a set constraint, when the constraint is set to the determined object; andplaying a free motion animation in which the location and the direction of the determined object change without constraint, when the constraint is not set to the determined object.4. The method of claim 2 , further comprising receiving the attribute information of the determined object from a network.5. The method of claim 2 , wherein confirming the attribute information of the determined object comprises determining an attribute of the determined object based on additional information of the determined object.6. 
The method of claim 5 , wherein determining the attribute of the determined object comprises determining a ...
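The dynamic/static and constrained/free branching in the claims above can be sketched as a simple dispatch; the attribute names here are illustrative assumptions, not from the patent:

```python
def choose_animation(obj):
    """Pick an animation kind from an object's attribute information:
    static objects keep their state; dynamic objects move freely unless
    a constraint restricts their location and direction."""
    if not obj.get("dynamic", False):
        return "static"            # state of the object does not change
    if obj.get("constraint"):
        return "limited-motion"    # location/direction change restrictively
    return "free-motion"           # location and direction change freely
```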

29-08-2013 publication date

SYSTEM AND METHOD FOR CREATING AND DISPLAYING AN ANIMATED FLOW OF TEXT AND OTHER MEDIA FROM AN INPUT OF CONVENTIONAL TEXT

Number: US20130222396A1
Assignee:

A system and method for generating and displaying text on a screen as an animated flow from a digital input of conventional text. The Invention divides text into short-scan lines of coherent semantic value that progressively animate from invisible to visible and back to invisible. Multiple line displays are frequent. The effect is aesthetically engaging, perceptually focusing, and cognitively immersive. The reader watches the text like watching a movie. The Invention may exist in whole or in part as a standalone application on a specific screen device. The Invention includes a manual authoring tool that allows the insertion of non-text media such as sound, image, and advertisements. 1. A method for displaying text on a screen , comprising:displaying the text as an animated flow generated from a digital input of conventional text.2. A method according to wherein the text is displayed by a software application within a screen device as a sequence of Clusters of short-scan Line Segments claim 1 , with the Line Segments in each Cluster progressively animating from invisible to visible until the complete Cluster of Line Segments is displayed simultaneously on the screen and remains displayed for a sufficient duration for all the Line Segments to be read by a reader before the entire Cluster fades back to invisible claim 1 , either in unison or in a progression claim 1 , thereafter to be proceeded by the next Cluster in the sequence until all Clusters comprising the text have been displayed.3. 
A method according to comprising: (a) dividing the text into single-scan or short-scan Line Segments of at least one word; and (i) each Line Segment, when displayed, progressively animates from invisible to visible to display at a discrete location on the screen; and (ii) each successive Line Segment in the Cluster, when displayed, progressively animates from invisible to visible after the previous Line Segment has become visible or is nearly visible; and (iii) each successive ...
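The division into short-scan Line Segments grouped into Clusters can be sketched as follows; the word and line limits are illustrative parameters:

```python
def cluster_text(text, max_words=4, lines_per_cluster=3):
    """Split conventional text into short-scan line segments, then group
    the segments into clusters to be animated from invisible to visible
    one line at a time."""
    words = text.split()
    segments = [" ".join(words[i:i + max_words])
                for i in range(0, len(words), max_words)]
    return [segments[i:i + lines_per_cluster]
            for i in range(0, len(segments), lines_per_cluster)]
```

A renderer would fade each cluster's lines in sequence, hold the complete cluster long enough to be read, then fade it out before showing the next.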

19-09-2013 publication date

METHOD FOR DISPLAYING AN ITEM ON A DISPLAY UNIT

Number: US20130241899A1
Author: Kienzl Thomas
Assignee:

By a method for displaying an item on a display unit (), the following steps will be suggested:—arranging a first plan () of a first virtual space in a navigation area () of a navigation plane,—imaging the navigation area () by means of an optical registration system () of a computer-supported interface system (),—determining the position of a calibration arrangement of the first plan (), and calibrating the plan coordinate system of the image of the first plan () in the computer-supported interface system () on the basis of the calibration arrangement,—assigning the first virtual space to the navigation area () in consideration of the plan coordinate system,—determining the coordinates, including position and orientation, of a manually guidable object () in the first plane () by means of the computer-supported interface system (), the manually guidable object () having at least one optical marking,—assigning the coordinates of the manually guidable object () in the first plan () to coordinates of a virtual observer in the first virtual space, and—displaying the field of vision of the observer in the virtual space on the display unit (). 111.-. (canceled)12. A method for displaying an object on a display unit , comprising:imaging a navigation area by an optical registration system of a computer-supported interface system;assigning a first virtual space to the navigation area;determining coordinates, including position and orientation, of a manually guidable object in the navigation area by the computer-supported interface system, the manually guidable object having at least one optical marking;assigning the coordinates of the manually guidable object in the navigation area to coordinates of a virtual observer in the first virtual space;displaying a field of vision of the observer in the first virtual space on the display unit,assigning a further virtual space, instead of the first virtual space, to the navigation area after determination of the manually guidable ...

03-10-2013 publication date

Systems and Methods for Providing An Interactive Avatar

Number: US20130257876A1
Author: Davis Paul R.
Assignee: Videx, Inc.

Systems and methods are provided for a computer-implemented method of providing an interactive avatar that reacts to a communication from a communicating party. Data from an avatar characteristic table is provided to an avatar action model, where the avatar characteristic table is a data structure stored on a computer-readable medium that includes values for a plurality of avatar personality characteristics. A communication with the avatar is received from the communicating party. A next state for the avatar is determined using the avatar action model, where the avatar action model determines the next state based on the data from the avatar characteristic table, a current state for the avatar, and the communication. The next state for the avatar is implemented, and the avatar characteristic table is updated based on the communication from the communicating party, where a subsequent state for the avatar is determined based on the updated avatar characteristic table. 1. A computer-implemented method of providing an interactive avatar that reacts to a communication from a communicating party , comprising:providing data from an avatar characteristic table to an avatar action model, wherein the avatar characteristic table is a data structure stored on a computer-readable medium that includes values for a plurality of avatar personality characteristics;receiving a communication with the avatar from the communicating party;determining a next state for the avatar using the avatar action model, wherein the avatar action model determines the next state based on the data from the avatar characteristic table, a current state for the avatar, and the communication;implementing the next state for the avatar, wherein the implemented next state is made discernible to the communicating party; andupdating the avatar characteristic table based on the communication from the communicating party, wherein a subsequent state for the avatar is determined based on the updated avatar ...
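The loop described above — characteristic table in, next state out, table updated by the communication — can be sketched with invented rules; the trait names and thresholds are assumptions for illustration, not the patent's action model:

```python
def next_avatar_state(traits, state, message):
    """Determine the avatar's next state from its characteristic table,
    its current state, and an incoming communication, then feed the
    interaction back into the table for subsequent states."""
    if message.count("!") and traits.get("excitable", 0.0) > 0.5:
        new_state = "animated"
    elif state == "idle":
        new_state = "listening"
    else:
        new_state = state
    # update the characteristic table based on the communication
    traits["excitable"] = min(1.0, traits.get("excitable", 0.0)
                              + 0.1 * message.count("!"))
    return new_state
```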

03-10-2013 publication date

METHOD AND APPARATUS FOR ANIMATING STATUS CHANGE OF OBJECT

Number: US20130257878A1
Author: JIN Fredrick
Assignee: SAMSUNG ELECTRONICS CO., LTD.

An apparatus and a method for animating a status change of an object on a display. The apparatus for animating a status change of an object includes a display unit, an input unit, one or more processors, a memory, and one or more modules. The display unit displays a status change process of a first object. The input unit receives a status change instruction for the first object. One or more modules are stored in the memory and executed by the one or more processors. The module displays a second object in response to the status change instruction to animate a status change of the first object. 1. An electronic device for animating a status change of an object, the device comprising: a display unit configured to display a status change animation of a first object; an input unit configured to receive a status change instruction on the first object; a memory unit configured to store at least one module; and one or more processors configured to execute the at least one module, wherein the at least one module is configured to perform a status change animation on the first object using a second object. 2. The electronic device of claim 1, wherein the module performs the status change animation concurrently with a movement of at least a portion of the second object. 3. The electronic device of claim 1, wherein the status change instruction comprises at least one of deletion, transmission, sharing, and storing. 4. The electronic device of claim 3, wherein if the status change instruction is sharing, the at least one module is configured to display the second object to share data with the first object. 5. The electronic device of claim 3, wherein if the status change instruction is deleting, the at least one module is configured to reduce opacity of the first object as the second object approaches the first object. 6.
The electronic device of claim 3 , wherein if the status change instruction is transmitting ...

03-10-2013 publication date

GEOLOGICAL ANIMATION

Number: US20130257879A1
Author: Kurtenbach Bernd
Assignee: SCHLUMBERGER TECHNOLOGY CORPORATION

An example embodiment of the present disclosure may include one or more of a method, computing device, computer-readable medium, and system for animating geology. An example embodiment of a method may include providing a geological model that includes a first object and a second object, wherein the first and second objects comprise geological data relating to a first and second geological time respectively. The method may also include interpolating a property value of the first object and a property value of the second object to produce an interpolated property value. The representation of the interpolated property value may be output along with an animation that comprises the representation of the interpolated property value. 1. A method of generating a geological animation , comprising:providing a geological model comprising a first object and a second object, wherein the first and second objects comprise geological data relating to a first and second geological time respectively;interpolating a property value of the first object and a property value of the second object to produce an interpolated property value;outputting a representation of the interpolated property value; andoutputting an animation that comprises the representation of the interpolated property value.2. The method of claim 1 , wherein the animation further comprises at least one of a representation of a property value of the first object and a representation of a property value of the second object.3. The method of claim 2 , further comprising:performing a graphical interpolation of the representations of the property values of first and second objects;outputting the graphical interpolation.4. The method of claim 2 , further comprising arranging a display order of the first and second representations using a graphical user interface.5. The method of claim 1 , wherein the interpolating further comprises performing a plurality of simulations using the geological model claim 1 , and using the ...
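The property interpolation between the two geological-time objects is, in the simplest case, a linear blend; a sketch assuming properties are plain numeric dictionaries:

```python
def interpolate_properties(obj_a, obj_b, t):
    """Interpolate property values between two geological snapshots:
    t=0.0 reproduces the first object, t=1.0 the second, and values in
    between supply frames for the animation."""
    return {key: (1.0 - t) * obj_a[key] + t * obj_b[key] for key in obj_a}
```

Sweeping t from 0 to 1 yields the in-between property values the animation displays.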

17-10-2013 publication date

Creation of Properties for Spans within a Timeline for an Animation

Number: US20130271473A1
Author: Jonathan M. Duran
Assignee: MOTOROLA MOBILITY LLC

In one embodiment, a method includes receiving an input specifying a keyframe in a first layer included in a master layer to create an animation of a first element in a plurality of elements. The first layer is associated with the first element. A master duration associated with the master layer is determined where the master duration is applied to the plurality of elements. The method determines a keyframe value for the first layer based on the master duration and a property value for the keyframe value for the first layer. Software code is generated specifying the calculated keyframe value and the determined property value, the software code for use in creating the animation of the first element.
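The step of deriving a layer's keyframe value from the master duration, then generating code carrying the keyframe and property values, can be sketched as follows; the emitted `animate(...)` call is a hypothetical target API, not from the patent:

```python
def keyframe_code(master_duration, fraction, prop, value):
    """Resolve a keyframe given as a fraction of the master timeline to an
    absolute time, and generate software code carrying the keyframe value
    and the property value for the animation."""
    t = master_duration * fraction
    return {"time": t, "code": f"animate('{prop}', to={value}, at={t})"}
```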

31-10-2013 publication date

APPARATUS AND METHOD FOR PRODUCING MAKEUP AVATAR

Number: US20130286036A1
Assignee:

An apparatus and method for producing a makeup avatar is disclosed. The apparatus may include a spectrum information metadata generating unit to generate spectrum information metadata, based on skin spectrum information of a user and cosmetics spectrum information related to makeup, a makeup information generating unit to receive, from the user, a makeup avatar to which makeup is applied through a user terminal, and to generate makeup information of the makeup avatar, a control information determining unit to determine control information for controlling a makeup status of the makeup avatar, based on the makeup information and the spectrum information metadata, and a makeup avatar metadata generating unit to generate makeup avatar metadata, based on spectrum information metadata corresponding to the control information and the makeup information. 1. An apparatus for producing a makeup avatar , the apparatus comprising:a spectrum information metadata generating unit to generate spectrum information metadata, based on skin spectrum information of a user and cosmetics spectrum information related to makeup;a makeup information generating unit to receive, from the user, a makeup avatar to which makeup is applied through a user terminal, and to generate makeup information of the makeup avatar;a control information determining unit to determine control information for controlling a makeup status of the makeup avatar, based on the makeup information and the spectrum information metadata; anda makeup avatar metadata generating unit to generate makeup avatar metadata, based on spectrum information metadata corresponding to the control information and the makeup information.2. The apparatus of claim 1 , wherein the spectrum information metadata generating unit generates the spectrum information metadata claim 1 , based on skin spectrum information corresponding to skin information of the user claim 1 , and cosmetics spectrum information related to color information of ...

07-11-2013 publication date

METHOD AND SYSTEM FOR ZOOM ANIMATION

Number: US20130293550A1
Author: Cardno Andrew John
Assignee:

A method of developing from a first animated sequence of images having a first zoom factor a second animated sequence of images having a second zoom factor different to the first zoom factor, the method comprising the steps of: rendering the first animated sequence of images at a first resolution within a first display area, determining an area of the first animated sequence of images that is to be developed as at least part of a second animated sequence of images having the second zoom factor, adjusting the determined area of the first animated sequence of images for display in a second display area, and rendering the second animated sequence of images at the first resolution in the second display area and subsequently changing the resolution of the second animated sequence of images to a second resolution different to the first resolution. 1. A method , implemented on an electronic computing system , of developing from a first animated sequence of images having a first zoom factor a second animated sequence of images having a second zoom factor different to the first zoom factor , the method comprising the steps of:on the electronic computing system, rendering the first animated sequence of images at a first resolution within a first display area;determining an area of the first animated sequence of images that is to be developed as at least part of a second animated sequence of images having the second zoom factor;adjusting the determined area of the first animated sequence of images for display in a second display area; andrendering the second animated sequence of images at the first resolution in the second display area and subsequently changing the resolution of the second animated sequence of images to a second resolution different to the first resolution.2. The method of claim 1 , wherein the first display area and second display area are one and the same.3. The method of claim 1 , wherein the first display area and second display area are different.4. 
The ...
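The two-stage zoom above — show the selected area first, change resolution afterwards — rests on a crop-and-resample step that can be sketched with nearest-neighbour sampling; frames here are lists of pixel rows, a stand-in for real images:

```python
def crop_and_resample(frame, region, out_w, out_h):
    """Crop the region (x, y, w, h) selected for the second animated
    sequence, then resample it to the target resolution."""
    x, y, w, h = region
    crop = [row[x:x + w] for row in frame[y:y + h]]
    # nearest-neighbour resampling to the output size
    return [[crop[j * h // out_h][i * w // out_w] for i in range(out_w)]
            for j in range(out_h)]
```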

14-11-2013 publication date

Method, apparatus and computer program product for generating animated images

Number: US20130300750A1
Assignee: Nokia Oyj

In accordance with an example embodiment a method, apparatus and computer program product are provided. The method comprises facilitating a selection of a region in a multimedia frame and performing an alignment of multimedia frames occurring periodically at a pre-defined interval in a capture order associated with a plurality of multimedia frames based on the multimedia frame comprising the selected region. The method further comprises computing region-match parameters corresponding to the selected region for the aligned multimedia frames. One or more multimedia frames are selected from among the aligned multimedia frames based on the computed region-match parameters and a multimedia frame is identified from among the selected one or more multimedia frames and multimedia frames neighbouring the one or more selected multimedia frames based on the computed region-match parameters. The multimedia frame is identified for configuring a loop sequence for an animated image.

06-02-2014 publication date

Character display device

Number: US20140035930A1
Author: Hiroaki SAOTOME
Assignee: Square Enix Co Ltd

When a predetermined condition is satisfied in a game and a scenario mode is started, a predetermined scenario is displayed. When an animation in the scenario is completely displayed, a transmission waiting state is started and idling of a character displayed on a display screen is displayed. The idling display is performed based on the pose of the character in the transmission waiting state. The idling display is performed by deciding the angle of a target pose based on difference information of a joint management table.

06-02-2014 publication date

TEMPORAL DEPENDENCIES IN DEPENDENCY GRAPHS

Number: US20140035931A1
Assignee: DreamWorks Animation LLC

Systems and processes are described below relating to evaluating a dependency graph having one or more temporally dependent variables. The temporally dependent variables may include variables that may be used to evaluate the dependency graph at a frame other than that at which the temporally dependent variable was evaluated. One example process may include tracking the temporal dirty state for each temporally dependent variable using a temporal dependency list. This list may be used to determine which frames, if any, should be reevaluated when a request to evaluate a dependency graph for a particular frame is received. This advantageously reduces the amount of time and computing resources needed to reevaluate a dependency graph. 1. A method for evaluating a dependency graph having a temporally dependent variable that is used to evaluate the dependency graph for a frame other than that at which the temporally dependent variable was evaluated , the method comprising:receiving, by a processor, a request to evaluate the dependency graph for a requested frame of an animation;identifying a dirty value of the temporally dependent variable for a frame used to evaluate the dependency graph for the requested frame based on a temporal dependency list, wherein the temporal dependency list comprises a list of values of the temporally dependent variable determined by previous evaluations of the dependency graph;evaluating the identified dirty value;updating the list of values of the temporal dependency list based on the evaluation of the identified dirty value; andevaluating at least a portion of the dependency graph at the requested frame based at least in part on the updated list of values of the temporal dependency list.2. The method of claim 1 , wherein the temporal dependency list comprises a temporal dirty flag for each value in the list of values.3. The method of claim 2 , wherein identifying the dirty value comprises parsing the temporal dependency list to identify a ...
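The bookkeeping described above — a per-frame value list with temporal dirty flags, so only stale frames are recomputed — can be sketched as a small cache class; this is an illustrative reduction of the process, not the actual implementation:

```python
class TemporalDependencyList:
    """Cache per-frame values of a temporally dependent variable and track
    which frames are dirty and must be re-evaluated on the next request."""

    def __init__(self):
        self.values = {}    # frame -> cached value
        self.dirty = set()  # frames whose cached value is stale

    def mark_dirty(self, frame):
        self.dirty.add(frame)

    def evaluate(self, frame, compute):
        # recompute only if the frame was never evaluated or is dirty
        if frame not in self.values or frame in self.dirty:
            self.values[frame] = compute(frame)
            self.dirty.discard(frame)
        return self.values[frame]
```

Skipping the recomputation of clean frames is exactly the time and resource saving the abstract claims.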

13-02-2014 publication date

Animation Transitions and Effects in a Spreadsheet Application

Number: US20140043340A1
Assignee: MICROSOFT CORPORATION

Concepts and technologies are described herein for animation transitions and effects in a spreadsheet application. In accordance with the concepts and technologies disclosed herein, a computer system can execute a visualization component. The computer system can detect selection of a scene included in a visualization of spreadsheet data. The computer system also can generate an effect for the selected scene. In some embodiments, the computer system identifies another scene and generates a transition between the scenes. The computer system can output the effect animation and the transition animation. 1. A computer-implemented method for generating an animation in a spreadsheet application, the computer-implemented method comprising performing computer-implemented operations for: detecting, at a computer system executing a visualization component, selection of a scene included in a visualization of spreadsheet data; determining, by the computer system, a duration of the scene based upon a start time of the scene and an end time of the scene; receiving, by the computer system, selection of an effect comprising a visual effect applied during rendering of the scene from a viewpoint from which the scene is rendered; generating, by the computer system, the scene based upon the duration and the effect for the scene; and outputting, by the computer system, an effect animation corresponding to the effect applied to the scene. 2. The method of claim 1, wherein generating the effect comprises determining an effect type for the scene, determining a duration of the effect and a speed or magnitude of the effect, and generating the effect animation based upon the effect type and the duration of the scene, as well as positioning of the viewpoint. 3.
The method of claim 2, wherein generating the effect further comprises determining a camera distance for the effect, the camera distance comprising a distance between the viewpoint for the effect animation and a center point of data included in ...

13-02-2014 publication date

Generating queries based upon data points in a spreadsheet application

Number: US20140046923A1
Assignee: Microsoft Corp

Concepts and technologies are described herein for generating queries for data points in a spreadsheet application. In accordance with the concepts and technologies disclosed herein, a computer system can execute a visualization component. The computer system can obtain spreadsheet data having records that include values, temporal information, location information, and other information. The spreadsheet data can be presented in a visualization, and the computer system can detect selection of a representation of a record in the visualization. The computer system can generate a query based upon the record, submit the query to a search engine, and obtain results for presentation.

27-02-2014 publication date

SYSTEM AND METHOD FOR GENERATING COMPUTER RENDERED CLOTH

Number: US20140055463A1
Author: Massen Michael
Assignee:

A system, method and computer software program on a computer readable medium for loading cloth modeling data, generating an environmental model, generating a basic cloth model, and generating sections of a cloth surface model based on the basic cloth model and the cloth modeling data. The sections of the cloth surface model may be partial geometric forms, a portion of a ball-and-stick model or a non-uniform rational basis spline, and may be joined together and smoothed to form a complex cloth model. The smoothed cloth model may include a series of waves or folds in a computer rendered cloth surface to represent draped or compressed cloth on a three-dimensional surface. 1. A method for generating a computer rendered cloth, wherein the method is executed on a computer configured to display graphic data, comprising the steps of: storing a cloth rendering matrix comprising nodes and anti-nodes interconnected by polygonal surfaces, wherein the nodes and anti-nodes represent peaks and troughs of two intersecting planar waves, wherein said two intersecting planar waves represent the vertical and horizontal compression forces occurring in cloth; loading cloth modeling data including at least one physical cloth modeling property; and creating a target cloth model by applying the at least one physical cloth modeling property to the cloth rendering matrix, wherein said target cloth model comprises folds formed by said interconnected nodes and anti-nodes. 2. The method of claim 1, wherein the nodes and anti-nodes are interconnected such that they represent a repeated pattern of interconnected projecting and receding regions. 3.
The method of claim 2, wherein each receding region comprises four triangular polygons interconnected at a peak of the receding region to form the anti-node of the receding region, wherein each projecting region comprises eight triangular polygons interconnected at a peak of the projecting region to form the node of the projecting region claim ...
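The matrix of nodes and anti-nodes built from two intersecting planar waves can be sketched as a height field: the peaks and troughs of the summed sines give the projecting and receding regions. Grid size and wave parameters below are illustrative:

```python
import math

def cloth_height_field(size, wavelength, amplitude):
    """Superpose two perpendicular planar waves to produce the peaks
    (nodes) and troughs (anti-nodes) of the cloth rendering matrix."""
    k = 2.0 * math.pi / wavelength
    return [[amplitude * (math.sin(k * x) + math.sin(k * y))
             for x in range(size)]
            for y in range(size)]
```

Triangulating this grid and applying the physical cloth properties would yield the folded target model the claims describe.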

10-04-2014 publication date

CONTROL OF TIMING FOR ANIMATIONS IN DYNAMIC ICONS

Number: US20140098108A1
Assignee: MICROSOFT CORPORATION

Dynamic icons are described that can employ animations, such as visual effects, audio, and other content that change with time. If multiple animations are scheduled to occur simultaneously, the timing of the animations can be controlled so that timing overlap of the animations is reduced. For example, the starting times of the animations can be staggered so that multiple animations are not initiated too close in time. It has been found that too much motion in the user interface can be distracting and cause confusion amongst users. 1. A method for displaying dynamic icons in a graphical user interface of a computing device , comprising:receiving first content to be animated in a first dynamic icon;starting animation of the first content in the first dynamic icon;receiving second content to be animated in a second dynamic icon;checking whether a difference in time overlap of the animations of the first content in the first dynamic icon and the second content in the second dynamic icon is below a predetermined threshold;if the difference in time overlap exceeds the predetermined threshold, controlling timing of the animation of the second content in the second dynamic icon so as to reduce the time overlap; andat a time different from a starting time of the animation of the first content in the first dynamic icon and in response to the controlling timing of the animation of the second content in the second dynamic icon, starting animation of the second content in the second dynamic icon.2. The method of claim 1 , further comprising associating the first and second contents with animations to be displayed in the respective first and second dynamic icons.3. The method of claim 1 , wherein the controlling of timing is performed in an animation manager that uses a state machine to ensure a staggered starting time of the animations.4. The method of claim 1 , further comprising randomizing timelines of the animations to randomly make the animations longer or shorter.5. 
The ...
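The staggering of starting times above, so that two icon animations never begin too close together, can be sketched as a one-pass scheduler; the gap value is an illustrative threshold:

```python
def stagger_starts(requested, min_gap):
    """Shift requested animation start times forward so that consecutive
    starts are at least min_gap apart, reducing timing overlap between
    dynamic icons."""
    scheduled = []
    for t in sorted(requested):
        if scheduled and t - scheduled[-1] < min_gap:
            t = scheduled[-1] + min_gap  # push the start later
        scheduled.append(t)
    return scheduled
```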

07-01-2016 publication date

BREATHING APPARATUS WITH VENTILATION STRATEGY TOOL

Number: US20160001024A1
Assignee: Maquet Critical Care AB

A system includes a breathing apparatus, a display unit and a processing unit that is operatively connected to the display unit. The processing unit is configured to provide a graphical visualization on the display unit. The graphical visualization in turn includes a combination of a target indication for at least one ventilation related parameter of a ventilation strategy for a patient ventilated by the apparatus, and a reciprocating animation of the at least one ventilation related parameter relative to the target indication. The target indication is for instance based on input of a user, such as an operator of the breathing apparatus. Alternatively, or in addition, it may be a default value stored on a memory unit being operatively connected to the processing unit. Alternatively, or in addition, the target indication is based on a measurement value of said patient's physiology or anatomy. In this manner, the system informs clinicians in a clear and easily understandable way how a current patient ventilation is related to a chosen ventilation strategy. 1. A system including a breathing apparatus, a display unit and a processing unit being operatively connected to said display unit, said processing unit being configured to provide on said display unit a graphical visualization including a combination of: a target indication for at least one ventilation related parameter of a ventilation strategy for a patient ventilated by said apparatus, said target indication preferably being based on user input, such as that of an operator of said breathing apparatus, a measurement value of said patient's physiology or anatomy, or a default value stored on a memory unit being operatively connected to said processing unit, and a reciprocating animation of said at least one ventilation related parameter relative to said target indication.
2. The system of claim 1, wherein said target indication is displayed at a first position on said screen, wherein said first position is fixed ...

06-01-2022 publication date

SYSTEM AND METHOD OF GENERATING FACIAL EXPRESSION OF A USER FOR VIRTUAL ENVIRONMENT

Number: US20220005246A1
Assignee:

The present invention relates to a method of generating a facial expression of a user for a virtual environment. The method comprises obtaining a video and an associated speech of the user. Further, extracting in real-time at least one of one or more voice features and one or more text features based on the speech. Furthermore, identifying one or more phonemes in the speech. Thereafter, determining one or more facial features relating to the speech of the user using a pre-trained second learning model based on the one or more voice features, the one or more phonemes, the video and one or more previously generated facial features of the user. Finally, generating the facial expression of the user corresponding to the speech for an avatar representing the user in the virtual environment. 1. A method of generating a facial expression of a user for a virtual environment , the method comprises:obtaining, by a computing system, a video and an associated speech of the user;extracting in real-time, by the computing system, at least one of one or more voice features and one or more text features based on the speech of the user;identifying in real-time, by the computing system, one or more phonemes in the speech using a pre-trained first learning model based on at least one of the one or more voice features and the one or more text features;determining in real-time, by the computing system, one or more facial features relating to the speech of the user using a pre-trained second learning model based on the one or more voice features, the one or more phonemes, the video and one or more previously generated facial features of the user; andgenerating in real-time, by the computing system, the facial expression of the user corresponding to the speech for an avatar representing the user in the virtual environment based on the one or more facial features.2. 
The method as claimed in claim 1, wherein obtaining the video and the associated speech comprises one of: receiving the video ...
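The pipeline in this entry maps recognized phonemes (together with voice features and video) to facial features. As a minimal, hypothetical sketch of just the phoneme-to-mouth-shape step (the lookup table, values, and function name are illustrative, not from the patent; the patent instead uses a trained learning model):

```python
# Hypothetical phoneme-to-viseme table; the claimed second learning model
# would learn this mapping from voice features, phonemes, and video.
PHONEME_TO_VISEME = {
    "AA": "open",         # as in "father"
    "M": "closed",        # lips pressed together
    "F": "teeth-on-lip",
    "UW": "rounded",      # as in "boot"
}

def visemes_for(phonemes, default="neutral"):
    """Map a recognized phoneme sequence to mouth shapes for the avatar."""
    return [PHONEME_TO_VISEME.get(p, default) for p in phonemes]
```

A real system would interpolate between these shapes over time rather than switching discretely.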

02-01-2020 publication date

Three-dimensional advertisements

Number: US20200005361A1
Assignee: Google LLC

Computer-implemented methods for advertising a 3D object in a web browser are provided. In one aspect, a method includes obtaining modeling data for a 3D object, formatting the modeling data for display in an advertisement in a web browser, and providing the formatted modeling data to the web browser for display in the advertisement. The advertisement includes a display of at least a portion of the 3D object based on an initial default view or a user selected view based on a query received from the user. Systems and machine-readable media are also provided.

02-01-2020 publication date

Systems and Methods for Generating Virtual Item Displays

Number: US20200005372A1
Assignee:

Systems, methods, and devices of the various embodiments enable virtual displays of an item, such as a vehicle, to be generated. In an embodiment, a plurality of images of an item may be captured and annotation may be provided to one or more of the images. In an embodiment, the plurality of images may be displayed, and the transition between each of the plurality of images may be an animated process. In an embodiment, an item imaging system may comprise a structure including one or more cameras and one or more lights, and the item imaging system may be configured to automate at least a portion of the process for capturing the plurality of images of an item. 1. An item imaging system, comprising: an image capture area, comprising: an outer structure; at least one light; and at least one camera; and a processor connected to the at least one camera, wherein the processor is configured with processor-executable instructions to perform operations comprising: controlling the at least one camera to capture one or more images of an item; and sending the one or more images to a server to be displayed in a virtual display of the item. 2. The item imaging system of claim 1, wherein: the image capture area is a roundhouse comprising a turntable centered within the outer structure and connected to the processor; the outer structure is a dome; the at least one light comprises a plurality of lights configured to indirectly illuminate the item; the at least one camera comprises a first camera, a second camera, and a third camera, wherein each of the first camera, second camera, and third camera is configured to capture images of the item from a different angle; the processor is configured with processor executable instructions to control the plurality of lights and the turntable to rotate the item on the turntable and illuminate the item as one or more of the three cameras capture images of the item. 3. The item imaging system of claim 2, wherein: the plurality of lights ...

03-01-2019 publication date

TECHNOLOGIES FOR TIME-DELAYED AUGMENTED REALITY PRESENTATIONS

Number: US20190005723A1
Assignee:

Technologies for time-delayed augmented reality (AR) presentations include determining a location of a plurality of user AR systems located within the presentation site and determining a time delay of an AR sensory stimulus event of an AR presentation to be presented in the presentation site for each user AR system based on the location of the corresponding user AR system within the presentation site. The AR sensory stimulus event is presented to each user AR system based on the determined time delay associated with the corresponding user AR system. Each user AR system generates the AR sensory stimulus event based on a timing parameter that defines the time delay for the corresponding user AR system such that the generation of the AR sensory stimulus event is time-delayed based on the location of the user AR system within the presentation site. 1. An augmented reality (AR) server for presenting a time-delayed AR presentation, the AR server comprising: a user location mapper to determine a location of a plurality of user AR systems located within a presentation site; and an AR presentation manager to (i) identify an AR sensory stimulus event of an AR presentation to be presented within the presentation site, (ii) determine a time delay of the AR sensory stimulus event for each user AR system based on the location of the corresponding user AR system within the presentation site; and (iii) present the AR sensory stimulus event to each user AR system based on the determined time delay associated with the corresponding user AR system. 2. The AR server of claim 1, further comprising a master network clock, wherein to determine the time delay of the AR sensory stimulus event comprises to synchronize a network clock of each user AR system to the master network clock.
3. The AR server of claim 1, wherein to determine the time delay of the AR sensory stimulus event comprises to determine a time delay of the AR sensory stimulus event for each user AR system based on a ...
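One plausible reading of the location-based time delay is physical propagation: users farther from the stimulus origin perceive it later. A hedged sketch under that assumption (the propagation speed and both function names are illustrative, not from the patent):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s; assumed propagation speed for an acoustic stimulus

def stimulus_delay(event_origin, user_position, speed=SPEED_OF_SOUND):
    """Delay (seconds) for one user AR system, proportional to its
    distance from the point where the sensory stimulus originates."""
    dx = user_position[0] - event_origin[0]
    dy = user_position[1] - event_origin[1]
    return math.hypot(dx, dy) / speed

def schedule_event(event_origin, user_positions):
    """Per-user timing parameters for a single AR sensory stimulus event."""
    return {uid: stimulus_delay(event_origin, pos)
            for uid, pos in user_positions.items()}
```

Each user AR system would then fire the event at the common start time plus its own delay.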

12-01-2017 publication date

Methods and Devices for Adjusting Chart Filters

Number: US20170010776A1
Author: Stewart Robin
Assignee:

A method at an electronic device with a touch-sensitive surface and a display includes displaying a first chart on the display. The first chart concurrently displays a first set of categories and each respective category has a corresponding visual mark displayed in the first chart. The method further includes detecting a first touch input at a location on the touch-sensitive surface that corresponds to a location on the display of a first visual mark for a first category in the first chart, and, in response to detecting the first touch input, removing the first category and the first visual mark from the first chart via an animated transition, and updating display of the first chart. The first visual mark moves in concert with movement of a finger contact in the first touch input during at least a portion of the animated transition. 1. A method, comprising: at an electronic device with a touch-sensitive surface and a display: displaying a first chart on the display, wherein the first chart concurrently displays a first set of categories, and each respective category in the first set of categories has a corresponding visual mark displayed in the first chart; detecting a first touch input at a location on the touch-sensitive surface that corresponds to a location on the display of a first visual mark for a first category in the first chart; and, in response to detecting the first touch input at the location on the touch-sensitive surface that corresponds to the location on the display of the first visual mark for the first category in the first chart: removing the first category and the first visual mark from the first chart via an animated transition, wherein the first visual mark moves in concert with movement of a finger contact in the first touch input during at least a portion of the animated transition; and updating display of the first chart. 2. The method of claim 1, wherein the first touch input is a drag gesture or a swipe gesture that ...

12-01-2017 publication date

Methods and Devices for Adjusting Chart Magnification Asymmetrically

Number: US20170010792A1
Author: Robin Stewart
Assignee: Tableau Software LLC

A method is performed at an electronic device with a touch-sensitive surface and a display. The method includes displaying a chart on the display. The chart has a horizontal axis and a vertical axis. The horizontal axis includes first horizontal scale markers. The vertical axis includes first vertical scale markers. The method also includes detecting a first touch input at a location on the touch-sensitive surface that corresponds to a location on the display of the chart. The method further includes, while detecting the first touch input: horizontally expanding a portion of the chart such that a distance between first horizontal scale markers increases; and maintaining a vertical scale of the chart such that a distance between first vertical scale markers remains the same.
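The asymmetric zoom in this entry can be expressed as a transform that scales x-coordinates about an anchor point while leaving y-coordinates unchanged, so horizontal scale markers spread apart and vertical ones keep their spacing. A minimal sketch (the function name and the choice of anchor are illustrative, not from the patent):

```python
def zoom_horizontal(points, anchor_x, factor):
    """Expand chart geometry horizontally about anchor_x by `factor`,
    while maintaining the vertical scale exactly as-is."""
    return [(anchor_x + (x - anchor_x) * factor, y) for x, y in points]
```

Applying the same transform to tick positions makes the distance between horizontal scale markers grow while vertical marker spacing stays the same.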

14-01-2021 publication date

APPARATUS AND METHOD FOR GENERATING IMAGE

Number: US20210012503A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

An image generating apparatus includes: a display outputting an image; a memory storing one or more instructions; and a processor. The processor is configured to execute the one or more instructions to detect an object in an image including a plurality of frames, provide a plurality of candidate boundaries for masking the detected object, identify an optimal boundary by assessing the provided plurality of candidate boundaries, and generate a partial moving image with the object moving by using the optimal boundary. 1. An image generating apparatus comprising: a display configured to output an image; a memory configured to store one or more instructions; and a processor configured to execute the one or more instructions to: detect an object in an image including a plurality of frames; provide a plurality of candidate boundaries for masking the detected object; identify an optimal boundary by assessing the plurality of candidate boundaries; and generate a partial moving image with the object moving by using the optimal boundary. 2. The image generating apparatus of claim 1, wherein the processor is further configured to execute the one or more instructions to: mask the object in one of the plurality of frames by using the identified optimal boundary; and generate the partial moving image with the object moving by using the one of the plurality of frames in which the object is masked and the plurality of frames. 3. The image generating apparatus of claim 1, wherein the processor is further configured to execute the one or more instructions to provide the plurality of candidate boundaries for masking the object detected in the image by using a first artificial intelligence (AI) model.
4. The image generating apparatus of claim 3, wherein the first AI model includes a plurality of segmentation AI models, and the processor is further configured to execute the one or more instructions to provide the plurality of candidate boundaries by using the plurality of ...

21-01-2016 publication date

GENERATING AND USING A PREDICTIVE VIRTUAL PERSONIFICATION

Number: US20160019434A1
Assignee:

A system for generating a predictive virtual personification includes an AV data source, a data store, and a saliency recognition engine, wherein the AV data source is configured to transmit one or more AV data sets to the saliency recognition engine, each AV data set includes a graphical representation of a donor subject, and the saliency recognition engine is configured to receive the AV data set and one or more identified trigger stimulus events, locate a set of saliency regions of interest (SROI) within the graphical representation of the donor subject, generate a set of SROI specific saliency maps and store, in the data store, a set of correlated SROI specific saliency maps generated by correlating each SROI specific saliency map with a corresponding trigger event. 1. A method for generating a predictive virtual personification comprises: receiving, from an AV data source, one or more AV data sets; locating within each AV data set, with a saliency recognition engine, a graphical representation of a donor subject and a set of saliency regions of interest (SROI) within said graphical representation of the donor subject; identifying one or more trigger stimulus events, wherein each trigger stimulus event precedes or is contemporaneous with one or more SROI specific reactive responses and each SROI specific reactive response is observable within a SROI; generating, for each SROI, a set of SROI specific saliency maps, wherein each SROI specific saliency map plots a change in geospatial orientation of one or more SROIs within a predetermined time-frame corresponding to each trigger stimulus event; and storing, in a data store, a set of correlated SROI specific saliency maps generated by correlating each SROI specific saliency map with a corresponding trigger event. 2. The method of claim 1, further comprising identifying a set of donor-specific physio-emotional characteristics corresponding to a donor-specific physio-emotional state at the time of the trigger ...

03-02-2022 publication date

System and Method for Simulating an Immersive Three-Dimensional Virtual Reality Experience

Number: US20220036659A1
Assignee: Individual

The present invention brings concerts directly to the people by streaming, preferably, 360° videos played back on a virtual reality headset and, thus, creating an immersive experience, allowing users to enjoy a performance of their favorite band at home while sitting in the living room. In some cases, 360° video material may not be available for a specific concert and the system has to fall back to traditional two-dimensional (2D) video material. For such cases, the present invention takes the limited space of a conventional video screen and expands it to a much wider canvas, by expanding color patterns of the video into the surrounding space. The invention may further provide seamless blending of the 2D medium into a 3D space and additionally enhancing the space with computer-generated effects and virtual objects that directly respond to the user's biometric data and/or visual and acoustic stimuli extracted from the played video.

03-02-2022 publication date

IMAGE DISPLAY METHOD, IMAGE DISPLAY APPARATUS, AND STORAGE MEDIUM STORING DISPLAY CONTROL PROGRAM

Number: US20220036862A1
Author: Yamada Yusuke
Assignee:

An image display method includes: displaying a first image having a first image surface on a display surface in a three-dimensional fashion; in response to a reception of an instruction of rotating the first image around an axis different from any axis in the display surface, rotating the first image around a first imaginary axis, the first imaginary axis being vertical to the first image surface and different from an axis vertical to the display surface; and displaying the rotated first image. 1. An image display method comprising: displaying a first image having a first image surface on a display surface in a three-dimensional fashion; in response to a reception of an instruction of rotating the first image around an axis different from any axis in the display surface, rotating the first image around a first imaginary axis, the first imaginary axis being vertical to the first image surface and different from an axis vertical to the display surface; and displaying the rotated first image. 2. The image display method according to claim 1, further comprising: displaying, in a two-dimensional fashion, an enlarged image related to the first image displayed in the three-dimensional fashion; in response to the reception of the instruction, rotating the enlarged image around the axis vertical to the display surface; and displaying the rotated, enlarged image. 3. The image display method according to claim 2, wherein in response to the reception of the instruction, the first image and the enlarged image are rotated in conjunction with each other. 4. The image display method according to claim 1, further comprising: displaying a second image having a second image surface on the display surface in the three-dimensional fashion; in response to the reception of the instruction, rotating the second image around a second imaginary axis, the second imaginary axis being vertical to the second image surface and different from the axis vertical to the display surface; and displaying the ...

22-01-2015 publication date

DEVICE AND METHOD FOR LOCALIZATION OF BRAIN FUNCTION RELATED APPLICATIONS

Number: US20150025870A1
Author: Marks Ronald
Assignee:

A method of simulating the activity of the human nervous system includes providing a networked server for access by a user of a general purpose computer, with a database having predetermined data on human nervous system activity being in communication with the networked server. User information is input into the general purpose computer which is correlated with data in the networked database to determine what part of the human nervous system is impacted by the user information input. The general purpose computer displays a simulated image of a portion of the human nervous system and animates the impacted part of the human nervous system determined in the correlation. 1. A device for providing a stimulus to a user, the device comprising: an interface attachment sized and shaped to be worn by a user thereof; a stimulator attached to said interface attachment for providing a stimulus to the user wearing the interface attachment; and a data link attached to said interface attachment and in electronic communication therewith, the data link adapted to transmit signals to, and receive signals from, a general purpose computer. 2. The device according to claim 1, wherein the interface attachment is selected from the group consisting of a head piece, a sleeve, and combinations thereof. 3. The device according to claim 1, wherein the stimulator is selected from the group consisting of a light, a speaker, a device for imparting an electric shock to a user, and combinations thereof. 4. The device according to claim 1, wherein the data link is selected from the group consisting of a universal serial bus (USB) cable, an Ethernet cable, a wireless communications device, and combinations thereof. 5. The device according to claim 1, wherein the device comprises a plurality of stimulators, said stimulators comprising a plurality of lights attached to an exterior surface of said interface attachment, the stimulators being ...

28-01-2016 publication date

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM

Number: US20160026349A1
Assignee: SONY CORPORATION

An information processing device according to the present technology includes an action recognition unit that recognizes an operation action of a user based on sensor information, and an action representation generation unit that analyzes operation action data showing the operation action of the user recognized by the action recognition unit to generate an action segment represented by a meaning and content of the operation action from the operation action data. 1. (canceled) 2. An information processing device comprising: an action recognition unit configured to recognize an operation action of a user based on sensor information; an action representation unit configured to generate an action segment represented by a context of the operation action for an action log comprising at least one action segment; and an input information processing unit configured to edit the action segment, wherein the action representation unit is configured to present an operation content candidate list, and wherein the input information processing unit is configured to edit the action segment based on a selection of a user from the operation content candidate list. 3. The information processing device of claim 2, wherein the action representation unit is configured to present a location correction area of the action segment, and the input information processing unit is configured to edit the action segment based on correct location information input by a user from the location correction area. 4. The information processing device of claim 2, wherein the representation unit is configured to present a location name list for a location correction area for the action segment, and the input information processing unit is configured to edit the action segment based on correct location information selected from the location name list. 5. The information processing device of claim 4, wherein the location name list includes at least one of a building name, station name, or a shop name. 6. The ...

28-01-2021 publication date

Mobile sensor apparatus for a head-worn visual output device usable in a vehicle, and method for operating a display system

Number: US20210023984A1
Assignee: Audi AG

A mobile sensor apparatus includes a capture device for capturing vehicle movements of the vehicle and an interface for transmitting data relating to the vehicle movements to a head-worn visual output device. A display system includes the mobile sensor apparatus and the head-worn visual output device.

10-02-2022 publication date

ANIMATING DIGITAL GRAPHICS OVERLAID ON VISUAL MEDIA ITEMS BASED ON DYNAMIC ATTRIBUTES

Number: US20220044465A1
Author: Stukalov Dmitri
Assignee:

This disclosure covers methods, computer-readable media, and systems that animate a digital graphic associated with a video or other visual media item based on a detected dynamic attribute. In particular, the disclosed methods, computer-readable media, and systems detect sensor data from a client device or a motion of an object within a video or other visual media item. Based on the detected sensor data or motion of an object within a visual media item, the methods, computer-readable media, and systems overlay and animate an emoji or other digital graphic selected by a user on a video or other visual media item. 1. A non-transitory computer readable medium storing instructions thereon that, when executed by at least one processor, cause a computing device to: present a set of digital graphics within a graphical user interface of the computing device; detect a selection by a user of a digital graphic from among the set of digital graphics; capture, utilizing a camera of the computing device, a video of a person performing a motion; present the digital graphic with an animation effect that mimics the motion performed by the person within the video; and based on detecting an additional selection by the user, send a message comprising the video to a recipient client device to present, within a messaging thread, the video comprising the digital graphic with the animation effect that mimics the motion performed by the person. The present application is a continuation of U.S. application Ser. No. 16/943,936, filed on Jul. 30, 2020, which is a continuation of U.S. application Ser. No. 16/664,479, filed on Oct. 25, 2019, which issued as U.S. Pat. No. 10,740,947, which is a continuation of U.S. application Ser. No. 15/717,795, filed on Sep. 27, 2017, which issued as U.S. Pat. No. 10,460,499. Each of the aforementioned applications is hereby incorporated by reference in its entirety. Recent years have seen rapid development in systems that enable individuals to digitally communicate with ...

29-01-2015 publication date

Systems and Methods for Visually Creating and Editing Scrolling Actions

Number: US20150029197A1
Assignee: ADOBE SYSTEMS INCORPORATED

Systems and methods for visually creating scroll-triggered animation in a document. Based on input received, a key position is determined that is associated with an element that is to be animated. An indicator may be displayed to visually show the location of the key position on an editing canvas. A scroll-triggered animation is defined for the element based on the specified key position. The scroll-triggered animation defines attributes of the element during scroll of the document in the end use environment. For example, the animation may specify that the element has a particular location when the scroll is at the specified key position. The scroll-triggered animation may additionally or alternatively comprise a before-effect and an after-effect, performing one animation before the scroll reaches the key position and another animation after the scroll reaches the key position. 1. A computer implemented method comprising: identifying an element on a canvas, the canvas displaying a document being edited in a development environment for use in an end use environment; determining, via a processor, based on input received on the canvas, a location of a key position associated with the element; and defining a scroll-triggered animation for the element based on the key position, wherein the scroll-triggered animation defines an attribute of the element based on a scroll of the document in the end use environment. 2. The method of further comprising changing the location of the key position based on receiving additional input on the canvas. 3. The method of further comprising displaying an indicator on the canvas identifying the location of the key position. 4. The method of further comprising: displaying an indicator on the canvas identifying the location of the key position, wherein the scroll is at the key position in the end use environment when the key position is at a reference location; and based on receiving additional input moving the indicator on the canvas to a new ...
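The before-effect/after-effect described in this entry amounts to piecewise interpolation of an element attribute around the key scroll position. A hedged sketch (the spans, offsets, and linear easing are illustrative assumptions, not taken from the patent):

```python
def element_offset(scroll_pos, key_pos, before_span, after_span,
                   start_offset, key_offset, end_offset):
    """Attribute value (here: an offset) for a scroll-triggered animation:
    a before-effect eases toward key_offset as the scroll approaches the
    key position, and an after-effect continues toward end_offset past it."""
    if scroll_pos <= key_pos - before_span:
        return start_offset
    if scroll_pos < key_pos:  # before-effect: approach the key value
        t = (scroll_pos - (key_pos - before_span)) / before_span
        return start_offset + t * (key_offset - start_offset)
    if scroll_pos < key_pos + after_span:  # after-effect: leave the key value
        t = (scroll_pos - key_pos) / after_span
        return key_offset + t * (end_offset - key_offset)
    return end_offset
```

At the key position itself the element has exactly the specified key attribute, matching the "particular location when the scroll is at the specified key position" behavior.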

04-02-2016 publication date

Autogenerating video from text

Number: US20160034754A1
Assignee: ELWHA LLC

A method of converting user-selected printed text to a synthesized image sequence is provided. The method includes capturing a first image of printed text and generating a model information associated with the text.

04-02-2016 publication date

METHOD AND APPARATUS FOR CONTROLLING DISPLAY AND COMPUTER PROGRAM FOR EXECUTING THE METHOD

Number: US20160035119A1
Assignee:

A display control method includes displaying at least a portion of a page, in which one or more content regions including contents are arranged, on a screen image; recognizing a scrolling operation with respect to the page; and scrolling the page based on the scrolling operation and applying an animation with respect to content included in a content region exposed on the screen image as the page is scrolled. 1. A method, using a processor, for controlling display of a page on a screen of a terminal, comprising: displaying at least a portion of the page on a screen image, wherein one or more content regions including a content are arranged in the page; recognizing a scrolling operation with respect to the page; and scrolling the page based on the scrolling operation and applying an animation with respect to a content included in a content region exposed on the screen image as the page is scrolled. 2. The display controlling method of claim 1, wherein the animation comprises an operation for moving the content to a designated location in the content region, and at least one of speed, distance, time, and direction for moving the content is determined based on a speed of the scrolling operation or speed for scrolling the page. 3. The display controlling method of claim 2, wherein the distance for moving the content, which is calculated based on the speed for moving the content, apart from the designated location is set as a starting point for moving the content, and the animation comprises an operation for moving the content from the starting point to the designated location.
4. The display controlling method of claim 2, wherein the time for moving the content is a pre-set fixed time, the speed for moving the content is set based on the speed of the scrolling operation or the speed for scrolling the page, and the distance for moving the content is determined based on the time for moving the content and the speed for moving the ...
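Claim 4 of this entry fixes the animation time and derives the travel distance from the scroll-dependent speed, with the starting point set that distance away from the designated location. A minimal sketch under those relations (the gain, fixed duration, and clamp value are illustrative assumptions):

```python
def content_animation(scroll_speed, designated_x,
                      move_time=0.5, gain=0.5, max_distance=120.0):
    """Derive the content's travel distance and starting point from the
    scroll speed with a fixed animation duration: the content's speed
    follows the scroll speed, and distance = speed * fixed time."""
    speed = abs(scroll_speed) * gain              # content speed tracks scroll speed
    distance = min(speed * move_time, max_distance)
    start_x = designated_x - distance             # start this far before the target
    return start_x, distance
```

Faster scrolling thus makes the content enter from farther away, up to the clamp.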

01-05-2014 publication date

Control for Digital Lighting

Number: US20140118359A1
Assignee: PRODUCTION RESOURCE GROUP, LLC

A digitally controlled lighting system where aspects have a central media server connected to remote media servers. The connection may have separate networks for control versus media. Automatic synchronization of the contents of the media servers may be carried out. 2. The system as in claim 1, wherein said at least one media item and said second media item are the same item of media. 3. The system as in claim 2, wherein said at least one media item and said second media item are different media items. 4. The system as in claim 1, wherein said first lighting device includes a first media server which serves media to said first lighting device, and said second lighting device includes a second media server serving media to said second lighting device, and wherein said genlock device is connected to said first and second media servers, and causes both of said first and second media servers to produce synchronized frames at substantially the same time. 5. The system as in claim 1, wherein said first and second media servers receive media over said network connection, and store said media for use at a later time. 6. The system as in claim 1, wherein said first and second media servers receive media over said network connection, and display said media as a real-time streamed item. 7. The system as in claim 1, wherein said media server displays frames of an animation by cross fading between a first frame and a second frame in the frame sequence.
A method, comprising: storing a central database of media information in a central media server; communicating between said central media server and each of a plurality of local media servers, over a local network connection which connects between said central media server and each of said local media servers; determining a list of information that should be located in each of said local media servers, and sending information to said local media servers that causes each of said ...
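The cross fading between animation frames mentioned in this entry is a per-pixel linear mix. A minimal sketch over grayscale rows (the representation of a frame as nested lists is an illustrative simplification; a media server would blend GPU textures):

```python
def cross_fade(frame_a, frame_b, alpha):
    """Blend two animation frames: alpha=0.0 shows frame_a, alpha=1.0
    shows frame_b (per-pixel linear mix over grayscale rows)."""
    return [[(1.0 - alpha) * pa + alpha * pb
             for pa, pb in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]
```

Sweeping alpha from 0 to 1 over successive output frames produces the fade between the first and second frame of the sequence.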

01-02-2018 publication date

TRANSITIONING BETWEEN VISUAL REPRESENTATIONS

Number: US20180033180A1
Author: Geddes Jonathan
Assignee:

A transition may be made between first, second, and third visual representations. A first visual representation may be displayed, with a plurality of visual elements arranged in a first arrangement. A processor may generate a first animation in which the visual elements move from the first arrangement toward an intermediate arrangement, and then to a second arrangement corresponding to a second visual representation. The first animation and the second visual representation may be displayed. The processor may generate a second animation in which the visual elements move from the second arrangement toward the intermediate arrangement, and then to a third arrangement corresponding to a third visual representation. The second animation and the third visual representation may be displayed. Thus, smooth transitions may be provided between multiple visual representations via animation toward a common intermediate arrangement. 1. A method for transitioning between first , second , and third visual representations , the method comprising:at a display screen, displaying a first visual representation comprising a plurality of visual elements arranged in a first arrangement;at a processor, generating a first animation in which the visual elements move from the first arrangement toward an intermediate arrangement different from the first arrangement, and then to a second arrangement different from the first arrangement and the intermediate arrangement;at the display screen, displaying the first animation; andat the display screen, displaying a second visual representation comprising the plurality of visual elements arranged in the second arrangement.2. 
The method of claim 1 , further comprising:at the processor, generating a second animation in which the visual elements move from the second arrangement toward the intermediate arrangement, and then to a third arrangement different from the first arrangement, the second arrangement, and the intermediate arrangement;at the display ...
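The two-stage motion described above (first arrangement to intermediate, then intermediate to second) can be sketched as linear interpolation of element positions through the common intermediate arrangement; splitting the animation evenly at t = 0.5 is an assumption:

```python
def lerp(p, q, t):
    """Linearly interpolate between two points (tuples of coordinates)."""
    return tuple((1 - t) * a + t * b for a, b in zip(p, q))

def transition_frames(first, intermediate, second, steps):
    """Animate each element from `first` through `intermediate` to `second`.

    Each argument is a list of per-element positions; returns steps+1 frames.
    """
    frames = []
    for i in range(steps + 1):
        t = i / steps
        if t <= 0.5:
            # First half: move from the first arrangement toward the intermediate one.
            frames.append([lerp(a, m, t * 2) for a, m in zip(first, intermediate)])
        else:
            # Second half: move from the intermediate arrangement to the target one.
            frames.append([lerp(m, b, (t - 0.5) * 2) for m, b in zip(intermediate, second)])
    return frames
```

The same helper serves the second animation (second arrangement back through the intermediate to the third) by swapping the endpoint arguments.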

Publication date: 31-01-2019

METHOD AND APPARATUS FOR ENHANCING DIGITAL VIDEO EFFECTS (DVE)

Number: US20190034062A1
Assignee:

A method and apparatus for enhancing digital video effects (DVE) operates to embed DVE functionality within a graphics modeling system and provides the user with an interface configured to present model elements to a user as controllable parameters. In order to embed DVE functionality, a dynamic data structure is introduced as a scene to allow the addition of user defined model elements. The user interface enables the identification of, and access to the newly introduced model elements. 1. A system for dynamically modifying digital video effects in broadcast video production , the system comprising:an input image processor configured to receive a live video;a digital video effects (DVE) controller configured to embed model elements of a graphics model into a graphics modeling system;a graphics model renderer configured to dynamically render a scene for the received live video, wherein the rendered scene comprises a dynamic data structure that provides an application programming interface (API) configured to automatically access and bind the embedded model elements of the graphics model to user controllable DVE parameters for the scene;a user interface configured to present the rendered scene having the user controllable DVE parameters as key-frames in a timeline to form live broadcast video effects for the received live video, with the presented user controllable DVE parameters being configured to receive image editing manipulations from a user via the user interface;an image editor configured to automatically manipulate the respective model elements of the graphics model in response to the received image editing manipulations of the corresponding user controllable DVE parameters in the rendered scene; andan output image processor configured to generate and output a modified output video of the live video based on the automatically manipulated model elements of the graphics model.2. 
The system of claim 1 , wherein the output image processor is further configured to: ...
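A minimal sketch of the dynamic-scene idea above: model elements added at run time are automatically exposed as user-controllable parameters through a small API. All names (`Scene`, `set_parameter`) are illustrative, not taken from the patent:

```python
class Scene:
    """Dynamic data structure whose model elements become controllable parameters."""

    def __init__(self):
        self._elements = {}  # element name -> attribute dict

    def add_element(self, name, **attrs):
        """Embed a user-defined model element with its attributes."""
        self._elements[name] = dict(attrs)

    def controllable_parameters(self):
        # Flatten every element attribute into an "element.attr" parameter id,
        # so the user interface can discover newly added elements.
        return [f"{e}.{a}" for e, attrs in self._elements.items() for a in attrs]

    def set_parameter(self, param, value):
        element, attr = param.split(".", 1)
        self._elements[element][attr] = value

    def get_parameter(self, param):
        element, attr = param.split(".", 1)
        return self._elements[element][attr]
```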

Publication date: 31-01-2019

USING CINEMATIC TECHNIQUES TO PRESENT DATA

Number: US20190034433A1
Assignee:

The present invention extends to methods, systems, and computer program products for using cinematic techniques to present data. Embodiments of the invention can be used to infer and generate cinematic techniques or combinations thereof based on a model and user action. Cinematic techniques can be used to meet the data exploration and analysis requirements of a user. As such, embodiments of the invention permit users (including non-programmers) to employ cinematic techniques (possibly in combination with other techniques) to gain insights into their data and also convey appropriate emotional messages. 1. A method for using cinematic techniques to present data in accordance with an intended message , the method performed at a computer system including one or more processors , system memory , a data repository , and a display device , the method comprising:accessing a portion of data from the data repository;accessing user requirements for presenting the portion of data, the user requirements indicating a message to convey when presenting the portion of data;accessing visualization metadata which identifies at least one visual object or property to which the portion of data can be bound, the visualization metadata further identifying how the at least one visual object or property can be transformed;identifying a mapping of one or more elements of the portion of data to the at least one visual object or property;identifying, based at least on the indicated message and on the identified mapping of data elements to the at least one visual object or property, one or more cinematic techniques for presenting data to convey the indicated message; anddisplaying the portion of data at the display device using the identified one or more cinematic techniques to convey the indicated message.2. The method as recited in claim 1 , wherein the visualization metadata comprises constraints on values that the at least one visual object or property can take.3. 
The method as recited in ...
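One way to read the claim above is as rule-based selection: the intended message suggests candidate cinematic techniques, which are then filtered by what the bound visual objects or properties can express. Both tables below are invented illustrations, not from the patent:

```python
# Illustrative rule table: intended message -> candidate cinematic techniques.
TECHNIQUES = {
    "urgency": ["fast cuts", "zoom in"],
    "growth": ["slow pan", "rising camera"],
    "contrast": ["split screen", "cross cut"],
}

def choose_techniques(message, bound_properties):
    """Pick techniques for the message, keeping only those that the
    bound visual properties can express (the support table is assumed)."""
    supported = {
        "fast cuts": {"opacity"}, "zoom in": {"scale"},
        "slow pan": {"position"}, "rising camera": {"position"},
        "split screen": {"position"}, "cross cut": {"opacity"},
    }
    return [t for t in TECHNIQUES.get(message, [])
            if supported[t] & set(bound_properties)]
```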

Publication date: 30-01-2020

Devices, Methods, and Graphical User Interfaces for Messaging

Number: US20200034033A1
Assignee:

An electronic device displays a messaging user interface of a message application, including a conversation transcript of a messaging session between a user of the electronic device and at least one other user, a message-input area, at least one avatar corresponding to a first other user included in the messaging session, and an application affordance. The device detects an input on the touch sensitive surface. In accordance with a determination that the input corresponds to selection of the at least one avatar displayed in the messaging user interface, the device displays a menu that contains a plurality of activatable menu items associated with the at least one avatar overlaid on the messaging user interface. In accordance with a determination that the input corresponds to selection of the application affordance, the device displays a plurality of application launch icons for a plurality of corresponding applications within the messaging user interface. 1. A method, comprising: displaying a messaging user interface on the display, the messaging user interface including a conversation transcript of a messaging session between a user of the electronic device and at least one other user, a message-input area, at least one avatar corresponding to a first other user included in the messaging session, and an application affordance; while displaying the messaging user interface, detecting a first input on the touch sensitive surface; in accordance with a determination that the first input corresponds to selection of the at least one avatar displayed in the messaging user interface, displaying a menu that contains a plurality of activatable menu items associated with the at least one avatar overlaid on the messaging user interface; and in accordance with a determination that the first input corresponds to selection of the application affordance, displaying a plurality of application launch icons for a plurality of corresponding applications within the messaging ...
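The branching described above (avatar tap versus application-affordance tap) amounts to a simple input dispatch; the target and action names below are illustrative, not from the patent:

```python
def handle_messaging_input(target):
    """Dispatch a touch input in the messaging UI: tapping an avatar opens
    its menu of activatable items, tapping the application affordance
    shows the application launch icons."""
    if target == "avatar":
        return "display_avatar_menu"
    if target == "application_affordance":
        return "display_app_launch_icons"
    return "no_action"
```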

Publication date: 04-02-2021

UTILIZING A MACHINE LEARNING MODEL TO DETERMINE ANONYMIZED AVATARS FOR EMPLOYMENT INTERVIEWS

Number: US20210035047A1
Assignee:

A device receives interviewer data, associated with interviewers conducting interviews with interviewees, that includes data identifying avatars presented to the interviewers. The device receives interviewee data, associated with the interviewees, that includes data identifying genders of the interviewees. The device processes the interviewer data and the interviewee data, with a model, to generate unbiased training data, and trains a machine learning model, with the unbiased training data, to generate a trained machine learning model. The device receives particular interviewer data identifying a particular role, location, and/or gender of a particular interviewer, and receives particular interviewee data identifying a gender of a particular interviewee. The device processes the particular interviewer data and the particular interviewee data, with the trained machine learning model, to determine one or more anonymized avatars to present to the particular interviewer, and performs one or more actions based on the one or more anonymized avatars. 1. A method, comprising: receiving, by a device and from a user device, particular interviewer data associated with a particular interviewer, wherein the particular interviewer data includes data identifying one or more of: a particular role of the particular interviewer, a particular location of the particular interviewer, or a gender of the particular interviewer; receiving, by the device, particular interviewee data associated with a particular interviewee, wherein the particular interviewee data includes data identifying a gender of the particular interviewee; processing, by the device, the particular interviewer data and the particular interviewee data, with a machine learning model, to determine one or more avatars to present to the particular interviewer; receiving, by the device, first video data of the particular interviewee, the first video data including voice data of the particular interviewee; ...

Publication date: 30-01-2020

INTERACTIVE IMAGE FILE FORMAT

Number: US20200037033A1
Assignee:

Embodiments of the present invention disclose a method, computer program product, and system for an interactive image file. The computer may receive at least one image and a plurality of business logic associated with the at least one image. The at least one image may be encoded into at least one image block, wherein the at least one image block contains a plurality of images stored within a single file at different offsets. A table of contents may be generated for the at least one image block, wherein the table of contents contains a list of the at least one image block, a location of the at least one image block, and an identifier for each of the at least one image block. A single image file may be created for the interactive image file. 1. A method for an interactive image file , the method comprising:receiving, by a computer, at least one image and a plurality of business logic associated with the at least one image;encoding the at least one image and plurality of business logic associated with the at least one image into at least one image block, wherein the at least one image block contains at least one image stored within a single file at different offsets, and wherein the plurality of business logic defines display information and user event response information for the at least one image and utilizes an external application programming interface to activate image blocks;generating a table of contents for the at least one image block, wherein the table of contents contains a list of the at least one image block, a location of the at least one image block, and an identifier for each of the at least one image block; andcreating a single image file for the interactive image file.2. The method of claim 1 , wherein the plurality of business logic is a code claim 1 , a script claim 1 , or a visualization that explains how the at least one image is to be displayed and a plurality of user events the at least one image is capable of processing.3. The method of ...
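A sketch of the single-file layout described above: image blocks are stored at different offsets, and a table of contents records each block's identifier, offset, and length. The concrete field layout is an assumption:

```python
def pack_images(images):
    """Concatenate image byte blocks into one file body and build a table
    of contents with an entry (id, offset, length) per block."""
    toc, chunks, offset = [], [], 0
    for ident, blob in images.items():
        toc.append({"id": ident, "offset": offset, "length": len(blob)})
        chunks.append(blob)
        offset += len(blob)
    return toc, b"".join(chunks)

def read_image(toc, body, ident):
    """Look up an image block by identifier via the table of contents."""
    entry = next(e for e in toc if e["id"] == ident)
    return body[entry["offset"]:entry["offset"] + entry["length"]]
```

A real format would also serialize the table of contents and the business logic into the same file; this sketch keeps them as in-memory structures.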

Publication date: 24-02-2022

IMAGE REGULARIZATION AND RETARGETING SYSTEM

Number: US20220058849A1
Assignee:

Systems and methods for image retargeting are provided. Image data may be acquired that includes motion capture data indicative of motion of a plurality of markers disposed on a surface of a first subject. Each of the markers may be associated with a respective location on the first subject. A plurality of blendshapes may be calculated for the motion capture data based on a configuration of the markers. An error function may be identified for the plurality of blendshapes, and it may be determined that the plurality of blendshapes can be used to retarget a second subject based on the error function. The plurality of blendshapes may then be applied to a second subject to generate a new animation. 1. obtaining a body surface according to respective marker points on a body surface of a physical subject of an image; generating a representation of the body surface by reference to a group of known body surface positions that are associated with one or more muscles, wherein the representation is obtained by applying a weight to at least one of the plurality of known body surface positions, wherein the representation is generated from a physiological based energy equation, wherein the physiological based energy equation uses L1-norm regularization; and determining selection of a plurality of shape primitives for retargeting of the representation of the body surface to a desired physical state portrayed by at least one virtual subject based on whether a result of evaluating the plurality of shape primitives according to an error function falls below an error threshold; wherein at least one of the plurality of shape primitives is based on a physiological characteristic of the subject of the image having a lower simulated energy of activated muscle control features, calculated by the physiological based energy equation, than a simulated energy from an activation of a subset of the one or more muscles; wherein the error function comprises a weighted ...
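The error-function test described above can be sketched as a weighted blendshape reconstruction error plus an L1 penalty on the weights, compared against a threshold. `lam` and `threshold` are assumed tuning values, and the flat-vector shape representation is an illustration:

```python
def blend(blendshapes, weights):
    """Weighted sum of blendshape offsets (each a flat list of coordinates)."""
    n = len(blendshapes[0])
    return [sum(w * s[i] for w, s in zip(weights, blendshapes)) for i in range(n)]

def retarget_error(blendshapes, weights, target, lam=0.1):
    """Squared reconstruction error plus an L1-norm regularization term."""
    recon = blend(blendshapes, weights)
    data_term = sum((r - t) ** 2 for r, t in zip(recon, target))
    return data_term + lam * sum(abs(w) for w in weights)

def can_retarget(blendshapes, weights, target, threshold=1.0):
    """Retargeting is accepted when the error falls below the threshold."""
    return retarget_error(blendshapes, weights, target) < threshold
```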

Publication date: 07-02-2019

Picture dynamic display method, electronic equipment and storage medium

Number: US20190042598A1
Author: Shuxiong CAI, Zihong Xie
Assignee: Tencent Technology Shenzhen Co Ltd

The present disclosure relates to a picture dynamic display method performed at a computing device. After acquiring a plurality of pictures, the computing device determines a display sequence of the acquired pictures. For each acquired picture, the computing device determines a corresponding local trajectory within a complete trajectory according to the display sequence. The computing device then draws corresponding local trajectories in turn according to the display sequence and displays a corresponding acquired picture in a display region corresponding to each drawn local trajectory. The acquired picture is continuously drawn according to a corresponding transitional trajectory in the complete trajectory while the local trajectory transits to a subsequent local trajectory according to the display sequence.
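The division of a complete trajectory into per-picture local trajectories, in display order, might look like the following; the even split between pictures is an assumption:

```python
def split_trajectory(points, num_pictures):
    """Divide a complete trajectory (a list of points) into one local
    trajectory per picture, preserving the display sequence."""
    per = len(points) // num_pictures
    return [points[i * per:(i + 1) * per] for i in range(num_pictures)]
```

Each returned segment would then be drawn in turn, with the corresponding picture displayed in the region that segment traces out.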

Publication date: 19-02-2015

Preloading Animation Files In A Memory Of A Client Device

Number: US20150049093A1
Assignee:

A digital magazine presents content items to a user including one or more animation files. An animation file includes a plurality of frames that each has a variable display duration. To improve presentation of an animation file, a number of frames of the animation file that are preloaded into a memory of the client device on which the animation file is presented is determined based on contextual features describing computing resources available to the client device and on the display duration of frames of the animation file subsequent to a currently displayed frame of the animation file. Additionally, an animation file may be selected for preloading and display from a plurality of animation files based on a ranking of the animation files. 1. A method for loading an animation file into a memory of a client device, the method comprising: accessing the animation file, the animation file comprising a plurality of frames, each frame having a variable display duration; displaying the animation file in a portion of a display device of the client device; obtaining one or more contextual features about the client device, the contextual features describing computing resources used by the client device when displaying content; and determining a number of frames of the animation file subsequent to a current frame displayed by the display device to preload into the memory based on the contextual features of the client device and one or more display durations associated with one or more frames subsequent to the current frame. 2. The method of claim 1, wherein the plurality of frames in the animation file comprise a series of images to be displayed in a sequence having a temporal order, the sequence repeating when the last image in the series of images is displayed. 3. The method of claim 1, further comprising: obtaining one or more updated contextual features about the client device while the animation file is displayed in the portion of the display device; and determining a ...
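The frame-count decision might be sketched as follows: walk the display durations of the frames after the current one and stop once a resource-derived time budget or a frame cap is exceeded. Both limits are assumed inputs standing in for the contextual features:

```python
def frames_to_preload(durations_ms, current_index, budget_ms, max_frames):
    """Count how many frames after `current_index` to preload: accumulate
    their display durations until the time budget or frame cap is hit."""
    total, count = 0, 0
    for d in durations_ms[current_index + 1:]:
        if count >= max_frames or total + d > budget_ms:
            break
        total += d
        count += 1
    return count
```

Because each frame's duration is variable, two animations with the same frame count can yield very different preload counts under the same budget.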

Publication date: 25-02-2021

MEDIA FILE PROCESSING METHOD AND DEVICE, AND MEDIA FILE SHARING METHOD AND DEVICE IN SOCIAL MEDIA APPLICATION

Number: US20210056132A1
Author: WU Fengkai

A media file processing method is described. Multiple selected media files are processed by a terminal device comprising a processor and a memory storing computer readable instructions executed by the processor. The terminal device extracts content association information corresponding to the multiple selected media files. The terminal device further synthesizes the multiple selected media files according to the content association information to obtain an animation that is a dynamic presentation of contents of the multiple selected media files. The animation is then stored in a predetermined file format in the terminal device. 1. A media file processing method, comprising: obtaining multiple selected media files; extracting a first content association information and a second content association information corresponding to each of the multiple selected media files; aggregating the multiple selected media files according to a similarity of the first content association information between the multiple selected media files to form multiple file sequences; ordering the media files within each of the multiple file sequences according to the second content association information to generate ordered file sequences; and generating an animation containing multiple animation segments, each animation segment corresponding to one of the ordered file sequences and based on content of the media files within the one of the ordered file sequences, and each animation segment preceded with a tag image characteristic of the first content association information of its media files. 2. The method of claim 1, wherein the multiple selected media files comprise image or video files. 3. The method of claim 1, wherein the first content association information indicates a shooting location of a media file. 4. The method of claim 3, wherein the tag image for an animation segment of the multiple animation segments comprises a characteristic landscape image of the shooting location of the ...
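Claims 1 and 3 can be read as: group files by the first association (for example, shooting location), then order each group by the second (for example, capture time). A sketch with illustrative field names:

```python
def build_sequences(files):
    """Group media files by shooting location, then order each group by
    timestamp, yielding the ordered file sequences of claim 1."""
    groups = {}
    for f in files:
        groups.setdefault(f["location"], []).append(f)
    return {loc: sorted(seq, key=lambda f: f["time"])
            for loc, seq in groups.items()}
```

Each resulting sequence would then become one animation segment, preceded by a tag image characteristic of its location.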

Publication date: 25-02-2016

Methods and Systems for Augmented Reality to Display Virtual Representations of Robotic Device Actions

Number: US20160055677A1
Author: Kuffner James Joseph
Assignee:

Example methods and systems for augmented reality interfaces to display virtual representations of robotic device actions are provided. An example method includes receiving information that indicates an action or an intent of a robotic device to perform a task, and the action or the intent includes one or more of a planned trajectory of the robotic device to perform at least a portion of the task and an object to be handled by the robotic device to perform at least a portion of the task. The method also includes providing, for display by a computing device on an augmented reality interface, a virtual representation of the action or the intent, and the virtual representation includes as annotations on the augmented reality interface at least a portion of the planned trajectory of the robotic device or highlighting the object to be handled by the robotic device. 1. A computer-implemented method comprising:receiving information that indicates an action or an intent of a robotic device to perform a task, wherein the action or the intent includes one or more of a planned trajectory of the robotic device to perform at least a portion of the task and an object to be handled by the robotic device to perform at least a portion of the task; andproviding, for display by a computing device on an augmented reality interface, a virtual representation of the action or the intent, wherein the virtual representation includes as annotations on the augmented reality interface at least a portion of the planned trajectory of the robotic device or highlighting the object to be handled by the robotic device.2. The method of claim 1 , wherein the computing device is located remotely from the robotic device claim 1 , and wherein providing claim 1 , for display claim 1 , comprises providing the virtual representation of the action or the intent overlaid onto a field of view of the computing device.3. 
The method of claim 1 , wherein the computing device is located remotely from the robotic ...

Publication date: 05-03-2015

LOW POWER DESIGN FOR AUTONOMOUS ANIMATION

Number: US20150062130A1
Assignee: BlackBerry Limited

Devices, methods, and non-transitory media for controlling a microdisplay are described. A device includes a microdisplay; and a display controller for the microdisplay. The display controller includes at least one frame buffer configured to store a multi-frame animation; and control logic configured to control operation of the microdisplay in response to signals generated by a host controller of the mobile electronic device, the control logic comprising executable instructions to display the animation on the microdisplay by: commencing display of the animation when signals representing a start command generated by the host controller are detected, and repeatedly displaying the animation in the absence of detecting further signals representing commands generated by the host controller. 1. An electronic device comprising: a microdisplay; and a display controller for the microdisplay, the display controller comprising: at least one frame buffer configured to store a multi-frame animation; and control logic configured to control operation of the microdisplay in response to signals generated by a host controller of the mobile electronic device, the control logic comprising executable instructions to display the animation on the microdisplay by: commencing display of the animation when signals representing a start command generated by the host controller are detected, and repeatedly displaying the animation in the absence of detecting further signals representing commands generated by the host controller. 2. The electronic device of claim 1, wherein the host controller is configured to generate the signals representing the start command when operating in a high-power mode of operation. 3. The electronic device of claim 2, wherein the host controller is configured to generate the signals representing the start command after transitioning to the high-power mode of operation following operation for a period in a low-power mode of operation. 4. The electronic device of ...
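The control logic described in claim 1 (start on a host command, then loop the buffered animation autonomously) can be sketched as a small controller; the class and method names are illustrative:

```python
class DisplayController:
    """Once started by the host, replays a buffered animation without
    needing any further host commands, letting the host sleep."""

    def __init__(self, frame_buffer):
        self.frames = frame_buffer
        self.index = 0
        self.running = False

    def on_start_command(self):
        """Detected start command from the host controller."""
        self.running = True
        self.index = 0

    def tick(self):
        """Called once per display refresh; returns the frame to show,
        wrapping around so the animation repeats."""
        if not self.running:
            return None
        frame = self.frames[self.index]
        self.index = (self.index + 1) % len(self.frames)
        return frame
```

The point of the design is the power saving: the host issues one command and can then drop to a low-power mode while the animation keeps looping.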

Publication date: 05-03-2015

Run-time techniques for playing large-scale cloud-based animations

Number: US20150062131A1
Assignee: ToyTalk Inc

Various of the disclosed embodiments relate to systems and methods for providing animated multimedia, e.g. animated shows, to an audience over a network. Particularly, some embodiments provide systems and methods for generating and providing audio, animation, and other experience-related information so that users may readily experience the content in a seamless manner (e.g., as an audience member watching a show, playing a video game, etc.). Various embodiments animate “to the audience” based, e.g., on what content the audience is consuming. The animations may be generated in real-time from constituent components and assets in response to user behavior.

Publication date: 22-05-2014

INTERACTIVE SCOREKEEPING AND ANIMATION GENERATION

Number: US20140139531A1
Assignee: EyezOnBaseball, LLC

Techniques for interactive scorekeeping and animation generation are described, including evaluating a play to form an event datum associated with execution of the play, using the event datum to form an event packet, generating an animation using the event packet, the animation being associated with the execution of the play, and presenting the animation on an endpoint. 1. (canceled)2. A method , comprising:receiving at a user interface data associated with a characteristic associated with execution of a play in a game and data associated with a setting associated with an animation sequence;generating an event packet based on the characteristic associated with the execution of the play, the event packet comprising a first tag associated with a first animation and a second tag associated with a second animation;generating the animation sequence using the first animation and the second animation, the animation sequence being configured to be modified based on the setting associated with the animation sequence; andcausing presentation of the animation sequence at an endpoint, the animation sequence comprising an avatar representing a player in the game and configured to be displayed using one or more modifiable viewing angles.3. The method of claim 2 , further comprising:updating a score of the game based on the characteristic associated with the execution of the play; andcausing presentation of the animation sequence and the score at the endpoint.4. The method of claim 2 , further comprising:determining a result of the play based on the characteristic associated with the execution of the play; andcausing presentation of another animation at the endpoint, the another animation associated with the result of the play.5. The method of claim 2 , wherein the setting associated with the animation sequence is associated with a characteristic of the avatar.6. 
The method of claim 2 , wherein the setting associated with the animation sequence is associated with an environmental ...
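A sketch of the event-packet flow described above: characteristics of a play map to tags, and tags select the animations that make up the sequence. Both lookup tables are invented illustrations:

```python
def event_packet(play_characteristics, tag_table):
    """Build an event packet whose tags identify animations for the play;
    characteristics without a known tag are skipped."""
    return {"tags": [tag_table[c] for c in play_characteristics if c in tag_table]}

def animation_sequence(packet, animations):
    """Resolve the packet's tags into the ordered animation sequence."""
    return [animations[t] for t in packet["tags"]]
```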

Publication date: 01-03-2018

SYSTEM AND METHOD OF BANDWIDTH-SENSITIVE RENDERING OF A FOCAL AREA OF AN ANIMATION

Number: US20180061084A1
Assignee:

Individual images for individual frames of an animation may be rendered to include individual focal areas. A focal area may include one or more of a foveal region corresponding to a gaze direction of a user, an area surrounding the foveal region, and/or other components. The foveal region may comprise a region along the user's line of sight that permits high visual acuity with respect to a periphery of the line of sight. A focal area within an image may be rendered based on parameter values of rendering parameters that are different from parameter values for an area outside the focal area. 1. A system configured for bandwidth-sensitive rendering of a focal area of an animation presented on a display, wherein the animation includes a sequence of frames, the sequence of frames including a first frame, the system comprising: obtain state information describing state of a virtual space, the state at an individual point in time defining one or more virtual objects within the virtual space and their positions; determine a field of view of the virtual space, the frames of the animation being images of the virtual space within the field of view, such that the first frame is an image of the virtual space within the field of view at a point in time that corresponds to the first frame; determine a focal area within the field of view, the focal area including a foveal region corresponding to a gaze direction of a user within the field of view and an area surrounding the foveal region, the gaze direction defining a line of sight, wherein the foveal region is a region along a user's line of sight that permits high visual acuity with respect to a periphery of the line of sight; and render, from the state information, individual images for individual frames of the animation, individual images depicting the virtual space within the field of view determined at individual points in time that corresponds to individual frames, the rendered images including a first image for ...
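Rendering the focal area with different parameter values can reduce, per pixel, to choosing a quality level by distance from the gaze point; the three quality levels and the two radii below are assumptions:

```python
import math

def region_quality(pixel, gaze, foveal_radius, focal_radius):
    """Choose rendering quality by distance from the gaze point: full
    quality inside the foveal region, reduced quality in the surrounding
    focal area, lowest outside it."""
    d = math.hypot(pixel[0] - gaze[0], pixel[1] - gaze[1])
    if d <= foveal_radius:
        return "high"
    if d <= focal_radius:
        return "medium"
    return "low"
```

Spending bandwidth only where the eye can resolve detail is what makes the scheme "bandwidth-sensitive".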

Publication date: 04-03-2021

CONTROL METHOD OF USER INTERFACE AND ELECTRONIC DEVICE

Number: US20210064229A1
Assignee:

A controlling method of a user interface and an electronic device are provided. A touch element includes a start area, a trigger area and a track area connecting the start area and the trigger area. The controlling method of the user interface includes following steps: entering a startup interface display mode according to the touch behavior performed on the touch element; generating continuous touch data in response to the touch behavior when the touch behavior moves from the start area to the track area and an animation trigger condition is satisfied, activating an animation mode according to the continuous touch data; and generating the continuous touch data in response to the touch behavior when the touch behavior moves from the start area to the track area and from the track area to the trigger area, and opening a user interface according to the continuous touch data. 1. A controlling method of a user interface , applied to an electronic device , the electronic device includes a touch element and a screen , the touch element includes a start area , a trigger area , and a track area connecting the start area and the trigger area , the controlling method comprising:entering a startup interface display mode according to the touch behavior performed on the touch element;generating continuous touch data in response to the touch behavior when the touch behavior moves from the start area to the track area and an animation trigger condition is satisfied, activating an animation mode according to the continuous touch data; andgenerating the continuous touch data in response to the touch behavior when the touch behavior moves from the start area to the track area and from the track area to the trigger area, and opening a user interface according to the continuous touch data.2. The controlling method of the user interface according to claim 1 , wherein the user interface includes a plurality of interface icons claim 1 , when the user interface is opened and the touch ...
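The start/track/trigger logic can be sketched as a classification of the touch path over the three areas, assuming (as a simplification) that the animation trigger condition is satisfied whenever the track area is entered:

```python
def touch_outcome(path):
    """Classify a touch path over the start/track/trigger areas:
    reaching the trigger area opens the user interface; stopping in the
    track area activates the animation mode; anything else does nothing."""
    if path and path[0] == "start":
        if "trigger" in path:
            return "open_ui"
        if "track" in path:
            return "animation_mode"
    return "none"
```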

Publication date: 02-03-2017

Systems and methods for assembling and/or displaying multimedia objects, modules or presentations

Number: US20170060857A1
Assignee: Individual

Aspects of the present innovations relate to systems and/or methods involving multimedia modules, objects or animations. According to an illustrative implementation, one method may include accepting at least one input keyword relating to a subject for the animation and performing processing associated with templates. Further, templates may generate different types of output, and each template may include components for display time, screen location, and animation parameters. Other aspects of the innovations may involve providing search results, retrieving data from a plurality of web sites or data collections, assembling information into multimedia modules or animations, and/or providing modules or animations for playback.

Publication date: 20-02-2020

ANIMATING DIGITAL GRAPHICS OVERLAID ON VISUAL MEDIA ITEMS BASED ON DYNAMIC ATTRIBUTES

Number: US20200058151A1
Author: Stukalov Dmitri

This disclosure covers methods, computer-readable media, and systems that animate a digital graphic associated with a video or other visual media item based on a detected dynamic attribute. In particular, the disclosed methods, computer-readable media, and systems detect sensor data from a client device or a motion of an object within a video or other visual media item. Based on the detected sensor data or motion of an object within a visual media item, the methods, computer-readable media, and systems overlay and animate an emoji or other digital graphic selected by a user on a video or other visual media item. 1. (canceled)2. A non-transitory computer readable medium storing instructions thereon that , when executed by at least one processor , cause a computing device to:present a visual media item within a graphical user interface of the computing device;detect a selection by a user to overlay a digital graphic on the visual media item within the graphical user interface;detect a magnitude and a direction of movement indicated by sensor data while presenting the visual media item; andpresent the digital graphic as an overlay on the visual media item with an animation effect having a characteristic according to the magnitude and the direction of the movement indicated by the sensor data.3. The non-transitory computer readable medium of claim 2 , further comprising instructions that claim 2 , when executed by the at least one processor claim 2 , cause the computing device to detect the magnitude of the movement indicated by the sensor data by analyzing the sensor data from an accelerometer claim 2 , a gyroscope claim 2 , a light sensor claim 2 , or a Global Positioning System receiver of the computing device.4. 
The non-transitory computer readable medium of claim 2 , further comprising instructions that claim 2 , when executed by the at least one processor claim 2 , cause the computing device to present the digital graphic as the overlay on the visual media item ...

Publication date: 02-03-2017

MODELING METHOD AND APPARATUS USING FLUID ANIMATION GRAPH

Number: US20170061666A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

A modeling method searches for a sequence matched to a user input using a fluid animation graph generated based on similarities among frames included in sequences included in the fluid animation graph and models a movement corresponding to the user input based on a result of the searching. Provided also is a corresponding apparatus and a method for preprocessing for such modeling. 1. A modeling method , comprising:searching for a sequence matched to a user input using a fluid animation graph generated based on similarities among frames included in sequences included in the fluid animation graph; andmodeling a movement corresponding to the user input based on a result of the searching.2. The method of claim 1 , wherein the modeling includes modeling the movement corresponding to the user input by blending sequences similar to the user input from among the sequences included in the fluid animation graph in response to the sequence matched to the user input not being retrieved.3. The method of claim 2 , wherein the modeling includes blending the sequences similar to the user input based on at least one of velocity information and form information of the sequences included in the fluid animation graph.4. The method of claim 2 , wherein the modeling comprises:searching for retrieved sequences similar to the user input from among the sequences included in the fluid animation graph;generating a blending sequence corresponding to the user input by blending the retrieved sequences; andmodeling the movement corresponding to the user input using the blending sequence.5. The method of claim 4 , wherein the searching for the retrieved sequences similar to the user input includes searching for the retrieved sequences similar to the user input based on a tag added to the sequences included in the fluid animation graph.6. 
The method of claim 4 , wherein the generating of the blending sequence comprises:extracting a blending weight based on the retrieved sequences and the user input ...

Publication date: 04-03-2021

TECHNOLOGIES FOR TIME-DELAYED AUGMENTED REALITY PRESENTATIONS

Number: US20210065456A1

Technologies for time-delayed augmented reality (AR) presentations includes determining a location of a plurality of user AR systems located within the presentation site and determining a time delay of an AR sensory stimulus event of an AR presentation to be presented in the presentation site for each user AR system based on the location of the corresponding user AR system within the presentation site. The AR sensory stimulus event is presented to each user AR system based on the determined time delay associated with the corresponding user AR system. Each user AR system generates the AR sensory stimulus event based on a timing parameter that defines the time delay for the corresponding user AR system such that the generation of the AR sensory stimulus event is time-delayed based on the location of the user AR system within the presentation site. 1.-25. (canceled) 26. A server to provide augmented reality (AR) presentations, the server comprising: memory; and identify a time-delayable AR sensory stimulus event of an AR presentation to be provided to a first user system at a first location and to a second user system at a second location; determine a first time delay of the AR sensory stimulus event for the first user system based on a position of the first location relative to a point of origin of the AR sensory stimulus event; determine a second time delay of the AR sensory stimulus event for the second user system based on a position of the second location relative to the point of origin, the second time delay different than the first time delay, the second time delay to be longer than a duration for a real-world sensory stimulus corresponding to the AR sensory stimulus event to traverse a distance between the second location and the point of origin according to laws of physics; and provide the AR sensory stimulus event to the first and second user systems to cause the first and second user systems to present the AR sensory stimulus event at different points ...

Publication date: 03-03-2016

EXPORTING ANIMATIONS FROM A PRESENTATION SYSTEM

Number: US20160065992A1
Author: Krishna Om

A user input mechanism is displayed within a presentation system that allows a user to specify a certain portion of a selected slide (in a slide presentation), that has animations applied to it, that is to be exported in a selected export format. Information describing the specified portion of the selected slide, and information describing the animations applied to that portion, is obtained. An export file is generated with the specified portions of the slide, and the corresponding animations, in the selected export format. 1. A computing system , comprising:an object identifier component that generates an object selection user input mechanism that is actuated to select a subset of objects from a plurality of objects on a presentation display, the subset of objects having corresponding animations; andan export format selector component that generates a format selection user input mechanism that is actuated to select an export format for exporting the selected subset of objects and corresponding animations, and that invokes an export format generation engine to generate an export file in the selected export format, the export file including the selected subset of objects and corresponding animations in the selected export format.2. The computing system of and further comprising:a presentation system that generates slide presentation user input mechanisms that are actuated to generate the presentation display.3. The computing system of wherein the object identifier component and the export format selector component are part of the presentation system.4. The computing system of wherein the export format generation engine comprises:an internal export format generation engine that is internal to the presentation system.5. The computing system of wherein the export format generation engine comprises:an external export format generation engine that is external to the presentation system.6. The computing system of wherein the export format generation engine comprises:a ...

Publication date: 17-03-2022

VEHICLE CONTROLLER, VEHICLE DISPLAY SYSTEM, AND VEHICLE DISPLAY CONTROL METHOD

Number: US20220083305A1
Author: Ino Yuko

A vehicle controller is configured to control a plurality of display units disposed in a vehicle interior. The vehicle controller includes: a single display processing unit that is configured to perform preprocessing on a plurality of link images for the plurality of display units such that each of the plurality of link images is able to be output to a corresponding one of the plurality of display units, the plurality of link images being to be displayed in a linked manner across the plurality of display units; and a plurality of image output units each of which is configured to output a respective one of the plurality of link images to the corresponding one of the plurality of display units after the plurality of link images are subject to the preprocessing by the single display processing unit. 1. A vehicle controller that is configured to control a plurality of display units disposed in a vehicle interior, the vehicle controller comprising: a single display processing unit that is configured to perform preprocessing on a plurality of link images for the plurality of display units such that each of the plurality of link images is able to be output to a corresponding one of the plurality of display units, the plurality of link images being to be displayed in a linked manner across the plurality of display units; and a plurality of image output units each of which is configured to output a respective one of the plurality of link images to the corresponding one of the plurality of display units after the plurality of link images are subject to the preprocessing by the single display processing unit, wherein each of the plurality of image output units is configured to: specify a region of the respective one of the plurality of link images in the mixed image that is subject to the preprocessing by the single display processing unit; and output the respective one of the plurality of link images to the corresponding one of the plurality of display units, and wherein the single display processing unit is ...

Publication date: 08-03-2018

Coded Image Display and Animation System

Number: US20180068608A1

A coded image display and animation system with an image decoding panel and a coded image panel, one such panel comprising an actuated panel. The image decoding panel comprises a shutter element panel or a lenticular panel. A progressive drive mechanism, such as a sloped surface, advances the actuated panel progressively, and a rapid-return mechanism, such as a biasing member, rapidly retracts the actuated panel when it is periodically freed to move in a longitudinally retracting direction. The sloped surface can be that of a wheel, and the actuated panel can be freed to retract by a ridge in the wheel. The sloped surface can be a helical formation in a rotatable member, and the rapid-return mechanism can be a longitudinal formation in the rotatable member contiguous with the helical formation. The actuated panel can be advanced and retracted by a distance equal to the width of one image cluster. 1. A coded image display and animation system comprising:an image decoding panel;a coded image panel retained in facing juxtaposition with the image decoding panel;wherein either the image decoding panel or the coded image panel comprises an actuated panel;a progressive drive mechanism operative to advance the actuated panel progressively in a first, longitudinally advancing direction over an advancing distance; anda rapid-return mechanism operative to retract the actuated panel in a second, longitudinally retracting direction over a retracting distance;wherein the rapid-return mechanism retracts the actuated panel at a speed greater than a speed at which the progressive drive mechanism advances the actuated panel.2. The coded image display and animation system of wherein the image decoding panel comprises a shutter element panel.3. The coded image display and animation system of wherein the image decoding panel comprises a lenticular panel.4. The coded image display and animation system of wherein the retracting distance is approximately equal to the advancing distance.5. 
...

Publication date: 05-06-2014

ELECTRONIC DEVICE AND IMAGE PROCESSING METHOD

Number: US20140153836A1
Author: TOBITA Yoshikata

According to one embodiment, an electronic device includes an analyzer, an image selector, an effect selector and a generator. The analyzer is configured to analyze an attribute of each of a plurality of images. The image selector is configured to select, from the plurality of images, a first image which comprises a target and a second image which does not comprise the target, based on the attribute. The effect selector is configured to select a first effect, and to select a second effect. The generator is configured to generate a moving picture by compositing a third image obtained by applying the first effect to the first image, and a fourth image obtained by applying the second effect to the second image. 1. An electronic device comprising:an analyzer configured to analyze an attribute of each of a plurality of images;an image selector configured to select, from the plurality of images, a first image which comprises a target and a second image which does not comprise the target, based on the attribute;an effect selector configured to select a first effect, and to select a second effect; anda generator configured to generate a moving picture by compositing a third image obtained by applying the first effect to the first image, and a fourth image obtained by applying the second effect to the second image.2. The electronic device of claim 1 , further comprising a classification module configured to classify the plurality of images into a first group comprising first images claim 1 , and a second group comprising second images claim 1 ,wherein the generator is configured to generate the third image by applying the first effect to the first images in the first group, and to generate the fourth image by applying the second effect to the second images in the second group.3. The electronic device of claim 2 , further comprising a storage configured to store a first scenario wherein a plurality of first effects are defined claim 2 , and a second scenario wherein a ...

Publication date: 17-03-2016

Animation arrangement

Number: US20160078661A1
Author: Danut-Petru IURASCU
Assignee: Continental Automotive GmbH

An animation arrangement for a vehicle is provided. The animation arrangement has a display device, configured to display an animation based on an instruction set, a storage device configured to store a first instruction set and a second instruction set for displaying the same animation on the display device, and a calculating device configured to select one of the first and second instruction sets for displaying an animation on the display device. The calculating device is configured to select one of the first and second instruction sets for displaying an animation on the display device based on a load parameter of the calculating device.

Publication date: 15-03-2018

System and method for display object bitmap caching

Number: US20180075638A1
Author: Raymond Cook
Assignee: Electronic Arts Inc

A system and method for recursively rendering, caching, and/or retrieving a display object bitmap is provided. In some implementations, an image may be rendered on a client computing platform using an image list of one or more bitmap objects. The one or more object bitmaps may be generated in response to obtaining information defining a vector image in an image frame in an animation. An image list may be maintained for an image to be rendered based on the vector image of a frame of animation and/or some or all of the frames in the animation. The image list may store one or more references to one or more respective bitmap objects that are associated with the image to be rendered.

Publication date: 15-03-2018

Method, System, and Apparatus for Operating a Kinetic Typography Service

Number: US20180077362A1
Assignee: The Aleph Group Pte., Ltd.

The present inventive subject matter is drawn to method, system, and apparatus for generating video content related to an audio media asset. In one aspect of this invention, a method for operating a kinetic typography service on the audio media asset stored in a computer memory is presented, where a set of subtitle items belonging to the audio media asset is obtained; a plurality of preset animation items are produced; generating a plurality of video frames by applying a transformation effect of a preset animation to a corresponding subtitle item; and producing a video media asset. 1. A computer-implemented method for operating a kinetic typography service on an audio media asset , comprising the steps of:providing access to a computer memory configured to store an audio media asset;determining whether a subtitle file (SRT) of the audio media asset exists;in response to determining the SRT of the audio media asset exists, obtaining a set of subtitle items from the SRT;in response to determining the SRT of the audio media asset does not exist, obtaining the set of subtitle items manually;producing a preset animation corresponding to each subtitle item in the set of subtitle items;generating a plurality of video frames by applying a transformation effect of the preset animation to a corresponding subtitle item; andconnecting the plurality of video frames to produce a video media asset.2. The method of claim 1 , further comprising the steps of:determining if a background image exists for each subtitle item of the set of subtitle items; andin response to determining the background image exists for a subtitle item, adding the background image to a preset animation set;3. The method of claim 2 , wherein the preset animation set comprises a word animation claim 2 , a group of words animation claim 2 , or a line animation.4. 
The method of claim 1 , wherein producing a preset animation corresponding to each subtitle item in the set of subtitle items comprising the step of ...

Publication date: 18-03-2021

System and Method for Talking Avatar

Number: US20210082452A1
Author: Woffenden Carl Adrian

Aspects of this disclosure provide techniques for generating a viseme and corresponding intensity pair. In some embodiments, the method includes generating, by a server, a viseme and corresponding intensity pair based at least on one of a clean vocal track or corresponding transcription. The method may include generating, by the server, a compressed audio file based at least on one of the viseme, the corresponding intensity, music, or visual offset. The method may further include generating, by the server or a client end application, a buffer of raw pulse-code modulated (PCM) data based on decoding at least a part of the compressed audio file, where the viseme is scheduled to align with a corresponding phoneme. 1. A method for generating a viseme and corresponding intensity pair , comprising:generating, by a server, a viseme and corresponding intensity pair based at least on one of a clean vocal track or corresponding transcription;generating, by the server, a compressed audio file based at least on one of the viseme, the corresponding intensity, music, or visual offset; andgenerating, by the server or a client end application, a buffer of raw pulse-code modulated (PCM) data based on decoding at least a part of the compressed audio file,wherein the viseme is scheduled to align with a corresponding phoneme.2. The method of further comprising storing the viseme and corresponding intensity pair in an intermediary file.3. The method of claim 2 , wherein the viseme and corresponding intensity pair is stored in the intermediary file with specific timings for the viseme and corresponding intensity.4. The method of claim 1 , wherein the frequency of decoding the compressed audio file is based on the size of the buffer.5. The method of further comprising feeding claim 1 , by the server claim 1 , the PCM data to a user equipment.6. The method of further comprising transmitting claim 1 , by the server claim 1 , the PCM data to a user equipment upon request.7. The method of ...

Publication date: 18-03-2021

Generating and rendering motion graphics effects based on recognized content in camera view finder

Number: US20210084233A1
Assignee: Google LLC

Systems and methods are described for providing co-presence in an augmented reality environment. The method may include receiving a visual scene within a viewing window depicting a multi-frame real-time visual scene captured by a camera onboard an electronic device associated with the augmented reality environment, identifying a plurality of elements of the visual scene, detecting at least one graphic indicator associated with at least one of the plurality of elements, detecting at least one boundary associated with the at least one element, and generating, in the viewing window and based on the detection of the at least one graphic indicator, Augmented Reality (AR) motion graphics within the detected boundary. In response to determining that content related to the at least one element is available, the method may include retrieving the content and visually indicating an AR tracked control on the at least one element within the viewing window.

Publication date: 24-03-2016

SYSTEMS AND METHODS FOR THE CONVERSION OF IMAGES INTO PERSONALIZED ANIMATIONS

Number: US20160086365A1
Assignee: Weboloco Limited

Systems and methods for converting an image into an animated image or video, including: an algorithm for receiving the image from a user via an electronic device; an algorithm for applying a selected template to the image, wherein the selected template imparts selected portions of the image with motion or overlays selected objects on the image, thereby providing an animated image or video; and an algorithm for displaying the animated image or video to the user via the electronic device. The applying the selected template to the image is performed by software resident on the electronic device or remote from the electronic device. 1. A method for converting an image into an animated image or video , comprising:receiving the image from a user via an electronic device;applying a selected template to the image, wherein the selected template imparts selected portions of the image with motion or overlays selected objects on the image, thereby providing an animated image or video; anddisplaying the animated image or video to the user via the electronic device.2. The method of claim 1 , wherein the applying the selected template to the image is performed by software resident on the electronic device or remote from the electronic device.3. The method of claim 1 , wherein the electronic device comprises one of a personal computer (PC) claim 1 , a tablet computer claim 1 , a smartphone claim 1 , a web access device claim 1 , and a cloud access device.4. The method of claim 1 , wherein the selected template comprises a plurality of templates that form a “story.”5. The method of claim 1 , wherein the applying the selected template to the image comprises identifying one or more key features in the image.6. The method of claim 1 , wherein the applying the selected template to the image comprises extracting one or more key features from the image.7. 
The method of claim 1 , wherein the applying the selected template to the image comprises manipulating one or more key features from ...

Publication date: 12-03-2020

Method and System for Virtual Sensor Data Generation with Depth Ground Truth Annotation

Number: US20200082622A1

Methods and systems for generating virtual sensor data for developing or testing computer vision detection algorithms are described. A system and a method may involve generating a virtual environment. The system and the method may also involve positioning a virtual sensor at a first location in the virtual environment. The system and the method may also involve recording data characterizing the virtual environment, the data corresponding to information generated by the virtual sensor sensing the virtual environment. The system and the method may further involve annotating the data with a depth map characterizing a spatial relationship between the virtual sensor and the virtual environment. 1. A method, comprising: performing a process to develop, test or train a computer vision detection algorithm by modeling a real-world environment with a virtual environment, the process comprising: generating, by a processor, a virtual environment with a virtual sensor therein; positioning, by the processor, the virtual sensor on a mobile virtual object in the virtual environment; and generating, by the processor, simulation-generated data characterizing the virtual environment as perceived by the virtual sensor as the mobile virtual object and the virtual sensor move around in the virtual environment, wherein the simulation-generated data represents information collected by one or more real-world sensors in the real-world environment. 2. The method of claim 1, wherein the virtual environment comprises a plurality of virtual objects distributed therewithin, each of the virtual objects either stationary or mobile relative to the virtual sensor, and each of the virtual objects sensible by the virtual sensor. 3. The method of claim 2, wherein the spatial relationship comprises distance information of one or more of the plurality of virtual objects with respect to the virtual sensor. 4. The method of claim 2, wherein the virtual sensor comprises a virtual ...

Publication date: 19-06-2014

Display apparatus and method for processing image thereof

Number: US20140168252A1
Assignee: SAMSUNG ELECTRONICS CO LTD

A display apparatus and an image processing method thereof are provided. The method includes displaying an image on a display screen, acquiring weather information through at least one of a sensor and an external server, and providing an image effect corresponding to the weather information to the displayed image by processing the image based on the acquired weather information.

Publication date: 19-06-2014

IMAGE DEFORMATION METHOD AND APPARATUS USING DEFORMATION AXIS

Number: US20140168270A1

The present invention relates to an image deformation method. An image deformation method using a deformation axis according to the present invention includes deforming the deformation axis based on deformation energy of points according to a deformation of at least one deformation axis including a plurality of points predetermined with respect to an image to be deformed; and deforming the image using a plurality of segments of the deformation axis divided based on points of the deformed deformation axis. According to the present invention, an image deformation method using a deformation axis is performed based on a freeform deformation axis (FDA) that is independent from a type of an original object and thus, may be more advantageous and may be utilized in combination with various types of deformation methods. Deformation of an image may be performed intuitively and in real time and thus, may be easily used by general users. 1. An image deformation method using a deformation axis , the method comprising:deforming the deformation axis based on deformation energy of points according to a deformation of at least one deformation axis including a plurality of points predetermined with respect to an image to be deformed; anddeforming the image using a plurality of segments of the deformation axis divided based on points of the deformed deformation axis.2. The method of claim 1 , wherein the deforming of the deformation axis deforms the deformation axis based on the deformation energy of points calculated using a length of a segment of the deformation axis divided based on the plurality of points.3. The method of claim 2 , wherein the deforming of the deformation axis deforms the deformation axis based on the deformation energy of points calculated using Laplacian coordinates about the plurality of points.4. The method of claim 1 , wherein the deforming of the deformation axis deforms the deformation axis to minimize the deformation energy of points in the case of ...

Publication date: 31-03-2022

User interfaces for media capture and management

Number: US20220103758A1
Assignee: Apple Inc

The present disclosure generally relates to user interface for capturing and managing media (e.g., photo media, video media). In some examples, user interfaces for managing the file format of media (e.g., photo, video media) are described. In some examples, user interfaces for storing media (photo media (e.g., a sequences of image, a single still image), video media) are described.

Publication date: 21-03-2019

Devices, Methods, and Graphical User Interfaces for Messaging

Number: US20190087082A1

An electronic device with improved methods and interfaces for messaging displays a messaging user interface that includes a conversation transcript of a messaging session between a user of the electronic device and at least one other user. A first message that includes foreign language text is received from a remote device that corresponds to another user included in the messaging session. In response to receiving the first message, the electronic device displays the first message in the conversation transcript. In response to detecting a first input at a location that corresponds to the foreign language text in the first message: in accordance with a determination that the first input meets translation criteria, the electronic device performs a foreign-language-text-translation action; and in accordance with a determination that the first input does not meet the translation criteria, the electronic device forgoes performance of the foreign-language-text-translation action. 1. A method, comprising: displaying a messaging user interface on the display, the messaging user interface including a conversation transcript of a messaging session between a user of the electronic device and at least one other user; receiving a first message within the messaging session from an electronic device that corresponds to another user included in the messaging session, wherein the first message includes foreign language text; in response to receiving the first message, displaying the first message in the conversation transcript; detecting a first input at a location that corresponds to the foreign language text in the first message; and in response to detecting the first input: in accordance with a determination that the first input meets translation criteria, performing a foreign-language-text-translation action; and in accordance with a determination that the first input does not meet the translation criteria, forgoing performance of the foreign-language-text-translation action. ...

Publication date: 21-03-2019

Method for displaying an animation during the starting phase of an electronic device and associated electronic device

Number: US20190087200A1
Author: Julien Bellanger
Assignee: SAGEMCOM BROADBAND SAS

A method for displaying an animation by a display chip of an electronic device that includes a non-volatile memory and a random-access memory. The display chip includes a video output register and a display register. The method includes a first, static programming phase comprising: configuring the video output register; writing n images into the memory, n being an integer greater than or equal to two; writing a plurality of nodes into the memory, such that each node includes the address in the memory of at least one portion of an image as well as the address of the following node, the last node including the address in the random-access memory of the first node; and configuring the display register. The method also includes a second phase in which the n images are read by the display chip via the display register to display the animation.
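The node layout described above (each node holding the address of an image portion plus the address of the following node, with the last node pointing back to the first) is a circular singly linked list, which lets the display chip loop the animation by simply following next-pointers. A sketch in Python, with class and field names invented for illustration:

```python
class Node:
    """One descriptor in the animation ring (names are illustrative)."""
    def __init__(self, image_addr):
        self.image_addr = image_addr  # address of an image (or image portion)
        self.next = None              # address of the following node


def build_ring(image_addrs):
    """Link the nodes into a ring; the last node wraps back to the first."""
    nodes = [Node(a) for a in image_addrs]
    for i, node in enumerate(nodes):
        node.next = nodes[(i + 1) % len(nodes)]
    return nodes[0]


def play(first, frames):
    """Walk the ring, returning the frame sequence the chip would display."""
    node, out = first, []
    for _ in range(frames):
        out.append(node.image_addr)
        node = node.next
    return out
```

Because the structure is written once during the static programming phase, the chip never needs reprogramming to keep the animation running: the traversal itself produces the loop.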

Publication date: 05-05-2022

Method for displaying interactive content, electronic device, and storage medium

Number: US20220137756A1
Author: Xiao Tang

The present disclosure relates to a method for displaying interactive content, an electronic device, a storage medium, and a computer program product. The method includes: receiving a user operation instruction for created content; analyzing the user operation instruction; determining the interactive content corresponding to the user operation instruction, where different user operation instructions correspond to different interactive contents; and displaying the interactive content, where the interactive content indicates the user's preference level for the created content, and different interactive contents indicate different preference levels.
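Since each user operation instruction maps to a distinct interactive content indicating a distinct preference level, the core of the method can be sketched as a lookup table. The instruction names, contents, and level values below are invented purely for illustration:

```python
# Hypothetical mapping: operation instruction -> (interactive content, preference level)
INTERACTIONS = {
    "single_tap": ("like_animation", 1),
    "double_tap": ("heart_burst", 2),
    "long_press": ("fireworks", 3),
}


def display_interactive_content(instruction):
    """Resolve an instruction to the content to display and its preference level."""
    content, level = INTERACTIONS.get(instruction, ("none", 0))
    return {"content": content, "preference_level": level}
```

Unrecognized instructions fall through to a neutral entry, so the display step never has to special-case unknown gestures.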

Publication date: 30-03-2017

PROGRAMMABLE SYMBOL ANIMATION PRE-PROCESSOR FOR BUILDING AUTOMATION GRAPHICS

Number: US20170090680A1
Assignee: SIEMENS SCHWEIZ AG

Management systems, methods and mediums are provided for displaying graphics using a programmable symbol animation pre-processor. One method includes identifying a symbol associated with a building graphic, identifying a symbol property to be animated, and determining whether the symbol property is associated with a script. When it is, the method identifies a plurality of different data points referenced in the script, where each data point corresponds to the same device or a respective device in the building. The method identifies a respective value for each identified data point as received from a management system operably connected to each of the plurality of devices, identifies an operation in the script that corresponds to an evaluation of the values of the identified data points, generates a first evaluation result based on the operation, and displays a graphical representation of the symbol based on the first evaluation result and in association with the building graphic.

1. A method in a data processing system for displaying graphics, the method comprising:
identifying a symbol associated with a building graphic;
identifying a property of the symbol to be animated;
determining whether the identified symbol property is associated with a script; and
in response to determining that the identified symbol property is associated with a script:
identifying a plurality of different data points referenced in the script, each data point corresponding to a same one or a respective one of a plurality of devices in a building represented by the building graphic;
identifying a respective value for each identified data point, each value received from a management system operably connected to each of the plurality of devices;
identifying an operation in the script that corresponds to an evaluation of the values of the identified data points;
generating a first evaluation result based on the operation; and
displaying a graphical representation of the symbol based on the first evaluation result and in association with the building graphic.
...
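The pre-processor step above — collect the values of the data points a script references, apply the script's operation, and derive the symbol's graphical state from the result — can be sketched as follows. The operation names and the threshold rule mapping the result to a state are assumptions for illustration:

```python
# Illustrative operations a symbol script might request over data-point values.
OPERATIONS = {
    "max": max,
    "min": min,
    "avg": lambda values: sum(values) / len(values),
}


def evaluate_symbol(script_op, datapoint_ids, read_value, alarm_threshold):
    """Evaluate the script's operation over live data-point values
    and pick a symbol state (the threshold rule is assumed)."""
    values = [read_value(dp) for dp in datapoint_ids]
    result = OPERATIONS[script_op](values)
    state = "alarm" if result > alarm_threshold else "normal"
    return result, state


# Usage: three temperature sensors feeding one animated symbol.
readings = {"temp1": 21.0, "temp2": 35.5, "temp3": 22.5}
result, state = evaluate_symbol("max", list(readings), readings.get, 30.0)
```

Keeping the operation table separate from the evaluation loop mirrors the "programmable" aspect: new script operations can be registered without touching the animation pipeline.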

Publication date: 05-05-2022

SYSTEMS AND METHODS FOR INTEGRATING AND USING AUGMENTED REALITY TECHNOLOGIES

Number: US20220139049A1
Assignee:

The present disclosure generally relates to systems and methods for creating, publishing, accessing, and sharing AR, VR, and/or XR content. In embodiments, users may collaborate in an XR environment. In embodiments, a system disclosed herein includes a backend module and a user client that permits creation and/or viewing of XR content. Embodiments enable users to create customized XR content that is published to users based on predetermined times and/or locations. Embodiments provide for training and collaborative XR environments accessed by multiple users simultaneously.
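Publishing XR content "based on predetermined times and/or locations" amounts to a predicate check before the content is surfaced to a user. A minimal sketch, where the field names, the planar distance rule, and the 50 m radius are all assumptions:

```python
import math
from datetime import datetime


def is_published(content, now, user_pos, radius_m=50.0):
    """True if the content's time window is open and the user is close enough.

    Distance is planar Euclidean here for illustration; a real system
    would use geodesic coordinates.
    """
    in_window = content["start"] <= now <= content["end"]
    dx = user_pos[0] - content["pos"][0]
    dy = user_pos[1] - content["pos"][1]
    in_range = math.hypot(dx, dy) <= radius_m
    return in_window and in_range


# Usage: content published 09:00-17:00 at the origin, user ~14 m away at noon.
content = {"start": datetime(2022, 5, 1, 9),
           "end": datetime(2022, 5, 1, 17),
           "pos": (0.0, 0.0)}
visible = is_published(content, datetime(2022, 5, 1, 12), (10.0, 10.0))
```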

Publication date: 30-03-2017

PERCEPTUAL COMPUTING INPUT TO DETERMINE POST-PRODUCTION EFFECTS

Number: US20170092321A1
Author: Anderson Glen J.
Assignee: Intel Corporation

Systems, apparatuses and methods may provide for detecting an event in visual content including one or more of a video or a still image and searching an effects database for a post-production effect that corresponds to the event. Additionally, the post-production effect may be automatically added to the visual content. In one example, adding the post-production effect includes adjusting one or more of a display backlight level or a display position setting of a device that presents the visual content.

1. A system comprising:
a display to present visual content including one or more of a video or a still image;
an effects database; and
a perceptual effects apparatus including:
an event manager to detect an event in the visual content and search the effects database for a post-production effect that corresponds to the event, and
a device manager to automatically add the post-production effect to the visual content.
2. The system of claim 1, wherein the device manager includes one or more of a backlight controller to adjust a backlight level of the display or a position controller to adjust a position setting of the display.
3. The system of claim 2, wherein the display position setting is to be adjusted to create a screen-shake effect.
4. The system of claim 1, wherein the event manager includes a sensor interface to receive sensor data associated with a subject depicted in the visual content, and wherein the event is to be detected based on the sensor data.
5. The system of claim 1, wherein the event manager includes a content analyzer to conduct an analysis of the visual content, and wherein the event is to be detected based on the analysis and the analysis includes one or more of a video analysis or an audio analysis.
6. The system of claim 1, further including a cartoon generator to convert the visual content into a cartoon before addition of the post-production effect to the visual content.
7. An apparatus comprising:
an event manager to ...
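The event-manager/device-manager split described above reduces to: detect an event, look it up in an effects database, and apply the matching display adjustment. A sketch with an invented event set; the screen-shake-via-position and backlight adjustments follow the abstract, but the specific events and magnitudes are assumptions:

```python
# Hypothetical effects database: event -> display adjustment.
EFFECTS_DB = {
    "explosion": {"position_offset": (4, 0)},  # screen-shake via position setting
    "flash":     {"backlight_delta": 30},      # backlight-level adjustment
}


def add_post_production_effect(event, display):
    """Apply the effect matching the event to a display-state dict."""
    effect = EFFECTS_DB.get(event)
    if effect is None:
        return display  # no matching post-production effect
    updated = dict(display)
    if "position_offset" in effect:
        ox, oy = effect["position_offset"]
        x, y = updated["position"]
        updated["position"] = (x + ox, y + oy)
    if "backlight_delta" in effect:
        updated["backlight"] += effect["backlight_delta"]
    return updated
```

Returning a new dict rather than mutating in place keeps the original display state available for restoring the screen after a transient effect like a shake.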

Publication date: 26-06-2014

Apparatus for simultaneously storing area selected in image and apparatus for creating an image file by automatically recording image information

Number: US20140176566A1
Author: Soo-ho Cho
Assignee: SAMSUNG ELECTRONICS CO LTD

An apparatus to collectively store areas selected in an image includes an image-editing unit to load a standard image file, display a standard image based on the standard image file, and enable a user to edit the standard image; a zooming unit to zoom into and away from the position on the standard image indicated by a marker of an input unit; and a selected-image-managing unit to collectively store one or more areas selected with the input unit as one or more corresponding image files.
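Collectively storing every selected area as its own image file can be sketched as cropping each selection from the standard image in a single pass. Here the image is a plain row-major 2D list and the saved "files" are returned as a dict keyed by generated filenames, both simplifications:

```python
def crop(image, area):
    """Extract a rectangle; area = (left, top, width, height)."""
    left, top, w, h = area
    return [row[left:left + w] for row in image[top:top + h]]


def store_selected_areas(image, selections):
    """Collectively store every selected area as its own 'image file'.

    Filenames are generated for illustration; a real apparatus would
    encode each crop into an actual image format.
    """
    return {f"selection_{i}.img": crop(image, area)
            for i, area in enumerate(selections)}
```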

Publication date: 26-06-2014

PRIORITIZED RENDERING OF OBJECTS IN A VIRTUAL UNIVERSE

Number: US20140176567A1

Approaches for prioritized rendering of objects in a virtual universe are provided. In one embodiment, there is a prioritization tool containing a plurality of components configured to: determine a priority of each of a set of objects in a commercial area of the virtual universe, the commercial area having a plurality of virtual retail stores; assign a priority to each of the plurality of virtual stores in the commercial area based on the priority of each of the set of objects in the virtual universe; and download and cache each of the objects from the set of virtual stores that are outside a rendering radius of the avatar, based on the relative priorities of each of the set of virtual stores in the virtual universe.

1. A method for prioritized rendering of objects in a virtual universe, comprising:
determining a priority of each of a set of objects in a commercial area of the virtual universe, the commercial area having a plurality of virtual retail stores;
assigning a priority to each of the plurality of virtual stores in the commercial area based on the priority of each of the set of objects in the virtual universe;
determining a rendering radius of the avatar traversing the commercial area of the virtual universe;
identifying a set of virtual stores from the plurality of virtual stores in the virtual universe that is outside the rendering radius of the avatar; and
downloading and caching within a cache each of the objects from the set of virtual stores that are outside the rendering radius of the avatar, based on the relative priorities of each of the set of virtual stores that are outside the rendering radius of the avatar, wherein each of the objects from a first virtual store of the set of virtual stores that is outside of the rendering radius of
...
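The approach above — rank each store outside the avatar's rendering radius by the priority of its objects, then download and cache in that order — can be sketched as a filter plus a sort. The field names and the priority rule (a store's priority is the maximum priority of its objects) are assumptions for illustration:

```python
import math


def precache_order(stores, avatar_pos, radius):
    """Return the ids of stores outside the rendering radius,
    highest-priority first, as the order to download and cache."""
    def dist(pos):
        return math.hypot(pos[0] - avatar_pos[0], pos[1] - avatar_pos[1])

    outside = [s for s in stores if dist(s["pos"]) > radius]
    # Assumed rule: a store inherits the max priority of its objects.
    outside.sort(key=lambda s: max(s["object_priorities"]), reverse=True)
    return [s["id"] for s in outside]
```

Stores inside the radius are excluded because they are rendered immediately anyway; pre-caching budget goes to the out-of-radius stores the avatar is most likely to value next.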
