Total found: 157. Displayed: 157.
Publication date: 05-09-2017

Mixed reality interactions

Number: US0009754420B2

Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. A selected interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.
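
The mode-selection step reads like a lookup of a stored profile keyed by interaction context. A minimal sketch of that idea, with invented object names, contexts, and modes (the patent itself specifies no code):

```python
# Hypothetical sketch of context-driven interaction-mode selection.
# Each profile maps an interaction context to the mode to use for it.
OBJECT_PROFILES = {
    "coffee_table": {"living_room_game": "game_board", "default": "info_panel"},
    "wall": {"presentation": "projection_surface", "default": "ignore"},
}

def select_interaction_mode(object_id: str, context: str) -> str:
    """Pick an interaction mode for an identified object given the
    current mixed-reality context, falling back to a default mode."""
    profile = OBJECT_PROFILES.get(object_id, {})
    return profile.get(context, profile.get("default", "ignore"))

if __name__ == "__main__":
    print(select_interaction_mode("coffee_table", "living_room_game"))  # game_board
    print(select_interaction_mode("wall", "unknown_context"))           # ignore
```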

Publication date: 25-12-2014

GESTURE TOOL

Number: US20140380254A1
Assignee: Microsoft Technology Licensing LLC

Systems, methods and computer readable media are disclosed for a gesture tool. A capture device captures user movement and provides corresponding data to a gesture recognizer engine and an application. From that, the data is parsed to determine whether it satisfies one or more gesture filters, each filter corresponding to a user-performed gesture. The data and the information about the filters are also sent to a gesture tool, which displays aspects of the data and filters. In response to user input corresponding to a change in a filter, the gesture tool sends an indication of such to the gesture recognizer engine and application, where that change occurs.
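
The filter-parsing loop described here can be illustrated with a small sketch; the filter parameters and thresholds below are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class GestureFilter:
    """One user-performed gesture, tested against captured movement data."""
    name: str
    min_speed: float          # tunable parameter, e.g. hand speed in m/s
    min_displacement: float   # tunable parameter, metres travelled

    def matches(self, speed: float, displacement: float) -> bool:
        return speed >= self.min_speed and displacement >= self.min_displacement

@dataclass
class RecognizerEngine:
    filters: list = field(default_factory=list)

    def parse(self, speed: float, displacement: float) -> list:
        """Return the names of all filters satisfied by this sample."""
        return [f.name for f in self.filters if f.matches(speed, displacement)]

engine = RecognizerEngine([GestureFilter("wave", 0.5, 0.2),
                           GestureFilter("throw", 2.0, 0.4)])
print(engine.parse(speed=2.2, displacement=0.5))  # ['wave', 'throw']
```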

Publication date: 02-12-2010

Gesture Coach

Number: US20100306712A1
Assignee: Microsoft Corporation

A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions or not know what gestures are applicable for an executing application. A user may not understand or know how to perform gestures that are applicable for the executing application. User motion data and/or outputs of filters corresponding to gestures may be analyzed to determine those cases where assistance to the user on performing the gesture is appropriate.

Publication date: 15-12-2011

CONTEXTUAL TAGGING OF RECORDED DATA

Number: US20110304774A1
Assignee: MICROSOFT CORPORATION

Embodiments are disclosed that relate to the automatic tagging of recorded content. For example, one disclosed embodiment provides a computing device comprising a processor and memory having instructions executable by the processor to receive input data comprising one or more of depth data, video data, and directional audio data, identify a content-based input signal in the input data, and apply one or more filters to the input signal to determine whether the input signal comprises a recognized input. Further, if the input signal comprises a recognized input, then the instructions are executable to tag the input data with the contextual tag associated with the recognized input and record the contextual tag with the input data.
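
A toy version of this filter-then-tag pipeline, with an invented filter and tag name:

```python
# Illustrative only: tag recorded input data when a recognizer fires.
# The filter function, signal fields, and tag name are invented.
def applause_filter(signal: dict) -> bool:
    return signal.get("audio_peak", 0.0) > 0.8 and signal.get("crowd", False)

FILTERS = {"applause": applause_filter}

def tag_input(signal: dict) -> dict:
    """Apply each filter to the content-based input signal; if one
    recognizes the input, record its contextual tag with the data."""
    tags = [name for name, f in FILTERS.items() if f(signal)]
    return {**signal, "tags": tags}

print(tag_input({"audio_peak": 0.9, "crowd": True}))
# {'audio_peak': 0.9, 'crowd': True, 'tags': ['applause']}
```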

Publication date: 21-06-2012

INTELLIGENT GAMEPLAY PHOTO CAPTURE

Number: US20120157200A1
Assignee: MICROSOFT CORPORATION

Implementations for identifying, capturing, and presenting high-quality photo-representations of acts occurring during play of a game that employs motion tracking input technology are disclosed. As one example, a method is disclosed that includes capturing, via an optical interface, a plurality of photographs of a player in a capture volume during play of the electronic game. The method further includes for each captured photograph of the plurality of captured photographs, comparing an event-based scoring parameter to an event depicted by or corresponding to the captured photograph. The method further includes assigning respective scores to the plurality of captured photographs based, at least in part, on the comparison to the event-based scoring parameter. The method further includes associating the captured photographs at an electronic storage media with the respective scores assigned to the captured photographs.
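
The scoring step lends itself to a short sketch; the event names and weights below are assumptions, not values from the patent:

```python
# Sketch of event-based photo scoring; events and weights are invented.
EVENT_SCORES = {"goal": 10, "jump": 5, "idle": 0}

def score_photos(photos: list) -> list:
    """Assign each captured photo a score from the event it depicts,
    then order the collection so the best shots come first."""
    scored = [(EVENT_SCORES.get(p["event"], 0), p["id"]) for p in photos]
    return sorted(scored, reverse=True)

photos = [{"id": "img1", "event": "idle"},
          {"id": "img2", "event": "goal"},
          {"id": "img3", "event": "jump"}]
print(score_photos(photos))  # [(10, 'img2'), (5, 'img3'), (0, 'img1')]
```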

Publication date: 02-12-2010

Gesture Shortcuts

Number: US20100306714A1
Assignee: Microsoft Corporation

Systems, methods and computer readable media are disclosed for gesture shortcuts. A user's movement or body position is captured by a capture device of a system, and is used as input to control the system. For a system-recognized gesture, there may be a full version of the gesture and a shortcut of the gesture. Where the system recognizes that either the full version of the gesture or the shortcut of the gesture has been performed, it sends an indication that the system-recognized gesture was observed to a corresponding application. Where the shortcut comprises a subset of the full version of the gesture, and both the shortcut and the full version of the gesture are recognized as the user performs the full version of the gesture, the system recognizes that only a single performance of the gesture has occurred, and indicates to the application as such.
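
One way to picture the "single performance" rule is a matcher that consumes the full gesture before testing for the shortcut; the gesture tokens here are invented:

```python
# Sketch: a shortcut that is a subset of the full gesture should count
# as one performance when the user performs the full version.
FULL_GESTURE = ["raise_hand", "swing_forward", "follow_through"]
SHORTCUT = ["raise_hand", "swing_forward"]          # subset of the full version

def performances(observed: list) -> int:
    """Count gesture performances, collapsing a shortcut match that is
    extended into the full gesture into a single event."""
    count, i = 0, 0
    while i < len(observed):
        if observed[i:i + len(FULL_GESTURE)] == FULL_GESTURE:
            count, i = count + 1, i + len(FULL_GESTURE)   # full version: one event
        elif observed[i:i + len(SHORTCUT)] == SHORTCUT:
            count, i = count + 1, i + len(SHORTCUT)       # shortcut alone: one event
        else:
            i += 1
    return count

print(performances(FULL_GESTURE))  # 1, not 2
print(performances(SHORTCUT))      # 1
```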

Publication date: 01-08-2013

MULTIPLAYER GAMING WITH HEAD-MOUNTED DISPLAY

Number: US20130196757A1
Assignee: MICROSOFT CORPORATION

A system and related methods for inviting a potential player to participate in a multiplayer game via a user head-mounted display device are provided. In one example, a potential player invitation program receives user voice data and determines that the user voice data is an invitation to participate in a multiplayer game. The program receives eye-tracking information, depth information, facial recognition information, potential player head-mounted display device information, and/or potential player voice data. The program associates the invitation with the potential player using the eye-tracking information, the depth information, the facial recognition information, the potential player head-mounted display device information, and/or the potential player voice data. The program matches a potential player account with the potential player. The program receives an acceptance response from the potential player, and joins the potential player account with a user account in participating in the multiplayer game.

1. A method for inviting a potential player to participate in a multiplayer game with a user, the multiplayer game displayed by a display of a user head-mounted display device, comprising:
receiving user voice data from the user;
determining that the user voice data is an invitation to participate in the multiplayer game;
receiving eye-tracking information, depth information, facial recognition information, potential player head-mounted display device information, and/or potential player voice data;
associating the invitation with the potential player using the eye-tracking information, the depth information, the facial recognition information, the potential player head-mounted display device information, and/or the potential player voice data;
matching a potential player account with the potential player;
receiving an acceptance response from the potential player; and
joining the potential player account with a user account associated with the user in participating ...

Publication date: 30-07-2015

SKELETAL CONTROL OF THREE-DIMENSIONAL VIRTUAL WORLD

Number: US20150212585A1
Assignee: Microsoft Technology Licensing LLC

A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three-dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured control, and a three-dimensional virtual world is controlled responsive to the gestured control.

Publication date: 29-03-2016

Show body position

Number: US0009298263B2

A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions or not know what gestures are applicable for an executing application. A user may not understand or know how to perform gestures that are applicable for the executing application. Providing visual feedback representing instructional gesture data to the user can teach the user how to properly gesture. The visual feedback may be provided in any number of suitable ways. For example, visual feedback may be provided via ghosted images, player avatars, or skeletal representations. The system can process prerecorded or live content for displaying visual feedback representing instructional gesture data. The feedback can portray the deltas between the user's actual position and the ideal gesture position.
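
The delta feedback could be computed per joint, as in this sketch (joint names, coordinates, and the tolerance are invented):

```python
import math

# Sketch: per-joint deltas between a user's tracked pose and an ideal
# instructional pose; 2-D coordinates stand in for full joint data.
IDEAL = {"hand": (0.9, 1.5), "elbow": (0.6, 1.2)}

def pose_deltas(actual: dict, tol: float = 0.1) -> dict:
    """Return the joints whose distance from the ideal pose exceeds
    the tolerance, along with how far off each one is."""
    deltas = {}
    for joint, (ix, iy) in IDEAL.items():
        ax, ay = actual[joint]
        d = math.hypot(ax - ix, ay - iy)
        if d > tol:
            deltas[joint] = round(d, 3)
    return deltas

print(pose_deltas({"hand": (0.5, 1.1), "elbow": (0.62, 1.18)}))
# {'hand': 0.566} -- the hand is out of position; the elbow is close enough
```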

Publication date: 08-05-2014

USER AUTHENTICATION ON DISPLAY DEVICE

Number: US20140125574A1
Assignee: Individual

Embodiments are disclosed that relate to authenticating a user of a display device. For example, one disclosed embodiment includes displaying one or more virtual images on the display device, wherein the one or more virtual images include a set of augmented reality features. The method further includes identifying one or more movements of the user via data received from a sensor of the display device, and comparing the identified movements of the user to a predefined set of authentication information for the user that links user authentication to a predefined order of the augmented reality features. If the identified movements indicate that the user selected the augmented reality features in the predefined order, then the user is authenticated, and if the identified movements indicate that the user did not select the augmented reality features in the predefined order, then the user is not authenticated.
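
A minimal sketch of the order-matching check, assuming invented feature names; the constant-time comparison is an implementation choice, not something the patent prescribes:

```python
import hmac

# Sketch of order-based authentication: the stored secret is the order
# in which the augmented reality features must be selected.
STORED_ORDER = ["red_balloon", "stone_lion", "blue_door"]

def authenticate(selected: list) -> bool:
    """Authenticate only if the user selected the AR features in
    exactly the predefined order; compare without early exit."""
    return hmac.compare_digest("|".join(selected), "|".join(STORED_ORDER))

print(authenticate(["red_balloon", "stone_lion", "blue_door"]))  # True
print(authenticate(["stone_lion", "red_balloon", "blue_door"]))  # False
```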

Publication date: 16-07-2013

Standard Gestures

Number: US0008487938B2

Systems, methods and computer readable media are disclosed for grouping complementary sets of standard gestures into gesture libraries. The gestures may be complementary in that they are frequently used together in a context or in that their parameters are interrelated. Where a parameter of a gesture is set with a first value, all other parameters of the gesture and of other gestures in the gesture package that depend on the first value may be set with their own value which is determined using the first value.
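
The parameter propagation can be pictured as derivation rules inside a gesture package; the parameter names and formulas below are invented:

```python
# Sketch of a gesture package with interrelated parameters: setting one
# value recomputes every parameter that depends on it.
class GesturePackage:
    def __init__(self):
        self.params = {}
        # Each dependent parameter is derived from 'throw.min_speed'.
        self.derived = {
            "throw.arm_extension": lambda v: 0.1 * v,
            "catch.max_speed":     lambda v: 1.5 * v,
        }

    def set_base(self, value: float) -> None:
        """Set the first value; dependent parameters across the package
        are recomputed from it, as the abstract describes."""
        self.params["throw.min_speed"] = value
        for name, rule in self.derived.items():
            self.params[name] = rule(value)

pkg = GesturePackage()
pkg.set_base(2.0)
print(pkg.params)
# {'throw.min_speed': 2.0, 'throw.arm_extension': 0.2, 'catch.max_speed': 3.0}
```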

Publication date: 09-03-2017

CHAINING ANIMATIONS

Number: US20170069125A1
Assignee:

In applications that display a representation of a user, it may be reasonable to insert a pre-canned animation rather than animating a user's captured motion. For example, in a tennis swing, the ball toss and take back in a serve could be a pre-canned animation, whereas the actual forward swing may be mapped from the user's gestures. An animation of a user's gestures can be chained together into sequences with pre-canned animations, where animation blending techniques can provide for a smoother transition between the animation types. Techniques for blending animations, that may comprise determining boundaries and transition points between pre-canned animations and animations based on captured motion, may improve animation efficiency. Gesture history, including joint position, velocity, and acceleration, can be used to determine user intent, seed parameters for subsequent animations and game control, and determine the subsequent gestures to initiate.

1. A method for chaining animations, the method comprising:
receiving image data that is representative of captured motion;
selecting a pre-canned animation;
based at least in part on a transition point, generating a chained animation wherein:
at least a first portion of the captured motion is represented by the pre-canned animation that replaces the first portion of the captured motion; and
at least a second portion of the captured motion is represented by an animation that corresponds to the captured motion; and
rendering the chained animation, wherein the chained animation comprises a blending of the first and second portions.
2. The method of claim 1, wherein a parameter of the transition point is set based at least in part on a gesture difficulty.
3. The method of claim 2, wherein the rendering the chained animation is triggered in response to determining that the at least one parameter is satisfied.
4. The method of claim 1, wherein selecting the pre-canned animation comprises selecting a pre-canned animation ...
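
The blending named in claim 1 could be as simple as a linear crossfade around the transition point, as in this sketch over one-dimensional joint values (all values invented):

```python
# Sketch: blend a pre-canned animation into a captured-motion animation
# around a transition point, using a linear crossfade window.
def blend(pre_canned: list, captured: list, transition: int, width: int) -> list:
    """Before the window: pre-canned frames. After: captured frames.
    Inside the window: linear crossfade between the two."""
    out = []
    for i in range(len(captured)):
        if i < transition:
            out.append(pre_canned[i])
        elif i < transition + width:
            t = (i - transition) / width          # 0.0 -> 1.0 across the window
            out.append((1 - t) * pre_canned[i] + t * captured[i])
        else:
            out.append(captured[i])
    return out

pre = [0.0] * 8                      # e.g. the ball-toss of a tennis serve
cap = [float(i) for i in range(8)]   # e.g. the user's actual forward swing
print(blend(pre, cap, transition=3, width=3))
# [0.0, 0.0, 0.0, 0.0, 1.333..., 3.333..., 6.0, 7.0]
```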

Publication date: 28-06-2012

INTERACTING WITH A COMPUTER BASED APPLICATION

Number: US20120165096A1
Assignee: MICROSOFT CORPORATION

A computing system runs an application (e.g., video game) that interacts with one or more actively engaged users. One or more physical properties of a group are sensed. The group may include the one or more actively engaged users and/or one or more entities not actively engaged with the application. The computing system will determine that the group (or the one or more entities not actively engaged with the application) have performed a predetermined action. A runtime condition of the application is changed in response to determining that the group (or the one or more entities not actively engaged with the computer based application) have performed the predetermined action. Examples of changing a runtime condition include moving an object, changing a score or changing an environmental condition of a video game.

1. A method for interacting with a computer based application, comprising:
performing the computer based application including interacting with one or more actively engaged users;
automatically sensing one or more physical properties of one or more entities not actively engaged with the computer based application;
determining that the one or more entities not actively engaged with the computer based application have performed a predetermined action;
automatically changing a runtime condition of the computer based application in response to determining that one or more entities not actively engaged with the computer based application have performed the predetermined action; and
automatically reporting the changing of the runtime condition in a user interface of the computer based application.
2. The method of claim 1, wherein:
the automatically sensing one or more physical properties includes sensing a depth image;
the predetermined action is a gesture; and
the determining that the one or more entities not actively engaged with the computer based application have performed the predetermined action includes using the depth image to identify the gesture.
3. The method ...

Publication date: 21-06-2012

DRIVING SIMULATOR CONTROL WITH VIRTUAL SKELETON

Number: US20120157198A1
Assignee: MICROSOFT CORPORATION

Depth-image analysis is performed with a device that analyzes a human target within an observed scene by capturing depth-images that include depth information from the observed scene. The human target is modeled with a virtual skeleton including a plurality of joints. The virtual skeleton is used as an input for controlling a driving simulation.
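
A sketch of one plausible mapping from hand joints to a steering input (coordinate conventions invented; the patent does not fix a formula):

```python
import math

# Sketch: turn two hand-joint positions from a virtual skeleton into a
# steering angle, as if the hands were gripping a wheel.
def steering_angle(left_hand, right_hand) -> float:
    """Angle in degrees of the line between the hands; 0 = level wheel,
    positive = clockwise turn (right hand below the left)."""
    dx = right_hand[0] - left_hand[0]
    dy = right_hand[1] - left_hand[1]
    return math.degrees(math.atan2(-dy, dx))

print(steering_angle((-0.2, 1.2), (0.2, 1.2)))  # 0.0   -- hands level
print(steering_angle((-0.2, 1.3), (0.2, 1.1)))  # ~26.6 -- turning right
```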

Publication date: 11-02-2014

Method to control perspective for a camera-controlled computer

Number: US0008649554B2

Systems, methods and computer readable media are disclosed for controlling perspective of a camera-controlled computer. A capture device captures user gestures and sends corresponding data to a recognizer engine. The recognizer engine analyzes the data with a plurality of filters, each filter corresponding to a gesture. Based on the output of those filters, a perspective control is determined, and a display device displays a new perspective corresponding to the perspective control.

Publication date: 28-07-2015

User authentication on augmented reality display device

Number: US0009092600B2

Embodiments are disclosed that relate to authenticating a user of a display device. For example, one disclosed embodiment includes displaying one or more virtual images on the display device, wherein the one or more virtual images include a set of augmented reality features. The method further includes identifying one or more movements of the user via data received from a sensor of the display device, and comparing the identified movements of the user to a predefined set of authentication information for the user that links user authentication to a predefined order of the augmented reality features. If the identified movements indicate that the user selected the augmented reality features in the predefined order, then the user is authenticated, and if the identified movements indicate that the user did not select the augmented reality features in the predefined order, then the user is not authenticated.

Publication date: 04-06-2015

CHAINING ANIMATIONS

Number: US20150154782A1
Assignee:

In applications that display a representation of a user, it may be reasonable to insert a pre-canned animation rather than animating a user's captured motion. For example, in a tennis swing, the ball toss and take back in a serve could be a pre-canned animation, whereas the actual forward swing may be mapped from the user's gestures. An animation of a user's gestures can be chained together into sequences with pre-canned animations, where animation blending techniques can provide for a smoother transition between the animation types. Techniques for blending animations, that may comprise determining boundaries and transition points between pre-canned animations and animations based on captured motion, may improve animation efficiency. Gesture history, including joint position, velocity, and acceleration, can be used to determine user intent, seed parameters for subsequent animations and game control, and determine the subsequent gestures to initiate.

1. A method for chaining animations, the method comprising:
receiving image data that is representative of captured motion;
selecting a pre-canned animation; and
chaining an animation of the captured motion and the pre-canned animation by at least displaying the captured motion and the pre-canned animation in sequence, wherein chaining the animation of the captured motion and the pre-canned animation comprises blending the animation of the captured motion to the pre-canned animation or blending the pre-canned animation to the animation of the captured motion.
2. The method in accordance with claim 1, wherein selecting a pre-canned animation comprises selecting a pre-canned animation from a plurality of pre-canned animations.
3. The method in accordance with claim 1, wherein chaining the animation of the captured motion and the pre-canned animation comprises blending parameters of the captured motion to at least one of initial parameters of the pre-canned animation or ending parameters of the pre-canned animation.
4. The ...

Publication date: 21-06-2012

FIRST PERSON SHOOTER CONTROL WITH VIRTUAL SKELETON

Number: US20120155705A1
Assignee: MICROSOFT CORPORATION

A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured aiming vector control, and a virtual weapon is aimed in proportion to the gestured aiming vector control.
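
The gestured aiming vector might be derived as a normalized shoulder-to-hand direction, as in this sketch with invented coordinates:

```python
import math

# Sketch: translate the hand joint's position relative to the shoulder
# into a normalized aiming vector for a virtual weapon.
def aiming_vector(shoulder, hand):
    """Unit vector pointing from the shoulder joint to the hand joint."""
    v = [h - s for h, s in zip(hand, shoulder)]
    norm = math.sqrt(sum(c * c for c in v))
    if norm == 0.0:
        return [0.0, 0.0, 0.0]       # degenerate pose: no aim change
    return [c / norm for c in v]

print(aiming_vector((0.0, 1.4, 0.0), (0.3, 1.5, -0.6)))
# [0.442..., 0.147..., -0.884...] -- mostly forward, slightly up and right
```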

Publication date: 28-10-2014

Virtual light in augmented reality

Number: US0008872853B2

A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment.

Publication date: 21-06-2012

SKELETAL CONTROL OF THREE-DIMENSIONAL VIRTUAL WORLD

Number: US20120157203A1
Assignee: Microsoft Corporation

A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three-dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured control, and a three-dimensional virtual world is controlled responsive to the gestured control.

1. A data holding device holding instructions executable by a logic subsystem to:
render a three-dimensional virtual gaming world for display on a display device;
receive a virtual skeleton, including a plurality of joints, the plurality of joints including a left hand joint and a right hand joint, the virtual skeleton providing a machine readable representation of a human target observed with a three-dimensional depth camera;
render a control cursor in the three-dimensional virtual gaming world for display on the display device, a screen space position of the control cursor tracking a position of the left hand joint or the right hand joint of the virtual skeleton as modeled from a world space position of a corresponding hand of the human target;
lock the control cursor to an object in the three-dimensional virtual gaming world if a grab threshold of the object is overcome;
when the control cursor is locked to the object, move the object with the control cursor such that the world space position of the corresponding hand of the human target moves the object in the three-dimensional virtual gaming world; and
unlock the control cursor from the object at a release position of the object within the three-dimensional virtual gaming world if a release threshold of the object is overcome.
2. The data holding device of claim 1, where world space parameters of the corresponding hand overcome the grab threshold of the object if the corresponding hand is closed by the human target.
3. The data holding device of claim 1, where world space parameters of the corresponding hand overcome the grab threshold of the object if the screen space ...
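
The grab/release behaviour of claims 1 and 2 can be sketched as a small state machine; the distance threshold and 2-D coordinates are invented:

```python
# Sketch of the grab/release cursor logic from the claims: the cursor
# locks to an object when the hand closes near it, drags it while the
# hand stays closed, and unlocks when the hand opens.
class ControlCursor:
    GRAB_DISTANCE = 0.15   # how close the cursor must be to grab, metres

    def __init__(self):
        self.locked_object = None

    def update(self, cursor_pos, object_pos, hand_closed: bool):
        dist = sum((c - o) ** 2 for c, o in zip(cursor_pos, object_pos)) ** 0.5
        if self.locked_object is None:
            if hand_closed and dist < self.GRAB_DISTANCE:
                self.locked_object = list(object_pos)   # grab threshold overcome
        elif not hand_closed:
            self.locked_object = None                   # release threshold overcome
        else:
            self.locked_object = list(cursor_pos)       # object follows the hand
        return self.locked_object

cursor = ControlCursor()
print(cursor.update((0.0, 0.0), (0.1, 0.0), hand_closed=True))   # [0.1, 0.0] grabbed
print(cursor.update((0.5, 0.2), (0.1, 0.0), hand_closed=True))   # [0.5, 0.2] dragged
print(cursor.update((0.5, 0.2), (0.5, 0.2), hand_closed=False))  # None, released
```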

Publication date: 15-12-2011

INTERACTING WITH USER INTERFACE VIA AVATAR

Number: US20110304632A1
Assignee: MICROSOFT CORPORATION

Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control.

Publication date: 28-06-2016

Show body position

Number: US0009377857B2

A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions or not know what gestures are applicable for an executing application. A user may not understand or know how to perform gestures that are applicable for the executing application. Providing visual feedback representing instructional gesture data to the user can teach the user how to properly gesture. The visual feedback may be provided in any number of suitable ways. For example, visual feedback may be provided via ghosted images, player avatars, or skeletal representations. The system can process prerecorded or live content for displaying visual feedback representing instructional gesture data. The feedback can portray the deltas between the user's actual position and the ideal gesture position.

Publication date: 21-06-2012

MODELING AN OBJECT FROM IMAGE DATA

Number: US20120154618A1
Assignee: MICROSOFT CORPORATION

A method for modeling an object from image data comprises identifying in an image from the video a set of reference points on the object, and, for each reference point identified, observing a displacement of that reference point in response to a motion of the object. The method further comprises grouping together those reference points for which a common translational or rotational motion of the object results in the observed displacement, and fitting the grouped-together reference points to a shape.

1. A method for constructing a virtual model of an object based on video of the object in motion, the method comprising:
identifying in an image from the video a set of reference points on the object;
for each reference point identified, observing a displacement of that reference point in response to a motion of the object;
grouping together those reference points for which a common translational or rotational motion of the object results in the observed displacement; and
fitting the grouped-together reference points to a shape.
2. The method of claim 1, wherein the shape comprises an ellipsoid.
3. The method of claim 1, wherein the shape is one of a plurality of shapes to which the grouped-together reference points are fit.
4. The method of claim 1, wherein the image comprises a rectangular array of pixels and encodes one or more of a brightness, a color and a polarization state for each pixel.
5. The method of claim 4, wherein the image further encodes a depth coordinate for each pixel.
6. The method of claim 1, wherein said grouping together comprises grouping those reference points for which the observed displacement is within an interval of a predicted displacement, and wherein the predicted displacement is predicted based on the common translational or rotational motion.
7. The method of claim 1, wherein said grouping together comprises forming a plurality of groups of the identified reference points, wherein, for each group ...
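
Claim 6's grouping test (observed displacement within an interval of the displacement predicted by a common motion) can be sketched for pure translation; the median-based prediction and the tolerance are choices made for this example:

```python
import math
from statistics import median

# Sketch: group reference points whose observed displacements agree with
# one common translation of the object, discarding outliers.
def group_by_translation(points, displacements, tol=0.05):
    """Predict the common translation as the per-axis median displacement
    (robust to outliers), then keep points whose observed displacement
    falls within `tol` of that prediction."""
    pred = (median(d[0] for d in displacements),
            median(d[1] for d in displacements))
    group = [p for p, d in zip(points, displacements)
             if math.hypot(d[0] - pred[0], d[1] - pred[1]) <= tol]
    return pred, group

pts = [(0, 0), (1, 0), (0, 1), (5, 5)]
disp = [(0.10, 0.0), (0.11, 0.0), (0.09, 0.0), (0.80, 0.30)]  # last: outlier
print(group_by_translation(pts, disp))
# ((0.105, 0.0), [(0, 0), (1, 0), (0, 1)]) -- the outlier is left out
```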

Publication date: 14-04-2015

Automatic depth camera aiming

Number: US0009008355B2

Automatic depth camera aiming is provided by a method which includes receiving from the depth camera one or more observed depth images of a scene. The method further includes, if a point of interest of a target is found within the scene, determining if the point of interest is within a far range relative to the depth camera. The method further includes, if the point of interest of the target is within the far range, operating the depth camera with a far logic, or if the point of interest of the target is not within the far range, operating the depth camera with a near logic.
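
The far/near dispatch reduces to a threshold test; the 2.5 m boundary below is an invented example value:

```python
from typing import Optional

FAR_RANGE_START_M = 2.5  # invented boundary between near and far range

def choose_camera_logic(poi_depth_m: Optional[float]) -> str:
    """Return which aiming logic to run for the current frame."""
    if poi_depth_m is None:
        return "search"                  # no point of interest in the scene
    if poi_depth_m >= FAR_RANGE_START_M:
        return "far_logic"               # target within the far range
    return "near_logic"                  # target close to the camera

print(choose_camera_logic(4.0))   # far_logic
print(choose_camera_logic(1.2))   # near_logic
print(choose_camera_logic(None))  # search
```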

Publication date: 29-11-2012

AVATARS OF FRIENDS AS NON-PLAYER-CHARACTERS

Number: US20120302351A1
Assignee: MICROSOFT CORPORATION

In accordance with one or more aspects, for a particular user one or more other users associated with that particular user are identified based on a social graph of that particular user. An avatar of at least one of the other users is obtained and included as a non-player-character in a game being played by that particular user. The particular user can provide requests to interact with the avatar of the second user (e.g., calling out the name of the second user, tapping the avatar of the second user on the shoulder, etc.), these requests being invitations for the second user to join in a game with the first user. An indication of such an invitation is presented to the second user, which can, for example, accept the invitation to join in a game with the first user.

1. A method comprising:
identifying, based on a social graph of a first user, one or more other users associated with the first user;
obtaining, for at least one of the one or more other users, an avatar of the other user; and
including, as non-player-characters in a game being played by the first user, the obtained avatars of each of the at least one of the one or more other users.
2. A method as recited in claim 1, the including comprising including the obtained avatars as non-player-characters cheering on an avatar of the first user in the game.
3. A method as recited in claim 1, the including comprising including one of the obtained avatars as a ghost avatar following a path in the game that the first user took during a previous playing of the game.
4. A method as recited in claim 1, the including comprising including multiple copies of an obtained avatar as a dead avatar at a location in the game where the obtained avatar died while the game was previously played by the other user having the obtained avatar.
5. A method as recited in claim 1, the first user being logged into an online gaming service, and at least one of the one or more other users including a user that is not currently logged ...

Publication date: 15-09-2011

BIONIC MOTION

Number: US20110221755A1
Assignee:

A camera that can sense motion of a user is connected to a computing system (e.g., video game apparatus or other type of computer). The computing system determines an action corresponding to the sensed motion of the user and determines a magnitude of the sensed motion of the user. The computing system creates and displays an animation of an object (e.g., an avatar in a video game) performing the action in a manner that is amplified in comparison to the sensed motion by a factor that is proportional to the determined magnitude. The computing system also creates and outputs audio/visual feedback in proportion to a magnitude of the sensed motion of the user.

Publication date: 22-03-2016

Interacting with user interface via avatar

Number: US0009292083B2

Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control.

Publication date: 20-02-2014

AUGMENTED REALITY OVERLAY FOR CONTROL DEVICES

Number: US20140049558A1
Assignee:

Embodiments for providing instructional information for control devices are disclosed. In one example, a method on a see-through display device comprising a see-through display and an outward-facing image sensor includes acquiring an image of a scene viewable through the see-through display and detecting a control device in the scene. The method also includes retrieving information pertaining to a function of an interactive element of the control device and displaying an image on the see-through display augmenting an appearance of the interactive element of the control device with image data related to the function of the interactive element.

1. On a see-through display device comprising a see-through display and an outward-facing image sensor, a method for providing instructional information for control devices, the method comprising:
acquiring an image of a scene viewable through the see-through display;
detecting a control device in the scene;
retrieving information pertaining to a function of an interactive element of the control device; and
displaying an image on the see-through display augmenting an appearance of the interactive element of the control device with image data related to the function of the interactive element.
2. The method of claim 1, wherein the image comprises a graphical element related to the function of the interactive element, the graphical element being displayed on the see-through display over the interactive element.
3. The method of claim 1, wherein the image comprises a text box having text information describing the interactive element.
4. The method of claim 3, further comprising receiving a selection of the text box, and in response displaying additional information on the see-through display device.
5. The method of claim 1, wherein the image comprises an animation.
6. The method of claim 1, further comprising detecting a gaze of a user of the see-through display device at a selected interactive element of the ...

Publication date: 24-02-2015

Executable virtual objects associated with real objects

Number: US0008963805B2

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.

Publication date: 23-04-2015

Isolate Extraneous Motions

Number: US20150110354A1
Assignee:

A system may receive image data and capture motion with respect to a target in a physical space and recognize a gesture from the captured motion. It may be desirable to isolate aspects of captured motion to differentiate random and extraneous motions. For example, a gesture may comprise motion of a user's right arm, and it may be desirable to isolate the motion of the user's right arm and exclude an interpretation of any other motion. Thus, the isolated aspect may be the focus of the received data for gesture recognition. Alternately, the isolated aspects may be an aspect of the captured motion that is removed from consideration when identifying a gesture from the captured motion. For example, gesture filters may be modified to correspond to the user's natural lean to eliminate the effect the lean has on the registry of a motion with a gesture filter.

1. A method for applying a filter representing an intended gesture comprising:
receiving data captured by a camera, wherein the data is representative of a user's motion in a physical space;
predicting the intended gesture from the data;
selecting a first portion of the data that is applicable to the intended gesture; and
applying the filter representing the intended gesture to the first portion of the data and determining an output from base information representing the intended gesture, wherein the filter comprises the base information representing the intended gesture.
2. The method of claim 1, further comprising applying a plurality of filters to the data, wherein the intended gesture is a gesture corresponding to at least one of the plurality of filters having base information that corresponds to the data.
3. The method of claim 2, further comprising generating a model of the user from the data, wherein the model maps to the first portion of the data that is applicable to the intended gesture and comprises a pre-authored animation that represents a second portion of the data that is not applicable to ...

Publication date: 10-06-2014

Interacting with user interface via avatar

Number: US0008749557B2

Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control.

Publication date: 23-09-2010

CHAINING ANIMATIONS

Number: US20100238182A1
Assignee: Microsoft Corporation

In applications that display a representation of a user, it may be reasonable to insert a pre-canned animation rather than animating a user's captured motion. For example, in a tennis swing, the ball toss and take back in a serve could be a pre-canned animation, whereas the actual forward swing may be mapped from the user's gestures. An animation of a user's gestures can be chained together into sequences with pre-canned animations, where animation blending techniques can provide for a smoother transition between the animation types. Techniques for blending animations, that may comprise determining boundaries and transition points between pre-canned animations and animations based on captured motion, may improve animation efficiency. Gesture history, including joint position, velocity, and acceleration, can be used to determine user intent, seed parameters for subsequent animations and game control, and determine the subsequent gestures to initiate.

Publication date: 26-07-2016

Gesture shortcuts

Number: US0009400559B2

Systems, methods and computer readable media are disclosed for gesture shortcuts. A user's movement or body position is captured by a capture device of a system, and is used as input to control the system. For a system-recognized gesture, there may be a full version of the gesture and a shortcut of the gesture. Where the system recognizes that either the full version of the gesture or the shortcut of the gesture has been performed, it sends an indication that the system-recognized gesture was observed to a corresponding application. Where the shortcut comprises a subset of the full version of the gesture, and both the shortcut and the full version of the gesture are recognized as the user performs the full version of the gesture, the system recognizes that only a single performance of the gesture has occurred, and indicates to the application as such.

Publication date: 09-02-2016

Virtual object manipulation

Number: US0009256282B2
Assignee: Microsoft Technology Licensing, LLC

Systems, methods and computer readable media are disclosed for manipulating virtual objects. A user may utilize a controller, such as his hand, in physical space to associate with a cursor in a virtual environment. As the user manipulates the controller in physical space, this is captured by a depth camera. The image data from the depth camera is parsed to determine how the controller is manipulated, and a corresponding manipulation of the cursor is performed in virtual space. Where the cursor interacts with a virtual object in the virtual space, that virtual object is manipulated by the cursor.

Publication date: 15-04-2014

Automated sensor driven match-making

Number: US0008696461B2

A method of matching a player of a multi-player game with a remote participant includes recognizing the player, automatically identifying an observer within a threshold proximity to the player, using an identity of the observer to find one or more candidates to play as the remote participant of the multi-player game, and when selecting the remote participant, choosing a candidate from the one or more candidates above a non-candidate if the candidate satisfies a matching criteria.

Publication date: 20-06-2013

CONTENT SYSTEM WITH SECONDARY TOUCH CONTROLLER

Number: US20130154958A1
Assignee: MICROSOFT CORPORATION

A controller is provided for a content presentation and interaction system which includes a primary content presentation device. The controller includes a tactile control input and a touch screen control input. The tactile control input is responsive to the inputs of a first user and communicatively coupled to the content presentation device. The controller includes a plurality of tactile input mechanisms and provides a first set of the plurality of control inputs manipulating content. The controller includes a touch screen control input responsive to the inputs of the first user and communicatively coupled to the content presentation device. The touch screen control input is proximate the tactile control input and provides a second set of the plurality of control inputs. The second set of control inputs includes alternative inputs for at least some of the controls and additional inputs not available using the tactile input mechanisms.

1. A controller for a content presentation and interaction system including a primary content presentation device, comprising:
a tactile control input responsive to the inputs of a first user and communicatively coupled to the content presentation device, including a plurality of tactile input mechanisms and providing a first set of control inputs manipulating content;
a touch screen control input responsive to the inputs of the first user and communicatively coupled to the content presentation device, the screen proximate the tactile control input and providing a second set of control inputs, the second set of control inputs including alternative inputs for at least some of the first set of control inputs and additional inputs not available using the tactile input mechanisms.
2. The controller of claim 1 wherein the controller communicates with the content presentation device and the content presentation device communicates with an entertainment service via a network, the service providing one or more elements of a secondary interface, the secondary interface ...

Publication date: 28-05-2013

Determine intended motions

Number: US0008451278B2

It may be desirable to apply corrective data to aspects of a captured image or a user-performed gesture for display of a visual representation that corresponds to the corrective data. The captured motion may be any motion in the physical space that is captured by the capture device, such as a camera. Aspects of a skeletal or mesh model of a person, generated based on the image data captured by the capture device, may be modified prior to animation. The modification may be made to the model generated from image data that represents a target or a target's motion, including user gestures, in the physical space. For example, certain joints of a skeletal model may be readjusted or realigned. A model of a target may be modified by applying differential correction, magnetism principles, binary snapping, confining virtual movement to defined spaces, or the like.

Publication date: 20-12-2016

Method to control perspective for a camera-controlled computer

Number: US0009524024B2

Systems, methods and computer readable media are disclosed for controlling perspective of a camera-controlled computer. A capture device captures user gestures and sends corresponding data to a recognizer engine. The recognizer engine analyzes the data with a plurality of filters, each filter corresponding to a gesture. Based on the output of those filters, a perspective control is determined, and a display device displays a new perspective corresponding to the perspective control.

Publication date: 05-05-2015

Automated sensor driven friending

Number: US0009025832B2

A method of finding a new social network service friend for a player belonging to a social network service and having a friend group including one or more player-accepted friends includes recognizing the player, automatically identifying an observer within a threshold proximity to the player, and adding the observer to the friend group of the player in the social network service if the observer satisfies a friending criteria of the player.

Publication date: 02-12-2010

Gesture Tool

Number: US20100306713A1
Assignee: Microsoft Corporation

Systems, methods and computer readable media are disclosed for a gesture tool. A capture device captures user movement and provides corresponding data to a gesture recognizer engine and an application. From that, the data is parsed to determine whether it satisfies one or more gesture filters, each filter corresponding to a user-performed gesture. The data and the information about the filters are also sent to a gesture tool, which displays aspects of the data and filters. In response to user input corresponding to a change in a filter, the gesture tool sends an indication of such to the gesture recognizer engine and application, where that change occurs.

Publication date: 05-08-2010

Standard Gestures

Number: US20100194762A1
Assignee: Microsoft Corporation

Systems, methods and computer readable media are disclosed for grouping complementary sets of standard gestures into gesture libraries. The gestures may be complementary in that they are frequently used together in a context or in that their parameters are interrelated. Where a parameter of a gesture is set with a first value, all other parameters of the gesture and of other gestures in the gesture package that depend on the first value may be set with their own value which is determined using the first value.

Publication date: 04-11-2010

ISOLATE EXTRANEOUS MOTIONS

Number: US20100278393A1
Assignee: Microsoft Corporation

A system may receive image data and capture motion with respect to a target in a physical space and recognize a gesture from the captured motion. It may be desirable to isolate aspects of captured motion to differentiate random and extraneous motions. For example, a gesture may comprise motion of a user's right arm, and it may be desirable to isolate the motion of the user's right arm and exclude an interpretation of any other motion. Thus, the isolated aspect may be the focus of the received data for gesture recognition. Alternately, the isolated aspects may be an aspect of the captured motion that is removed from consideration when identifying a gesture from the captured motion. For example, gesture filters may be modified to correspond to the user's natural lean to eliminate the effect the lean has on the registry of a motion with a gesture filter.

Publication date: 01-12-2015

Executable virtual objects associated with real objects

Number: US0009201243B2

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.

Publication date: 27-11-2012

Physical characteristics based user identification for matchmaking

Number: US0008317623B1

One or more physical characteristics of each of multiple users are detected. These physical characteristics of a user can include physical attributes of the user (e.g., the user's height, length of the user's legs) and/or physical skills of the user (e.g., how high the user can jump). Based on these detected one or more physical characteristics of the users, two or more of the multiple users to share an online experience (e.g., play a multi-player game) are identified.

Publication date: 21-11-2017

Driving simulator control with virtual skeleton

Number: US0009821224B2

Depth-image analysis is performed with a device that analyzes a human target within an observed scene by capturing depth-images that include depth information from the observed scene. The human target is modeled with a virtual skeleton including a plurality of joints. The virtual skeleton is used as an input for controlling a driving simulation.

Publication date: 28-04-2011

DECORATING A DISPLAY ENVIRONMENT

Number: US20110099476A1
Assignee: Microsoft Corporation

Disclosed herein are systems and methods for decorating a display environment. In one embodiment, a user may decorate a display environment by making one or more gestures, using voice commands, using a suitable interface device, and/or combinations thereof. A voice command can be detected for user selection of an artistic feature, such as, for example, a color, a texture, an object, and a visual effect for decorating in a display environment. The user can also gesture for selecting a portion of the display environment for decoration. Next, the selected portion of the display environment can be altered based on the selected artistic feature. The user's motions can be reflected in the display environment by an avatar. In addition, a virtual canvas or three-dimensional object can be displayed in the display environment for decoration by the user.

Publication date: 02-10-2008

MULTI-TIER ONLINE GAME PLAY

Number: US20080242421A1
Assignee: Microsoft Corporation

Online multiplayer games are provided in multiple tiers. A first tier offers limited features and a second tier offers first tier features plus additional features. The additional features are exclusive to the second tier. During game play in the first tier, enticements are provided to participate in the second tier. The first tier requires no subscription to participate therein. Participation in the second tier requires a subscription. In an example configuration, the first tier allows players to host a game on a dedicated server, browse a list of dedicated server games, and join a game from a list of first tier eligible games. And, the second tier offers a variety of additional features, such as the ability to invite other players to join a game session, match making services, and cross-platform game play.

Publication date: 03-01-2013

MATCHING USERS OVER A NETWORK

Number: US20130007013A1
Assignee: MICROSOFT CORPORATION

Various embodiments are disclosed that relate to negatively matching users over a network. For example, one disclosed embodiment provides a method including storing a plurality of user profiles corresponding to a plurality of users, each user profile in the plurality of user profiles including one or more user attributes, and receiving a request from a user for a list of one or more suggested negatively matched other users. In response to the request, the method further includes ranking each of a plurality of other users based on a magnitude of a difference between one or more user attributes of the user and corresponding one or more user attributes of the other user, and sending a list of one or more negatively matched users to the exclusion of more positively matched users based on the ranking.
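
The ranking step is the inverse of an ordinary similarity search; a sketch with invented user attributes:

```python
# Sketch of negative matching: rank candidates by how *different* their
# attributes are from the requesting user's, most different first.
def attribute_distance(a: dict, b: dict) -> float:
    """Sum of absolute differences over the shared numeric attributes."""
    return sum(abs(a[k] - b[k]) for k in a.keys() & b.keys())

def negative_matches(user: dict, others: dict, n: int = 2) -> list:
    """Return the n most negatively matched users, excluding the more
    positively matched ones, as the abstract describes."""
    ranked = sorted(others,
                    key=lambda name: attribute_distance(user, others[name]),
                    reverse=True)
    return ranked[:n]

user = {"skill": 8, "pace": 3, "risk": 9}
others = {"alice": {"skill": 7, "pace": 4, "risk": 8},
          "bob":   {"skill": 1, "pace": 9, "risk": 2},
          "carol": {"skill": 5, "pace": 5, "risk": 5}}
print(negative_matches(user, others))  # ['bob', 'carol'] -- least similar first
```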

Publication date: 10-01-2013

PHYSICAL CHARACTERISTICS BASED USER IDENTIFICATION FOR MATCHMAKING

Number: US20130013093A1
Assignee: Microsoft Corporation

One or more physical characteristics of each of multiple users are detected. These physical characteristics of a user can include physical attributes of the user (e.g., the user's height, length of the user's legs) and/or physical skills of the user (e.g., how high the user can jump). Based on these detected one or more physical characteristics of the users, two or more of the multiple users to share an online experience (e.g., play a multi-player game) are identified.

1-9. (canceled)
10. A method comprising:
identifying a user of an online service;
identifying multiple additional users that are friends of the user; and
identifying, by one or more devices and based at least in part on one or more physical characteristics of the user and one or more physical characteristics of at least one of the multiple additional users, at least one of the multiple additional users with which to share an online experience with the user.
11. A method as recited in claim 10, the one or more physical characteristics of the user including at least one physical characteristic that is detected during an initialization process and stored as associated with the user.
12. A method as recited in claim 11, the at least one physical characteristic that is detected during the initialization process being stored based on a user id used by the user with the online service.
13. A method as recited in claim 10, the shared online experience comprising playing a multi-player game.
14. A method as recited in claim 13, the multiple additional users comprising friends of the user that are already playing the multi-player game.
15. A method as recited in claim 10, the multiple additional users comprising friends of the user that are currently logged into the online service.
16. A method as recited in claim 10, the one or more physical characteristics of the user including one or more physical attributes of the user, and the one or more physical characteristics of each of the at least one of the ...

Publication date: 05-01-2017

MIXED REALITY INTERACTIONS

Number: US20170004655A1
Assignee: Microsoft Technology Licensing, LLC

Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. A selected interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.

1. A mixed reality interaction system comprising:
a head-mounted display device including a display system and a camera; and
a processor configured to:
identify a physical object in a mixed reality environment based on an image captured by the camera;
determine an interaction context for the identified physical object based on one or more aspects of the mixed reality environment;
programmatically select an interaction mode for the identified physical object based on the interaction context and a stored profile for the physical object;
interpret a user input directed at the physical object to correspond to a virtual action based on the selected interaction mode;
execute the virtual action to modify an appearance of a virtual object associated with the physical object; and
display the virtual object via the head-mounted display device with the modified appearance.
2. The mixed reality interaction system of claim 1, wherein the processor is further configured to:
present a first query to confirm an accuracy of an identity of the physical object; and
in response to the query, ...

Published 23-06-2015

Shared collaboration using display device

Number: US0009063566B2

Various embodiments are provided for a shared collaboration system and related methods for enabling an active user to interact with one or more additional users and with collaboration items. In one embodiment a head-mounted display device is operatively connected to a computing device that includes a collaboration engine program. The program receives observation information of a physical space from the head-mounted display device along with a collaboration item. The program visually augments an appearance of the physical space as seen through the head-mounted display device to include an active user collaboration item representation of the collaboration item. The program populates the active user collaboration item representation with additional user collaboration item input from an additional user.

Published 05-12-2017

Executable virtual objects associated with real objects

Number: US0009836889B2

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to a portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.

Published 24-03-2015

Chaining animations

Number: US0008988437B2

In applications that display a representation of a user, it may be reasonable to insert a pre-canned animation rather than animating a user's captured motion. For example, in a tennis swing, the ball toss and take back in a serve could be a pre-canned animation, whereas the actual forward swing may be mapped from the user's gestures. An animation of a user's gestures can be chained together into sequences with pre-canned animations, where animation blending techniques can provide for a smoother transition between the animation types. Techniques for blending animations, which may comprise determining boundaries and transition points between pre-canned animations and animations based on captured motion, may improve animation efficiency. Gesture history, including joint position, velocity, and acceleration, can be used to determine user intent, seed parameters for subsequent animations and game control, and determine the subsequent gestures to initiate.
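The transition between a pre-canned clip and captured motion can be pictured as a crossfade over a few boundary frames. This is a minimal sketch assuming each animation is a list of per-frame joint angles; real blending would operate on full skeletal poses, and the frame counts here are arbitrary.

    def blend(pre_canned, captured, blend_frames=10):
        # Crossfade the last blend_frames of the pre-canned clip into the captured clip.
        out = pre_canned[:-blend_frames]
        for i in range(blend_frames):
            t = (i + 1) / (blend_frames + 1)   # blend weight ramps toward the captured clip
            a, b = pre_canned[-blend_frames + i], captured[i]
            out.append([(1 - t) * x + t * y for x, y in zip(a, b)])
        out.extend(captured[blend_frames:])
        return out

    serve_toss = [[0.0, 10.0]] * 30       # pre-canned ball toss (2 joint angles per frame)
    forward_swing = [[5.0, 90.0]] * 30    # mapped from the user's captured motion
    sequence = blend(serve_toss, forward_swing)
    print(len(sequence), sequence[25])    # a frame inside the blended boundary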

Published 13-12-2016

Isolate extraneous motions

Number: US0009519828B2

A system may receive image data and capture motion with respect to a target in a physical space and recognize a gesture from the captured motion. It may be desirable to isolate aspects of captured motion to differentiate random and extraneous motions. For example, a gesture may comprise motion of a user's right arm, and it may be desirable to isolate the motion of the user's right arm and exclude an interpretation of any other motion. Thus, the isolated aspect may be the focus of the received data for gesture recognition. Alternatively, the isolated aspects may be an aspect of the captured motion that is removed from consideration when identifying a gesture from the captured motion. For example, gesture filters may be modified to correspond to the user's natural lean to eliminate the effect the lean has on the registry of a motion with a gesture filter.
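One way to read the lean example: estimate the lean from the skeleton itself and subtract it before the pose reaches a gesture filter. The sketch below assumes 2-D joint positions and a uniform horizontal correction, which is a simplification of whatever per-joint compensation a real filter would apply.

    def remove_lean(joints):
        # Estimate lean as the horizontal offset between shoulder-center and hip-center,
        # then shift upper-body joints so the gesture filter sees an upright pose.
        lean = joints["shoulder_center"][0] - joints["hip_center"][0]
        corrected = dict(joints)
        for name, (x, y) in joints.items():
            if name != "hip_center":
                corrected[name] = (x - lean, y)
        return corrected

    pose = {
        "hip_center": (0.0, 1.0),
        "shoulder_center": (0.15, 1.5),   # user leaning 0.15 units to the right
        "right_hand": (0.55, 1.6),
    }
    print(remove_lean(pose)["right_hand"])   # right hand with the lean subtracted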

Published 09-04-2013

Gesture coach

Number: US0008418085B2

A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions or not know what gestures are applicable for an executing application. A user may not understand or know how to perform gestures that are applicable for the executing application. User motion data and/or outputs of filters corresponding to gestures may be analyzed to determine those cases where assistance to the user on performing the gesture is appropriate.

Published 07-10-2014

Gesture tool

Number: US0008856691B2

Systems, methods and computer readable media are disclosed for a gesture tool. A capture device captures user movement and provides corresponding data to a gesture recognizer engine and an application. From that, the data is parsed to determine whether it satisfies one or more gesture filters, each filter corresponding to a user-performed gesture. The data and the information about the filters are also sent to a gesture tool, which displays aspects of the data and filters. In response to user input corresponding to a change in a filter, the gesture tool sends an indication of such to the gesture recognizer engine and application, where that change occurs.

Published 06-12-2012

AUTOMATED SENSOR DRIVEN FRIENDING

Number: US20120311031A1
Assignee: MICROSOFT CORPORATION

A method of finding a new social network service friend for a player belonging to a social network service and having a friend group including one or more player-accepted friends includes recognizing the player, automatically identifying an observer within a threshold proximity to the player, and adding the observer to the friend group of the player in the social network service if the observer satisfies a friending criteria of the player.

1. A method of finding a new social network service friend for a player belonging to a social network service and having a friend group including one or more player-accepted friends, the method comprising:
recognizing the player;
automatically identifying an observer within a threshold proximity to the player; and
adding the observer to the friend group of the player in the social network service if the observer satisfies a friending criteria of the player.
2. The method of claim 1, where automatically identifying the observer includes matching an observed skeletal model to a profile skeletal model included as part of a user profile saved in a network accessible database, the observed skeletal model being derived from three dimensional depth information collected via a depth camera imaging the observer.
3. The method of claim 1, where automatically identifying the observer includes matching an observed voice pattern to a profile voice signature included as part of a user profile saved in a network accessible database, the observed voice pattern being derived from audio recordings collected via a microphone listening to the observer.
4. The method of claim 1, where automatically identifying the observer includes matching an observed facial image to a profile face signature included as part of a user profile saved in a network accessible database, the observed facial image being derived from a digital image collected via a camera imaging the observer.
5. The method of claim 1, where automatically identifying ...
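In outline, the claimed flow is a proximity gate followed by a criteria check. A minimal sketch under assumed types; the player/observer structures and the 3-meter threshold are invented for illustration.

    def maybe_friend(player, observer, distance_m, threshold_m=3.0):
        if distance_m > threshold_m:
            return False                        # observer not close enough to the player
        if not player["criteria"](observer):    # player-specific friending criteria
            return False
        player["friends"].add(observer["id"])   # add to the friend group
        return True

    player = {"friends": {"dana"}, "criteria": lambda o: o["games_in_common"] >= 2}
    observer = {"id": "erin", "games_in_common": 3}
    print(maybe_friend(player, observer, distance_m=1.5), player["friends"])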

Published 06-10-2020

Adding attributes to virtual representations of real-world objects

Number: US0010796494B2

A method, medium, and virtual object for providing a virtual representation with an attribute are described. The virtual representation is generated based on a digitization of a real-world object. Properties of the virtual representation, such as colors, shape similarities, volume, surface area, and the like are identified and an amount or degree of exhibition of those properties by the virtual representation is determined. The properties are employed to identify attributes associated with the virtual representation, such as temperature, weight, or sharpness of an edge, among other attributes of the virtual object. A degree of exhibition of the attributes is also determined based on the properties and their degrees of exhibition. Thereby, the virtual representation is provided with one or more attributes that instruct presentation and interactions of the virtual representation in a virtual world.

Published 21-11-2017

Chaining animations

Number: US0009824480B2

In applications that display a representation of a user, it may be reasonable to insert a pre-canned animation rather than animating a user's captured motion. For example, in a tennis swing, the ball toss and take back in a serve could be a pre-canned animation, whereas the actual forward swing may be mapped from the user's gestures. An animation of a user's gestures can be chained together into sequences with pre-canned animations, where animation blending techniques can provide for a smoother transition between the animation types. Techniques for blending animations, which may comprise determining boundaries and transition points between pre-canned animations and animations based on captured motion, may improve animation efficiency. Gesture history, including joint position, velocity, and acceleration, can be used to determine user intent, seed parameters for subsequent animations and game control, and determine the subsequent gestures to initiate.

Published 31-10-2013

PROXIMITY AND CONNECTION BASED PHOTO SHARING

Number: US20130286223A1
Assignee: MICROSOFT CORPORATION

Photos are shared among devices that are in close proximity to one another and for which there is a connection among the devices. The photos can be shared automatically, or alternatively based on various user inputs. Various different controls can also be placed on sharing photos to restrict the other devices with which photos can be shared, the manner in which photos can be shared, and/or how the photos are shared.

1. One or more computer-readable storage media having stored thereon multiple instructions that, when executed by one or more processors of a device, cause the one or more processors to:
receive a photo captured at the device;
determine one or more other devices in close proximity to the device;
determine a connection between the device and at least one of the one or more other devices; and
automatically share the photo with the at least one of the one or more other devices.
2. One or more computer-readable storage media as recited in claim 1, the connection comprising, for each of the one or more other devices, a user of the other device being included in a social network of a user of the device.
3. One or more computer-readable storage media as recited in claim 1, the multiple instructions further causing the one or more processors to:
receive, from one of the one or more other devices, an indication that a user of the other device has rejected the photo; and
share, in response to the indication, the photo with no other of the one or more other devices.
4. One or more computer-readable storage media as recited in claim 1, the multiple instructions further causing the one or more processors to associate one or more controls with the photo, the one or more controls restricting how the photo is shared.
5. One or more computer-readable storage media as recited in claim 4, the controls indicating properties and/or securities that the device is to have in order for the photo to be shared with the device.
6. One or more computer-readable ...
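The automatic sharing decision can be summarized as: nearby, connected, and permitted by any per-photo control. A sketch under those assumptions; the connection set and the allow callback are invented stand-ins for a social-network check and a sharing restriction.

    def share_photo(photo, device, nearby_devices, connections):
        recipients = []
        for other in nearby_devices:                        # devices in close proximity
            connected = (device, other) in connections      # e.g., users in same social network
            allowed = photo.get("allow", lambda d: True)(other)   # optional sharing control
            if connected and allowed:
                recipients.append(other)
        return recipients

    connections = {("phoneA", "phoneB"), ("phoneA", "phoneC")}
    photo = {"name": "sunset.jpg", "allow": lambda d: d != "phoneC"}
    print(share_photo(photo, "phoneA", ["phoneB", "phoneC", "phoneD"], connections))
    # ['phoneB'] -- phoneC is connected but excluded by the control, phoneD is not connected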

Published 15-09-2011

INTERACTING WITH A COMPUTER BASED APPLICATION

Number: US20110223995A1
Assignee: Microsoft Corp

A computing system runs an application (e.g., video game) that interacts with one or more actively engaged users. One or more physical properties of a group are sensed. The group may include the one or more actively engaged users and/or one or more entities not actively engaged with the application. The computing system will determine that the group (or the one or more entities not actively engaged with the application) have performed a predetermined action. A runtime condition of the application is changed in response to determining that the group (or the one or more entities not actively engaged with the computer based application) have performed the predetermined action. Examples of changing a runtime condition include moving an object, changing a score or changing an environmental condition of a video game.

Published 28-08-2012

Determine intended motions

Number: US0008253746B2

It may be desirable to apply corrective data to aspects of a captured image or a user-performed gesture for display of a visual representation that corresponds to the corrective data. The captured motion may be any motion in the physical space that is captured by the capture device, such as a camera. Aspects of a skeletal or mesh model of a person, which is generated based on the image data captured by the capture device, may be modified prior to animation. The modification may be made to the model generated from image data that represents a target or a target's motion, including user gestures, in the physical space. For example, certain joints of a skeletal model may be readjusted or realigned. A model of a target may be modified by applying differential correction, magnetism principles, binary snapping, confining virtual movement to defined spaces, or the like.
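Of the corrective techniques listed, binary snapping is the easiest to show compactly: a captured joint close enough to a canonical position is snapped onto it, otherwise it is left alone. The snap radius and the 2-D joint layout below are assumptions, not values from the publication.

    def snap_joints(model, canonical, snap_radius=0.1):
        corrected = {}
        for name, (x, y) in model.items():
            cx, cy = canonical[name]
            if ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 <= snap_radius:
                corrected[name] = (cx, cy)      # close enough: snap to the ideal position
            else:
                corrected[name] = (x, y)        # otherwise keep the captured position
        return corrected

    captured = {"right_hand": (0.95, 1.52), "right_elbow": (0.60, 1.10)}
    ideal = {"right_hand": (1.00, 1.50), "right_elbow": (0.40, 1.00)}
    print(snap_joints(captured, ideal))   # hand snaps, elbow stays as captured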

Published 14-04-2015

Physical characteristics based user identification for matchmaking

Number: US0009005029B2

One or more physical characteristics of each of multiple users are detected. These physical characteristics of a user can include physical attributes of the user (e.g., the user's height, length of the user's legs) and/or physical skills of the user (e.g., how high the user can jump). Based on these detected one or more physical characteristics of the users, two or more of the multiple users to share an online experience (e.g., play a multi-player game) are identified.

Published 27-01-2015

Isolate extraneous motions

Number: US0008942428B2

A system may receive image data and capture motion with respect to a target in a physical space and recognize a gesture from the captured motion. It may be desirable to isolate aspects of captured motion to differentiate random and extraneous motions. For example, a gesture may comprise motion of a user's right arm, and it may be desirable to isolate the motion of the user's right arm and exclude an interpretation of any other motion. Thus, the isolated aspect may be the focus of the received data for gesture recognition. Alternatively, the isolated aspects may be an aspect of the captured motion that is removed from consideration when identifying a gesture from the captured motion. For example, gesture filters may be modified to correspond to the user's natural lean to eliminate the effect the lean has on the registry of a motion with a gesture filter.

Published 24-09-2015

Integrated Interactive Space

Number: US20150271449A1
Assignee: Microsoft Technology Licensing LLC

Techniques for implementing an integrated interactive space are described. In implementations, video cameras that are positioned to capture video at different locations are synchronized such that aspects of the different locations can be used to generate an integrated interactive space. The integrated interactive space can enable users at the different locations to interact, such as via video interaction, audio interaction, and so on. In at least some embodiments, techniques can be implemented to adjust an image of a participant during a video session such that the participant appears to maintain eye contact with other video session participants at other locations. Techniques can also be implemented to provide a virtual shared space that can enable users to interact with the space, and can also enable users to interact with one another and/or objects that are displayed in the virtual shared space.

Published 08-08-2013

INTEGRATED INTERACTIVE SPACE

Number: US20130201276A1
Assignee: Microsoft Corporation

Techniques for implementing an integrated interactive space are described. In implementations, video cameras that are positioned to capture video at different locations are synchronized such that aspects of the different locations can be used to generate an integrated interactive space. The integrated interactive space can enable users at the different locations to interact, such as via video interaction, audio interaction, and so on. In at least some embodiments, techniques can be implemented to adjust an image of a participant during a video session such that the participant appears to maintain eye contact with other video session participants at other locations. Techniques can also be implemented to provide a virtual shared space that can enable users to interact with the space, and can also enable users to interact with one another and/or objects that are displayed in the virtual shared space.

1. A computer-implemented method, comprising:
synchronizing a first camera at a first location and a second camera at a second location into a common reference system;
generating an integrated interactive space using video data from the first camera and the second camera and based on the common reference system; and
presenting at least a portion of the integrated interactive space for display at one or more of the first location or the second location.
2. A method as described in claim 1, wherein the common reference system comprises a three-dimensional coordinate system in which images from the first location and the second location can be positioned.
3. A method as described in claim 1, wherein said synchronizing comprises:
capturing, using the first camera and the second camera, images of fiducial markers placed at the first location and the second location;
determining a position and orientation of the first camera and a position and orientation of the second camera by comparing attributes of the images of fiducial markers to known attributes of the fiducial markers; ...

Published 06-12-2012

AUTOMATED SENSOR DRIVEN MATCH-MAKING

Number: US20120309534A1
Assignee: MICROSOFT CORPORATION

A method of matching a player of a multi-player game with a remote participant includes recognizing the player, automatically identifying an observer within a threshold proximity to the player, using an identity of the observer to find one or more candidates to play as the remote participant of the multi-player game, and when selecting the remote participant, choosing a candidate from the one or more candidates above a non-candidate if the candidate satisfies a matching criteria.

1. A method of matching a player of a multi-player game with a remote participant, the method comprising:
recognizing the player;
automatically identifying an observer within a threshold proximity to the player;
using an identity of the observer to find one or more candidates to play as the remote participant of the multi-player game; and
when selecting the remote participant, choosing a candidate from the one or more candidates above a non-candidate if the candidate satisfies a matching criteria.
2. The method of claim 1, where automatically identifying the observer includes matching an observed skeletal model to a profile skeletal model included as part of a user profile saved in a network accessible database, the observed skeletal model being derived from three dimensional depth information collected via a depth camera imaging the observer.
3. The method of claim 1, where automatically identifying the observer includes matching an observed voice pattern to a profile voice signature included as part of a user profile saved in a network accessible database, the observed voice pattern being derived from audio recordings collected via a microphone listening to the observer.
4. The method of claim 1, where automatically identifying the observer includes matching an observed facial image to a profile face signature included as part of a user profile saved in a network accessible database, the observed facial image being derived from a digital image collected via a ...

Published 06-12-2012

PHYSICAL CHARACTERISTICS BASED USER IDENTIFICATION FOR MATCHMAKING

Number: US20120309538A1
Assignee: Microsoft Corporation

One or more physical characteristics of each of multiple users are detected. These physical characteristics of a user can include physical attributes of the user (e.g., the user's height, length of the user's legs) and/or physical skills of the user (e.g., how high the user can jump). Based on these detected one or more physical characteristics of the users, two or more of the multiple users to share an online experience (e.g., play a multi-player game) are identified.

Published 07-07-2015

Integrated interactive space

Number: US0009077846B2

Techniques for implementing an integrated interactive space are described. In implementations, video cameras that are positioned to capture video at different locations are synchronized such that aspects of the different locations can be used to generate an integrated interactive space. The integrated interactive space can enable users at the different locations to interact, such as via video interaction, audio interaction, and so on. In at least some embodiments, techniques can be implemented to adjust an image of a participant during a video session such that the participant appears to maintain eye contact with other video session participants at other locations. Techniques can also be implemented to provide a virtual shared space that can enable users to interact with the space, and can also enable users to interact with one another and/or objects that are displayed in the virtual shared space.

Published 08-06-2017

VIRTUAL LIGHT IN AUGMENTED REALITY

Number: US20170161939A1
Assignee: Microsoft Technology Licensing, LLC

A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment.

1. A method for a head mounted display (HMD), comprising:
observing ambient lighting conditions of a physical environment;
assessing a perspective of the HMD;
visually augmenting an appearance of the physical environment with a virtual object; and
creating an illusion of a virtual shadow of the virtual object by virtually illuminating a non-virtual-object, non-shadow region bordering the virtual shadow, an appearance of the non-virtual-object, non-shadow region being based on the observed ambient lighting conditions.
2. The method of claim 1, wherein the virtual shadow is rendered with one or more omitted pixels.
3. The method of claim 1, wherein the non-shadow region is displayed with relatively more see-through lighting than the virtual shadow.
4. The method of claim 1, wherein the virtual shadow is positioned in accordance with an ambient lighting model describing ambient lighting conditions of the physical environment.
5. A method for a head mounted display (HMD), comprising:
observing ambient lighting conditions of a physical environment; and
visually augmenting an appearance of the physical environment with a virtual object and a graphical representation of a non-virtual-object, non-shadow region bordering a virtual shadow of the virtual object, an appearance of the graphical representation of the non-virtual-object, non-shadow region being based on the observed ambient lighting conditions.
6. The method of claim 5, wherein the virtual shadow is positioned in accordance with an ambient lighting model describing ambient lighting conditions of the physical environment.
7. The method of claim 6, wherein the ...
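On an additive see-through display nothing can be drawn darker, so the claims brighten the region around the shadow instead of darkening the shadow itself. A toy grid version of that idea; the rectangles and the boost value are arbitrary.

    def shadow_overlay(width, height, shadow, border, boost=40):
        # shadow/border are (x0, y0, x1, y1) rectangles; returns per-pixel added light.
        def inside(r, x, y):
            return r[0] <= x < r[2] and r[1] <= y < r[3]
        overlay = [[0] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                if inside(border, x, y) and not inside(shadow, x, y):
                    overlay[y][x] = boost   # brighten only the bordering region
                # shadow pixels stay 0 ("omitted"), so they look darker by contrast
        return overlay

    for row in shadow_overlay(8, 4, shadow=(3, 1, 5, 3), border=(2, 0, 6, 4)):
        print(row)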

Published 30-10-2014

MIXED REALITY INTERACTIONS

Number: US20140320389A1

Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. A selected interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.

1. A mixed reality interaction system for interacting with a physical object in a mixed reality environment, the mixed reality interaction system comprising:
a head-mounted display device operatively connected to a computing device, the head-mounted display device including a display system for presenting the mixed reality environment and a plurality of input sensors including a camera for capturing an image of the physical object; and
a mixed reality interaction program executed by a processor of the computing device, the mixed reality interaction program configured to:
identify the physical object based on the captured image;
determine an interaction context for the identified physical object based on one or more aspects of the mixed reality environment;
query a stored profile for the physical object to determine a plurality of interaction modes for the physical object;
programmatically select a selected interaction mode from the plurality of interaction modes based on the interaction context;
receive a user input directed at the physical object via one of the input sensors of ...

Published 02-12-2010

Gestures Beyond Skeletal

Number: US20100306715A1
Assignee: Microsoft Corporation

Systems, methods and computer readable media are disclosed for gesture input beyond skeletal. A user's movement or body position is captured by a capture device of a system. Further, non-user-position data is received by the system, such as controller input by the user, an item that the user is wearing, a prop under the control of the user, or a second user's movement or body position. The system incorporates both the user-position data and the non-user-position data to determine one or more inputs the user made to the system.

Published 05-07-2016

Combining gestures beyond skeletal

Number: US0009383823B2

Systems, methods and computer readable media are disclosed for gesture input beyond skeletal. A user's movement or body position is captured by a capture device of a system. Further, non-user-position data is received by the system, such as controller input by the user, an item that the user is wearing, a prop under the control of the user, or a second user's movement or body position. The system incorporates both the user-position data and the non-user-position data to determine one or more inputs the user made to the system.

Published 22-06-2017

EXECUTABLE VIRTUAL OBJECTS ASSOCIATED WITH REAL OBJECTS

Number: US20170178410A1
Assignee: Microsoft Technology Licensing, LLC

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to a portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.

1. A portable display device, comprising:
one or more sensors including an image sensor;
a logic subsystem; and
a data-holding subsystem holding instructions executable by the logic subsystem to:
receive sensor input from the image sensor,
determine whether a field of view includes a real object comprising an associated executable virtual object based at least on the sensor input,
based at least on a determination that the field of view includes the real object comprising the associated executable virtual object, determine an intent of a user to interact with the associated executable virtual object, and
based at least on a determination of the intent of the user to interact with the associated executable virtual object, launch the executable object.
2. The portable display device of claim 1, where the executable virtual object is executable to display an image on the portable display device.
3. The portable display device of claim 1, where the executable virtual object is executable to present an audio content item.
4. The portable display device of claim 1, where the executable virtual object is executable to present an invitation to join an activity.
5. The portable display device of claim 1, wherein the image sensor is an outward-facing image sensor, and ...

Published 24-01-2017

Virtual light in augmented reality

Number: US0009551871B2

A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment.

Published 03-11-2015

Proximity and connection based photo sharing

Number: US0009179021B2

Photos are shared among devices that are in close proximity to one another and for which there is a connection among the devices. The photos can be shared automatically, or alternatively based on various user inputs. Various different controls can also be placed on sharing photos to restrict the other devices with which photos can be shared, the manner in which photos can be shared, and/or how the photos are shared.

Published 04-11-2010

Method to Control Perspective for a Camera-Controlled Computer

Number: US20100281439A1
Assignee: Microsoft Corporation

Systems, methods and computer readable media are disclosed for controlling perspective of a camera-controlled computer. A capture device captures user gestures and sends corresponding data to a recognizer engine. The recognizer engine analyzes the data with a plurality of filters, each filter corresponding to a gesture. Based on the output of those filters, a perspective control is determined, and a display device displays a new perspective corresponding to the perspective control.

Published 29-11-2012

COMMUNICATION BETWEEN AVATARS IN DIFFERENT GAMES

Number: US20120302350A1
Assignee: MICROSOFT CORPORATION

Synchronous and asynchronous communications between avatars are allowed. For synchronous communications, when multiple users are playing different games of the same game title and when the avatars of the multiple users are at the same location in their respective games they can communicate with one another, thus allowing the users of those avatars to communicate with one another. For asynchronous communications, an avatar of a particular user is left behind at a particular location in a game along with a recorded communication. When other users of other games are at that particular location, the avatar of that particular user is displayed and the recorded communication is presented to the other users.

Published 25-10-2016

Chaining animations

Number: US0009478057B2

In applications that display a representation of a user, it may be reasonable to insert a pre-canned animation rather than animating a user's captured motion. For example, in a tennis swing, the ball toss and take back in a serve could be a pre-canned animation, whereas the actual forward swing may be mapped from the user's gestures. An animation of a user's gestures can be chained together into sequences with pre-canned animations, where animation blending techniques can provide for a smoother transition between the animation types. Techniques for blending animations, which may comprise determining boundaries and transition points between pre-canned animations and animations based on captured motion, may improve animation efficiency. Gesture history, including joint position, velocity, and acceleration, can be used to determine user intent, seed parameters for subsequent animations and game control, and determine the subsequent gestures to initiate.

Published 25-11-2014

Multiplayer game invitation system

Number: US0008894484B2

A system and related methods for inviting a potential player to participate in a multiplayer game via a user head-mounted display device are provided. In one example, a potential player invitation program receives user voice data and determines that the user voice data is an invitation to participate in a multiplayer game. The program receives eye-tracking information, depth information, facial recognition information, potential player head-mounted display device information, and/or potential player voice data. The program associates the invitation with the potential player using the eye-tracking information, the depth information, the facial recognition information, the potential player head-mounted display device information, and/or the potential player voice data. The program matches a potential player account with the potential player. The program receives an acceptance response from the potential player, and joins the potential player account with a user account in participating in ...

Published 22-11-2016

Altering a view perspective within a display environment

Number: US0009498718B2

Disclosed herein are systems and methods for altering a view perspective within a display environment. For example, gesture data corresponding to a plurality of inputs may be stored. The input may be input into a game or application implemented by a computing device. Images of a user of the game or application may be captured. For example, a suitable capture device may capture several images of the user over a period of time. The images may be analyzed and processed for detecting a user's gesture. Aspects of the user's gesture may be compared to the stored gesture data for determining an intended gesture input for the user. The comparison may be part of an analysis for determining inputs corresponding to the gesture data, where one or more of the inputs are input into the game or application and cause a view perspective within the display environment to be altered.

Published 14-03-2017

Executable virtual objects associated with real objects

Number: US0009594537B2

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to a portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.

Published 22-11-2012

DETERMINE INTENDED MOTIONS

Number: US20120293518A1
Assignee: MICROSOFT CORPORATION

It may be desirable to apply corrective data to aspects of a captured image or a user-performed gesture for display of a visual representation that corresponds to the corrective data. The captured motion may be any motion in the physical space that is captured by the capture device, such as a camera. Aspects of a skeletal or mesh model of a person, which is generated based on the image data captured by the capture device, may be modified prior to animation. The modification may be made to the model generated from image data that represents a target or a target's motion, including user gestures, in the physical space. For example, certain joints of a skeletal model may be readjusted or realigned. A model of a target may be modified by applying differential correction, magnetism principles, binary snapping, confining virtual movement to defined spaces, or the like.

1. A system for modifying data representative of captured motion, the system comprising:
a processor; and
a memory communicatively coupled to the processor when the system is operational, the memory bearing processor-executable instructions that, when executed on the processor, cause the system to at least:
receive image data of a scene, the image data including data representative of captured motion, the image data having been captured with a camera;
generate a model of the captured motion based on the image data;
modify at least a portion of a size of the model to correspond to a digital representation of the model; and
render an avatar using the digital representation.
2. The system of claim 1, wherein the captured motion corresponds to a first user and the image data includes data representative of a second captured motion of a second user, and wherein the memory further bears processor-executable instructions that, when executed on the processor, cause the system to at least:
generate a second model of the second captured motion based on the image data;
modify at ...

Published 04-11-2010

SHOW BODY POSITION

Number: US20100281432A1

A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions or not know what gestures are applicable for an executing application. A user may not understand or know how to perform gestures that are applicable for the executing application. Providing visual feedback representing instructional gesture data to the user can teach the user how to properly gesture. The visual feedback may be provided in any number of suitable ways. For example, visual feedback may be provided via ghosted images, player avatars, or skeletal representations. The system can process prerecorded or live content for displaying visual feedback representing instructional gesture data. The feedback can portray the deltas between the user's actual position and the ideal gesture position.

Published 13-09-2016

Mixed reality interactions

Number: US0009443354B2

Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. A selected interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.

Published 04-11-2010

ALTERING A VIEW PERSPECTIVE WITHIN A DISPLAY ENVIRONMENT

Number: US20100281438A1
Assignee: Microsoft Corporation

Disclosed herein are systems and methods for altering a view perspective within a display environment. For example, gesture data corresponding to a plurality of inputs may be stored. The input may be input into a game or application implemented by a computing device. Images of a user of the game or application may be captured. For example, a suitable capture device may capture several images of the user over a period of time. The images may be analyzed and processed for detecting a user's gesture. Aspects of the user's gesture may be compared to the stored gesture data for determining an intended gesture input for the user. The comparison may be part of an analysis for determining inputs corresponding to the gesture data, where one or more of the inputs are input into the game or application and cause a view perspective within the display environment to be altered.

Published 19-12-2017

Intelligent gameplay photo capture

Number: US0009848106B2

Implementations for identifying, capturing, and presenting high-quality photo-representations of acts occurring during play of a game that employs motion tracking input technology are disclosed. As one example, a method is disclosed that includes capturing, via an optical interface, a plurality of photographs of a player in a capture volume during play of the electronic game. The method further includes, for each captured photograph of the plurality of captured photographs, comparing an event-based scoring parameter to an event depicted by or corresponding to the captured photograph. The method further includes assigning respective scores to the plurality of captured photographs based, at least in part, on the comparison to the event-based scoring parameter. The method further includes associating the captured photographs at an electronic storage media with the respective scores assigned to the captured photographs.

Published 29-12-2016

SKELETAL CONTROL OF THREE-DIMENSIONAL VIRTUAL WORLD

Number: US20160378197A1
Assignee: Microsoft Technology Licensing, LLC

A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three-dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured control, and a three-dimensional virtual world is controlled responsive to the gestured control.

1. A data holding device holding instructions executable by a logic device, the instructions comprising:
instructions to render a three-dimensional environment for display on a display device;
instructions to receive a machine-readable virtual skeleton structurally representing a human sighted by a depth camera having a field of view, the virtual skeleton including a hand-joint position in three dimensions corresponding to a position of a hand of the human anywhere within the field of view;
instructions to render a control cursor for display on the display device, a screen position of the control cursor moving based on corresponding movement of the hand-joint position;
instructions to lock the control cursor to an object in the three-dimensional environment if a grab threshold of the object is overcome;
instructions to, when the control cursor is locked to the object, change a screen position of the object based on corresponding movement of the hand-joint position, such that movement of the hand anywhere within the field of view effects a corresponding movement of the object in the three-dimensional environment; and
instructions to unlock the control cursor from the object at a release position within the three-dimensional environment if a release threshold of the object is overcome.
2. The data holding device of claim 1, where the grab threshold of the object is overcome if the hand is closed.
3. The data holding device of claim 1, where the grab threshold of the object is overcome if the control cursor is within a threshold distance of the object for a threshold duration.
4. The data holding device of claim 1, where the grab ...
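Claims 1-3 amount to a small state machine: lock the cursor to an object when a grab threshold is overcome, drag the object while locked, and unlock on release. A condensed sketch, assuming the grab test is a closed hand plus proximity; the class name, distance metric, and threshold are invented for illustration.

    class CursorLock:
        def __init__(self, grab_distance=0.2):
            self.grab_distance = grab_distance
            self.locked_object = None

        def update(self, cursor_pos, hand_closed, objects):
            if self.locked_object is None and hand_closed:
                for name, pos in objects.items():
                    if abs(cursor_pos[0] - pos[0]) + abs(cursor_pos[1] - pos[1]) <= self.grab_distance:
                        self.locked_object = name          # grab threshold overcome: lock
                        break
            elif self.locked_object and not hand_closed:
                self.locked_object = None                  # release threshold overcome: unlock
            if self.locked_object:
                objects[self.locked_object] = cursor_pos   # object follows the hand joint
            return self.locked_object

    lock = CursorLock()
    scene = {"crate": (0.5, 0.5)}
    print(lock.update((0.45, 0.55), True, scene), scene["crate"])   # crate follows the hand
    print(lock.update((0.30, 0.60), False, scene))                  # None -- released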

Published 06-03-2018

Method to control perspective for a camera-controlled computer

Number: US0009910509B2

Systems, methods and computer readable media are disclosed for controlling perspective of a camera-controlled computer. A capture device captures user gestures and sends corresponding data to a recognizer engine. The recognizer engine analyzes the data with a plurality of filters, each filter corresponding to a gesture. Based on the output of those filters, a perspective control is determined, and a display device displays a new perspective corresponding to the perspective control.

Published 19-06-2014

Method to Control Perspective for a Camera-Controlled Computer

Number: US20140168075A1

Systems, methods and computer readable media are disclosed for controlling perspective of a camera-controlled computer. A capture device captures user gestures and sends corresponding data to a recognizer engine. The recognizer engine analyzes the data with a plurality of filters, each filter corresponding to a gesture. Based on the output of those filters, a perspective control is determined, and a display device displays a new perspective corresponding to the perspective control.

1. A method for changing a perspective of a virtual scene displayed on a display device, comprising:
receiving data captured by a capture device, the capture device capturing movement or position of at least part of a user or an object controlled by the user;
analyzing the data to determine that the user or the object moved in a direction; and
in response to determining that the user or the object moved in the direction, modifying the perspective of the virtual scene displayed on the display device by moving the perspective of the virtual scene in the direction that the user or the object moved.
2. The method of claim 1, wherein analyzing the data to determine that the user or the object moved in the direction comprises determining that the user or the object moved to the user's left; and wherein modifying the perspective of the virtual scene comprises moving the perspective of the virtual scene to the user's left.
3. The method of claim 1, further comprising: magnifying a text displayed on the display device in response to determining that the user has moved away from the display device.
4. The method of claim 3, further comprising: maintaining a size of at least a portion of the virtual scene while magnifying the text.
5. The method of claim 3, further comprising: maintaining a size of a second text while magnifying the text.
6. The method of claim 1, further comprising: shrinking a text displayed on the display device in response to determining that the user has moved closer to the display device.
7. ...
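Claims 3-6 describe text that magnifies as the user backs away and shrinks as the user approaches, independent of the rest of the scene. A sketch of that scaling rule; the baseline distance and clamp bounds are assumptions.

    def text_scale(distance_m, baseline_m=2.0, min_scale=0.5, max_scale=3.0):
        # Farther than the baseline -> magnify; closer -> shrink, within clamped bounds.
        scale = distance_m / baseline_m
        return max(min_scale, min(max_scale, scale))

    for d in (1.0, 2.0, 4.0):
        print(f"distance {d} m -> text scale {text_scale(d):.2f}")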

Published 08-12-2011

AUTOMATIC DEPTH CAMERA AIMING

Number: US20110299728A1
Assignee: MICROSOFT CORPORATION

Automatic depth camera aiming is provided by a method which includes receiving from the depth camera one or more observed depth images of a scene. The method further includes, if a point of interest of a target is found within the scene, determining if the point of interest is within a far range relative to the depth camera. The method further includes, if the point of interest of the target is within the far range, operating the depth camera with a far logic, or if the point of interest of the target is not within the far range, operating the depth camera with a near logic.
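The near/far decision in the abstract reduces to a range test on the point of interest's depth. A minimal sketch; the cutoff value and mode names are assumptions, not values from the publication.

    FAR_RANGE_START_M = 2.5   # assumed boundary of the far range

    def aim_depth_camera(point_of_interest_depth_m):
        if point_of_interest_depth_m is None:
            return "search"                       # no target found in the scene
        if point_of_interest_depth_m >= FAR_RANGE_START_M:
            return "far-logic"                    # operate the camera with far logic
        return "near-logic"                       # otherwise use near logic

    print(aim_depth_camera(3.1))   # far-logic
    print(aim_depth_camera(1.2))   # near-logic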

Published 06-10-2016

Combining Gestures Beyond Skeletal

Number: US20160291700A1

Systems, methods and computer readable media are disclosed for gesture input beyond skeletal. A user's movement or body position is captured by a capture device of a system. Further, non-user-position data is received by the system, such as controller input by the user, an item that the user is wearing, a prop under the control of the user, or a second user's movement or body position. The system incorporates both the user-position data and the non-user-position data to determine one or more inputs the user made to the system.

1. A method for enabling a user to make hybrid-gesture input to an application, comprising:
receiving first data representing movement or position of the user captured by a capture device;
receiving additional data comprising a prop, clothing worn by the user, an object or scene captured by the capture device, user position data of a second user or users different from the user, a sound made by the user, a controller or remote control input, an amount and/or position of light in a scene, user interaction with a touch-sensitive device, or a facial expression of the user;
combining the first data with the additional data to form a combined gesture including combined movements or positions of the combined center of mass of the user and the additional data, but not determining that the first data alone or the additional data alone indicates a likelihood that particular system-recognized input was performed by the user; and
based at least on determining that the combined gesture corresponds to a particular system-recognized input, sending an output to the application representative of the particular system-recognized input.
2. The method of claim 1, wherein the first data and the additional data are received as a result of the first data and the additional data being entered at substantially the same time.
3. The method of claim 1, further comprising analyzing the first data and the additional data with stacked gesture filters wherein at least one ...

Published 22-05-2018

Multi-input user authentication on display device

Number: US0009977882B2

Embodiments are disclosed that relate to authenticating a user of a display device. For example, one disclosed embodiment includes displaying one or more virtual images on the display device, wherein the one or more virtual images include a set of augmented reality features. The method further includes identifying one or more movements of the user via data received from a sensor of the display device, and comparing the identified movements of the user to a predefined set of authentication information for the user that links user authentication to a predefined order of the augmented reality features. If the identified movements indicate that the user selected the augmented reality features in the predefined order, then the user is authenticated, and if the identified movements indicate that the user did not select the augmented reality features in the predefined order, then the user is not authenticated.
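The authentication rule is order-sensitive: the same features selected in a different order must fail. A sketch that stores only a digest of the enrolled order and compares in constant time; the feature names and the digest scheme are invented for illustration.

    import hashlib
    import hmac

    def fingerprint(selections):
        # Digest of the selection order, so the raw sequence need not be stored.
        return hashlib.sha256("|".join(selections).encode()).hexdigest()

    def authenticate(observed_selections, stored_fingerprint):
        # Compare in constant time so the check itself leaks nothing about the order.
        return hmac.compare_digest(fingerprint(observed_selections), stored_fingerprint)

    enrolled = fingerprint(["red-cube", "sphere", "pyramid"])          # predefined order
    print(authenticate(["red-cube", "sphere", "pyramid"], enrolled))   # True
    print(authenticate(["sphere", "red-cube", "pyramid"], enrolled))   # False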

Published 23-06-2020

Combining gestures beyond skeletal

Number: US0010691216B2

Systems, methods and computer readable media are disclosed for gesture input beyond skeletal. A user's movement or body position is captured by a capture device of a system. Further, non-user-position data is received by the system, such as controller input by the user, an item that the user is wearing, a prop under the control of the user, or a second user's movement or body position. The system incorporates both the user-position data and the non-user-position data to determine one or more inputs the user made to the system.

Published 02-12-2010

Localized Gesture Aggregation

Number: US20100306261A1
Assignee: Microsoft Corporation

Systems, methods and computer readable media are disclosed for a localized gesture aggregation. In a system where user movement is captured by a capture device to provide gesture input to the system, demographic information regarding users as well as data corresponding to how those users respectively make various gestures is gathered. When a new user begins to use the system, his demographic information is analyzed to determine a most likely way that he will attempt to make or find it easy to make a given gesture. That most likely way is then used to process the new user's gesture input.

Published 26-05-2015

Matching physical locations for shared virtual experience

Number: US0009041739B2

Embodiments for matching participants in a virtual multiplayer entertainment experience are provided. For example, one embodiment provides a method including receiving from each user of a plurality of users a request to join the virtual multiplayer entertainment experience, receiving from each user of the plurality of users information regarding characteristics of a physical space in which each user is located, and matching two or more users of the plurality of users for participation in the virtual multiplayer entertainment experience based on the characteristics of the physical space of each of the two or more users.

Published 11-01-2018

MIXED REALITY INTERACTIONS

Number: US20180012412A1
Assignee: Microsoft Technology Licensing, LLC

Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. A selected interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.

1. A mixed reality interaction system comprising:
a head-mounted display device including a display system, and a camera; and
a processor configured to:
identify a physical object in a mixed reality environment based on an image captured by the camera;
determine an interaction context for the identified physical object based on one or more aspects of the mixed reality environment;
programmatically select an interaction mode for the identified physical object based on the interaction context and a stored profile for the physical object;
interpret a user input directed at the physical object to correspond to a virtual action based on the selected interaction mode;
execute the virtual action to modify an appearance of a virtual object associated with the physical object; and
display the virtual object via the head-mounted display device with the modified appearance.
2. The mixed reality interaction system of claim 1, wherein the one or more aspects of the mixed reality environment include temporal data.
3. The mixed reality interaction system of claim 1, wherein ...

Published 08-11-2016

Skeletal control of three-dimensional virtual world

Number: US0009489053B2

A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three-dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured control, and a three-dimensional virtual world is controlled responsive to the gestured control.

Published 26-08-2014

Avatars of friends as non-player-characters

Number: US0008814693B2
Assignee: Microsoft Corporation

In accordance with one or more aspects, for a particular user one or more other users associated with that particular user are identified based on a social graph of that particular user. An avatar of at least one of the other users is obtained and included as a non-player-character in a game being played by that particular user. The particular user can provide requests to interact with the avatar of the second user (e.g., calling out the name of the second user, tapping the avatar of the second user on the shoulder, etc.), these requests being invitations for the second user to join in a game with the first user. An indication of such an invitation is presented to the second user, who can, for example, accept the invitation to join in a game with the first user.

Published 26-07-2016

Color vision deficit correction

Number: US0009398844B2

Embodiments related to improving a color-resolving ability of a user of a see-thru display device are disclosed. For example, one disclosed embodiment includes, on a see-thru display device, constructing and displaying virtual imagery to superpose onto real imagery sighted by the user through the see-thru display device. The virtual imagery is configured to accentuate a locus of the real imagery of a color poorly distinguishable by the user. Such virtual imagery is then displayed by superposing it onto the real imagery, in registry with the real imagery, in a field of view of the user.
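The accentuation step can be pictured as flagging pixels near the poorly distinguishable hue with a marker color the user can resolve. A toy RGB version of that idea; the distance tolerance and marker color are arbitrary assumptions.

    def accentuate(pixels, target, tolerance=30, marker=(0, 0, 255)):
        # pixels: list of (r, g, b); pixels close to `target` are flagged with `marker`.
        def close(c1, c2):
            return sum(abs(a - b) for a, b in zip(c1, c2)) <= tolerance
        return [marker if close(p, target) else p for p in pixels]

    scene = [(200, 40, 40), (60, 180, 60), (195, 50, 35)]   # two reds, one green
    print(accentuate(scene, target=(200, 40, 40)))          # reds replaced by blue marker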

Publication date: 06-12-2012

EMOTION-BASED USER IDENTIFICATION FOR ONLINE EXPERIENCES

Number: US20120311032A1
Assignee: MICROSOFT CORPORATION

Emotional response data of a particular user, when the particular user is interacting with each of multiple other users, is collected. Using the emotional response data, an emotion of the particular user when interacting with each of multiple other users is determined. Based on the determined emotions, one or more of the multiple other users are identified to share an online experience with the particular user.

1. A method comprising:
determining, for each of multiple other users, an emotion of a first user when interacting with the other user; and
identifying, based at least in part on the determined emotions, one or more of the multiple other users to share an online experience with the first user.
2. A method as recited in claim 1, further comprising:
generating, based on the determined emotions, a score for each of the multiple other users; and
presenting identifiers of one or more of the multiple other users having the highest scores.
3. A method as recited in claim 1, the determining comprising determining the emotion of the first user based on emotional responses of the first user during interaction of the first user with the other user during another online experience with the other user.
4. A method as recited in claim 1, the determining comprising determining the emotion of the first user based on emotional responses of the first user during interaction of the first user with the other user during an in-person experience with the other user.
5. A method as recited in claim 1, the determining comprising determining the emotion of the first user based on data indicating emotional responses of the first user in communications between the first user and the other user.
6. A method as recited in claim 1, the determining comprising determining, for each of multiple types of experiences with each of multiple other users, an emotion of the first user when interacting with the other user with the type of experience, the identifying comprising ...
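
A hedged sketch of the scoring idea in claim 2: average a per-interaction emotion score for each acquaintance and surface the best matches. The emotion-to-number mapping and data shape are assumptions for illustration.

```python
# Score each other user from recorded emotional responses, then rank.
from statistics import mean

EMOTION_SCORE = {"happy": 2, "neutral": 0, "frustrated": -2}

def rank_partners(history: dict, top_n: int = 2) -> list:
    scores = {
        user: mean(EMOTION_SCORE.get(e, 0) for e in emotions)
        for user, emotions in history.items() if emotions
    }
    # Present identifiers of the users with the highest scores.
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

history = {
    "alice": ["happy", "happy", "neutral"],
    "bob": ["frustrated", "neutral"],
    "carol": ["happy", "frustrated", "happy"],
}
print(rank_partners(history))  # -> ['alice', 'carol']
```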

Publication date: 27-12-2012

Directed Performance In Motion Capture System

Number: US20120326976A1
Assignee: MICROSOFT CORPORATION

Techniques for enhancing the use of a motion capture system are provided. A motion capture system tracks movement and audio inputs from a person in a physical space, and provides the inputs to an application, which displays a virtual space on a display. Bodily movements can be used to define traits of an avatar in the virtual space. The person can be directed to perform the movements by a coaching avatar, or visual or audio cues in the virtual space. The application can respond to the detected movements and voice commands or voice volume of the person to define avatar traits and initiate pre-scripted audio-visual events in the virtual space to provide an entertaining experience. A performance in the virtual space can be captured and played back with automatic modifications, such as alterations to the avatar's voice or appearance, or modifications made by another person.

1. A motion capture system, comprising:
a depth camera system, the depth camera system obtains images of a field of view;
a display; and
a processor in communication with the depth camera system and the display, the processor executes instructions to:
display a virtual space comprising an avatar on the display, provide directions to a person, the person performs movements in the field of view in a first time period in response to the directions, process the images to detect the movements of the person, update the virtual space so that the avatar provides a performance, the avatar exhibits a trait and moves correspondingly to the movements of the person in real time as the person performs the movements in the performance, and provide a play back of the performance in a second time period, the avatar exhibits a modification to the trait and moves correspondingly to the movements of the person in the play back of the performance.
2. The motion capture system of claim 1, wherein:
the trait comprises a costume of the avatar.
3. The motion capture system of claim 2, wherein:
the costume of the avatar is ...

Publication date: 21-02-2013

PROVIDING CONTEXTUAL PERSONAL INFORMATION BY A MIXED REALITY DEVICE

Number: US20130044130A1

The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in a location of the user who satisfies the person selection criteria to a cloud based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view. An identifier and a position indicator of the person in the location is output if not. Directional sensors on the display device may also be used for determining a position of the person. Cloud based executing software can identify and track the positions of people based on image and non-image data from display devices in the location.

1. One or more processor-readable storage devices having instructions encoded thereon for causing one or more software controlled processors to execute a method for providing contextual personal information by a mixed reality display device system, the method comprising:
receiving and storing person selection criteria associated with a user wearing the mixed reality display device system;
sending a request including a location of the user and the person selection criteria to a personal information service engine executing on one or more remote computer systems for a personal identification data set for each person sharing the location and satisfying the person selection criteria;
receiving at least one personal identification data set from the personal identification service engine for a person sharing the location;
determining whether the person associated with the at least one personal identification data set is in the field of view of the mixed reality display device system; and
responsive to the person associated with the at least one personal identification data set being in the field of view, ...

Publication date: 21-03-2013

Recognizing User Intent In Motion Capture System

Number: US20130074002A1
Assignee: MICROSOFT CORPORATION

Techniques for facilitating interaction with an application in a motion capture system allow a person to easily begin interacting without manual setup. A depth camera system tracks a person in physical space and determines a probabilistic measure of the person's intent to engage or disengage with the application based on location, stance and movement. Absolute location in a field of view of the depth camera, and location relative to another person, can be evaluated. Stance can include facing a depth camera, indicating a willingness to interact. Movements can include moving toward or away from a central area in the physical space, walking through the field of view, and movements which occur while standing generally in one location, such as moving one's arms around, gesturing, or shifting weight from one foot to another.

1. Tangible computer readable storage device having computer readable software embodied thereon for programming a processor to perform a method for recognizing an intent of a person to engage with an application in a motion capture system, the method comprising:
receiving images of a field of view of the motion capture system;
based on the images, distinguishing a person's body;
based on the distinguishing, determining a probabilistic measure of an intent by the person to engage with the application;
based on the probabilistic measure of the intent by the person to engage with the application, determining that the person does not intend to engage with the application at a first time and determining that the person intends to engage with the application at a second time;
in response to determining that the person intends to engage with the application, allowing the person to engage with the application by automatically associating a profile and an avatar with the person in the application, and displaying the avatar in a virtual space on a display; and
updating the display by controlling the avatar as the person engages with the application by moving the ...
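
A minimal sketch of combining the location, stance, and movement cues into a probabilistic engagement measure, as the abstract describes. The weights and the 0.5 threshold are assumptions, not values from the patent.

```python
# Combine binary cues into an assumed weighted engagement probability.
def engagement_probability(in_central_area: bool,
                           facing_camera: bool,
                           moving_toward_center: bool) -> float:
    score = 0.0
    score += 0.4 if in_central_area else 0.0
    score += 0.35 if facing_camera else 0.0
    score += 0.25 if moving_toward_center else 0.0
    return score

p = engagement_probability(True, True, False)
print(f"p = {p:.2f}", "-> engaged" if p >= 0.5 else "-> not engaged")
```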

Publication date: 04-04-2013

PERSONAL AUDIO/VISUAL SYSTEM

Number: US20130083003A1

The technology described herein includes a see-through, near-eye, mixed reality display device for providing customized experiences for a user. The system can be used in various entertainment, sports, shopping and theme-park situations to provide a mixed reality experience.

1. A method for presenting a personalized experience using a personal A/V apparatus, comprising:
automatically determining a three dimensional location of the personal A/V apparatus, the personal A/V apparatus includes one or more sensors and a see-through display;
automatically determining an orientation of the personal A/V apparatus;
automatically determining a gaze of a user looking through the see-through display of the personal A/V apparatus;
automatically determining a three dimensional location of a movable object in the field of view of the user through the see-through display, the determining of the three dimensional location of the movable object is performed using the one or more sensors;
transmitting the three dimensional location of the personal A/V apparatus, the orientation, the gaze and the three dimensional location of the movable object to a server system;
accessing weather data at the server system and automatically determining the effects of weather on the movement of the movable object;
accessing course data at the server system;
accessing the user's profile at the server system, the user's profile including information about the user's skill and past performance;
automatically determining a recommended action on the movable object based on the three dimensional location of the movable object, the weather data and the course data;
automatically adjusting the recommendation based on the user's skill and past performance;
transmitting the adjusted recommendation to the personal A/V apparatus; and
displaying the adjusted recommendation in the see-through display of the personal A/V apparatus.
2. The method of claim 1, further comprising:
automatically tracking the movable object after the user ...

Publication date: 04-04-2013

CHANGING EXPERIENCE USING PERSONAL A/V SYSTEM

Number: US20130083007A1

A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction.

1. A method for generating an augmented reality environment using a mobile device, comprising:
detecting a user within a particular area;
acquiring a user profile associated with the user;
determining an enhancement package based on the user profile, the enhancement package includes one or more virtual objects that have not been previously viewed by the user;
determining that the user is in a particular physiological state;
adapting the one or more virtual objects based on the particular physiological state; and
displaying on the mobile device one or more images associated with the one or more virtual objects, the one or more images are displayed such that the one or more virtual objects are perceived to exist within the particular area.
2. The method of claim 1, further comprising:
receiving and storing feedback from the user regarding the enhancement package, the user profile is updated to reflect the feedback from the user.
3. The method of claim 1, wherein:
the adapting the one or more virtual objects includes substituting the one or more virtual objects with one or more different ...

Publication date: 04-04-2013

ENRICHED EXPERIENCE USING PERSONAL A/V SYSTEM

Number: US20130083008A1

A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction.

1. A method for generating an augmented reality environment using a mobile device, comprising:
detecting a user within a particular area;
acquiring a user profile associated with the user;
determining an enhancement package based on the user profile, the enhancement package includes one or more virtual objects that have not been previously viewed by the user;
determining that the user is in a particular physiological state;
adapting the one or more virtual objects based on the particular physiological state; and
displaying on the mobile device one or more images associated with the one or more virtual objects, the one or more images are displayed such that the one or more virtual objects are perceived to exist within the particular area.
2. The method of claim 1, further comprising:
receiving and storing feedback from the user regarding the enhancement package, the user profile is updated to reflect the feedback from the user.
3. The method of claim 1, wherein:
the adapting the one or more virtual objects includes substituting the one or more virtual objects with one or more different ...

Publication date: 04-04-2013

EXERCISING APPLICATIONS FOR PERSONAL AUDIO/VISUAL SYSTEM

Number: US20130083009A1

The technology described herein includes a see-through, near-eye, mixed reality display device for providing customized experiences for a user. The personal A/V apparatus serves as an exercise program that is always with the user, provides motivation for the user, visually tells the user how to exercise, and lets the user exercise with other people who are not present.

1. A method for presenting a personalized experience using a personal see-through A/V apparatus, comprising:
accessing a location of the personal see-through A/V apparatus;
automatically determining an exercise routine for a user based on the location; and
presenting a virtual image in the personal see-through A/V apparatus based on the exercise routine.
2. The method of claim 1, wherein:
the presenting a virtual image in the personal see-through A/V apparatus includes presenting an image of someone performing the exercise routine based on data for a past performance of the exercise routine so that the user can see the virtual image inserted into a real scene viewed through the personal see-through A/V apparatus as the user performs the exercise routine.
3. The method of claim 1, wherein the presenting a virtual image in the personal see-through A/V apparatus based on the exercise routine includes:
augmenting scenery on a route of the exercise routine so that the user can see additional scenery inserted into real scenery viewed through the personal see-through A/V apparatus.
4. The method of claim 1, further comprising recording data for a user wearing the personal see-through A/V apparatus for a period of time in which the user is not exercising, wherein:
the automatically determining an exercise routine for a user further includes:
accessing a fitness goal for the user for the period of time including the time during which the user actions were recorded;
determining the exercise routine based on the recorded user actions for the user to meet the fitness goal.
5. The method of claim 4, wherein the ...

Publication date: 04-04-2013

REPRESENTING A LOCATION AT A PREVIOUS TIME PERIOD USING AN AUGMENTED REALITY DISPLAY

Number: US20130083011A1

Technology is described for representing a physical location at a previous time period with three dimensional (3D) virtual data displayed by a near-eye, augmented reality display of a personal audiovisual (A/V) apparatus. The personal A/V apparatus is identified as being within the physical location, and one or more objects in a display field of view of the near-eye, augmented reality display are automatically identified based on a three dimensional mapping of objects in the physical location. User input, which may be natural user interface (NUI) input, indicates a previous time period, and one or more 3D virtual objects associated with the previous time period are displayed from a user perspective associated with the display field of view. An object may be erased from the display field of view, and a camera effect may be applied when changing between display fields of view.

1. A method for representing a physical location at a previous time period with three dimensional (3D) virtual data displayed by a near-eye, augmented reality (AR) display of a personal audiovisual (A/V) apparatus comprising:
automatically identifying the personal A/V apparatus is within the physical location based on location data detected by the personal A/V apparatus;
automatically identifying one or more objects in a display field of view of the near-eye, augmented reality display based on a three dimensional mapping of objects in the physical location;
identifying user input indicating selection of a previous time period; and
displaying three-dimensional (3D) virtual data associated with the previous time period based on the one or more objects in the display field of view and based on a user perspective associated with the display field of view.
2. The method of claim 1, further comprising:
identifying a change in the display field of view; and
updating the displaying of the 3D virtual data associated with the previous time period based on the change in the display field of view.
3. The method of further ...

Publication date: 04-04-2013

PERSONAL AUDIO/VISUAL SYSTEM WITH HOLOGRAPHIC OBJECTS

Number: US20130083018A1

A system for generating an augmented reality environment using state-based virtual objects is described. A state-based virtual object may be associated with a plurality of different states. Each state of the plurality of different states may correspond with a unique set of triggering events different from those of any other state. The set of triggering events associated with a particular state may be used to determine when a state change from the particular state is required. In some cases, each state of the plurality of different states may be associated with a different 3-D model or shape. The plurality of different states may be defined using a predetermined and standardized file format that supports state-based virtual objects. In some embodiments, one or more potential state changes from a particular state may be predicted based on one or more triggering probabilities associated with the set of triggering events.

1. A method for generating an augmented reality environment using a mobile device, comprising:
acquiring a particular file of a predetermined file format, the particular file includes information associated with one or more virtual objects, the particular file includes state information for each virtual object of the one or more virtual objects, the one or more virtual objects include a first virtual object, the first virtual object is associated with a first state and a second state different from the first state, the first state is associated with one or more triggering events, a first triggering event of the one or more triggering events is associated with the second state;
setting the first virtual object into the first state;
detecting the first triggering event;
setting the first virtual object into the second state in response to the detecting the first triggering event, the setting the first virtual object into the second state includes acquiring one or more new triggering events different from the one or more triggering events; and
generating and ...
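
A minimal sketch of a state-based virtual object as the abstract describes it: each state carries its own set of triggering events, and a detected trigger moves the object to the state associated with that trigger. The object, states, and events below are illustrative assumptions.

```python
# State machine for a hypothetical "door" virtual object.
class StateBasedVirtualObject:
    def __init__(self, transitions: dict, initial: str):
        # transitions[state][event] -> next state
        self.transitions = transitions
        self.state = initial

    def trigger(self, event: str) -> str:
        # Only events registered for the current state cause a change;
        # entering a new state implicitly swaps in that state's events.
        self.state = self.transitions.get(self.state, {}).get(event, self.state)
        return self.state

door = StateBasedVirtualObject(
    transitions={"closed": {"user_gaze": "open"},
                 "open": {"timeout": "closed"}},
    initial="closed",
)
print(door.trigger("user_gaze"))  # -> open
print(door.trigger("timeout"))    # -> closed
```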

Publication date: 04-04-2013

PERSONAL A/V SYSTEM WITH CONTEXT RELEVANT INFORMATION

Number: US20130083062A1

A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction.

1. A method for generating an augmented reality environment using a mobile device, comprising:
detecting a user of the mobile device within a particular waiting area of an attraction;
acquiring virtual object information associated with the attraction, the virtual object information includes one or more virtual objects; and
generating and displaying on the mobile device one or more images associated with the one or more virtual objects, the one or more images are displayed such that the one or more virtual objects are perceived to exist within the particular waiting area;
detecting the user exiting the particular waiting area; and
disabling the one or more virtual objects in response to the detecting the user exiting the particular waiting area.
2. The method of claim 1, further comprising:
identifying an age associated with the user, the acquiring virtual object information includes acquiring virtual object information associated with the attraction based on the age of the user.
3. The method of claim 2, further comprising:
acquiring an attraction placement test, the attraction ...

Publication date: 04-04-2013

Service Provision Using Personal Audio/Visual System

Number: US20130083063A1

A collaborative on-demand system allows a user of a head-mounted display device (HMDD) to obtain assistance with an activity from a qualified service provider. In a session, the user and service provider exchange camera-captured images and augmented reality images. A gaze-detection capability of the HMDD allows the user to mark areas of interest in a scene. The service provider can similarly mark areas of the scene, as well as provide camera-captured images of the service provider's hand or arm pointing to or touching an object of the scene. The service provider can also select an animation or text to be displayed on the HMDD. A server can match user requests with qualified service providers which meet parameters regarding fee, location, rating and other preferences. Or, service providers can review open requests and self-select appropriate requests, initiating contact with a user.

1. A method for use of a head-mounted display device worn by a service consumer, the method comprising:
receiving image data of a scene from at least one forward-facing camera;
communicating the image data of the scene to a computing device of a service provider, the service provider generating data based on the image data of the scene, to assist the service consumer in performing an activity in the scene;
receiving the data generated by the service provider; and
controlling an augmented reality projection system based on the data generated by the service provider to project at least one augmented reality image to the service consumer, to assist the service consumer in performing the activity.
2. The method of claim 1, further comprising:
obtaining gaze direction data, the gaze direction data indicating an area of the scene at which the service consumer gazes; and
communicating the gaze direction data to the computing device of the service provider, to identify, at the computing device of the service provider, the area of the scene at which the service consumer gazes.
3. The method of claim 1, ...

Publication date: 04-04-2013

PERSONAL AUDIO/VISUAL APPARATUS PROVIDING RESOURCE MANAGEMENT

Number: US20130083064A1

Technology is described for resource management based on data including image data of a resource captured by at least one capture device of at least one personal audiovisual (A/V) apparatus including a near-eye, augmented reality (AR) display. A resource is automatically identified from image data captured by at least one capture device of at least one personal A/V apparatus and object reference data. A location in which the resource is situated and a 3D space position or volume of the resource in the location is tracked. A property of the resource is also determined from the image data and tracked. A function of a resource may also be stored for determining whether the resource is usable for a task. Responsive to notification criteria for the resource being satisfied, image data related to the resource is displayed on the near-eye AR display.

1. A method for providing resource management using one or more personal audiovisual (A/V) apparatus including a near-eye, augmented reality (AR) display comprising:
automatically identifying a resource based on image data of the resource captured by at least one capture device of at least one personal A/V apparatus and object reference data;
automatically tracking a three dimensional (3D) space position of the resource in a location identified based on location data detected by the at least one personal A/V apparatus;
automatically determining a property of the resource based on the image data of the resource;
automatically tracking the property of the resource; and
automatically causing display of image data related to the resource in the near-eye, augmented reality display based on a notification criteria for the property associated with the resource.
2. The method of claim 1, wherein the property associated with the resource comprises at least one of the following:
a quantity;
an expiration date;
a physical damage indicator;
a quality control indicator; and
a nutritional value.
3. The method of further comprising:
generating and storing a ...
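
A hedged sketch of the notification idea: track a property (here, an expiration date, one of the properties listed in claim 2) per resource and fire a display notification when its criterion is met. The resource names and threshold are invented for illustration.

```python
# Check each tracked resource's property against a notification criterion.
from datetime import date

resources = {
    "milk": {"expiration": date(2013, 4, 10), "quantity": 1},
    "eggs": {"expiration": date(2013, 4, 2), "quantity": 12},
}

def notifications(today: date, min_days: int = 3) -> list:
    notes = []
    for name, props in resources.items():
        days_left = (props["expiration"] - today).days
        if days_left <= min_days:  # notification criterion satisfied
            notes.append(f"{name}: expires in {days_left} day(s)")
    return notes

print(notifications(date(2013, 4, 1)))  # -> ['eggs: expires in 1 day(s)']
```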

Publication date: 04-04-2013

VIRTUAL SPECTATOR EXPERIENCE WITH A PERSONAL AUDIO/VISUAL APPARATUS

Number: US20130083173A1

Technology is described for providing a virtual spectator experience for a user of a personal A/V apparatus including a near-eye, augmented reality (AR) display. A position volume of an event object participating in an event in a first 3D coordinate system for a first location is received and mapped to a second position volume in a second 3D coordinate system at a second location remote from where the event is occurring. A display field of view of the near-eye AR display at the second location is determined, and real-time 3D virtual data representing the one or more event objects which are positioned within the display field of view are displayed in the near-eye AR display. A user may select a viewing position from which to view the event. Additionally, virtual data of a second user may be displayed at a position relative to a first user.

1. A method for providing a virtual spectator experience of an event for viewing with a near-eye, augmented reality display of a personal audiovisual (A/V) apparatus comprising:
receiving in real time one or more positions of one or more event objects participating in the event occurring at a first location remote from a second location;
mapping the one or more positions of the one or more event objects in the first 3D coordinate system for the first location to a second 3D coordinate system for a second location remote from the first location;
determining a display field of view of a near-eye, augmented reality display of a personal A/V apparatus being worn by a user at the second location; and
sending in real time 3D virtual data representing the one or more event objects which are within the display field of view to the personal A/V apparatus at the second location.
2. The method of claim 1, wherein the near-eye, augmented reality display is a near-eye, see-through, augmented reality display.
3. The method of further comprising receiving in real time 3D virtual data of the one or more event objects which include ...
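
A simple sketch of the coordinate-mapping step: an event object's position in the venue's coordinate system is carried into a remote room's coordinate system with a scale, a yaw rotation, and a translation. The transform values below are assumptions chosen for illustration.

```python
# Map (x, y, z) from the event venue's frame into a remote room's frame.
import math

def map_position(p, scale, yaw_deg, offset):
    """Yaw rotation on (x, z), uniform scale, then translation."""
    x, y, z = p
    yaw = math.radians(yaw_deg)
    xr = x * math.cos(yaw) - z * math.sin(yaw)
    zr = x * math.sin(yaw) + z * math.cos(yaw)
    return (scale * xr + offset[0], scale * y + offset[1], scale * zr + offset[2])

# A ball at mid-court, shrunk 50x to fit on a living-room table.
print(map_position((14.0, 1.2, 7.5), scale=0.02, yaw_deg=90, offset=(0.5, 0.7, 0.3)))
```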

Publication date: 04-04-2013

Sharing Games Using Personal Audio/Visual Apparatus

Number: US20130084970A1

A game can be created, shared and played using a personal audio/visual apparatus such as a head-mounted display device (HMDD). Rules of the game, and a configuration of the game space, can be standard or custom. Boundary points of the game can be defined by a gaze direction of the HMDD, by the user's location, by a model of a physical game space such as an instrumented court or by a template. Players can be identified and notified of the availability of a game using a server push technology. For example, a user in a particular location may be notified of the availability of a game at that location. A server manages the game, including storing the rules, boundaries and a game state. The game state can identify players and their scores. Real world objects can be imaged and provided as virtual objects in the game space.

1. A method for sharing a game, comprising:
defining a characteristic of a game using a sensor of a first head-mounted display device, the characteristic is defined with respect to a physical environment of a user of the first head-mounted display device; and
sharing the game, including the characteristic of the game, with at least a user of a second head-mounted display device via a network.
2. The method of claim 1, wherein:
the sensor captures an image of the physical environment; and
the image of the physical environment is used to provide a model of a game space of the game.
3. The method of claim 1, further comprising:
identifying one or more other selected users with whom the game is to be shared, the sharing is responsive to the identifying.
4. The method of claim 1, wherein:
the characteristic comprises a location of the user in the physical environment, a game space of the game is linked to the location.
5. The method of claim 1, wherein:
the characteristic comprises a desired size of a game space of the game;
the sensor determines a size of the physical environment of the user; and
the method performed further comprises determining whether the size ...

Publication date: 04-04-2013

Personal Audio/Visual System Providing Allergy Awareness

Number: US20130085345A1
Assignee: Individual

A system provides a recommendation of food items to a user based on nutritional preferences of the user, using a head-mounted display device (HMDD) worn by the user. In a store, a forward-facing camera of the HMDD captures an image of a food item. The food item can be identified by the image, such as based on packaging of the food item. Nutritional parameters of the food item are compared to nutritional preferences of the user to determine whether the food item is recommended. The HMDD displays an augmented reality image to the user indicating whether the food item is recommended. If the food item is not recommended, a substitute food item can be identified. The nutritional preferences can indicate food allergies, preferences for low calorie foods and so forth. In a restaurant, the HMDD can recommend menu selections for a user.
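
A hedged sketch of the recommendation check: compare a food item's nutrition facts against the user's preferences and, if it fails, look for a substitute. All catalog data, preference fields, and thresholds below are made up for illustration.

```python
# Compare nutritional parameters to preferences; suggest a substitute.
PREFS = {"max_calories": 200, "allergens": {"peanut"}}

CATALOG = {
    "choc_bar": {"calories": 280, "allergens": {"peanut"}},
    "fruit_bar": {"calories": 150, "allergens": set()},
}

def recommended(item: str) -> bool:
    facts = CATALOG[item]
    return (facts["calories"] <= PREFS["max_calories"]
            and not (facts["allergens"] & PREFS["allergens"]))

def substitute(item: str):
    # First catalog item that passes the same check, if any.
    return next((name for name in CATALOG
                 if name != item and recommended(name)), None)

print(recommended("choc_bar"), substitute("choc_bar"))  # False fruit_bar
```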

Publication date: 18-04-2013

ENHANCING A SPORT USING AN AUGMENTED REALITY DISPLAY

Number: US20130095924A1

Technology is described for providing a personalized sport performance experience with three dimensional (3D) virtual data displayed by a near-eye, augmented reality display of a personal audiovisual (A/V) apparatus. A physical movement recommendation is determined for the user performing a sport based on skills data for the user for the sport, physical characteristics of the user, and 3D space positions for at least one or more sport objects. 3D virtual data depicting one or more visual guides for assisting the user in performing the physical movement recommendation may be displayed from a user perspective associated with a display field of view of the near-eye AR display. An avatar may also be displayed by the near-eye AR display performing a sport. The avatar may perform the sport interactively with the user or be displayed performing a prior performance of an individual represented by the avatar.

1. A method for providing a personalized sport performance experience with three dimensional (3D) virtual data being displayed by a near-eye, augmented reality (AR) display of a personal audiovisual (A/V) apparatus comprising:
automatically identifying a physical location which the personal A/V apparatus is within based on location data detected by the personal A/V apparatus;
automatically identifying one or more 3D space positions of at least one or more sport objects in a sport performance area associated with the physical location based on a three dimensional mapping of objects in the sport performance area;
accessing a memory for physical characteristics of a user and skills data for a sport stored for the user in user profile data;
determining a physical movement recommendation by a processor for the user performing the sport based on the skills data for the sport, the physical characteristics of the user, and 3D space positions for at least the one or more sport objects; and
displaying three-dimensional (3D) virtual data depicting one or more visual guides for ...

Publication date: 09-05-2013

SEE-THROUGH DISPLAY BRIGHTNESS CONTROL

Number: US20130114043A1

The technology provides various embodiments for controlling brightness of a see-through, near-eye, mixed reality display device based on light intensity of what the user is gazing at. The opacity of the display can be altered, such that external light is reduced if the wearer is looking at a bright object. The wearer's pupil size may be determined and used to adjust the brightness used to display images, as well as the opacity of the display. A suitable balance between opacity and brightness used to display images may be determined that allows real and virtual objects to be seen clearly, while not causing damage or discomfort to the wearer's eyes.

1. A method comprising:
estimating a region at which a wearer of a see-through display is gazing using an eye-tracking camera;
determining light intensity of the region at which the user is gazing; and
adjusting brightness of the see-through display based on the light intensity of the region.
2. The method of claim 1, wherein the adjusting brightness of the see-through display based on the light intensity of the region includes:
adjusting the opacity of the see-through display.
3. The method of claim 1, wherein the adjusting brightness of the see-through display based on the light intensity of the region includes:
adjusting the intensity of light projected by the see-through display.
4. The method of claim 1, further comprising:
determining a pupil size of the wearer, the adjusting brightness of the see-through display based on the light intensity of the region is further based on the pupil size of the wearer.
5. The method of claim 4, wherein the determining a pupil size of the wearer is performed using 3D imaging.
6. The method of claim 1, further comprising:
determining a distance between the wearer's eyes and the see-through display based on 3D imaging, the adjusting brightness of the see-through display is further based on the distance.
7. The method of claim 1, wherein the adjusting brightness of the see-through display is ...
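
An illustrative balance between display brightness and opacity driven by the light intensity of the gazed-at region. The response curve, the 2000-nit normalization, and the clamp values are assumptions chosen for readability, not values from the patent.

```python
# Derive (brightness, opacity) from the gazed region's light intensity.
def adjust(gaze_region_nits: float) -> tuple:
    """Return (display_brightness, opacity), each in [0, 1]."""
    # Normalize against an assumed 2000-nit bright outdoor scene.
    level = min(gaze_region_nits / 2000.0, 1.0)
    opacity = 0.2 + 0.6 * level     # darken more in bright scenes
    brightness = 0.3 + 0.5 * level  # boost projection to compensate
    return round(brightness, 2), round(opacity, 2)

for nits in (50, 500, 2000):
    print(nits, adjust(nits))
```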

Publication date: 23-05-2013

VIDEO COMPRESSION USING VIRTUAL SKELETON

Number: US20130127994A1

Optical sensor information captured via one or more optical sensors imaging a scene that includes a human subject is received by a computing device. The optical sensor information is processed by the computing device to model the human subject with a virtual skeleton, and to obtain surface information representing the human subject. The virtual skeleton is transmitted by the computing device to a remote computing device at a higher frame rate than the surface information. Virtual skeleton frames are used by the remote computing device to estimate surface information for frames that have not been transmitted by the computing device.

1. A method for a computing system, comprising:
receiving optical sensor information captured via one or more optical sensors, the optical sensor information imaging a scene including a human subject;
processing the optical sensor information to model the human subject with a virtual skeleton;
processing the optical sensor information to obtain surface information representing the human subject;
transmitting the virtual skeleton to a remote computing device at a first frame rate; and
transmitting the surface information to the remote computing device at a second frame rate that is less than the first frame rate.
2. The method of claim 1, wherein the surface information includes visible spectrum information and/or depth information.
3. The method of claim 1, further comprising:
identifying a high-interest region of the human subject;
processing the optical sensor information to obtain high-interest surface information representing the high-interest region of the human subject; and
transmitting the high-interest surface information to the remote computing device at a third frame rate that is greater than the second frame rate.
4. The method of claim 3, wherein the high-interest region of the human subject corresponds to a facial region of the human subject.
5. The method of claim 3, wherein the high-interest region of the human subject ...
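
A sketch of the transmission schedule the claims describe: skeleton frames go out at the high rate, surface frames only every Nth tick, and the receiver reuses the last surface frame in between. The specific rates are assumptions for illustration.

```python
# Send "skeleton" every tick and "surface" at a lower rate.
SKELETON_FPS = 60
SURFACE_FPS = 15
RATIO = SKELETON_FPS // SURFACE_FPS  # surface on every 4th frame

def frames_to_send(tick: int) -> list:
    payload = ["skeleton"]           # always sent at the high rate
    if tick % RATIO == 0:
        payload.append("surface")    # sent at the lower rate
    return payload

for t in range(6):
    print(t, frames_to_send(t))
# Receiver side: for ticks without "surface", estimate the surface by
# deforming the last received surface with the newest skeleton pose.
```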

Publication date: 30-05-2013

SHARED COLLABORATION USING HEAD-MOUNTED DISPLAY

Number: US20130135180A1

Various embodiments are provided for a shared collaboration system and related methods for enabling an active user to interact with one or more additional users and with collaboration items. In one embodiment a head-mounted display device is operatively connected to a computing device that includes a collaboration engine program. The program receives observation information of a physical space from the head-mounted display device along with a collaboration item. The program visually augments an appearance of the physical space as seen through the head-mounted display device to include an active user collaboration item representation of the collaboration item. The program populates the active user collaboration item representation with additional user collaboration item input from an additional user.

1. A shared collaboration system including a head-mounted display device operatively connected to a computing device, the head-mounted display device including a transparent display screen through which an active user may view a physical space, the shared collaboration system enabling the active user to interact with at least one additional user and with at least one collaboration item, the shared collaboration system comprising:
a collaboration engine program executed by a processor of the computing device, the collaboration engine program configured to:
receive observation information data representing the physical space from the head-mounted display device;
receive the at least one collaboration item;
receive additional user collaboration item input from the at least one additional user;
visually augment an appearance of the physical space as seen through the transparent display screen of the head-mounted display device to include an active user collaboration item representation; and
populate the active user collaboration item representation with the additional user collaboration item input from the at least one additional user.
2. The shared ...

Publication date: 30-05-2013

HEAD-MOUNTED DISPLAY BASED EDUCATION AND INSTRUCTION

Number: US20130137076A1

Technology disclosed herein provides for use of HMDs in a classroom setting, including HMD use for holographic instruction. In one embodiment, the HMD is used for social coaching. User profile information may be used to tailor instruction to a specific user based on known skills, learning styles, and/or characteristics. One or more individuals may be monitored based on sensor data. The sensor data may come from an HMD. The monitoring may be analyzed to determine how to enhance an experience. The experience may be enhanced by presenting an image in at least one head mounted display worn by the one or more individuals.

1. A method comprising:
monitoring one or more individuals engaged in an experience, the monitoring is based on sensor data from one or more sensors;
analyzing the monitoring to determine how to enhance the experience; and
enhancing the experience based on the analyzing, the enhancing includes presenting a signal to at least one see-through head mounted display worn by the one or more individuals.
2. The method of claim 1, wherein the monitoring, the analyzing, and the enhancing include:
detecting an eye gaze of a teacher using at least one of the sensors, the at least one sensor is part of a see-through HMD worn by the teacher;
determining which of the one or more individuals the teacher is gazing at;
determining information regarding the individual the teacher is gazing at; and
providing the information to a see-through HMD worn by the teacher.
3. The method of claim 2, wherein the monitoring, the analyzing, and the providing include:
collecting biometric information about the individual the teacher is gazing at using at least one of the sensors;
determining success, comprehension and/or attention of the individual the teacher is gazing at based on the biometric information; and
reporting the success, comprehension and/or the attention of the individual the teacher is gazing at in the see-through HMD ...

Publication date: 06-06-2013

AUGMENTED REALITY WITH REALISTIC OCCLUSION

Number: US20130141419A1

A head-mounted display device is configured to visually augment an observed physical space to a user. The head-mounted display device includes a see-through display and is configured to receive augmented display information, such as a virtual object with occlusion relative to a real world object from a perspective of the see-through display.

1. A method of augmenting reality, the method comprising:
receiving first observation information of a first physical space from a first head-mounted display device, the first head-mounted display device including a first see-through display configured to visually augment an appearance of the first physical space to a user viewing the first physical space through the first see-through display;
receiving second observation information of a second physical space from a second head-mounted display device, the second head-mounted display device including a second see-through display configured to visually augment an appearance of the second physical space to a user viewing the second physical space through the second see-through display;
mapping a shared virtual reality environment to the first physical space and the second physical space based on the first observation information and the second observation information, the shared virtual reality environment including a virtual object;
sending first augmented reality display information to the first head mounted display, the first augmented reality display information configured to display the virtual object via the first see-through display with occlusion relative to a real world object from a perspective of the first see-through display.
2. The method of claim 1, where the first physical space and the second physical space are congruent, and where the first observation information is from a first perspective of the first see-through display and the second observation information is from a second perspective of the second see-through display, the first perspective ...

Publication date: 06-06-2013

VIRTUAL LIGHT IN AUGMENTED REALITY

Number: US20130141434A1

A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment.

1. A method for a computing device, comprising:
receiving optical sensor information output by an optical sensor system observing a physical environment, the optical sensor system forming a sensory component of a head-mounted display system;
receiving position sensor information output by a position sensor system indicating a perspective of the optical sensor system within the physical environment, the position sensor system forming another sensory component of the head-mounted display system;
creating an ambient lighting model from the optical sensor information and the position sensor information, the ambient lighting model describing ambient lighting conditions of the physical environment;
modeling the physical environment from the optical sensor information and the position sensor information to create a virtual environment;
applying the ambient lighting model to the virtual environment including an added virtual object not present in the physical environment to obtain an illuminated virtual object; and
rendering a graphical representation of the illuminated virtual object for presentation via a see-through display of the head-mounted display system, the see-through display configured to visually augment an appearance of the physical environment to a user viewing the physical environment through the see-through display.
2. The method of claim 1, further comprising:
applying the ambient lighting model to the virtual environment to obtain a virtual shadow of the illuminated virtual object projected on a virtual surface within the virtual environment; and
rendering a graphical representation of a non-shadow region of the ...

Publication date: 11-07-2013

GENERATING METADATA FOR USER EXPERIENCES

Number: US20130177296A1

A system and method for efficiently managing life experiences captured by one or more sensors (e.g., video or still camera, image sensors including RGB sensors and depth sensors). A "life recorder" is a recording device that continuously captures life experiences, including unanticipated life experiences, in image, video, and/or audio recordings. In some embodiments, video and/or audio recordings captured by a life recorder are automatically analyzed, tagged with a set of one or more metadata, indexed, and stored for future use. By tagging and indexing life recordings, a life recorder may search for and acquire life recordings generated by itself or another life recorder, thereby allowing life experiences to be shared minutes or even years later.

1. A method for managing data captured by a recording device, comprising:
acquiring a recording of user experiences captured throughout one or more days by the recording device;
generating context information, the context information including information associated with a user of the recording device, the context information including information associated with the recording device, the context information generated by one or more sensors;
identifying a particular situation from the recording;
detecting a tag event, the step of detecting includes automatically determining whether one or more rules associated with the recording device are satisfied by the context information and the particular situation, said one or more rules are configured for determining when to generate a set of one or more metadata tags for the recording;
automatically generating a set of one or more metadata tags for the recording responsive to the step of detecting, each of the one or more metadata tags including one or more keywords that describe the recording related to a location associated with the recording device, a timestamp associated with the recording, an event associated with the user, and/or a situation associated with the recording, the set ...

Publication date: 01-08-2013

Executable virtual objects associated with real objects

Number: US20130194164A1
Assignee: Individual

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.

Publication date: 01-08-2013

Coordinate-system sharing for augmented reality

Number: US20130194304A1
Assignee: Individual

A method for presenting real and virtual images correctly positioned with respect to each other. The method includes, in a first field of view, receiving a first real image of an object and displaying a first virtual image. The method also includes, in a second field of view oriented independently relative to the first field of view, receiving a second real image of the object and displaying a second virtual image, the first and second virtual images positioned coincidently within a coordinate system.

Publication date: 01-08-2013

MATCHING PHYSICAL LOCATIONS FOR SHARED VIRTUAL EXPERIENCE

Number: US20130196772A1

Embodiments for matching participants in a virtual multiplayer entertainment experience are provided. For example, one embodiment provides a method including receiving from each user of a plurality of users a request to join the virtual multiplayer entertainment experience, receiving from each user of the plurality of users information regarding characteristics of a physical space in which each user is located, and matching two or more users of the plurality of users for participation in the virtual multiplayer entertainment experience based on the characteristics of the physical space of each of the two or more users.

1. A method for matching participants in a virtual multiplayer entertainment experience, the method comprising:
receiving from each user of a plurality of users a request to join the virtual multiplayer entertainment experience;
receiving from each user of the plurality of users information regarding characteristics of a physical space in which each user is located; and
matching two or more users of the plurality of users for participation in the virtual multiplayer entertainment experience based on the characteristics of the physical space of each of the two or more users.
2. The method of claim 1, further comprising sending additional instructions to at least one of the two or more users to alter one or more characteristics of that user's physical space.
3. The method of claim 2, wherein sending additional instructions to at least one of the two or more users to alter one or more characteristics of that user's physical space further comprises sending instructions to at least one of the two or more users to move to a different physical space.
4. The method of claim 1, wherein each of the plurality of users are located in different physical spaces, and wherein matching two or more users of the plurality of users further comprises matching two or more users based on a degree of similarity of characteristics of the physical spaces of the users.
5. ...
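
A hedged sketch of matching players by physical-space similarity (the "degree of similarity" idea in claim 4): each user reports room dimensions, and users whose floor areas are within a tolerance are paired. The tolerance and data shape are assumptions.

```python
# Pair users whose reported floor areas are similar enough to share
# one virtual game space.
from itertools import combinations

def match_users(spaces: dict, tolerance: float = 0.25) -> list:
    """Pair users whose floor areas differ by at most `tolerance` (fraction)."""
    pairs = []
    for (ua, (wa, la)), (ub, (wb, lb)) in combinations(spaces.items(), 2):
        area_a, area_b = wa * la, wb * lb
        if abs(area_a - area_b) / max(area_a, area_b) <= tolerance:
            pairs.append((ua, ub))
    return pairs

spaces = {"ann": (4.0, 5.0), "ben": (4.5, 4.5), "cy": (2.0, 2.5)}
print(match_users(spaces))  # -> [('ann', 'ben')]
```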

Publication date: 22-08-2013

THREE-DIMENSIONAL PRINTING

Number: US20130215454A1
Assignee: MICROSOFT CORPORATION

Three-dimensional printing techniques are described. In one or more implementations, a system includes a three-dimensional printer and a computing device. The three-dimensional printer has a three-dimensional printing mechanism that is configured to form a physical object in three dimensions. The computing device is communicatively coupled to the three-dimensional printer and includes a three-dimensional printing module implemented at least partially in hardware to cause the three-dimensional printer to form the physical object in three dimensions as having functionality configured to communicate with a computing device.

1. A system comprising:
a three-dimensional printer having a three-dimensional printing mechanism that is configured to form a physical object in three dimensions; and
a computing device communicatively coupled to the three-dimensional printer, the computing device including a three-dimensional printing module implemented at least partially in hardware to cause the three-dimensional printer to form the physical object in three dimensions as having functionality configured to communicate with a computing device.
2. A system as described in claim 1, wherein the three-dimensional printing mechanism is configured to place preconfigured components within the object as part of forming the object.
3. A system as described in claim 2, wherein the preconfigured component is a processing system and the three-dimensional printing module is configured to program the processing system to perform one or more operations.
4. A system as described in claim 3, wherein the processing system of the object is configured to communicate a result of performance of the one or more operations to the computing device for further processing by the computing device.
5. A system as described in claim 4, wherein the processing system is programmed to process signals received from one or more other preconfigured components of the object that are configured as sensors.
6. A system as ...

Publication date: 31-10-2013

DISPLAYING A COLLISION BETWEEN REAL AND VIRTUAL OBJECTS

Number: US20130286004A1

Technology is described for displaying a collision between objects by an augmented reality display device system. A collision between a real object and a virtual object is identified based on three dimensional space position data of the objects. At least one effect on at least one physical property of the real object, like a change in surface shape, is determined based on physical properties of the real object and physical interaction characteristics of the collision. Simulation image data is generated and displayed simulating the effect on the real object by the augmented reality display. Virtual objects under control of different executing applications can also interact with one another in collisions.

1. A method for displaying a collision between a real object and a virtual object by an augmented reality display device system comprising:
identifying a collision between a real object and a virtual object in a display field of view of an augmented reality display based on a respective three dimensional (3D) space position associated with each object in the display field of view;
determining at least one effect on at least one physical property of the real object due to the collision based on one or more physical properties of the real object and physical interaction characteristics for the collision;
generating image data of the real object simulating the at least one effect on the at least one physical property of the real object; and
displaying the image data of the real object registered to the real object.
2. The method of claim 1, the physical interaction characteristics including a velocity of at least one of the real object and the virtual object in the display field of view.
3. The method of further comprising:
determining at least one effect on at least one physical property of the virtual object due to the collision based on its physical properties and the physical interaction characteristics for the collision;
modifying image data of the virtual object for ...
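
A rough sketch of the collision step: two objects carry 3D space positions and bounding radii, overlap counts as a collision, and an assumed elasticity value scales a surface dent applied to the real object's simulated image. All numbers are invented for illustration.

```python
# Sphere-sphere collision test plus a toy "dent depth" effect.
from dataclasses import dataclass

@dataclass
class Body:
    pos: tuple
    radius: float

def colliding(a: Body, b: Body) -> bool:
    dist_sq = sum((pa - pb) ** 2 for pa, pb in zip(a.pos, b.pos))
    return dist_sq <= (a.radius + b.radius) ** 2

def dent_depth(rel_speed: float, elasticity: float = 0.3) -> float:
    # Softer (less elastic) surfaces deform more; purely illustrative.
    return rel_speed * (1.0 - elasticity) * 0.01  # meters

ball = Body(pos=(0.0, 1.0, 2.0), radius=0.12)
cushion = Body(pos=(0.1, 1.0, 2.0), radius=0.30)
if colliding(ball, cushion):
    print(f"render dent of {dent_depth(rel_speed=2.5):.3f} m on the cushion")
```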

Publication date: 07-11-2013

COLLABORATION ENVIRONMENT USING SEE THROUGH DISPLAYS

Number: US20130293468A1
Assignee:

A see-through, near-eye, mixed reality display device and system for collaboration amongst various users of other such devices and personal audio/visual devices of more limited capabilities. One or more wearers of a see through head mounted display apparatus define a collaboration environment. For the collaboration environment, a selection of collaboration data and the scope of the environment are determined. Virtual representations of the collaboration data in the field of view of the wearer, and other device users, are rendered. Persons in the wearer's field of view to be included in the collaboration environment, and who are entitled to share information in the collaboration environment, are defined by the wearer. If allowed, input from other users in the collaboration environment on the virtual object may be received and allowed to manipulate a change in the virtual object.

1. A method for presenting a collaboration experience using a see through head mounted display apparatus, comprising: determining a three dimensional location of the apparatus, the apparatus includes one or more sensors and a see-through display; determining an orientation of the apparatus; determining a gaze of a wearer looking through the see-through display of the apparatus; determining a three dimensional location of one or more users in the field of view of the user through the see-through display, the determining of the three dimensional location of the movable object is performed using the one or more sensors; receiving a selection of collaboration data and a selection of a collaboration environment within the field of view from the wearer; rendering virtual representations of the collaboration data in the field of view; determining persons in the wearer's field of view to be included in the collaboration environment and who are entitled to share information in the collaboration environment; outputting shared collaboration data in the form of virtual objects to users in the collaboration environment ...
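
A short Python sketch of the sharing rule the abstract describes: only persons the wearer has entitled receive the shared virtual objects. The function and field names are hypothetical.

```python
def share_collaboration_data(virtual_objects, people_in_view, entitled_ids):
    shared = {}
    for person in people_in_view:
        if person in entitled_ids:            # wearer-defined entitlement check
            shared[person] = virtual_objects  # render shared objects for them
    return shared

print(share_collaboration_data(["chart_v1"], ["alice", "bob"], {"alice"}))
```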

Publication date: 07-11-2013

PRODUCT AUGMENTATION AND ADVERTISING IN SEE THROUGH DISPLAYS

Number: US20130293530A1
Assignee:

An augmented reality system that provides augmented product and environment information to a wearer of a see through head mounted display. The augmentation information may include advertising, inventory, pricing and other information about products a wearer may be interested in. Interest is determined from wearer actions and a wearer profile. The information may be used to incentivize purchases of real world products by a wearer, or allow the wearer to make better purchasing decisions. The augmentation information may enhance a wearer's shopping experience by allowing the wearer easy access to important product information while the wearer is shopping in a retail establishment. Through virtual rendering, a wearer may be provided with feedback on how an item would appear in a wearer environment, such as the wearer's home.

1. A method providing augmentation information to a wearer for a product in the field of view of a wearer, comprising: receiving input data from a wearer of a see through head mounted display device; determining a gaze direction in a field of view of the wearer from the input data; determining a location of the wearer; retrieving personal information of the wearer; identifying real world objects in the field of view of a wearer in the see through head mounted display device; retrieving augmentation data for the real world objects and matching objects in the field of view of the wearer to the augmentation data provided by a third party data source; presenting the augmentation information to a wearer associated with the identified products in the field of view.
2. The method of wherein the augmentation information is advertising presented to the wearer as visual information in the field of view or as audible information.
3. The method of wherein the augmentation information is targeted to the wearer based on the personal information of the wearer.
4. The method of wherein the augmentation information is rendered to a wearer when the wearer is gazing at the ...

Publication date: 07-11-2013

INTELLIGENT TRANSLATIONS IN PERSONAL SEE THROUGH DISPLAY

Number: US20130293577A1
Assignee:

A see-through, near-eye, mixed reality display apparatus for providing translations of real world data for a user. A wearer's location and orientation with the apparatus is determined and input data for translation is selected using sensors of the apparatus. Input data can be audio or visual in nature, and selected by reference to the gaze of a wearer. The input data is translated for the user relative to user profile information bearing on accuracy of a translation and determining from the input data whether a linguistic translation, knowledge addition translation or context translation is useful.

1. A method for presenting a translation of a real world expression to a wearer of a see through head mounted display apparatus, comprising: determining a gaze of a wearer looking through the see-through display of the apparatus; determining a three dimensional location of one or more objects in the field of view of the user through the see-through display, the determining of the three dimensional location of the object is performed using the one or more sensors; receiving a selection of data for translation in the field of view of the wearer by reference to the gaze of the wearer at one of the objects; analyzing the data for translation to provide input data; translating the input data into a translated form for the user; and rendering the translation in an audio or visual format in the see through head mounted display.
2. The method of further including accessing a user profile for user information bearing on accuracy of the translation and wherein translating the data comprises evaluating input data for translation against the user information.
3. The method of wherein the step of translating comprises converting the input data from a first language to a second language.
4. The method of wherein the step of translating comprises providing supplemental knowledge for the input data on a subject matter identified in the input data.
5. The method of wherein the supplemental ...
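
The abstract distinguishes three translation modes: linguistic, knowledge addition, and context. Below is a minimal Python dispatcher sketching that choice; the selection heuristic and profile fields are invented for illustration, not taken from the patent.

```python
def translate(input_text: str, profile: dict) -> str:
    # Different source language -> plain linguistic translation.
    if profile.get("detected_language") != profile.get("language"):
        return f"[linguistic translation of '{input_text}']"
    # Unfamiliar term for this user -> add background knowledge.
    if input_text.split()[0].lower() not in profile.get("known_terms", set()):
        return f"[knowledge addition: background on '{input_text}']"
    # Otherwise explain what the expression means in this context.
    return f"[context translation: what '{input_text}' means here]"

profile = {"language": "en", "detected_language": "fr"}
print(translate("mise en place", profile))  # -> linguistic translation
```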

Publication date: 12-12-2013

AUGMENTED REALITY PLAYSPACES WITH ADAPTIVE GAME RULES

Number: US20130328927A1
Assignee:

A system for generating a virtual gaming environment based on features identified within a real-world environment, and adapting the virtual gaming environment over time as the features identified within the real-world environment change is described. Utilizing the technology described, a person wearing a head-mounted display device (HMD) may walk around a real-world environment and play a virtual game that is adapted to that real-world environment. For example, the HMD may identify environmental features within a real-world environment such as five grassy areas and two cars, and then spawn virtual monsters based on the location and type of the environmental features identified. The location and type of the environmental features identified may vary depending on the particular real-world environment in which the HMD exists and therefore each virtual game may look different depending on the particular real-world environment.

1. A method for generating an augmented reality environment, comprising: determining one or more environmental requirements associated with a particular computing application; generating one or more virtual objects associated with the particular computing application; identifying one or more environmental features within a first real-world environment; determining if the one or more environmental requirements are not satisfied based on the one or more environmental features; adjusting the one or more virtual objects such that a particular degree of difficulty of the particular computing application is achieved in response to the determining if the one or more environmental requirements are not satisfied; and displaying on a mobile device one or more images associated with the one or more virtual objects, the one or more images are displayed such that the one or more virtual objects are perceived to exist within the first real-world environment.
2. The method of claim 1, wherein: the one or more virtual objects include stationary virtual obstacles and ...
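
A minimal Python sketch of claim 1's flow, with invented thresholds: check the application's environmental requirements against the identified features, then tune the spawned virtual objects so a target difficulty is still reached in a sparse environment.

```python
def spawn_monsters(features: dict, required: dict, target_difficulty: float):
    # Requirements not met by the identified environmental features.
    unmet = {k: v for k, v in required.items() if features.get(k, 0) < v}
    count = int(target_difficulty * 10)
    if unmet:                        # environment too sparse: compensate with
        count = max(1, count // 2)   # fewer but faster monsters
    return [{"type": "monster", "speed": 2.0 if unmet else 1.0}] * count

# Five grassy areas and two cars, but the game wants at least four cars.
print(spawn_monsters({"grassy_area": 5, "car": 2},
                     {"grassy_area": 3, "car": 4}, target_difficulty=0.6))
```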

Publication date: 19-12-2013

COLOR VISION DEFICIT CORRECTION

Number: US20130335435A1
Assignee:

Embodiments related to improving a color-resolving ability of a user of a see-thru display device are disclosed. For example, one disclosed embodiment includes, on a see-thru display device, constructing and displaying virtual imagery to superpose onto real imagery sighted by the user through the see-thru display device. The virtual imagery is configured to accentuate a locus of the real imagery of a color poorly distinguishable by the user. Such virtual imagery is then displayed by superposing it onto the real imagery, in registry with the real imagery, in a field of view of the user.

1. In a see-thru display device, a method to improve a color-resolving ability of a user of the see-thru display device based upon a color vision deficiency of the user, the method comprising: constructing virtual imagery to superpose onto real imagery viewable through the see-thru display device, the virtual imagery configured to accentuate a locus of the real imagery of a color poorly distinguishable based upon the color vision deficiency; and displaying the virtual imagery such that the virtual imagery is superposed onto the real imagery, in spatial registry with the real imagery, in a field of view of the see-thru display device.
2. The method of wherein the virtual imagery is configured to shift the color of the locus.
3. The method of wherein the virtual imagery is configured to increase a brightness of the locus.
4. The method of wherein the virtual imagery is configured to delineate a perimeter of the locus.
5. The method of wherein the virtual imagery is configured to overwrite the locus with text.
6. The method of wherein the virtual imagery is configured to overwrite the locus with one or more of a symbol and a segmentation pattern.
7. The method of wherein the virtual imagery is configured to write one or more of text and a symbol adjacent the locus.
8. The method of further comprising acquiring an image of the real imagery with a front-facing camera of the see-thru display ...
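
One of the claimed accentuation strategies is shifting the color of the poorly distinguishable locus. A minimal Python sketch with invented pixel values and a crude locus test:

```python
def shift_reds_toward_blue(image):
    """For a red/green-deficient viewer, remap reddish pixels to a blue hue."""
    out = []
    for (r, g, b) in image:
        if r > 150 and g < 100:        # crude 'locus' test for reddish pixels
            out.append((0, g, 255))    # overlay color the viewer can resolve
        else:
            out.append((r, g, b))
    return out

print(shift_reds_toward_blue([(200, 50, 50), (30, 30, 30)]))
```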

Publication date: 19-12-2013

ENHANCING CAPTURED DATA

Number: US20130335594A1
Assignee: MICROSOFT CORPORATION

Captured data is obtained, including various types of captured or recorded data (e.g., image data, audio data, video data, etc.) and/or metadata describing various aspects of the capture device and/or the manner in which the data is captured. One or more elements of the captured data that can be replaced by one or more substitute elements are determined, the replaceable elements are removed from the captured data, and links to the substitute elements are associated with the captured data. Links to additional elements to enhance the captured data are also associated with the captured data. Enhanced content can subsequently be constructed based on the captured data as well as the links to the substitute elements and additional elements.

1. A method comprising: obtaining captured data regarding an environment; determining, based at least in part on the captured data, one or more additional elements; adding, as associated with the captured data, one or more links to the one or more additional elements; and enabling enhanced content to be constructed using the one or more additional elements and at least part of the captured data.
2. A method as recited in claim 1, further comprising: determining one or more elements of the captured data that can be replaced by one or more substitute elements; removing the one or more elements from the captured data; and adding, as associated with the captured data, links to the one or more substitute elements.
3. A method as recited in claim 1, the captured data comprising an image.
4. A method as recited in claim 3, the one or more additional elements including audio data regarding the environment.
5. A method as recited in claim 1, the captured data comprising audio data.
6. A method as recited in claim 5, the one or more additional elements including image data regarding the environment.
7. A method as recited in claim 1, the captured data comprising metadata describing a geographic location of a device when the captured data was captured ...
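
A Python sketch of the data layout the claims imply: the captured record keeps its irreplaceable parts and carries removed elements and enhancement elements as links. Field names and URLs are illustrative only.

```python
captured = {
    "kept_data": "<compressed audio of the crowd>",
    "substitute_links": ["https://example.com/skyline.jpg"],    # replaced elements
    "additional_links": ["https://example.com/ambient_audio"],  # enhancement elements
}

def construct_enhanced_content(record: dict) -> list:
    # Enhanced content = kept data + linked substitutes + linked additions.
    return [record["kept_data"],
            *record["substitute_links"],
            *record["additional_links"]]

print(construct_enhanced_content(captured))
```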

Publication date: 02-01-2014

CONFIGURING AN INTERACTION ZONE WITHIN AN AUGMENTED REALITY ENVIRONMENT

Number: US20140002444A1
Assignee:

Technology is described for automatically determining placement of one or more interaction zones in an augmented reality environment in which one or more virtual features are added to a real environment. An interaction zone includes at least one virtual feature and is associated with a space within the augmented reality environment, with boundaries of the space determined based on the one or more real environment features. A plurality of activation criteria may be available for an interaction zone and at least one may be selected based on at least one real environment feature. The technology also describes controlling activation of an interaction zone within the augmented reality environment. In some examples, at least some behavior of a virtual object is controlled by emergent behavior criteria which defines an action independently from a type of object in the real world environment.

1. A method for adaptively configuring one or more interaction zones within an augmented reality environment including a real environment augmented with at least one virtual feature, the method comprising: an interaction zone including at least one virtual feature, real environment compatibility criteria for the at least one virtual feature, space dimensions and activation criteria for the at least one virtual feature; automatically selecting one or more interaction zone candidates based on one or more real environment features of the real environment satisfying real environment compatibility criteria for each candidate being identified in a (3D) mapping of the augmented reality environment; automatically selecting one or more interaction zones which satisfy zone compatibility criteria for configuration within the augmented reality environment from the one or more candidates; updating the 3D mapping with 3D space position data for the one or more interaction zones selected for configuration in the augmented reality environment; and displaying at least one virtual feature of at least one of ...
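
The claim describes a two-stage selection: zones become candidates when the real environment satisfies their compatibility criteria, and candidates are then checked against zone compatibility criteria. A minimal Python sketch with invented criteria:

```python
def select_interaction_zones(zones, env_features, free_area):
    # Stage 1: real-environment compatibility (required features present).
    candidates = [z for z in zones if z["required"].issubset(env_features)]
    # Stage 2: zone compatibility (the zone's space fits the available area).
    return [z for z in candidates if z["area"] <= free_area]

zones = [{"name": "pet_zone", "required": {"floor"}, "area": 2.0},
         {"name": "portal", "required": {"wall", "door"}, "area": 1.0}]
print(select_interaction_zones(zones, {"floor", "wall"}, free_area=3.0))
```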

Publication date: 05-02-2015

Virtual light in augmented reality

Number: US20150035832A1
Assignee: Microsoft Technology Licensing LLC

A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment.
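
A bare Python sketch of the idea in the abstract: estimate ambient light from a camera frame and shade virtual content to match. The brightness-scaling model here is invented for illustration; the patent's lighting model is not specified in this abstract.

```python
def estimate_brightness(frame):
    # Mean pixel intensity, normalized to [0, 1].
    return sum(sum(px) / 3 for px in frame) / len(frame) / 255.0

def shade(color, brightness):
    return tuple(int(c * brightness) for c in color)

frame = [(40, 40, 60), (50, 45, 55)]            # a dim room
print(shade((255, 200, 0), estimate_brightness(frame)))
```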

Publication date: 19-02-2015

EXERCISING APPLICATIONS FOR PERSONAL AUDIO/VISUAL SYSTEM

Number: US20150049114A1
Assignee:

The technology described herein includes a see-through, near-eye, mixed reality display device for providing customized experiences for a user. The personal A/V apparatus serves as an exercise program that is always with the user, provides motivation for the user, visually tells the user how to exercise, and lets the user exercise with other people who are not present.

1. A method for presenting a personalized experience using a personal see-through A/V apparatus, comprising: accessing a first exercise routine for a first person; accessing data for a second person for a second exercise routine different from the first exercise routine; estimating a performance of how the second person would perform the first exercise routine based on the data; and presenting a virtual image of someone performing the first exercise routine based on the estimated performance so that the first person can see the virtual image inserted into a real scene viewed through the personal see-through A/V apparatus as the first person performs the first exercise routine.
2. The method of claim 1, wherein: the estimated performance is based on past performance of the second exercise routine.
3. The method of claim 1, wherein: the estimated performance is based on a live performance of the second exercise routine.
4. The method of claim 1, wherein the presenting a virtual image of someone performing the first exercise routine based on the estimated performance includes: presenting an avatar of the second person that integrates the second person into an environment of the first person.
5. The method of claim 4, wherein the second person is exercising at a remote location from the first person.
6. The method of claim 1, wherein the accessing data for the second person for the second exercise routine different from the first exercise routine includes: accessing real time exercise data for the second person at a location that is remote from the personal see-through A/V apparatus.
7. The method of claim 6, ...

Publication date: 08-05-2014

CROSS-PLATFORM AUGMENTED REALITY EXPERIENCE

Number: US20140128161A1
Assignee:

A plurality of game sessions are hosted at a server system. A first computing device of a first user is joined to a first multiplayer gaming session, the first computing device including a see-through display. Augmentation information is sent to the first computing device for the first multiplayer gaming session to provide an augmented reality experience to the first user. A second computing device of a second user is joined to the first multiplayer gaming session. Experience information is sent to the second computing device for the first multiplayer gaming session to provide a cross-platform representation of the augmented reality experience to the second user.

1. A method for hosting a plurality of game sessions at a server system, the method comprising: joining a first computing device of a first user to a first multiplayer gaming session, the first computing device including a see-through display; sending augmentation information to the first computing device for the first multiplayer gaming session to provide an augmented reality experience to the first user; joining a second computing device of a second user to the first multiplayer gaming session; and sending experience information to the second computing device for the first multiplayer gaming session to provide a cross-platform representation of the augmented reality experience to the second user.
2. The method of claim 1, wherein the cross-platform representation of the augmented reality experience is configured for visual presentation via a display device connected to the second computing device.
3. The method of claim 1, wherein the cross-platform representation of the augmented reality experience is presented to the second user in a first-person mode.
4. The method of claim 1, wherein the cross-platform representation of the augmented reality experience is presented to the second user in a third-person mode.
5. The method of claim 1, wherein the experience information includes aspects of a physical ...

Publication date: 17-03-2016

EXECUTABLE VIRTUAL OBJECTS ASSOCIATED WITH REAL OBJECTS

Number: US20160077785A1
Assignee: Microsoft Technology Licensing, LLC

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.

1. A display device, comprising: one or more sensors; a logic device; and a storage device holding instructions executable by the logic device to: receive an input of an identity of a selected real object based on one or more of input received from one or more sensors of the display device and a selection of a location on a map; receive a request to link a user-specified executable virtual object with the selected real object such that the virtual object is executable by a selected user in proximity to the selected real object; link the virtual object with the selected real object; and send information regarding the virtual object and the linked real object to a remote service.
2. The display device of claim 1, wherein the instructions are executable by the logic device to receive the request to link the user-specified executable virtual object with the selected real object by receiving a voice command from the user.
3. The display device of claim 1, wherein the instructions are executable by the logic device to receive the input of the identity of the real object by receiving image data of a background scene from an image sensor and determining which real object from a plurality of real objects in the background scene is the selected real object.4. The display ...
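
A minimal Python sketch of the link-and-trigger flow: a virtual object is linked to a real object's location and executes when a selected user comes within range. All names, coordinates, and the radius are invented for illustration.

```python
import math

links = []  # (object_id, location, virtual_action, allowed_users)

def link_virtual_object(object_id, location, action, allowed_users):
    links.append((object_id, location, action, allowed_users))

def on_user_moved(user, position, radius=5.0):
    for object_id, loc, action, allowed in links:
        if user in allowed and math.dist(position, loc) <= radius:
            action()  # execute the virtual object for this user

link_virtual_object("cafe_door", (10.0, 2.0),
                    lambda: print("play greeting"), allowed_users={"eve"})
on_user_moved("eve", (8.0, 3.0))  # within range -> "play greeting"
```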

Publication date: 24-03-2016

PROVIDING LOCATION OCCUPANCY ANALYSIS VIA A MIXED REALITY DEVICE

Number: US20160086382A1
Assignee:

The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in a location of the user who satisfies the person selection criteria to a cloud based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view. An identifier and a position indicator of the person in the location are output if not. Directional sensors on the display device may also be used for determining a position of the person. Cloud based executing software can identify and track the positions of people based on image and non-image data from display devices in the location.

1.-20. (canceled)
21. A machine-system implemented method of determining that a first among a plurality of areas is not occupied by one or more persons of interest, the method comprising: receiving person selection criteria from a user; automatically determining a current location of the user; automatically identifying a first among a plurality of automatically searchable areas each capable of containing one or more persons and each capable of automated detecting of identities of one or more persons in that area, the identified first area being most proximate to the user; based on use of one or more persons identifying services, automatically determining whether the identified first area contains any persons satisfying the person selection criteria received from the user; and if the one or more persons identifying services fail to indicate presence of at least one person satisfying the person selection criteria within the identified first area, indicating to the user that the first area does not contain any persons satisfying the person selection criteria.
22. The method of wherein the used one or more ...

Publication date: 14-05-2015

EXECUTABLE VIRTUAL OBJECTS ASSOCIATED WITH REAL OBJECTS

Number: US20150130689A1
Assignee:

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.

1. A portable see-through display device, comprising: one or more sensors; a logic subsystem; and a data-holding subsystem holding instructions executable by the logic subsystem to: receive an input of an identity of a selected real object based on one or more of input received from one or more sensors of the see-through display device and a selection of a location on a map; receive a request to link a user-specified executable virtual object with the selected real object such that the virtual object is executable by a selected user in proximity to the selected real object; link the virtual object with the selected real object; and send information regarding the virtual object and the linked real object to a remote service.
2. The display device of claim 1, wherein the instructions are executable to receive the request to link the user-specified executable virtual object with the selected real object by receiving a voice command from the user.
3. The display device of claim 1, wherein the instructions are executable to receive the input of the identity of the real object by receiving image data of a background scene from an image sensor and determining which real object from a plurality of real objects in the background scene is the selected real object.
4. The display device ...

Publication date: 04-05-2017

Method to Control Perspective for a Camera-Controlled Computer

Number: US20170123505A1
Assignee:

Systems, methods and computer readable media are disclosed for controlling perspective of a camera-controlled computer. A capture device captures user gestures and sends corresponding data to a recognizer engine. The recognizer engine analyzes the data with a plurality of filters, each filter corresponding to a gesture. Based on the output of those filters, a perspective control is determined, and a display device displays a new perspective corresponding to the perspective control.

1. A method for changing a perspective of a virtual scene displayed on a display device, comprising: receiving data captured by a capture device, the capture device capturing movement or position of at least part of a first user relative to a display device and movement or position of at least part of an object controlled by the first user relative to the display device; analyzing the data to determine that the at least part of the first user or the object moved in a direction relative to the display device and to determine a location formed based at least upon a combination of a position of the first user relative to the display device and a position of the object relative to the display device; and based at least upon determining that the at least part of the first user or the object moved in the direction relative to the display device and determining the location formed based at least upon the combination of the position of the first user relative to the display device and the position of the object relative to the display device, modifying the perspective of the virtual scene displayed on the display device by moving the perspective of the virtual scene to the location formed based at least upon the combination of the position of the first user relative to the display device and the position of the object relative to the display device and in the direction that the at least part of the first user or the object moved.
2. The method of claim 1, wherein analyzing the data to determine ...
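
Claim 1's geometry reduces to: form a location from a combination of the user's and the controlled object's positions, then move the scene's perspective to that location and along the detected motion direction. A minimal Python sketch with invented coordinates and an averaging combination:

```python
def new_perspective(user_pos, object_pos, move_dir, step=0.5):
    # Location formed from a combination (here: midpoint) of both positions.
    blend = tuple((u + o) / 2 for u, o in zip(user_pos, object_pos))
    # Shift the perspective along the detected motion direction.
    return tuple(b + step * d for b, d in zip(blend, move_dir))

print(new_perspective((0, 0, 2), (1, 0, 1), move_dir=(0, 0, -1)))
```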

Publication date: 18-09-2014

INTERACTING WITH USER INTERFACE VIA AVATAR

Number: US20140267311A1
Assignee: MICROSOFT CORPORATION

Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control.

1. In a computing device, a method of presenting a user interface, the method comprising: receiving depth data from a depth-sensing camera; locating a plurality of persons in the depth data; determining a selected person of the plurality of persons to be in control of the user interface relative to one or more other persons of the plurality of persons from one or more of a posture and a gesture of the selected person in the depth data relative to the one or more other persons of the plurality of persons, the one or more of the posture and the gesture comprising one or more characteristics indicative of an intent of the selected person to assume control of the user interface; forming an image of an avatar representing the selected person; outputting to the display device an image of a user interface, the user interface comprising an interactive user interface control; outputting to the display device the image of the avatar such that the avatar appears to face the user interface control; detecting a motion of the selected person via the depth data; and outputting to the display ...
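
The mapping from the physical space in front of the person to screen space is a simple normalization. A Python sketch with invented calibration bounds; the result is where the avatar's hand would be drawn:

```python
def to_screen(hand_xy, phys_min, phys_max, screen_w, screen_h):
    # Normalize the hand position within the calibrated physical box.
    nx = (hand_xy[0] - phys_min[0]) / (phys_max[0] - phys_min[0])
    ny = (hand_xy[1] - phys_min[1]) / (phys_max[1] - phys_min[1])
    # Flip y because screen coordinates grow downward.
    return int(nx * screen_w), int((1 - ny) * screen_h)

print(to_screen((0.1, 0.3), (-0.5, 0.0), (0.5, 1.0), 1920, 1080))
```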

Publication date: 07-07-2016

PRODUCT AUGMENTATION AND ADVERTISING IN SEE THROUGH DISPLAYS

Number: US20160196603A1
Assignee:

An augmented reality system that provides augmented product and environment information to a wearer of a see through head mounted display. The augmentation information may include advertising, inventory, pricing and other information about products a wearer may be interested in. Interest is determined from wearer actions and a wearer profile. The information may be used to incentivize purchases of real world products by a wearer, or allow the wearer to make better purchasing decisions. The augmentation information may enhance a wearer's shopping experience by allowing the wearer easy access to important product information while the wearer is shopping in a retail establishment. Through virtual rendering, a wearer may be provided with feedback on how an item would appear in a wearer environment, such as the wearer's home.

1. A method providing augmentation information to a wearer, comprising: (a) displaying a list of products on a head mounted display device while at an establishment having a plurality of products including at least some of the products on the list of products; (b) determining a location in the establishment of a product on the list of products; (c) presenting augmentation information via the head mounted display facilitating purchase of the product whose location was determined in said step (b).
2. The method of wherein presenting augmentation information comprises the step of directing the wearer to a location of the product whose location was determined in said step (b).
3. The method of wherein presenting augmentation information comprises the step of displaying the location of the product within the establishment.
4. The method of wherein presenting augmentation information comprises the step of audibly relaying the location of the product within the establishment.
5. The method of wherein presenting augmentation information comprises the step of highlighting the product on the list of products when the wearer is in the vicinity of the product.
6. The ...

Publication date: 06-10-2016

PERSONAL AUDIO/VISUAL SYSTEM

Number: US20160292850A1
Assignee: Microsoft Technology Licensing, LLC

The technology described herein includes a see-through, near-eye, mixed reality display device for providing customized experiences for a user. The system can be used in various entertainment, sports, shopping and theme-park situations to provide a mixed reality experience.

1. A method for generating an augmented reality environment using a mobile device, comprising: capturing images of an environment using the mobile device; determining an exercise being performed by an end user of the mobile device using the captured images; determining a performance of the end user during the exercise using the captured images; detecting that the end user has completed the exercise using the captured images; and generating and displaying metrics for the performance of the end user compared with a prior exercise history for the end user in response to detecting that the end user has completed the exercise.
2. The method of claim 1, further comprising: determining a location of the mobile device, the determining an exercise being performed by the end user includes determining the exercise based on the location of the mobile device.
3. The method of claim 1, wherein: the determining an exercise being performed by an end user of the mobile device includes identifying an exercise machine being used by the end user using the captured images.
4. The method of claim 3, wherein: the determining a performance of the end user includes determining a number of repetitions performed by the end user using the exercise machine.
5. The method of claim 1, wherein: the determining a performance of the end user includes determining a distance traveled by the end user using the captured images.
6. The method of claim 1, wherein: the determining a performance of the end user includes estimating a number of calories burned by the end user using the captured images.
7. The method of claim 1, wherein: the mobile device comprises a head mounted display device.
8. An electronic device for generating an augmented ...
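
Claim 4 mentions counting repetitions from the captured images. One common way to do this, sketched below in Python with invented thresholds (not necessarily the patent's method), is counting rising crossings of a midline in a tracked joint-height signal:

```python
def count_reps(heights, midline=0.5):
    reps, below = 0, True
    for h in heights:
        if below and h > midline:   # rising crossing: one repetition
            reps += 1
            below = False
        elif h < midline:
            below = True
    return reps

print(count_reps([0.2, 0.6, 0.8, 0.4, 0.2, 0.7, 0.3]))  # -> 2
```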

Publication date: 12-11-2015

USER AUTHENTICATION ON DISPLAY DEVICE

Number: US20150324562A1
Assignee:

Embodiments are disclosed that relate to authenticating a user of a display device. For example, one disclosed embodiment includes displaying one or more virtual images on the display device, wherein the one or more virtual images include a set of augmented reality features. The method further includes identifying one or more movements of the user via data received from a sensor of the display device, and comparing the identified movements of the user to a predefined set of authentication information for the user that links user authentication to a predefined order of the augmented reality features. If the identified movements indicate that the user selected the augmented reality features in the predefined order, then the user is authenticated, and if the identified movements indicate that the user did not select the augmented reality features in the predefined order, then the user is not authenticated.

1. A method for authenticating a user of a computing system, the method comprising: detecting a voice input via a microphone; detecting one or more of a position and an orientation of the display device via data received from one or more of a position sensor and an orientation sensor; detecting one or more user gaze locations via a gaze sensor; and authenticating the user based upon the voice input, the one or more of the position and the orientation of the display device, and the one or more user gaze locations.
2. The method of claim 1, further comprising comparing the one or more user gaze locations to a predefined order of gaze locations.
3. The method of claim 2, wherein authenticating the user comprises authenticating the user if the one or more user gaze locations were input in the predefined order.
4. The method of claim 1, wherein the one or more of the position sensor and the orientation sensor comprise one or more of an image sensor and a motion sensor.
5. The method of claim 4, wherein detecting one or more of the position and the orientation of the display ...
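
A minimal Python sketch of the combined check in the claims: all factors must agree, and the gaze selections must match the stored order exactly. Feature names are invented for illustration.

```python
def authenticate(voice_ok, pose_ok, gaze_sequence, stored_order):
    # Voice, device position/orientation, and ordered gaze must all check out.
    return voice_ok and pose_ok and gaze_sequence == stored_order

stored = ["red_balloon", "blue_cube", "green_star"]
print(authenticate(True, True,
                   ["red_balloon", "blue_cube", "green_star"], stored))  # True
print(authenticate(True, True,
                   ["blue_cube", "red_balloon", "green_star"], stored))  # False
```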

Publication date: 01-12-2016

AUGMENTED REALITY SPACES WITH ADAPTIVE RULES

Number: US20160350978A1
Assignee: Microsoft Technology Licensing, LLC

A system for generating a virtual gaming environment based on features identified within a real-world environment, and adapting the virtual gaming environment over time as the features identified within the real-world environment change is described. Utilizing the technology described, a person wearing a head-mounted display device (HMD) may walk around a real-world environment and play a virtual game that is adapted to that real-world environment. For example, the HMD may identify environmental features within a real-world environment such as five grassy areas and two cars, and then spawn virtual monsters based on the location and type of the environmental features identified. The location and type of the environmental features identified may vary depending on the particular real-world environment in which the HMD exists and therefore each virtual game may look different depending on the particular real-world environment.

1. A method for generating an augmented reality environment, comprising: displaying one or more images corresponding with a first virtual object within the augmented reality environment using a mobile device; detecting a particular sound while displaying the one or more images; determining a distance between the first virtual object within the augmented reality environment and the mobile device displaying the one or more images; setting a degree of transparency for the first virtual object based on the distance between the first virtual object within the augmented reality environment and the mobile device in response to detecting the particular sound; generating one or more new images corresponding with the first virtual object based on the degree of transparency; and displaying the one or more new images corresponding with the first virtual object within the augmented reality environment using the mobile device in response to detecting the particular sound.
2. The method of claim 1, further comprising: identifying one or more real-world objects within ...
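
Claim 1 sets the virtual object's transparency from its distance to the device. A Python sketch with an invented linear falloff (the patent does not specify the mapping): the closer the device, the more transparent the object is drawn.

```python
def transparency_for(distance, max_distance=10.0):
    # 0.0 = fully transparent, 1.0 = fully opaque.
    return min(distance / max_distance, 1.0)

print(transparency_for(2.5))   # near: mostly transparent (0.25)
print(transparency_for(12.0))  # far: opaque (1.0)
```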

Publication date: 17-12-2015

TECHNIQUES FOR USING HUMAN GESTURES TO CONTROL GESTURE UNAWARE PROGRAMS

Number: US20150363005A1
Assignee:

A capture device can detect gestures made by a user. The gestures can be used to control a gesture unaware program.

1. A system, comprising: a computing environment including a processor coupled to a computer readable storage medium, the computer readable storage medium including executable instructions for a program, wherein a user interface for the program is configured to only receive non-gesture user interface commands; a capture device coupled to the computing environment, wherein the capture device is configured to capture images and identify gestures in the images, wherein the capture device is configured to capture audio and identify audio commands in the captured audio; and wherein the non-gesture user interface commands used to control the user interface of the program are generated from gestures identified in the images and audio commands identified in the captured audio.
2. The system of claim 1, wherein the capture device is further configured to generate scancodes for keys of keyboards based on the gestures and send the scancodes to the computing environment.
3. The system of claim 1, wherein the capture device is further configured to generate packets that include mouse button state and mouse position information based on the gestures and send the packets to the computing environment.
4. The system of claim 1, wherein the capture device is configured to generate state information for a videogame controller and send the state information to the computing environment.
5. The system of claim 1, wherein the computing environment is configured to receive gesture information, generate keyboard messages, and send the keyboard messages to the program.
6. The system of claim 1, wherein the computing environment is configured to receive gesture information, generate mouse messages, and send the mouse messages to the program.
7. The system of claim 1, wherein the computing environment is configured to receive gesture information ...
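
The bridging idea in claim 2 is that recognized gestures are converted into ordinary keyboard scancodes, so a gesture-unaware program sees standard input. A Python sketch; the gesture names are invented, and the scancode values shown are the classic PC set-1 codes (left arrow 0x4B, right arrow 0x4D, enter 0x1C).

```python
GESTURE_TO_SCANCODE = {
    "swipe_left": 0x4B,   # left-arrow scancode
    "swipe_right": 0x4D,  # right-arrow scancode
    "push": 0x1C,         # enter scancode
}

def gesture_to_input(gesture: str):
    # Translate a recognized gesture into a synthetic keyboard event.
    code = GESTURE_TO_SCANCODE.get(gesture)
    return {"type": "keyboard", "scancode": code} if code is not None else None

print(gesture_to_input("swipe_left"))
```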

Publication date: 23-02-2016

Virtual spectator experience with a personal audio/visual apparatus

Number: US9268406B2
Assignee: Microsoft Technology Licensing LLC

Technology is described for providing a virtual spectator experience for a user of a personal A/V apparatus including a near-eye, augmented reality (AR) display. A position volume of an event object participating in an event in a first 3D coordinate system for a first location is received and mapped to a second position volume in a second 3D coordinate system at a second location remote from where the event is occurring. A display field of view of the near-eye AR display at the second location is determined, and real-time 3D virtual data representing the one or more event objects which are positioned within the display field of view are displayed in the near-eye AR display. A user may select a viewing position from which to view the event. Additionally, virtual data of a second user may be displayed at a position relative to a first user.
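
The core step here is mapping an event object's position volume from the venue's coordinate system into the remote viewer's local one. A Python sketch where a uniform scale plus offset stands in for the patent's mapping (the actual transform is not specified in the abstract):

```python
def map_position(venue_pos, scale, local_origin):
    # Scale the venue coordinates down and re-anchor them at a local origin.
    return tuple(p * scale + o for p, o in zip(venue_pos, local_origin))

# A player at mid-field, shrunk onto a tabletop two meters in front of the viewer.
print(map_position((50.0, 0.0, 25.0), scale=0.02, local_origin=(0.0, 0.9, -2.0)))
```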

Publication date: 15-03-2016

Personal audio/visual system for providing an adaptable augmented reality environment

Number: US9285871B2
Assignee: Microsoft Technology Licensing LLC

A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction.

Publication date: 15-03-2016

Representing a location at a previous time period using an augmented reality display

Number: US9286711B2
Assignee: Microsoft Technology Licensing LLC

Technology is described for representing a physical location at a previous time period with three dimensional (3D) virtual data displayed by a near-eye, augmented reality display of a personal audiovisual (A/V) apparatus. The personal A/V apparatus is identified as being within the physical location, and one or more objects in a display field of view of the near-eye, augmented reality display are automatically identified based on a three dimensional mapping of objects in the physical location. User input, which may be natural user interface (NUI) input, indicates a previous time period, and one or more 3D virtual objects associated with the previous time period are displayed from a user perspective associated with the display field of view. An object may be erased from the display field of view, and a camera effect may be applied when changing between display fields of view.

Publication date: 28-03-2017

Personal audio/visual apparatus providing resource management

Number: US9606992B2
Assignee: Microsoft Technology Licensing LLC

Technology is described for resource management based on data including image data of a resource captured by at least one capture device of at least one personal audiovisual (A/V) apparatus including a near-eye, augmented reality (AR) display. A resource is automatically identified from image data captured by at least one capture device of at least one personal A/V apparatus and object reference data. A location in which the resource is situated and a 3D space position or volume of the resource in the location is tracked. A property of the resource is also determined from the image data and tracked. A function of a resource may also be stored for determining whether the resource is usable for a task. Responsive to notification criteria for the resource being satisfied, image data related to the resource is displayed on the near-eye AR display.

Publication date: 24-05-2016

Enhancing a sport using an augmented reality display

Number: US9345957B2
Assignee: Microsoft Technology Licensing LLC

Technology is described for providing a personalized sport performance experience with three dimensional (3D) virtual data displayed by a near-eye, augmented reality display of a personal audiovisual (A/V) apparatus. A physical movement recommendation is determined for the user performing a sport based on skills data for the user for the sport, physical characteristics of the user, and 3D space positions for at least one or more sport objects. 3D virtual data depicting one or more visual guides for assisting the user in performing the physical movement recommendation may be displayed from a user perspective associated with a display field of view of the near-eye AR display. An avatar may also be displayed by the near-eye AR display performing a sport. The avatar may perform the sport interactively with the user or be displayed performing a prior performance of an individual represented by the avatar.

Publication date: 22-09-2015

Techniques for using human gestures to control gesture unaware programs

Number: US9141193B2
Assignee: Microsoft Technology Licensing LLC

A capture device can detect gestures made by a user. The gestures can be used to control a gesture unaware program.

Publication date: 18-12-2012

Recognizing user intent in motion capture system

Number: US8334842B2
Assignee: Microsoft Corp

Techniques for facilitating interaction with an application in a motion capture system allow a person to easily begin interacting without manual setup. A depth camera system tracks a person in physical space and evaluates the person's intent to engage with the application. Factors such as location, stance, movement and voice data can be evaluated. Absolute location in a field of view of the depth camera, and location relative to another person, can be evaluated. Stance can include facing a depth camera, indicating a willingness to interact. Movements can include moving toward or away from a central area in the physical space, walking through the field of view, and movements which occur while standing generally in one location, such as moving one's arms around, gesturing, or shifting weight from one foot to another. Voice data can include volume as well as words which are detected by speech recognition.
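
The abstract combines location, stance, movement, and voice into an engagement decision. A Python sketch of one way to fold those factors into a score; the weights and thresholds are invented for illustration.

```python
def engagement_score(dist_to_center, facing_camera, speed_toward, voice_volume):
    score = 0.0
    score += 0.3 * max(0.0, 1.0 - dist_to_center / 3.0)  # near the central area
    score += 0.3 * (1.0 if facing_camera else 0.0)       # stance: facing the camera
    score += 0.2 * max(0.0, min(speed_toward, 1.0))      # moving toward it
    score += 0.2 * min(voice_volume, 1.0)                # speaking up
    return score

# Engage the user without manual setup once the score clears a threshold.
print(engagement_score(1.0, True, 0.5, 0.8) > 0.5)  # -> True
```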

Publication date: 13-01-2015

Tracking groups of users in motion capture system

Number: US8933884B2
Assignee: Microsoft Corp

In a motion capture system, a unitary input is provided to an application based on detected movement and/or location of a group of people. Audio information from the group can also be used as an input. The application can provide real-time feedback to the person or group via a display and audio output. The group can control the movement of an avatar in a virtual space based on the movement of each person in the group, such as in a steering or balancing game. To avoid a discontinuous or confusing output by the application, missing data can be generated for a person who is occluded or partially out of the field of view. A wait time can be set for activating a new person and deactivating a currently-active person. The wait time can be adaptive based on a first detected position or a last detected position of the person.
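
A minimal Python sketch of turning a group's positions into one unitary input, as in the steering or balancing game example: the lateral offset of the group centroid becomes the tilt. The field width and clamping are invented for illustration.

```python
def group_steering(positions, field_width=4.0):
    # Centroid x-offset of everyone in the group, clamped to [-1, 1].
    cx = sum(x for x, _ in positions) / len(positions)
    return max(-1.0, min(1.0, cx / (field_width / 2)))  # -1 = hard left, 1 = hard right

print(group_steering([(-1.0, 2.0), (0.5, 3.0), (1.5, 2.5)]))  # slight lean right
```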
