
Total found: 274. Displayed: 184.

Publication date: 05-09-2017

Mixed reality interactions

Number: US0009754420B2

Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. A selected interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.
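
The mode-selection step described above (query a per-object profile, then pick an interaction mode from the interaction context) can be sketched roughly as follows; the profile dictionary, context names, and mode names are invented for illustration and are not taken from the patent.

```python
# Hypothetical sketch: choosing an interaction mode for a recognized object
# from its profile and the current interaction context. Profile contents,
# context names, and mode names are illustrative assumptions.
OBJECT_PROFILES = {
    "mug": {"kitchen": "fill_level_overlay", "desk": "drink_reminder"},
}

def select_interaction_mode(object_id, context, default="inspect"):
    """Query the object's profile and pick the mode registered for the
    current interaction context; fall back to a default mode."""
    profile = OBJECT_PROFILES.get(object_id, {})
    return profile.get(context, default)
```

A user input directed at the object would then be interpreted against the returned mode.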

Publication date: 08-09-2015

Indicating out-of-view augmented reality images

Number: US0009129430B2

Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object.
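
As a rough illustration of the idea, an out-of-view indicator needs the direction a user would have to turn to face a hidden object; the sketch below computes that in 2D. The field-of-view value and the coordinate convention are assumptions for illustration, not the patent's method.

```python
import math

def offscreen_indicator_angle(user_pos, user_heading_deg, object_pos, fov_deg=90.0):
    """Return None when the object lies inside the field of view; otherwise
    the signed angle in degrees the user should turn toward it -- one piece
    of positional information an out-of-view indicator could show."""
    dx = object_pos[0] - user_pos[0]
    dy = object_pos[1] - user_pos[1]
    bearing = math.degrees(math.atan2(dy, dx))
    # Normalize the heading offset to the interval (-180, 180].
    offset = (bearing - user_heading_deg + 180.0) % 360.0 - 180.0
    if abs(offset) <= fov_deg / 2.0:
        return None  # object is visible; no indicator needed
    return offset
```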

Publication date: 20-12-2016

Constructing augmented reality environment with pre-computed lighting

Number: US0009524585B2

Embodiments related to efficiently constructing an augmented reality environment with global illumination effects are disclosed. For example, one disclosed embodiment provides a method of displaying an augmented reality image via a display device. The method includes receiving image data, the image data capturing an image of a local environment of the display device, and identifying a physical feature of the local environment via the image data. The method further includes constructing an augmented reality image of a virtual structure for display over the physical feature in spatial registration with the physical feature from a viewpoint of a user, the augmented reality image comprising a plurality of modular virtual structure segments arranged in adjacent locations to form the virtual structure feature, each modular virtual structure segment comprising a pre-computed global illumination effect, and outputting the augmented reality image to the display device.

Publication date: 25-12-2014

GESTURE TOOL

Number: US20140380254A1
Assignee:

Systems, methods and computer readable media are disclosed for a gesture tool. A capture device captures user movement and provides corresponding data to a gesture recognizer engine and an application. The data is parsed to determine whether it satisfies one or more gesture filters, each filter corresponding to a user-performed gesture. The data and the information about the filters are also sent to a gesture tool, which displays aspects of the data and filters. In response to user input corresponding to a change in a filter, the gesture tool sends an indication of the change to the gesture recognizer engine and application, where that change takes effect.

Publication date: 07-11-2017

Mixed reality display accommodation

Number: US0009812046B2

A mixed reality accommodation system and related methods are provided. In one example, a head-mounted display device includes a plurality of sensors and a display system for presenting holographic objects. A mixed reality safety program is configured to receive a holographic object and associated content provider ID from a source. The program assigns a trust level to the object based on the content provider ID. If the trust level is less than a threshold, the object is displayed according to a first set of safety rules that provide a protective level of display restrictions. If the trust level is greater than or equal to the threshold, the object is displayed according to a second set of safety rules that provide a permissive level of display restrictions that are less than the protective level of display restrictions.
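
The trust-threshold logic described above reads as a simple comparison: below the threshold, the protective rule set; at or above it, the permissive one. A minimal sketch, assuming a numeric trust scale in [0, 1] and a hypothetical provider registry (neither is specified by the patent):

```python
# Assumed numeric trust scale in [0, 1] and a hypothetical provider registry.
PROVIDER_TRUST = {"known_vendor": 0.9, "unsigned_source": 0.1}
TRUST_THRESHOLD = 0.5

def rules_for(trust_level, threshold=TRUST_THRESHOLD):
    """Below the threshold: protective (restrictive) display rules;
    at or above it: permissive rules, as the abstract describes."""
    return "protective" if trust_level < threshold else "permissive"

def rules_for_provider(provider_id):
    # Unknown providers default to zero trust, hence protective rules.
    return rules_for(PROVIDER_TRUST.get(provider_id, 0.0))
```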

Publication date: 12-03-2019

Constructing augmented reality environment with pre-computed lighting

Number: US10229544B2

Embodiments related to efficiently constructing an augmented reality environment with global illumination effects are disclosed. For example, one disclosed embodiment provides a method of displaying an augmented reality image via a display device. The method includes receiving image data, the image data capturing an image of a local environment of the display device, and identifying a physical feature of the local environment via the image data. The method further includes constructing an augmented reality image of a virtual structure for display over the physical feature in spatial registration with the physical feature from a viewpoint of a user, the augmented reality image comprising a plurality of modular virtual structure segments arranged in adjacent locations to form the virtual structure feature, each modular virtual structure segment comprising a pre-computed global illumination effect, and outputting the augmented reality image to the display device.

Publication date: 26-06-2015

SYNTHESIZED REALITY STADIUM

Number: KR1020150071611A
Assignee:

A computing system includes a see-through display device, a logic subsystem, and a storage system storing commands. The logic subsystem executes the commands to display a virtual stadium, a user-controlled avatar, and another avatar of a second party on the see-through display device. When the virtual stadium is displayed on the see-through display device, it is shown as integrated with the physical space. The commands also display the updated user-controlled avatar on the see-through display device in response to received user input.

Publication date: 15-12-2011

CONTEXTUAL TAGGING OF RECORDED DATA

Number: US20110304774A1
Assignee: MICROSOFT CORPORATION

Embodiments are disclosed that relate to the automatic tagging of recorded content. For example, one disclosed embodiment provides a computing device comprising a processor and memory having instructions executable by the processor to receive input data comprising one or more of depth data, video data, and directional audio data, identify a content-based input signal in the input data, and apply one or more filters to the input signal to determine whether the input signal comprises a recognized input. Further, if the input signal comprises a recognized input, then the instructions are executable to tag the input data with a contextual tag associated with the recognized input and record the contextual tag with the input data.

Publication date: 21-06-2012

INTELLIGENT GAMEPLAY PHOTO CAPTURE

Number: US20120157200A1
Assignee: MICROSOFT CORPORATION

Implementations for identifying, capturing, and presenting high-quality photo-representations of acts occurring during play of a game that employs motion tracking input technology are disclosed. As one example, a method is disclosed that includes capturing, via an optical interface, a plurality of photographs of a player in a capture volume during play of the electronic game. The method further includes, for each captured photograph of the plurality of captured photographs, comparing an event-based scoring parameter to an event depicted by or corresponding to the captured photograph. The method further includes assigning respective scores to the plurality of captured photographs based, at least in part, on the comparison to the event-based scoring parameter. The method further includes associating the captured photographs on electronic storage media with the respective scores assigned to the captured photographs.

Publication date: 26-06-2015

AUGMENTED REALITY OVERLAY FOR CONTROL DEVICE

Number: KR1020150071594A
Assignee:

An embodiment of the present invention discloses a method for providing instructional information for a control device. The method, used in a see-through display device including a see-through display unit and an outward-facing image sensor, includes the steps of: acquiring an image of a scene viewable through the see-through display unit; and detecting a control device in the scene. The method also includes the steps of: retrieving information relevant to the functions of an interactive element of the control device; and displaying an image on the see-through display unit that augments the appearance of the interactive element with image data relevant to its functions.

Publication date: 08-05-2014

USER AUTHENTICATION ON DISPLAY DEVICE

Number: US20140125574A1
Assignee: Individual

Embodiments are disclosed that relate to authenticating a user of a display device. For example, one disclosed embodiment includes displaying one or more virtual images on the display device, wherein the one or more virtual images include a set of augmented reality features. The method further includes identifying one or more movements of the user via data received from a sensor of the display device, and comparing the identified movements of the user to a predefined set of authentication information for the user that links user authentication to a predefined order of the augmented reality features. If the identified movements indicate that the user selected the augmented reality features in the predefined order, then the user is authenticated, and if the identified movements indicate that the user did not select the augmented reality features in the predefined order, then the user is not authenticated.

Publication date: 05-02-2014

Automated sensor driven match-making

Number: CN103561831A
Assignee:

A method of matching a player of a multi-player game with a remote participant includes recognizing the player, automatically identifying an observer within a threshold proximity to the player, using an identity of the observer to find one or more candidates to play as the remote participant of the multi-player game, and, when selecting the remote participant, choosing a candidate from the one or more candidates over a non-candidate if the candidate satisfies the matching criteria.
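
The selection rule above (prefer a candidate derived from an observer's identity over a non-candidate, but only if the candidate satisfies the matching criteria) can be sketched as follows; the predicate-based criteria and list inputs are assumptions for illustration.

```python
def pick_remote_participant(candidates, non_candidates, matches):
    """Choose the remote participant: prefer any candidate (found via an
    observer's identity) that satisfies the matching criteria; otherwise
    fall back to a non-candidate."""
    for candidate in candidates:
        if matches(candidate):
            return candidate
    return non_candidates[0] if non_candidates else None
```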

Publication date: 08-05-2014

CONSTRUCTING AUGMENTED REALITY ENVIRONMENT WITH PRE-COMPUTED LIGHTING

Number: US20140125668A1
Assignee: Individual

Embodiments related to efficiently constructing an augmented reality environment with global illumination effects are disclosed. For example, one disclosed embodiment provides a method of displaying an augmented reality image via a display device. The method includes receiving image data, the image data capturing an image of a local environment of the display device, and identifying a physical feature of the local environment via the image data. The method further includes constructing an augmented reality image of a virtual structure for display over the physical feature in spatial registration with the physical feature from a viewpoint of a user, the augmented reality image comprising a plurality of modular virtual structure segments arranged in adjacent locations to form the virtual structure feature, each modular virtual structure segment comprising a pre-computed global illumination effect, and outputting the augmented reality image to the display device.

Publication date: 10-07-2014

MIXED REALITY DISPLAY ACCOMMODATION

Number: US20140192084A1
Assignee:

A mixed reality accommodation system and related methods are provided. In one example, a head-mounted display device includes a plurality of sensors and a display system for presenting holographic objects. A mixed reality safety program is configured to receive a holographic object and associated content provider ID from a source. The program assigns a trust level to the object based on the content provider ID. If the trust level is less than a threshold, the object is displayed according to a first set of safety rules that provide a protective level of display restrictions. If the trust level is greater than or equal to the threshold, the object is displayed according to a second set of safety rules that provide a permissive level of display restrictions that are less than the protective level of display restrictions.

1. A method for displaying a holographic object to accommodate a mixed reality environment including a physical environment, comprising: providing a head-mounted display device configured to be worn by a user and operatively connected to a computing device, the head-mounted display device including a plurality of sensors and a display system; receiving physical environment data from the physical environment via one or more of the sensors; receiving the holographic object from a source, the holographic object associated with a content provider ID; assigning a trust level to the holographic object based on the content provider ID; if the trust level is less than a trust level threshold, then applying a first set of safety rules that provide a protective level of display restrictions for the holographic object and displaying the holographic object via the display system according to the first set of safety rules; if the trust level is greater than or equal to the trust level threshold, then applying a second set of safety rules that provide a permissive level of display restrictions for the holographic object and displaying the holographic object via the display system according to the second set of safety rules; if the
...

Publication date: 26-06-2015

USER AUTHENTICATION ON DISPLAY DEVICE

Number: KR1020150071592A
Assignee:

An embodiment of the present invention relates to a method for authenticating a user on a display device. For example, according to one embodiment, one or more virtual images are displayed on the display device, the one or more virtual images including a set of augmented reality features. The method also includes the steps of: identifying one or more movements of the user through data received from a sensor of the display device; and comparing the identified movements of the user with a predetermined set of authentication information for the user that links user authentication to a predetermined order of the augmented reality features. If the identified movements show that the user selected the augmented reality features in the predetermined order, the user is authenticated. If the identified movements show that the user did not select the augmented reality features in the predetermined order, the user is not authenticated ...

Publication date: 21-06-2012

DRIVING SIMULATOR CONTROL WITH VIRTUAL SKELETON

Number: US20120157198A1
Assignee: MICROSOFT CORPORATION

Depth-image analysis is performed with a device that analyzes a human target within an observed scene by capturing depth-images that include depth information from the observed scene. The human target is modeled with a virtual skeleton including a plurality of joints. The virtual skeleton is used as an input for controlling a driving simulation.

Publication date: 08-08-2019

CONSTRUCTING AUGMENTED REALITY ENVIRONMENT WITH PRE-COMPUTED LIGHTING

Number: US20190244430A1
Assignee: Microsoft Technology Licensing, LLC

Embodiments related to efficiently constructing an augmented reality environment with global illumination effects are disclosed. For example, one disclosed embodiment provides a method of displaying an augmented reality image via a display device. The method includes receiving image data, the image data capturing an image of a local environment of the display device, and identifying a physical feature of the local environment via the image data. The method further includes constructing an augmented reality image of a virtual structure for display over the physical feature in spatial registration with the physical feature from a viewpoint of a user, the augmented reality image comprising a plurality of modular virtual structure segments arranged in adjacent locations to form the virtual structure feature, each modular virtual structure segment comprising a pre-computed global illumination effect, and outputting the augmented reality image to the display device.

Publication date: 11-02-2014

Method to control perspective for a camera-controlled computer

Number: US0008649554B2

Systems, methods and computer readable media are disclosed for controlling perspective of a camera-controlled computer. A capture device captures user gestures and sends corresponding data to a recognizer engine. The recognizer engine analyzes the data with a plurality of filters, each filter corresponding to a gesture. Based on the output of those filters, a perspective control is determined, and a display device displays a new perspective corresponding to the perspective control.

Publication date: 28-07-2015

User authentication on augmented reality display device

Number: US0009092600B2

Embodiments are disclosed that relate to authenticating a user of a display device. For example, one disclosed embodiment includes displaying one or more virtual images on the display device, wherein the one or more virtual images include a set of augmented reality features. The method further includes identifying one or more movements of the user via data received from a sensor of the display device, and comparing the identified movements of the user to a predefined set of authentication information for the user that links user authentication to a predefined order of the augmented reality features. If the identified movements indicate that the user selected the augmented reality features in the predefined order, then the user is authenticated, and if the identified movements indicate that the user did not select the augmented reality features in the predefined order, then the user is not authenticated.
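
The core check described above (compare the user's selection order against the enrolled order of augmented reality features) reduces to a sequence comparison. A minimal sketch, with feature names invented for illustration:

```python
def authenticate(selected_features, enrolled_order):
    """Authenticate only when the user selected the augmented reality
    features in exactly the predefined (enrolled) order."""
    return list(selected_features) == list(enrolled_order)
```

A real system would of course derive `selected_features` from sensed user movements rather than take it directly as input.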

Publication date: 27-11-2014

BODY-LOCKED PLACEMENT OF AUGMENTED REALITY OBJECTS

Number: US20140347390A1
Assignee:

Embodiments are disclosed that relate to placing virtual objects in an augmented reality environment. For example, one disclosed embodiment provides a method comprising receiving sensor data comprising one or more of motion data, location data, and orientation data from one or more sensors located on a head-mounted display device, and based upon the motion data, determining a body-locking direction vector that is based upon an estimated direction in which a body of a user is facing. The method further comprises positioning a displayed virtual object based on the body-locking direction vector.

Publication date: 21-06-2012

FIRST PERSON SHOOTER CONTROL WITH VIRTUAL SKELETON

Number: US20120155705A1
Assignee: MICROSOFT CORPORATION

A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured aiming vector control, and a virtual weapon is aimed in proportion to the gestured aiming vector control.

Publication date: 13-11-2014

CALIBRATION OF EYE LOCATION

Number: US20140333665A1
Assignee:

Embodiments are disclosed that relate to calibrating a predetermined eye location in a head-mounted display. For example, in one disclosed embodiment a method includes displaying a virtual marker visually alignable with a real world target at an alignment condition. At the alignment condition, image data is acquired to determine a location of the real world target. From the image data, an estimated eye location relative to a location of the head-mounted display is determined. Based upon the estimated eye location, the predetermined eye location is then calibrated.

Publication date: 13-09-2016

Object tracking

Number: US0009443414B2

Embodiments are disclosed herein that relate to the automatic tracking of objects. For example, one disclosed embodiment provides a method of operating a mobile computing device having an image sensor. The method includes acquiring image data, identifying an inanimate moveable object in the image data, determining whether the inanimate moveable object is a tracked object, if the inanimate moveable object is a tracked object, then storing information regarding a state of the inanimate moveable object, detecting a trigger to provide a notification of the state of the inanimate moveable object, and providing an output of the notification of the state of the inanimate moveable object.
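
The tracking flow above (store state only for designated tracked objects, then answer a notification trigger from the stored state) can be sketched as a small store; the class and method names are assumptions, not the patent's terminology.

```python
class TrackedObjectStore:
    """Remember the last observed state of tracked inanimate objects and
    answer a notification trigger from the stored state."""

    def __init__(self, tracked_ids):
        self.tracked_ids = set(tracked_ids)
        self.states = {}

    def observe(self, object_id, state):
        # Store state only for objects the user has designated as tracked.
        if object_id in self.tracked_ids:
            self.states[object_id] = state

    def notify(self, object_id):
        # Trigger (e.g. the query "where are my keys?") -> notification text.
        state = self.states.get(object_id)
        return None if state is None else f"{object_id}: {state}"
```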

Publication date: 29-09-2015

Display with blocking image generation

Number: US0009147111B2

A blocking image generating system and related methods include a head-mounted display device having an opacity layer. A method may include receiving a virtual image to be presented by display optics in the head-mounted display device. Lighting information and an eye-position parameter may be received from an optical sensor system in the head-mounted display device. A blocking image may be generated in the opacity layer of the head-mounted display device based on the lighting information and the virtual image. The location of the blocking image in the opacity layer may be adjusted based on the eye-position parameter.

Publication date: 21-06-2012

SKELETAL CONTROL OF THREE-DIMENSIONAL VIRTUAL WORLD

Number: US20120157203A1
Assignee: Microsoft Corporation

A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three-dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured control, and a three-dimensional virtual world is controlled responsive to the gestured control.

1. A data holding device holding instructions executable by a logic subsystem to: render a three-dimensional virtual gaming world for display on a display device; receive a virtual skeleton, including a plurality of joints, the plurality of joints including a left hand joint and a right hand joint, the virtual skeleton providing a machine readable representation of a human target observed with a three-dimensional depth camera; render a control cursor in the three-dimensional virtual gaming world for display on the display device, a screen space position of the control cursor tracking a position of the left hand joint or the right hand joint of the virtual skeleton as modeled from a world space position of a corresponding hand of the human target; lock the control cursor to an object in the three-dimensional virtual gaming world if a grab threshold of the object is overcome; when the control cursor is locked to the object, move the object with the control cursor such that the world space position of the corresponding hand of the human target moves the object in the three-dimensional virtual gaming world; and unlock the control cursor from the object at a release position of the object within the three-dimensional virtual gaming world if a release threshold of the object is overcome.
2. The data holding device of claim 1, where world space parameters of the corresponding hand overcome the grab threshold of the object if the corresponding hand is closed by the human target.
3. The data holding device of claim 1, where world space parameters of the corresponding hand overcome the grab threshold of the object if the screen space ...

Publication date: 15-12-2011

INTERACTING WITH USER INTERFACE VIA AVATAR

Number: US20110304632A1
Assignee: MICROSOFT CORPORATION

Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control.

Publication date: 13-02-2018

Constructing augmented reality environment with pre-computed lighting

Number: US0009892562B2

Embodiments related to efficiently constructing an augmented reality environment with global illumination effects are disclosed. For example, one disclosed embodiment provides a method of displaying an augmented reality image via a display device. The method includes receiving image data, the image data capturing an image of a local environment of the display device, and identifying a physical feature of the local environment via the image data. The method further includes constructing an augmented reality image of a virtual structure for display over the physical feature in spatial registration with the physical feature from a viewpoint of a user, the augmented reality image comprising a plurality of modular virtual structure segments arranged in adjacent locations to form the virtual structure feature, each modular virtual structure segment comprising a pre-computed global illumination effect, and outputting the augmented reality image to the display device.

Publication date: 21-06-2012

MODELING AN OBJECT FROM IMAGE DATA

Number: US20120154618A1
Assignee: MICROSOFT CORPORATION

A method for modeling an object from image data comprises identifying in an image from the video a set of reference points on the object, and, for each reference point identified, observing a displacement of that reference point in response to a motion of the object. The method further comprises grouping together those reference points for which a common translational or rotational motion of the object results in the observed displacement, and fitting the grouped-together reference points to a shape.

1. A method for constructing a virtual model of an object based on video of the object in motion, the method comprising: identifying in an image from the video a set of reference points on the object; for each reference point identified, observing a displacement of that reference point in response to a motion of the object; grouping together those reference points for which a common translational or rotational motion of the object results in the observed displacement; and fitting the grouped-together reference points to a shape.
2. The method of claim 1, wherein the shape comprises an ellipsoid.
3. The method of claim 1, wherein the shape is one of a plurality of shapes to which the grouped-together reference points are fit.
4. The method of claim 1, wherein the image comprises a rectangular array of pixels and encodes one or more of a brightness, a color, and a polarization state for each pixel.
5. The method of claim 4, wherein the image further encodes a depth coordinate for each pixel.
6. The method of claim 1, wherein said grouping together comprises grouping those reference points for which the observed displacement is within an interval of a predicted displacement, and wherein the predicted displacement is predicted based on the common translational or rotational motion.
7. The method of claim 1, wherein said grouping together comprises forming a plurality of groups of the identified reference points, wherein, for each group ...

Publication date: 29-06-2015

METHOD FOR PROVIDING CROSS-PLATFORM AUGMENTED REALITY EXPERIENCE

Number: KR1020150071824A
Assignee:

A plurality of game sessions are hosted in a server system. A first computing device of a first user joins a first multiplayer gaming session; the first computing device comprises a see-through display. Augmented data is sent to the first computing device for the first multiplayer gaming session to provide an augmented reality experience to the first user. A second computing device of a second user joins the multiplayer gaming session. Experience data is sent to the second computing device for the first multiplayer gaming session in order to provide a cross-platform representation of the augmented reality experience to the second user.

Publication date: 14-04-2015

Automatic depth camera aiming

Number: US0009008355B2

Automatic depth camera aiming is provided by a method which includes receiving from the depth camera one or more observed depth images of a scene. The method further includes, if a point of interest of a target is found within the scene, determining if the point of interest is within a far range relative to the depth camera. The method further includes, if the point of interest of the target is within the far range, operating the depth camera with a far logic, or if the point of interest of the target is not within the far range, operating the depth camera with a near logic.
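
The far/near decision described above is a range test on the depth of the target's point of interest; a minimal sketch, with illustrative range bounds in meters (the patent does not specify values):

```python
def choose_camera_logic(depth_m, far_range=(2.5, 8.0)):
    """Operate the depth camera with 'far' logic when the target's point of
    interest lies within the far range, otherwise with 'near' logic.
    The range bounds are illustrative assumptions, in meters."""
    lo, hi = far_range
    return "far" if lo <= depth_m <= hi else "near"
```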

Publication date: 26-06-2015

OBJECT TRACKING METHOD

Number: KR1020150071593A
Assignee:

An embodiment of the present invention discloses an automatic object tracking method, and more specifically, a method for operating a mobile computing device including an image sensor. The method includes the steps of: acquiring image data; identifying an inanimate moveable object from the image data; determining whether the inanimate moveable object is a tracked object; storing the status information of the inanimate moveable object if it is a tracked object; detecting a trigger to provide a notification regarding the status of the inanimate moveable object; and outputting the notification regarding the status of the inanimate moveable object.

Publication date: 26-06-2015

METHOD FOR CONFIGURING AUGMENTED REALITY ENVIRONMENT WITH PRE-COMPUTED LIGHTING

Number: KR1020150071595A
Assignee:

The present invention discloses various embodiments related to the efficient configuration of an augmented reality environment having global lighting effects. One of the embodiments provides a method for displaying an augmented reality image through a display device. The method includes the steps of: receiving image data capturing images of the local environment of the display device; and identifying the physical features of the local environment using the image data. The method also includes the steps of: configuring the augmented reality image of a virtual structure to be displayed over the physical features in spatial registration with the physical features from a user's point of view; and outputting the augmented reality image to the display device. The augmented reality image includes multiple modular virtual structure segments arranged adjacent to each other to form the features of the virtual structure. Each of the modular virtual structure segments includes the ...

22-03-2016 publication date

Interacting with user interface via avatar

Number: US0009292083B2

Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control.
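The mapping step in this abstract (a physical space in front of the person onto the screen space of the display) is essentially a normalize-and-scale transform. A minimal sketch, with the bounds and screen size as illustrative parameters:

```python
def map_to_screen(point_xy, space_bounds, screen_wh):
    """Map a point in the physical interaction space (meters) to pixels.

    space_bounds: ((x_min, y_min), (x_max, y_max)) region in front of the person.
    screen_wh: (width, height) of the display in pixels.
    """
    (x0, y0), (x1, y1) = space_bounds
    w, h = screen_wh
    u = (point_xy[0] - x0) / (x1 - x0)  # normalize to 0..1
    v = (point_xy[1] - y0) / (y1 - y0)
    return (round(u * w), round(v * h))
```

The center of a 1 m x 1 m interaction space then lands at the center of a 1920x1080 display, so an avatar hand tracked at that point would be drawn mid-screen.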

20-02-2014 publication date

AUGMENTED REALITY OVERLAY FOR CONTROL DEVICES

Number: US20140049558A1

Embodiments for providing instructional information for control devices are disclosed. In one example, a method on a see-through display device comprising a see-through display and an outward-facing image sensor includes acquiring an image of a scene viewable through the see-through display and detecting a control device in the scene. The method also includes retrieving information pertaining to a function of an interactive element of the control device and displaying an image on the see-through display augmenting an appearance of the interactive element of the control device with image data related to the function of the interactive element. 1. On a see-through display device comprising a see-through display and an outward-facing image sensor, a method for providing instructional information for control devices, the method comprising: acquiring an image of a scene viewable through the see-through display; detecting a control device in the scene; retrieving information pertaining to a function of an interactive element of the control device; and displaying an image on the see-through display augmenting an appearance of the interactive element of the control device with image data related to the function of the interactive element. 2. The method of claim 1, wherein the image comprises a graphical element related to the function of the interactive element, the graphical element being displayed on the see-through display over the interactive element. 3. The method of claim 1, wherein the image comprises a text box having text information describing the interactive element. 4. The method of claim 3, further comprising receiving a selection of the text box, and in response displaying additional information on the see-through display device. 5. The method of claim 1, wherein the image comprises an animation. 6. The method of claim 1, further comprising detecting a gaze of a user of the see-through display device at a selected interactive element of the ...

24-02-2015 publication date

Executable virtual objects associated with real objects

Number: US0008963805B2

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.

26-12-2017 publication date

Display resource management

Number: US0009851787B2

A system and related methods for a resource management in a head-mounted display device are provided. In one example, the head-mounted display device includes a plurality of sensors and a display system for presenting holographic objects. A resource management program is configured to operate a selected sensor in a default power mode to achieve a selected fidelity. The program receives user-related information from one or more of the sensors, and determines whether target information is detected. Where target information is detected, the program adjusts the selected sensor to operate in a reduced power mode that uses less power than the default power mode.

05-08-2010 publication date

MAPPING A NATURAL INPUT DEVICE TO A LEGACY SYSTEM

Number: US20100199229A1
Assignee: Microsoft Corporation

Systems and methods for mapping natural input devices to legacy system inputs are disclosed. One example system may include a computing device having an algorithmic preprocessing module configured to receive input data containing a natural user input and to identify the natural user input in the input data. The computing device may further include a gesture module coupled to the algorithmic preprocessing module, the gesture module being configured to associate the natural user input to a gesture in a gesture library. The computing device may also include a mapping module to map the gesture to a legacy controller input, and to send the legacy controller input to a legacy system in response to the natural user input.
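The pipeline this abstract describes (identify the natural input, associate it with a gesture in a gesture library, map the gesture to a legacy controller input) can be sketched as two table lookups. The gesture names and button codes below are hypothetical, invented for illustration:

```python
# Hypothetical gesture library and mapping table; the patent describes the
# pipeline (natural input -> gesture -> legacy controller input) without
# naming concrete gestures or buttons.
GESTURE_LIBRARY = {"swipe_right", "push", "jump"}
GESTURE_TO_LEGACY = {"swipe_right": "DPAD_RIGHT", "push": "BUTTON_A", "jump": "BUTTON_B"}

def map_natural_input(natural_input):
    """Map an identified natural user input to a legacy controller input."""
    gesture = natural_input if natural_input in GESTURE_LIBRARY else None
    if gesture is None:
        return None  # unrecognized input produces no legacy event
    return GESTURE_TO_LEGACY[gesture]
```

The legacy system only ever sees familiar controller events, which is the point of the design: the legacy application needs no knowledge of the natural input device.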

12-03-2014 publication date

Matching users over a network

Number: CN103635933A

Various embodiments are disclosed that relate to negatively matching users over a network. For example, one disclosed embodiment provides a method including storing a plurality of user profiles corresponding to a plurality of users, each user profile in the plurality of user profiles including one or more user attributes, and receiving a request from a user for a list of one or more suggested negatively matched other users. In response to the request, the method further includes ranking each of a plurality of other users based on a magnitude of a difference between one or more user attributes of the user and corresponding one or more user attributes of the other user, and sending a list of one or more negatively matched users to the exclusion of more positively matched users based on the ranking.
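The ranking step described here (order other users by the magnitude of the difference between their attributes and the requesting user's) can be sketched directly. The sum-of-absolute-differences metric is an assumption for illustration; the abstract only requires ranking by a difference magnitude:

```python
def rank_negative_matches(user, others, top_n=3):
    """Rank other users by how *different* their attributes are from the user's.

    Profiles are dicts of numeric attributes; the distance metric (sum of
    absolute per-attribute differences) is an assumed choice.
    """
    def difference(other):
        return sum(abs(user[k] - other[k]) for k in user)
    # Largest difference first: the most negatively matched users lead the list.
    return sorted(others, key=difference, reverse=True)[:top_n]
```

Returning only the top of the ranking realizes the "to the exclusion of more positively matched users" behavior in the abstract.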

10-06-2014 publication date

Interacting with user interface via avatar

Number: US0008749557B2

Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control.

26-07-2016 publication date

Gesture shortcuts

Number: US0009400559B2

Systems, methods and computer readable media are disclosed for gesture shortcuts. A user's movement or body position is captured by a capture device of a system, and is used as input to control the system. For a system-recognized gesture, there may be a full version of the gesture and a shortcut of the gesture. Where the system recognizes that either the full version of the gesture or the shortcut of the gesture has been performed, it sends an indication that the system-recognized gesture was observed to a corresponding application. Where the shortcut comprises a subset of the full version of the gesture, and both the shortcut and the full version of the gesture are recognized as the user performs the full version of the gesture, the system recognizes that only a single performance of the gesture has occurred, and indicates to the application as such.

13-02-2014 publication date

OBJECT TRACKING

Number: US2014044305A1

Embodiments are disclosed herein that relate to the automatic tracking of objects. For example, one disclosed embodiment provides a method of operating a mobile computing device having an image sensor. The method includes acquiring image data, identifying an inanimate moveable object in the image data, determining whether the inanimate moveable object is a tracked object, if the inanimate moveable object is a tracked object, then storing information regarding a state of the inanimate moveable object, detecting a trigger to provide a notification of the state of the inanimate moveable object, and providing an output of the notification of the state of the inanimate moveable object.

15-04-2014 publication date

Automated sensor driven match-making

Number: US0008696461B2

A method of matching a player of a multi-player game with a remote participant includes recognizing the player, automatically identifying an observer within a threshold proximity to the player, using an identity of the observer to find one or more candidates to play as the remote participant of the multi-player game, and when selecting the remote participant, choosing a candidate from the one or more candidates above a non-candidate if the candidate satisfies a matching criteria.
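The selection rule in this abstract (prefer an observer-derived candidate over a non-candidate whenever the candidate satisfies the matching criteria) amounts to a guarded linear scan. A minimal sketch; the criteria predicate is caller-supplied, since the patent leaves the concrete matching criteria open:

```python
def select_remote_participant(candidates, non_candidates, matches_criteria):
    """Choose a remote participant, preferring observer-derived candidates.

    `matches_criteria` is a caller-supplied predicate standing in for the
    unspecified matching criteria in the abstract.
    """
    for candidate in candidates:
        if matches_criteria(candidate):
            return candidate            # candidate chosen above any non-candidate
    return non_candidates[0] if non_candidates else None
```

Only when no candidate passes the criteria does the selection fall back to the general (non-candidate) pool.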

20-12-2016 publication date

Method to control perspective for a camera-controlled computer

Number: US0009524024B2

Systems, methods and computer readable media are disclosed for controlling perspective of a camera-controlled computer. A capture device captures user gestures and sends corresponding data to a recognizer engine. The recognizer engine analyzes the data with a plurality of filters, each filter corresponding to a gesture. Based on the output of those filters, a perspective control is determined, and a display device displays a new perspective corresponding to the perspective control.

05-05-2015 publication date

Automated sensor driven friending

Number: US0009025832B2

A method of finding a new social network service friend for a player belonging to a social network service and having a friend group including one or more player-accepted friends includes recognizing the player, automatically identifying an observer within a threshold proximity to the player, and adding the observer to the friend group of the player in the social network service if the observer satisfies a friending criteria of the player.

15-08-2017 publication date

Virtual environment generating system

Number: US0009734633B2

A system and related methods for visually augmenting an appearance of a physical environment as seen by a user through a head-mounted display device are provided. In one embodiment, a virtual environment generating program receives eye-tracking information, lighting information, and depth information from the head-mounted display. The program generates a virtual environment that models the physical environment and is based on the lighting information and the distance of a real-world object from the head-mounted display. The program visually augments a virtual object representation in the virtual environment based on the eye-tracking information, and renders the virtual object representation on a transparent display of the head-mounted display device.

08-05-2014 publication date

CROSS-PLATFORM AUGMENTED REALITY EXPERIENCE

Number: US2014128161A1

A plurality of game sessions are hosted at a server system. A first computing device of a first user is joined to a first multiplayer gaming session, the first computing device including a see-through display. Augmentation information is sent to the first computing device for the first multiplayer gaming session to provide an augmented reality experience to the first user. A second computing device of a second user is joined to the first multiplayer gaming session. Experience information is sent to the second computing device for the first multiplayer gaming session to provide a cross-platform representation of the augmented reality experience to the second user.

01-12-2015 publication date

Executable virtual objects associated with real objects

Number: US0009201243B2

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.

22-05-2018 publication date

Object tracking

Number: US0009979809B2

Embodiments are disclosed herein that relate to the automatic tracking of objects. For example, one disclosed embodiment provides a method of operating a mobile computing device having an image sensor. The method includes acquiring image data, identifying an inanimate moveable object in the image data, determining whether the inanimate moveable object is a tracked object, if the inanimate moveable object is a tracked object, then storing information regarding a state of the inanimate moveable object, detecting a trigger to provide a notification of the state of the inanimate moveable object, and providing an output of the notification of the state of the inanimate moveable object.
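The flow in this abstract (identify an object, store its state if it is a tracked object, then emit a notification when a trigger is detected) can be sketched as a small state store. Object names and the notification wording are illustrative:

```python
# Minimal sketch of the tracking flow: record state for tracked objects only,
# and answer a notification trigger from the stored state.
class ObjectTracker:
    def __init__(self, tracked_objects):
        self.tracked = set(tracked_objects)
        self.states = {}

    def observe(self, obj, state):
        """Handle one identified object from the image data."""
        if obj in self.tracked:          # is it a tracked object?
            self.states[obj] = state     # store its state information

    def notify(self, obj):
        """Called when a trigger (e.g. a user query) is detected."""
        if obj in self.states:
            return f"{obj} was last seen {self.states[obj]}"
        return None                      # no stored state for untracked objects
```

A device tracking a user's keys, say, would call `observe` as frames are analyzed and `notify` when the user asks where the keys are.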

21-11-2017 publication date

Driving simulator control with virtual skeleton

Number: US0009821224B2

Depth-image analysis is performed with a device that analyzes a human target within an observed scene by capturing depth-images that include depth information from the observed scene. The human target is modeled with a virtual skeleton including a plurality of joints. The virtual skeleton is used as an input for controlling a driving simulation.

03-01-2013 publication date

MATCHING USERS OVER A NETWORK

Number: US20130007013A1
Assignee: MICROSOFT CORPORATION

Various embodiments are disclosed that relate to negatively matching users over a network. For example, one disclosed embodiment provides a method including storing a plurality of user profiles corresponding to a plurality of users, each user profile in the plurality of user profiles including one or more user attributes, and receiving a request from a user for a list of one or more suggested negatively matched other users. In response to the request, the method further includes ranking each of a plurality of other users based on a magnitude of a difference between one or more user attributes of the user and corresponding one or more user attributes of the other user, and sending a list of one or more negatively matched users to the exclusion of more positively matched users based on the ranking.

14-06-2016 publication date

Body-locked placement of augmented reality objects

Number: US0009367960B2

Embodiments are disclosed that relate to placing virtual objects in an augmented reality environment. For example, one disclosed embodiment provides a method comprising receiving sensor data comprising one or more of motion data, location data, and orientation data from one or more sensors located on a head-mounted display device, and based upon the motion data, determining a body-locking direction vector that is based upon an estimated direction in which a body of a user is facing. The method further comprises positioning a displayed virtual object based on the body-locking direction vector.
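The two steps in this abstract (estimate a body-locking direction vector from motion data, then position the virtual object along it) can be sketched in 2D. Averaging unit vectors over recent head-yaw samples is an assumed smoothing strategy; the abstract only requires an estimate derived from the sensor data:

```python
import math

def body_locking_vector(recent_headings_rad):
    """Estimate the yaw the body faces from recent head-yaw samples (radians).

    Summing unit vectors and taking atan2 averages angles without wrap-around
    problems; this particular estimator is an illustrative assumption.
    """
    x = sum(math.cos(a) for a in recent_headings_rad)
    y = sum(math.sin(a) for a in recent_headings_rad)
    return math.atan2(y, x)

def place_object(user_pos, body_yaw_rad, distance_m):
    """Position a virtual object at a fixed offset along the body-locking vector."""
    return (user_pos[0] + distance_m * math.cos(body_yaw_rad),
            user_pos[1] + distance_m * math.sin(body_yaw_rad))
```

Because the object is placed relative to the estimated body direction rather than the head direction, it stays put while the user merely glances around.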

22-11-2016 publication date

Indicating out-of-view augmented reality images

Number: US0009501873B2

Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object.
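One simple way to provide the positional indication this abstract describes is to compare the object's bearing with the view direction and hint at a screen edge when the object falls outside the field of view. The 60-degree field of view and the left/right convention below are illustrative assumptions:

```python
import math

def offscreen_indicator(view_yaw_rad, object_bearing_rad, fov_rad=math.radians(60)):
    """Return None if the object is in view, else which edge to hint at.

    Assumes counterclockwise-positive yaw, so a positive angular offset means
    the object lies to the viewer's left. The 60-degree FOV is an assumed value.
    """
    # Wrap the angular offset into (-pi, pi] so comparisons are well defined.
    delta = math.atan2(math.sin(object_bearing_rad - view_yaw_rad),
                       math.cos(object_bearing_rad - view_yaw_rad))
    if abs(delta) <= fov_rad / 2:
        return None                    # object is within the field of view
    return "left" if delta > 0 else "right"
```

A per-object call to this function would drive an edge arrow or similar cue for each object outside the user's field of view.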

05-01-2017 publication date

MIXED REALITY INTERACTIONS

Number: US20170004655A1
Assignee: Microsoft Technology Licensing, LLC

Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. A selected interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display. 1. A mixed reality interaction system comprising: a head-mounted display device including a display system and a camera; and a processor configured to: identify a physical object in a mixed reality environment based on an image captured by the camera; determine an interaction context for the identified physical object based on one or more aspects of the mixed reality environment; programmatically select an interaction mode for the identified physical object based on the interaction context and a stored profile for the physical object; interpret a user input directed at the physical object to correspond to a virtual action based on the selected interaction mode; execute the virtual action to modify an appearance of a virtual object associated with the physical object; and display the virtual object via the head-mounted display device with the modified appearance. 2. The mixed reality interaction system of claim 1, wherein the processor is further configured to: present a first query to confirm an accuracy of an identity of the physical object; and in response to the query, ...

23-06-2015 publication date

Shared collaboration using display device

Number: US0009063566B2

Various embodiments are provided for a shared collaboration system and related methods for enabling an active user to interact with one or more additional users and with collaboration items. In one embodiment a head-mounted display device is operatively connected to a computing device that includes a collaboration engine program. The program receives observation information of a physical space from the head-mounted display device along with a collaboration item. The program visually augments an appearance of the physical space as seen through the head-mounted display device to include an active user collaboration item representation of the collaboration item. The program populates the active user collaboration item representation with additional user collaboration item input from an additional user.

05-12-2017 publication date

Executable virtual objects associated with real objects

Number: US0009836889B2

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.

01-06-2010 publication date

Thicknessing gauge

Number: USD616774S
Assignee: LATTA STEPHEN, LIE-NIELSEN THOMAS

09-04-2013 publication date

Gesture coach

Number: US0008418085B2

A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions or not know what gestures are applicable for an executing application. A user may not understand or know how to perform gestures that are applicable for the executing application. User motion data and/or outputs of filters corresponding to gestures may be analyzed to determine those cases where assistance to the user on performing the gesture is appropriate.

07-10-2014 publication date

Gesture tool

Number: US0008856691B2

Systems, methods and computer readable media are disclosed for a gesture tool. A capture device captures user movement and provides corresponding data to a gesture recognizer engine and an application. From that, the data is parsed to determine whether it satisfies one or more gesture filters, each filter corresponding to user-performed gesture. The data and the information about the filters is also sent to a gesture tool, which displays aspects of the data and filters. In response to user input corresponding to a change in a filter, the gesture tool sends an indication of such to the gesture recognizer engine and application, where that change occurs.

06-12-2012 publication date

AUTOMATED SENSOR DRIVEN FRIENDING

Number: US20120311031A1
Assignee: MICROSOFT CORPORATION

A method of finding a new social network service friend for a player belonging to a social network service and having a friend group including one or more player-accepted friends includes recognizing the player, automatically identifying an observer within a threshold proximity to the player, and adding the observer to the friend group of the player in the social network service if the observer satisfies a friending criteria of the player. 1. A method of finding a new social network service friend for a player belonging to a social network service and having a friend group including one or more player-accepted friends, the method comprising: recognizing the player; automatically identifying an observer within a threshold proximity to the player; and adding the observer to the friend group of the player in the social network service if the observer satisfies a friending criteria of the player. 2. The method of claim 1, where automatically identifying the observer includes matching an observed skeletal model to a profile skeletal model included as part of a user profile saved in a network accessible database, the observed skeletal model being derived from three dimensional depth information collected via a depth camera imaging the observer. 3. The method of claim 1, where automatically identifying the observer includes matching an observed voice pattern to a profile voice signature included as part of a user profile saved in a network accessible database, the observed voice pattern being derived from audio recordings collected via a microphone listening to the observer. 4. The method of claim 1, where automatically identifying the observer includes matching an observed facial image to a profile face signature included as part of a user profile saved in a network accessible database, the observed facial image being derived from a digital image collected via a camera imaging the observer. 5. The method of claim 1, where automatically identifying ...

26-06-2013 publication date

METHOD AND COMPUTER READABLE STORAGE MEDIA FOR CAPTURE OF PHOTOS OF INTELLIGENT GAME

Number: AR0000084778A1

Implementations are described for identifying, capturing, and presenting high-quality photographic representations of acts that occur during play of a game employing motion-tracking input technology. As an example, a method is disclosed that includes capturing, via an optical interface, a plurality of photographs of a player in a capture volume during execution of the electronic game. The method further includes, for each captured photograph of the plurality of captured photographs, comparing an event-based scoring parameter against an event represented by, or corresponding to, the captured photograph. The method further includes assigning respective scores to the plurality of captured photographs based, at least in part, on the comparison against the event-based scoring parameter. The method further includes associating, in an electronic storage medium, the captured photographs with the respective ...
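The scoring step this abstract describes (compare each photo's associated event against an event-based scoring parameter and assign a score) can be sketched as a lookup over captured photos. Representing the scoring parameter as a simple event-to-score table is an assumption for illustration:

```python
def score_photos(photos, event_scores):
    """Assign each captured photo a score from the event it corresponds to.

    `photos` is a list of (photo_id, event) pairs; `event_scores` stands in
    for the event-based scoring parameter as an assumed lookup table.
    Events absent from the table score zero.
    """
    return {photo_id: event_scores.get(event, 0) for photo_id, event in photos}
```

A gameplay system could then keep only the highest-scoring photos, which matches the goal of surfacing high-quality shots of notable in-game acts.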

03-01-2017 publication date

Touch and social cues as inputs into a computer

Number: US0009536350B2

A system for automatically displaying virtual objects within a mixed reality environment is described. In some embodiments, a see-through head-mounted display device (HMD) identifies a real object (e.g., a person or book) within a field of view of the HMD, detects one or more interactions associated with real object, and automatically displays virtual objects associated with the real object if the one or more interactions involve touching or satisfy one or more social rules stored in a social rules database. The one or more social rules may be used to infer a particular social relationship by considering the distance to another person, the type of environment (e.g., at home or work), and particular physical interactions (e.g., handshakes or hugs). The virtual objects displayed on the HMD may depend on the particular social relationship inferred (e.g., a friend or acquaintance).

06-12-2012 publication date

AUTOMATED SENSOR DRIVEN MATCH-MAKING

Number: US20120309534A1
Assignee: MICROSOFT CORPORATION

A method of matching a player of a multi-player game with a remote participant includes recognizing the player, automatically identifying an observer within a threshold proximity to the player, using an identity of the observer to find one or more candidates to play as the remote participant of the multi-player game, and when selecting the remote participant, choosing a candidate from the one or more candidates above a non-candidate if the candidate satisfies a matching criteria. 1. A method of matching a player of a multi-player game with a remote participant, the method comprising: recognizing the player; automatically identifying an observer within a threshold proximity to the player; using an identity of the observer to find one or more candidates to play as the remote participant of the multi-player game; and when selecting the remote participant, choosing a candidate from the one or more candidates above a non-candidate if the candidate satisfies a matching criteria. 2. The method of claim 1, where automatically identifying the observer includes matching an observed skeletal model to a profile skeletal model included as part of a user profile saved in a network accessible database, the observed skeletal model being derived from three dimensional depth information collected via a depth camera imaging the observer. 3. The method of claim 1, where automatically identifying the observer includes matching an observed voice pattern to a profile voice signature included as part of a user profile saved in a network accessible database, the observed voice pattern being derived from audio recordings collected via a microphone listening to the observer. 4. The method of claim 1, where automatically identifying the observer includes matching an observed facial image to a profile face signature included as part of a user profile saved in a network accessible database, the observed facial image being derived from a digital image collected via a ...

21-09-2021 publication date

Touch and social cues as inputs into a computer

Number: US0011127210B2

A system for automatically displaying virtual objects within a mixed reality environment is described. In some embodiments, a see-through head-mounted display device (HMD) identifies a real object (e.g., a person or book) within a field of view of the HMD, detects one or more interactions associated with real object, and automatically displays virtual objects associated with the real object if the one or more interactions involve touching or satisfy one or more social rules stored in a social rules database. The one or more social rules may be used to infer a particular social relationship by considering the distance to another person, the type of environment (e.g., at home or work), and particular physical interactions (e.g., handshakes or hugs). The virtual objects displayed on the HMD may depend on the particular social relationship inferred (e.g., a friend or acquaintance).

07-06-2016 publication date

Transitions between body-locked and world-locked augmented reality

Number: US0009361732B2

Various embodiments relating to controlling a see-through display are disclosed. In one embodiment, virtual objects may be displayed on the see-through display. The virtual objects transition between having a position that is body-locked and a position that is world-locked based on various transition events.

30-10-2014 publication date

MIXED REALITY INTERACTIONS

Number: US20140320389A1

Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. A selected interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.

23-01-2018 publication date

Avoidance of color breakup in late-stage re-projection

Number: US0009874932B2

One embodiment provides a method to display video such as computer-rendered animation or other video. The method includes assembling a sequence of video frames featuring a moving object, each video frame including a plurality of subframes sequenced for display according to a schedule. The method also includes determining a vector-valued differential velocity of the moving object relative to a head of an observer of the video. At a time scheduled for display of a first subframe of a given frame, first-subframe image content transformed by a first transform is displayed. At a time scheduled for display of the second subframe of the given frame, second-subframe image content transformed by a second transform is displayed. The first and second transforms are computed based on the vector-valued differential velocity to mitigate artifacts.
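The per-subframe transforms this abstract computes from the vector-valued differential velocity can be approximated, in the simplest case, as translations proportional to each subframe's time offset within the frame. This is a deliberately simplified model of the mitigation, not the patent's full transform computation:

```python
def subframe_offsets(differential_velocity, subframe_times):
    """Shift each color subframe to counter object motion between subframes.

    Translating subframe content by velocity * (t - t0), with velocity in
    pixels/second, is an assumed simplification of the per-subframe transforms
    derived from the vector-valued differential velocity.
    """
    vx, vy = differential_velocity
    t0 = subframe_times[0]
    return [(vx * (t - t0), vy * (t - t0)) for t in subframe_times]
```

With all color subframes shifted to track the moving object, the color fields stay registered on the object and the fringing artifact (color breakup) is reduced.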

Publication date: 28-07-2015

Augmented reality display of scene behind surface

Number: US0009092896B2

Embodiments are disclosed that relate to augmenting an appearance of a surface via a see-through display device. For example, one disclosed embodiment provides, on a computing device comprising a see-through display device, a method of augmenting an appearance of a surface. The method includes acquiring, via an outward-facing image sensor, image data of a first scene viewable through the display. The method further includes recognizing a surface viewable through the display based on the image data and, in response to recognizing the surface, acquiring a representation of a second scene comprising one or more of a scene located physically behind the surface viewable through the display and a scene located behind a surface contextually related to the surface viewable through the display. The method further includes displaying the representation via the see-through display.

Publication date: 05-07-2016

Combining gestures beyond skeletal

Number: US0009383823B2

Systems, methods and computer readable media are disclosed for gesture input beyond skeletal. A user's movement or body position is captured by a capture device of a system. Further, non-user-position data is received by the system, such as controller input by the user, an item that the user is wearing, a prop under the control of the user, or a second user's movement or body position. The system incorporates both the user-position data and the non-user-position data to determine one or more inputs the user made to the system.
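A toy fusion of the two input streams the abstract names, user-position data from the capture device plus non-user-position data such as a controller state, combined to resolve the user's inputs. The gestures, controller fields, and rules are invented for illustration.

```python
def resolve_input(user_pose, controller):
    """Combine skeletal pose data with non-user-position data into inputs."""
    inputs = []
    if user_pose.get("right_hand_raised"):
        # the same raised hand means different things with and without the
        # trigger held, so both streams must be consulted together
        inputs.append("throw" if controller.get("trigger") else "wave")
    if controller.get("button_a"):
        inputs.append("jump")
    return inputs

print(resolve_input({"right_hand_raised": True}, {"trigger": True}))   # ['throw']
print(resolve_input({"right_hand_raised": True}, {"button_a": True}))  # ['wave', 'jump']
```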

Publication date: 22-06-2017

EXECUTABLE VIRTUAL OBJECTS ASSOCIATED WITH REAL OBJECTS

Number: US20170178410A1
Assignee: Microsoft Technology Licensing, LLC

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.

Publication date: 04-11-2010

Method to Control Perspective for a Camera-Controlled Computer

Number: US20100281439A1
Assignee: Microsoft Corporation

Systems, methods and computer readable media are disclosed for controlling perspective of a camera-controlled computer. A capture device captures user gestures and sends corresponding data to a recognizer engine. The recognizer engine analyzes the data with a plurality of filters, each filter corresponding to a gesture. Based on the output of those filters, a perspective control is determined, and a display device displays a new perspective corresponding to the perspective control.
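The recognizer-engine flow described above can be sketched minimally: each filter scores the captured data for one gesture, and the best-scoring gesture above a threshold becomes the perspective control. The filters, data fields, and threshold here are all assumptions for illustration.

```python
def lean_left_filter(frame):
    return 1.0 if frame["lean"] < -0.3 else 0.0

def lean_right_filter(frame):
    return 1.0 if frame["lean"] > 0.3 else 0.0

# one filter per gesture, as in the abstract
FILTERS = {"pan_left": lean_left_filter, "pan_right": lean_right_filter}

def perspective_control(frame, threshold=0.5):
    """Run every filter over the captured frame; pick the winning gesture."""
    scores = {name: f(frame) for name, f in FILTERS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

print(perspective_control({"lean": -0.6}))  # 'pan_left'
print(perspective_control({"lean": 0.1}))   # None (no gesture confident enough)
```

The display would then move the virtual camera according to the returned control.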

Publication date: 19-01-2016

Calibration of eye location

Number: US0009239460B2

Embodiments are disclosed that relate to calibrating a predetermined eye location in a head-mounted display. For example, in one disclosed embodiment a method includes displaying a virtual marker visually alignable with a real world target at an alignment condition. At the alignment condition, image data is acquired to determine a location of the real world target. From the image data, an estimated eye location relative to a location of the head-mounted display is determined. Based upon the estimated eye location, the predetermined eye location is then calibrated.

Publication date: 29-05-2014

HEAD-MOUNTED DISPLAY RESOURCE MANAGEMENT

Number: US20140145914A1
Assignee:

A system and related methods for resource management in a head-mounted display device are provided. In one example, the head-mounted display device includes a plurality of sensors and a display system for presenting holographic objects. A resource management program is configured to operate a selected sensor in a default power mode to achieve a selected fidelity. The program receives user-related information from one or more of the sensors, and determines whether target information is detected. Where target information is detected, the program adjusts the selected sensor to operate in a reduced power mode that uses less power than the default power mode.

1. A resource management system, comprising:
a head-mounted display device configured to be worn by a user and operatively connected to a computing device, the head-mounted display device including a plurality of sensors and a display system for presenting holographic objects; and
a resource management program executed by a processor of the computing device, the resource management program configured to:
operate a selected sensor of the plurality of sensors in a default power mode to achieve a selected level of sensor fidelity;
receive user-related information from one or more of the plurality of sensors, the user-related information selected from the group consisting of audio information, user gaze information, user location information, user movement information, user image information, and user physiological information;
determine whether target information is detected in the user-related information; and
where the target information is detected, adjust the selected sensor to operate in a reduced power mode that uses less power than the default power mode, thereby achieving a reduced level of sensor fidelity.
2. The resource management system of claim 1, wherein the target information is selected from the group consisting of context-identifying audio information, a user gaze that is fixed on ...
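The power-mode policy in the claims amounts to a small state machine. A hedged sketch, with an invented sensor name and target condition: a sensor runs in its default (full-power, full-fidelity) mode until target information is detected in the user-related information, then drops to a reduced power mode.

```python
class ManagedSensor:
    def __init__(self, name):
        self.name = name
        self.mode = "default"      # full power, selected fidelity

    def update(self, user_info, detect_target):
        # once target information is detected, trade fidelity for power
        if detect_target(user_info):
            self.mode = "reduced"
        return self.mode

def gaze_is_fixed(info):
    """An example target condition: the user's gaze has settled."""
    return info.get("gaze_fixed", False)

sensor = ManagedSensor("eye_tracker")
print(sensor.update({"gaze_fixed": False}, gaze_is_fixed))  # 'default'
print(sensor.update({"gaze_fixed": True}, gaze_is_fixed))   # 'reduced'
```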

Publication date: 25-11-2014

Multiplayer game invitation system

Number: US0008894484B2

A system and related methods for inviting a potential player to participate in a multiplayer game via a user head-mounted display device are provided. In one example, a potential player invitation program receives user voice data and determines that the user voice data is an invitation to participate in a multiplayer game. The program receives eye-tracking information, depth information, facial recognition information, potential player head-mounted display device information, and/or potential player voice data. The program associates the invitation with the potential player using the eye-tracking information, the depth information, the facial recognition information, the potential player head-mounted display device information, and/or the potential player voice data. The program matches a potential player account with the potential player. The program receives an acceptance response from the potential player, and joins the potential player account with a user account in participating in ...

Publication date: 13-04-2017

TOUCH AND SOCIAL CUES AS INPUTS INTO A COMPUTER

Number: US20170103582A1
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC

A system for automatically displaying virtual objects within a mixed reality environment is described. In some embodiments, a see-through head-mounted display device (HMD) identifies a real object (e.g., a person or book) within a field of view of the HMD, detects one or more interactions associated with the real object, and automatically displays virtual objects associated with the real object if the one or more interactions involve touching or satisfy one or more social rules stored in a social rules database. The one or more social rules may be used to infer a particular social relationship by considering the distance to another person, the type of environment (e.g., at home or work), and particular physical interactions (e.g., handshakes or hugs). The virtual objects displayed on the HMD may depend on the particular social relationship inferred (e.g., a friend or acquaintance).

1. A method, comprising:
identifying a particular person within a field of view of a mobile device;
detecting that a person associated with the mobile device has performed a gesture at a point in time coinciding with an electronically scheduled meeting between the person associated with the mobile device and the particular person;
acquiring virtual data associated with an augmented reality environment displayed to the particular person in response to detecting that the person associated with the mobile device has performed the gesture at the point in time coinciding with the electronically scheduled meeting between the person associated with the mobile device and the particular person; and
displaying the virtual data using the mobile device.
2. The method of claim 1, wherein:
the detecting that the person associated with the mobile device has performed the gesture at the point in time coinciding with the electronically scheduled meeting between the person and the particular person includes acquiring an electronic calendar for the person, the electronic calendar includes the electronically ...

Publication date: 14-03-2017

Executable virtual objects associated with real objects

Number: US0009594537B2

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.

Publication date: 29-12-2016

AUGMENTED REALITY VIRTUAL MONITOR

Number: US20160379417A1
Assignee: Microsoft Technology Licensing, LLC

A head-mounted display includes a see-through display and a virtual reality engine. The see-through display is configured to visually augment an appearance of a physical space to a user viewing the physical space through the see-through display. The virtual reality engine is configured to cause the see-through display to visually present a virtual monitor that appears to be integrated with the physical space to a user viewing the physical space through the see-through display.

Publication date: 12-09-2017

Indicating out-of-view augmented reality images

Number: US0009761057B2

Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object.
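A minimal sketch of the out-of-view indication above, with an assumed field of view and invented indicator labels: for an object whose bearing falls outside the horizontal field of view, the positional information is emitted as the direction the user should turn.

```python
def out_of_view_indicator(object_angle_deg, fov_deg=90.0):
    """object_angle_deg: bearing of the object relative to the user's gaze,
    positive to the right. Returns None when the object is in view."""
    half = fov_deg / 2
    if -half <= object_angle_deg <= half:
        return None                       # visible: no indicator needed
    return "turn right" if object_angle_deg > 0 else "turn left"

print(out_of_view_indicator(20))    # None
print(out_of_view_indicator(120))   # 'turn right'
print(out_of_view_indicator(-60))   # 'turn left'
```

A real implementation would render the indication (an arrow, a halo at the display edge) rather than return a string, and would handle elevation as well as azimuth.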

Publication date: 21-05-2013

Mapping a natural input device to a legacy system

Number: US0008448094B2

Systems and methods for mapping natural input devices to legacy system inputs are disclosed. One example system may include a computing device having an algorithmic preprocessing module configured to receive input data containing a natural user input and to identify the natural user input in the input data. The computing device may further include a gesture module coupled to the algorithmic preprocessing module, the gesture module being configured to associate the natural user input to a gesture in a gesture library. The computing device may also include a mapping module to map the gesture to a legacy controller input, and to send the legacy controller input to a legacy system in response to the natural user input.
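The pipeline in the abstract (identify a natural user input, associate it with a gesture in a gesture library, map the gesture to a legacy controller input) reduces to two lookups. The gesture and button names below are invented for illustration.

```python
# gesture module: natural input -> gesture in the gesture library
GESTURE_LIBRARY = {"hand_swipe_left": "swipe_left", "hand_push": "push"}

# mapping module: gesture -> legacy controller input
LEGACY_MAPPING = {"swipe_left": "DPAD_LEFT", "push": "BUTTON_A"}

def natural_to_legacy(natural_input):
    gesture = GESTURE_LIBRARY.get(natural_input)
    if gesture is None:
        return None                   # unrecognized natural input
    return LEGACY_MAPPING.get(gesture)

print(natural_to_legacy("hand_push"))        # 'BUTTON_A'
print(natural_to_legacy("hand_swipe_left"))  # 'DPAD_LEFT'
```

The returned legacy input would then be sent to the legacy system exactly as if a physical controller had produced it.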

Publication date: 13-02-2014

AUGMENTED REALITY DISPLAY OF SCENE BEHIND SURFACE

Number: US2014043433A1
Assignee:

Embodiments are disclosed that relate to augmenting an appearance of a surface via a see-through display device. For example, one disclosed embodiment provides, on a computing device comprising a see-through display device, a method of augmenting an appearance of a surface. The method includes acquiring, via an outward-facing image sensor, image data of a first scene viewable through the display. The method further includes recognizing a surface viewable through the display based on the image data and, in response to recognizing the surface, acquiring a representation of a second scene comprising one or more of a scene located physically behind the surface viewable through the display and a scene located behind a surface contextually related to the surface viewable through the display. The method further includes displaying the representation via the see-through display.

Publication date: 13-10-2020

Constructing augmented reality environment with pre-computed lighting

Number: US0010803670B2

Embodiments related to efficiently constructing an augmented reality environment with global illumination effects are disclosed. For example, one disclosed embodiment provides a method of displaying an augmented reality image via a display device. The method includes receiving image data, the image data capturing an image of a local environment of the display device, and identifying a physical feature of the local environment via the image data. The method further includes constructing an augmented reality image of a virtual structure for display over the physical feature in spatial registration with the physical feature from a viewpoint of a user, the augmented reality image comprising a plurality of modular virtual structure segments arranged in adjacent locations to form the virtual structure feature, each modular virtual structure segment comprising a pre-computed global illumination effect, and outputting the augmented reality image to the display device.
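An illustrative sketch of the assembly step above: because each modular segment carries a pre-computed global illumination effect, building the virtual structure at run time is a lookup plus placement rather than a lighting solve. The segment types and baked values are invented.

```python
# baked offline, keyed by segment type (values are illustrative)
PRECOMPUTED_GI = {
    "wall": 0.62,
    "corner": 0.41,
    "doorway": 0.55,
}

def build_structure(segment_types, origin=(0, 0)):
    """Place modular segments in adjacent locations, attaching the
    pre-computed lighting instead of solving global illumination live."""
    structure = []
    for i, seg in enumerate(segment_types):
        structure.append({
            "type": seg,
            "position": (origin[0] + i, origin[1]),   # adjacent locations
            "illumination": PRECOMPUTED_GI[seg],      # no run-time GI solve
        })
    return structure

segments = build_structure(["wall", "doorway", "wall"])
print([s["illumination"] for s in segments])   # [0.62, 0.55, 0.62]
```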

Publication date: 13-09-2016

Mixed reality interactions

Number: US0009443354B2

Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. An interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.

Publication date: 05-05-2015

Recognition of image on external display

Number: US0009024844B2

Embodiments are disclosed that relate to the recognition via a see-through display system of an object displayed on an external display device at which a user of the see-through display system is gazing. For example, one embodiment provides a method of operating a see-through display system comprising acquiring an image of an external display screen located in the background scene via an outward facing image sensor, determining via a gaze detection subsystem a location on the external display screen at which the user is gazing, obtaining an identity of an object displayed on the external display screen at the location determined, and performing an action based upon the identity of the object.

Publication date: 30-08-2016

Depth of field control for see-thru display

Number: US0009430055B2

One embodiment provides a method for controlling a virtual depth of field perceived by a wearer of a see-thru display device. The method includes estimating the ocular depth of field of the wearer and projecting virtual imagery with a specified amount of blur. The amount of blur is determined as a function of the ocular depth of field. Another embodiment provides a method for controlling an ocular depth of field of a wearer of a see-thru display device. This method includes computing a target value for the depth of field and increasing the pixel brightness of the virtual imagery presented to the wearer. The increase in pixel brightness contracts the wearer's pupils and thereby deepens the depth of field to the target value.
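A toy version of the first method above, with an assumed formula that is not from the patent: blur applied to virtual imagery is zero inside the wearer's estimated ocular depth of field and grows with distance outside it.

```python
def blur_amount(point_depth_m, dof_near_m, dof_far_m, gain=2.0):
    """Zero blur inside the estimated depth of field; outside it, blur
    proportional to the distance beyond the near/far limit (gain assumed)."""
    if dof_near_m <= point_depth_m <= dof_far_m:
        return 0.0
    if point_depth_m < dof_near_m:
        return gain * (dof_near_m - point_depth_m)
    return gain * (point_depth_m - dof_far_m)

print(blur_amount(1.5, 1.0, 3.0))  # 0.0  (inside the depth of field)
print(blur_amount(4.0, 1.0, 3.0))  # 2.0  (1 m beyond the far limit, gain 2)
```

The second method in the abstract works the other way around: instead of blurring imagery to match the eye, it raises pixel brightness so the pupils contract and the ocular depth of field itself deepens to a target value.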

Publication date: 19-12-2017

Intelligent gameplay photo capture

Number: US0009848106B2

Implementations for identifying, capturing, and presenting high-quality photo-representations of acts occurring during play of a game that employs motion tracking input technology are disclosed. As one example, a method is disclosed that includes capturing via an optical interface, a plurality of photographs of a player in a capture volume during play of the electronic game. The method further includes for each captured photograph of the plurality of captured photographs, comparing an event-based scoring parameter to an event depicted by or corresponding to the captured photograph. The method further includes assigning respective scores to the plurality of captured photographs based, at least in part, on the comparison to the event-based scoring parameter. The method further includes associating the captured photographs at an electronic storage media with the respective scores assigned to the captured photographs.
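The scoring step above can be sketched with an invented event taxonomy: each captured photo is compared against an event-based scoring parameter, and the best-scoring photos can then be surfaced to the player.

```python
# event-based scoring parameter (event names and weights are illustrative)
EVENT_SCORES = {"goal": 10, "jump": 5, "idle": 1}

def score_photos(photos):
    """Attach a score to each photo from the event it depicts."""
    return [dict(photo, score=EVENT_SCORES.get(photo["event"], 0))
            for photo in photos]

photos = [{"id": 1, "event": "idle"}, {"id": 2, "event": "goal"}]
ranked = sorted(score_photos(photos), key=lambda p: p["score"], reverse=True)
print([p["id"] for p in ranked])   # [2, 1]
```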

Publication date: 29-12-2016

SKELETAL CONTROL OF THREE-DIMENSIONAL VIRTUAL WORLD

Number: US20160378197A1
Assignee: Microsoft Technology Licensing, LLC

A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three-dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured control, and a three-dimensional virtual world is controlled responsive to the gestured control.
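A minimal sketch of translating a hand-joint position into a gestured control: the hand's position relative to another joint (the shoulder here) is mapped to a world command. Joint layout, thresholds, and command names are illustrative, not the patented mapping.

```python
def gestured_control(hand, shoulder, threshold=0.25):
    """Map the hand joint's position relative to the shoulder joint
    to a control for the three-dimensional virtual world."""
    dx = hand[0] - shoulder[0]
    dy = hand[1] - shoulder[1]
    if dy > threshold:
        return "raise_terrain"          # hand held high
    if abs(dx) > threshold:
        return "pan_right" if dx > 0 else "pan_left"
    return None                         # hand at rest: no gestured control

print(gestured_control((0.1, 0.5), (0.0, 0.0)))   # 'raise_terrain'
print(gestured_control((0.4, 0.0), (0.0, 0.0)))   # 'pan_right'
```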

Publication date: 24-10-2017

Augmented reality display of scene behind surface

Number: US0009799145B2

Embodiments are disclosed that relate to augmenting an appearance of a surface via a see-through display device. For example, one disclosed embodiment provides, on a computing device comprising a see-through display device, a method of augmenting an appearance of a surface. The method includes acquiring, via an outward-facing image sensor, image data of a first scene viewable through the display. The method further includes recognizing a surface viewable through the display based on the image data and, in response to recognizing the surface, acquiring a representation of a second scene comprising one or more of a scene located physically behind the surface viewable through the display and a scene located behind a surface contextually related to the surface viewable through the display. The method further includes displaying the representation via the see-through display.

Publication date: 23-08-2016

Local rendering of text in image

Number: US0009424767B2

Various embodiments are disclosed that relate to enhancing the display of images comprising text on various computing device displays. For example, one disclosed embodiment provides, on a computing device, a method of displaying an image, the method including receiving from a remote computing device image data representing a non-text portion of the image, receiving from the remote computing device unrendered text data representing a text portion of the image, rendering the unrendered text data based upon local contextual rendering information to form locally rendered text data, compositing the locally rendered text data and the image data to form a composited image, and providing the composited image to a display.
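A toy compositing of the two payloads named above: pre-rendered image data for the non-text portion plus unrendered text that is rendered locally (here, trivially, by a style function writing characters into a row of a character-grid "image"), then composited for display. Everything about the rendering is, of course, a stand-in for real glyph rasterization.

```python
def composite(image_rows, text, row, local_style=str.upper):
    """Render the text with local contextual style, then composite it
    over the received image data (a list of equal-length strings)."""
    rendered = local_style(text)               # local contextual rendering
    rows = list(image_rows)
    rows[row] = rendered + rows[row][len(rendered):]
    return rows

image = ["..........", ".........."]           # the non-text image portion
print(composite(image, "hi", 0))               # ['HI........', '..........']
```

Shipping text unrendered lets the client pick fonts, sizes, and hinting that suit its own display, which is the point of the local-rendering step.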

Publication date: 06-03-2018

Method to control perspective for a camera-controlled computer

Number: US0009910509B2

Systems, methods and computer readable media are disclosed for controlling perspective of a camera-controlled computer. A capture device captures user gestures and sends corresponding data to a recognizer engine. The recognizer engine analyzes the data with a plurality of filters, each filter corresponding to a gesture. Based on the output of those filters, a perspective control is determined, and a display device displays a new perspective corresponding to the perspective control.

Publication date: 09-05-2023

Pump and method for producing sliding layer

Number: CN116085259A
Assignee:

The invention relates to a pump, in particular a vacuum pump, comprising a sliding layer, wherein the sliding layer comprises an oxide layer, in particular formed by anodic oxidation in an acidic electrolyte, and a polymer-based sealant, in particular based on a fluoropolymer, and wherein the oxide layer is at least partially covered by and/or impregnated with the sealant. Furthermore, the invention relates to a method for producing a sliding layer, comprising the following steps: a) producing an oxide layer in an electrolyte, preferably comprising oxalic acid, in particular by anodic oxidation; and b) coating the oxide layer with a sealant.

Publication date: 19-06-2014

Method to Control Perspective for a Camera-Controlled Computer

Number: US20140168075A1
Assignee:

Systems, methods and computer readable media are disclosed for controlling perspective of a camera-controlled computer. A capture device captures user gestures and sends corresponding data to a recognizer engine. The recognizer engine analyzes the data with a plurality of filters, each filter corresponding to a gesture. Based on the output of those filters, a perspective control is determined, and a display device displays a new perspective corresponding to the perspective control.

1. A method for changing a perspective of a virtual scene displayed on a display device, comprising:
receiving data captured by a capture device, the capture device capturing movement or position of at least part of a user or an object controlled by the user;
analyzing the data to determine that the user or the object moved in a direction; and
in response to determining that the user or the object moved in the direction, modifying the perspective of the virtual scene displayed on the display device by moving the perspective of the virtual scene in the direction that the user or the object moved.
2. The method of claim 1, wherein analyzing the data to determine that the user or the object moved in the direction comprises determining that the user or the object moved to the user's left; and
wherein modifying the perspective of the virtual scene comprises moving the perspective of the virtual scene to the user's left.
3. The method of claim 1, further comprising:
magnifying a text displayed on the display device in response to determining that the user has moved away from the display device.
4. The method of claim 3, further comprising:
maintaining a size of at least a portion of the virtual scene while magnifying the text.
5. The method of claim 3, further comprising:
maintaining a size of a second text while magnifying the text.
6. The method of claim 1, further comprising:
shrinking a text displayed on the display device in response to determining that the user has moved closer to the display device.
7 ...

Publication date: 08-12-2011

AUTOMATIC DEPTH CAMERA AIMING

Number: US20110299728A1
Assignee: MICROSOFT CORPORATION

Automatic depth camera aiming is provided by a method which includes receiving from the depth camera one or more observed depth images of a scene. The method further includes, if a point of interest of a target is found within the scene, determining if the point of interest is within a far range relative to the depth camera. The method further includes, if the point of interest of the target is within the far range, operating the depth camera with a far logic, or if the point of interest of the target is not within the far range, operating the depth camera with a near logic.
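The aiming decision in the abstract is a simple branch once a point of interest has been found: choose far or near logic from its distance. The 2 m boundary and the returned labels below are invented for illustration.

```python
def choose_logic(point_depth_m, far_boundary_m=2.0):
    """Return which aiming logic to run for the tracked point of interest."""
    if point_depth_m is None:
        return "search"                 # no point of interest found yet
    return "far" if point_depth_m >= far_boundary_m else "near"

print(choose_logic(3.5))   # 'far'
print(choose_logic(1.2))   # 'near'
print(choose_logic(None))  # 'search'
```

Far and near logic would differ in how aggressively the camera re-aims (tilt, zoom, exposure) for a distant versus a close target.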

Publication date: 06-10-2016

Combining Gestures Beyond Skeletal

Number: US20160291700A1
Assignee:

Systems, methods and computer readable media are disclosed for gesture input beyond skeletal. A user's movement or body position is captured by a capture device of a system. Further, non-user-position data is received by the system, such as controller input by the user, an item that the user is wearing, a prop under the control of the user, or a second user's movement or body position. The system incorporates both the user-position data and the non-user-position data to determine one or more inputs the user made to the system.

Publication date: 22-05-2018

Multi-input user authentication on display device

Number: US0009977882B2

Embodiments are disclosed that relate to authenticating a user of a display device. For example, one disclosed embodiment includes displaying one or more virtual images on the display device, wherein the one or more virtual images include a set of augmented reality features. The method further includes identifying one or more movements of the user via data received from a sensor of the display device, and comparing the identified movements of the user to a predefined set of authentication information for the user that links user authentication to a predefined order of the augmented reality features. If the identified movements indicate that the user selected the augmented reality features in the predefined order, then the user is authenticated, and if the identified movements indicate that the user did not select the augmented reality features in the predefined order, then the user is not authenticated.
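The order-based check described above reduces to a sequence comparison. A minimal sketch, with invented feature names: the user is authenticated only if the augmented reality features were selected in the predefined order stored for that user.

```python
def authenticate(selected_features, predefined_order):
    """True only when the features were selected in the stored order."""
    return selected_features == list(predefined_order)

stored = ["red_cube", "blue_sphere", "green_cone"]
print(authenticate(["red_cube", "blue_sphere", "green_cone"], stored))  # True
print(authenticate(["blue_sphere", "red_cube", "green_cone"], stored))  # False
```

The selections themselves would come from the identified movements (gaze, gesture) that the display's sensors report.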

Publication date: 23-06-2020

Combining gestures beyond skeletal

Number: US0010691216B2

Systems, methods and computer readable media are disclosed for gesture input beyond skeletal. A user's movement or body position is captured by a capture device of a system. Further, non-user-position data is received by the system, such as controller input by the user, an item that the user is wearing, a prop under the control of the user, or a second user's movement or body position. The system incorporates both the user-position data and the non-user-position data to determine one or more inputs the user made to the system.

Publication date: 09-02-2017

CONSTRUCTING AUGMENTED REALITY ENVIRONMENT WITH PRE-COMPUTED LIGHTING

Number: US20170039773A1
Assignee: Microsoft Technology Licensing, LLC

Embodiments related to efficiently constructing an augmented reality environment with global illumination effects are disclosed. For example, one disclosed embodiment provides a method of displaying an augmented reality image via a display device. The method includes receiving image data, the image data capturing an image of a local environment of the display device, and identifying a physical feature of the local environment via the image data. The method further includes constructing an augmented reality image of a virtual structure for display over the physical feature in spatial registration with the physical feature from a viewpoint of a user, the augmented reality image comprising a plurality of modular virtual structure segments arranged in adjacent locations to form the virtual structure feature, each modular virtual structure segment comprising a pre-computed global illumination effect, and outputting the augmented reality image to the display device.

1. In a display device, a method of displaying an augmented reality image comprising lighting effects, the method comprising:
receiving image data, the image data capturing an image of a local environment of the display device;
identifying a physical feature of the local environment via the image data;
constructing an augmented reality image of a virtual structure for display over the physical feature in spatial registration with the physical feature from a viewpoint of a user, the augmented reality image comprising a plurality of modular virtual structure segments arranged in adjacent locations to form the virtual structure feature, each modular virtual structure segment comprising a pre-computed global illumination effect; and
outputting the augmented reality image to the display device.
2. The method of claim 1, wherein identifying a physical feature of the local environment comprises performing a mesh analysis of the local environment.
3. The method of claim 1, wherein the physical feature comprises one or ...

Publication date: 15-11-2016

Augmented reality virtual monitor

Number: US0009497501B2

A head-mounted display includes a see-through display and a virtual reality engine. The see-through display is configured to visually augment an appearance of a physical space to a user viewing the physical space through the see-through display. The virtual reality engine is configured to cause the see-through display to visually present a virtual monitor that appears to be integrated with the physical space to a user viewing the physical space through the see-through display.

Publication date: 26-05-2015

Matching physical locations for shared virtual experience

Number: US0009041739B2

Embodiments for matching participants in a virtual multiplayer entertainment experience are provided. For example, one embodiment provides a method including receiving from each user of a plurality of users a request to join the virtual multiplayer entertainment experience, receiving from each user of the plurality of users information regarding characteristics of a physical space in which each user is located, and matching two or more users of the plurality of users for participation in the virtual multiplayer entertainment experience based on the characteristics of the physical space of each of the two or more users.
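The matching step above can be sketched with an invented similarity rule: users are paired when their reported physical-space characteristics (free-floor area here, as a single stand-in characteristic) are close enough for a shared virtual experience.

```python
def match_users(requests, max_area_diff=2.0):
    """requests: list of (user, floor_area_m2). Greedily pair users whose
    physical spaces are compatible; unmatched users remain waiting."""
    pending, matches = list(requests), []
    while len(pending) >= 2:
        user, area = pending.pop(0)
        for i, (other, other_area) in enumerate(pending):
            if abs(area - other_area) <= max_area_diff:
                matches.append((user, other))
                pending.pop(i)
                break
    return matches

print(match_users([("a", 10.0), ("b", 25.0), ("c", 11.5)]))  # [('a', 'c')]
```

A production matcher would weigh several characteristics (room shape, obstacles, furniture) rather than one scalar, but the compatibility test plays the same role.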

Publication date: 08-05-2014

MIXED-REALITY ARENA

Number: US20140125698A1
Assignee:

A computing system comprises a see-through display device, a logic subsystem, and a storage subsystem storing instructions. When executed by the logic subsystem, the instructions display on the see-through display device a virtual arena, a user-controlled avatar, and an opponent avatar. The virtual arena appears to be integrated within a physical space when the physical space is viewed through the see-through display device. In response to receiving a user input, the instructions may also display on the see-through display device an updated user-controlled avatar.

1. A computing system providing a mixed-reality fighting game, the computing system comprising:
a see-through display device;
a logic subsystem; and
a storage subsystem storing instructions that, when executed by the logic subsystem:
display on the see-through display device a virtual arena, a user-controlled avatar, and an opponent avatar, the virtual arena integrated within a physical space when the physical space is viewed through the see-through display device; and
in response to receiving a user input, display on the see-through display device an updated user-controlled avatar based on the user input.
2. The computing system of claim 1, wherein a position of the user-controlled avatar is dynamically updated based on a position of a user providing the user input.
3. The computing system of claim 1, wherein a position of the user-controlled avatar is updated independently from a position of a user providing the user input.
4. The computing device of claim 1, wherein an appearance of the user-controlled avatar is derived from an appearance of a user providing the user input.
5. The computing system of claim 1, wherein the user input is received via a gesture input detection device configured to observe a gesture of a user providing the user input.
6. The computing system of claim 1, wherein the user input is received via a game controller.
7. The computing system of claim 1, wherein the user input is ...

Publication date: 08-11-2016

Skeletal control of three-dimensional virtual world

Number: US0009489053B2

A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three-dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured control, and a three-dimensional virtual world is controlled responsive to the gestured control.
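As a concrete illustration of the translation step this abstract describes, the sketch below maps a hand joint's position relative to a shoulder joint into a coarse gestured control. The function name, coordinate convention, and thresholds are assumptions for illustration only, not the patent's method.

```python
# Hypothetical sketch: a hand joint's offset from the shoulder joint
# (meters, from a depth-camera skeleton) becomes a "gestured control".
# All names and thresholds here are illustrative assumptions.

def gestured_control(hand_xyz, shoulder_xyz, threshold=0.25):
    """Return a coarse control token from the hand's relative offset."""
    dx = hand_xyz[0] - shoulder_xyz[0]
    dy = hand_xyz[1] - shoulder_xyz[1]
    if dy > threshold:
        return "raise"      # hand held well above the shoulder
    if dx > threshold:
        return "pan_right"  # hand extended out to the side
    if dx < -threshold:
        return "pan_left"
    return "idle"

control = gestured_control((0.5, 0.1, 2.0), (0.1, 0.0, 2.0))  # -> "pan_right"
```

A virtual world could poll this token each frame and, for example, pan the camera while the token stays `pan_right`.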

Publication date: 01-03-2012

GESTURE RECOGNIZER SYSTEM ARCHITECTURE

Number: US20120050157A1
Assignee: MICROSOFT CORPORATION

Systems, methods and computer readable media are disclosed for a gesture recognizer system architecture. A recognizer engine is provided, which receives user motion data and provides that data to a plurality of filters. A filter corresponds to a gesture that may then be tuned by an application receiving information from the gesture recognizer, so that the specific parameters of the gesture, such as an arm acceleration for a throwing gesture, may be set on a per-application level, or multiple times within a single application. Each filter may output to an application using it a confidence level that the corresponding gesture occurred, as well as further details about the user motion data.

1. A method for providing recognition of gestures made by a user using a gesture filter representing a gesture, comprising: receiving information indicative of a user motion or pose, the information being captured by a camera; determining an output of the gesture filter based on the information; and sending the output to a first and a second application.
2. The method of claim 1, further comprising: receiving second information indicative of a user motion or pose, the second information being captured by the camera; determining a second output of the gesture filter based on the second information; and sending the first and second applications the second output.
3. The method of claim 1, further comprising: receiving second information indicative of sound captured by a microphone; and sending an indication of the second information to the first and second applications.
4. The method of claim 1, wherein the gesture filter has a plurality of contexts, and a parameter of each context of the base information about the gesture is unique.
5. The method of claim 1, wherein the gesture filter comprises a parameter, and further comprising: setting a value for the parameter in response to receiving data captured by the depth camera indicative of a change in the user's fatigue, an ...
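The per-application tuning described above can be sketched as a filter object whose parameter (here, a throw's minimum arm speed) is set differently by each application while the scoring logic stays shared. Class name, parameter, and the confidence formula are invented for illustration.

```python
# Illustrative sketch of a tunable gesture filter that reports a
# 0..1 confidence; the numbers and names are assumptions, not the
# patent's recognizer engine.

class ThrowFilter:
    def __init__(self, min_arm_speed=2.0):  # tunable per application
        self.min_arm_speed = min_arm_speed

    def evaluate(self, arm_speed):
        """Return a 0..1 confidence that a throw gesture occurred."""
        if arm_speed <= 0:
            return 0.0
        return min(1.0, arm_speed / (2 * self.min_arm_speed))

casual_game = ThrowFilter(min_arm_speed=1.0)  # lenient tuning
sports_game = ThrowFilter(min_arm_speed=3.0)  # strict tuning

speed = 3.0  # meters/second, from captured motion data
confidences = (casual_game.evaluate(speed), sports_game.evaluate(speed))
```

The same motion sample yields a different confidence in each application, which is the per-application tuning the abstract describes.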

Publication date: 07-06-2012

Managing Virtual Ports

Number: US20120144348A1
Assignee: MICROSOFT CORPORATION

Techniques for managing virtual ports are disclosed herein. Each such virtual port may have different associated features such as, for example, privileges, rights or options. When one or more users are in a capture scene of a gesture based system, the system may associate virtual ports with the users and maintain the virtual ports. Also provided are techniques for disassociating virtual ports from users or swapping virtual ports between two or more users.

1. A method for managing a gesture based computing environment, comprising: receiving a plurality of depth images of a capture area; generating a model of a first user based on at least part of the first user being depicted in one of the depth images and based at least in part on depth values of pixels of one of the depth images; associating the first user with a primary virtual port, the primary virtual port having associated therewith a feature that a secondary virtual port does not have, a user of the computing environment being able to be bound or unbound to the primary virtual port, the user being bound to the primary virtual port being indicative of the user being able to provide input to the computing environment via the primary virtual port; identifying a second user based on at least part of the second user being depicted in one of the depth images; associating the second user with the secondary virtual port in response to identifying the second user; disassociating the first user from the primary virtual port in response to determining that the first user is no longer sufficiently depicted in a subsequent depth image based at least in part on depth values of pixels in the subsequent depth image; and updating an association of the second user from the secondary virtual port to the primary virtual port in response to dissociating the first user from the primary virtual port.
2. The method of claim 1, wherein determining that the first user is no longer sufficiently depicted in the subsequent depth image based at ...
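The port bookkeeping in claim 1 can be sketched as a small manager: the first detected user binds to the primary port, a second user to the secondary port, and the secondary user is promoted when the primary user leaves the capture scene. This is one interpretation of the claim, with invented names, not the patent's implementation.

```python
# Minimal sketch of primary/secondary virtual-port management;
# names and structure are illustrative assumptions.

class PortManager:
    def __init__(self):
        self.ports = {"primary": None, "secondary": None}

    def associate(self, user):
        """Bind a newly detected user to the first free port."""
        slot = "primary" if self.ports["primary"] is None else "secondary"
        self.ports[slot] = user
        return slot

    def user_left(self, user):
        """Promote the secondary user when the primary user disappears."""
        if self.ports["primary"] == user:
            self.ports["primary"] = self.ports["secondary"]
            self.ports["secondary"] = None

mgr = PortManager()
mgr.associate("alice")   # first user -> primary port
mgr.associate("bob")     # second user -> secondary port
mgr.user_left("alice")   # alice no longer depicted in the depth image
primary_user = mgr.ports["primary"]  # -> "bob"
```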

Publication date: 28-06-2012

INTERACTING WITH A COMPUTER BASED APPLICATION

Number: US20120165096A1
Assignee: MICROSOFT CORPORATION

A computing system runs an application (e.g., video game) that interacts with one or more actively engaged users. One or more physical properties of a group are sensed. The group may include the one or more actively engaged users and/or one or more entities not actively engaged with the application. The computing system will determine that the group (or the one or more entities not actively engaged with the application) have performed a predetermined action. A runtime condition of the application is changed in response to determining that the group (or the one or more entities not actively engaged with the computer based application) have performed the predetermined action. Examples of changing a runtime condition include moving an object, changing a score or changing an environmental condition of a video game.

1. A method for interacting with a computer based application, comprising: performing the computer based application including interacting with one or more actively engaged users; automatically sensing one or more physical properties of one or more entities not actively engaged with the computer based application; determining that the one or more entities not actively engaged with the computer based application have performed a predetermined action; automatically changing a runtime condition of the computer based application in response to determining that one or more entities not actively engaged with the computer based application have performed the predetermined action; and automatically reporting the changing of the runtime condition in a user interface of the computer based application.
2. The method of claim 1, wherein: the automatically sensing one or more physical properties includes sensing a depth image; the predetermined action is a gesture; and the determining that the one or more entities not actively engaged with the computer based application have performed the predetermined action includes using the depth image to identify the gesture.
3. The method ...

Publication date: 22-11-2012

DETERMINE INTENDED MOTIONS

Number: US20120293518A1
Assignee: MICROSOFT CORPORATION

It may be desirable to apply corrective data to aspects of a captured image or the user-performed gesture for display of a visual representation that corresponds to the corrective data. The captured motion may be any motion in the physical space that is captured by the capture device, such as a camera. Aspects of a skeletal or mesh model of a person, that is generated based on the image data captured by the capture device, may be modified prior to animation. The modification may be made to the model generated from image data that represents a target or a target's motion, including user gestures, in the physical space. For example, certain joints of a skeletal model may be readjusted or realigned. A model of a target may be modified by applying differential correction, magnetism principles, binary snapping, confining virtual movement to defined spaces, or the like.

1. A system for modifying data representative of captured motion, the method comprising: a processor; and a memory communicatively coupled to the processor when the system is operational, the memory bearing processor-executable instructions that, when executed on the processor, cause the system to at least: receive image data of a scene, the image data including data representative of captured motion, the image data having been captured with a camera; generate a model of the captured motion based on the image data; modify at least a portion of a size of the model to correspond to a digital representation of the model; and render an avatar using the digital representation.
2. The system of claim 1, wherein the captured motion corresponds to a first user and the image data includes data representative of a second captured motion of a second user, and wherein the memory further bears processor-executable instructions that, when executed on the processor, cause the system to at least: generate a second model of the second captured motion based on the image data; modify at ...
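One of the correction strategies the abstract lists, confining virtual movement to defined spaces, can be sketched as clamping each joint coordinate into an allowed box before the avatar is rendered. The bounds and function name are invented for illustration and are not taken from the patent.

```python
# Hedged sketch: confine a skeletal-model joint to a defined space
# by clamping each coordinate; bounds are illustrative assumptions.

def confine_joint(joint, lo=(-1.0, 0.0, 0.5), hi=(1.0, 2.0, 4.0)):
    """Clamp an (x, y, z) joint position into the allowed region."""
    return tuple(max(l, min(h, v)) for v, l, h in zip(joint, lo, hi))

# A joint tracked slightly outside the allowed box gets snapped back in.
corrected = confine_joint((1.7, -0.2, 2.0))  # -> (1.0, 0.0, 2.0)
```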

Publication date: 29-11-2012

AVATARS OF FRIENDS AS NON-PLAYER-CHARACTERS

Number: US20120302351A1
Assignee: MICROSOFT CORPORATION

In accordance with one or more aspects, for a particular user one or more other users associated with that particular user are identified based on a social graph of that particular user. An avatar of at least one of the other users is obtained and included as a non-player-character in a game being played by that particular user. The particular user can provide requests to interact with the avatar of the second user (e.g., calling out the name of the second user, tapping the avatar of the second user on the shoulder, etc.), these requests being invitations for the second user to join in a game with the first user. An indication of such an invitation is presented to the second user, who can, for example, accept the invitation to join in a game with the first user.

1. A method comprising: identifying, based on a social graph of a first user, one or more other users associated with the first user; obtaining, for at least one of the one or more other users, an avatar of the other user; and including, as non-player-characters in a game being played by the first user, the obtained avatars of each of the at least one of the one or more other users.
2. A method as recited in claim 1, the including comprising including the obtained avatars as non-player-characters cheering on an avatar of the first user in the game.
3. A method as recited in claim 1, the including comprising including one of the obtained avatars as a ghost avatar following a path in the game that the first user took during a previous playing of the game.
4. A method as recited in claim 1, the including comprising including multiple copies of an obtained avatar as a dead avatar at a location in the game where the obtained avatar died while the game was previously played by the other user having the obtained avatar.
5. A method as recited in claim 1, the first user being logged into an online gaming service, and at least one of the one or more other users including a user that is not currently logged ...

Publication date: 06-12-2012

EMOTION-BASED USER IDENTIFICATION FOR ONLINE EXPERIENCES

Number: US20120311032A1
Assignee: MICROSOFT CORPORATION

Emotional response data of a particular user, when the particular user is interacting with each of multiple other users, is collected. Using the emotional response data, an emotion of the particular user when interacting with each of multiple other users is determined. Based on the determined emotions, one or more of the multiple other users are identified to share an online experience with the particular user.

1. A method comprising: determining, for each of multiple other users, an emotion of a first user when interacting with the other user; and identifying, based at least in part on the determined emotions, one or more of the multiple other users to share an online experience with the first user.
2. A method as recited in claim 1, further comprising: generating, based on the determined emotions, a score for each of the multiple other users; and presenting identifiers of one or more of the multiple other users having the highest scores.
3. A method as recited in claim 1, the determining comprising determining the emotion of the first user based on emotional responses of the first user during interaction of the first user with the other user during another online experience with the other user.
4. A method as recited in claim 1, the determining comprising determining the emotion of the first user based on emotional responses of the first user during interaction of the first user with the other user during an in-person experience with the other user.
5. A method as recited in claim 1, the determining comprising determining the emotion of the first user based on data indicating emotional responses of the first user in communications between the first user and the other user.
6. A method as recited in claim 1, the determining comprising determining, for each of multiple types of experiences with each of multiple other users, an emotion of the first user when interacting with the other user with the type of experience, the identifying comprising ...
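The scoring-and-ranking step of claim 2 can be sketched as follows: each candidate user receives a score derived from the emotions observed while interacting with them, and candidates are surfaced in descending score order. The emotion labels and weights are assumptions for illustration.

```python
# Illustrative sketch of emotion-based candidate ranking; the weight
# table is an invented example, not the patent's scoring scheme.

EMOTION_WEIGHT = {"happy": 2, "neutral": 0, "frustrated": -2}

def rank_candidates(observed):
    """observed: {user: [emotion, ...]} -> users by descending score."""
    scores = {
        user: sum(EMOTION_WEIGHT.get(e, 0) for e in emotions)
        for user, emotions in observed.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

ranked = rank_candidates({
    "carol": ["happy", "happy"],          # score 4
    "dave": ["frustrated", "neutral"],    # score -2
    "erin": ["happy", "frustrated"],      # score 0
})
# -> ["carol", "erin", "dave"]
```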

Publication date: 27-12-2012

Directed Performance In Motion Capture System

Number: US20120326976A1
Assignee: MICROSOFT CORPORATION

Techniques for enhancing the use of a motion capture system are provided. A motion capture system tracks movement and audio inputs from a person in a physical space, and provides the inputs to an application, which displays a virtual space on a display. Bodily movements can be used to define traits of an avatar in the virtual space. The person can be directed to perform the movements by a coaching avatar, or visual or audio cues in the virtual space. The application can respond to the detected movements and voice commands or voice volume of the person to define avatar traits and initiate pre-scripted audio-visual events in the virtual space to provide an entertaining experience. A performance in the virtual space can be captured and played back with automatic modifications, such as alterations to the avatar's voice or appearance, or modifications made by another person.

1. A motion capture system, comprising: a depth camera system, the depth camera system obtains images of a field of view; a display; and a processor in communication with the depth camera system and the display, the processor executes instructions to: display a virtual space comprising an avatar on the display, provide directions to a person, the person performs movements in the field of view in a first time period in response to the directions, process the images to detect the movements of the person, update the virtual space so that the avatar provides a performance, the avatar exhibits a trait and moves correspondingly to the movements of the person in real time as the person performs the movements in the performance, and provide a play back of the performance in a second time period, the avatar exhibits a modification to the trait and moves correspondingly to the movements of the person in the play back of the performance.
2. The motion capture system of claim 1, wherein: the trait comprises a costume of the avatar.
3. The motion capture system of claim 2, wherein: the costume of the avatar is ...

Publication date: 27-12-2012

TOTAL FIELD OF VIEW CLASSIFICATION FOR HEAD-MOUNTED DISPLAY

Number: US20120327116A1
Assignee: MICROSOFT CORPORATION

Virtual images are located for display in a head-mounted display (HMD) to provide an augmented reality view to an HMD wearer. Sensor data may be collected from on-board sensors provided on an HMD. Additionally, other data may be collected from external sources. Based on the collected sensor data and other data, the position and rotation of the HMD wearer's head relative to the HMD wearer's body and surrounding environment may be determined. After resolving the HMD wearer's head position, the HMD wearer's total field of view (TFOV) may be classified into regions. Virtual images may then be located in the classified TFOV regions to locate the virtual images relative to the HMD wearer's body and surrounding environment.

1. One or more computer storage media storing computer-useable instructions that, when used by one or more computing devices, cause the one or more computing devices to perform a method, the method comprising: receiving sensor data from one or more head-mounted display (HMD) on-board sensors; using the sensor data to determine an HMD wearer's head position and rotation relative to the HMD wearer's body and an environment surrounding the HMD wearer; classifying two or more regions within the HMD wearer's total field of view (TFOV) based on one or more pre-determined rules and the HMD wearer's head position and rotation relative to the HMD wearer's body and an environment surrounding the HMD wearer; and locating virtual images to be displayed by the HMD based on classifying the two or more regions within the HMD wearer's TFOV.
2. The one or more computer storage media of claim 1, wherein the one or more HMD on-board sensors comprise one or more selected from the following: a GPS sensor, an inertial measurement unit sensor, a depth sensor, a camera, an eye tracking sensor, a microphone, and a biometric sensor.
3. The one or more computer storage media of claim 1, wherein the method further comprises: ...
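The rule-based TFOV classification described above can be sketched with a single head-pitch angle bucketing content into regions for virtual-image placement. The region names, the pitch source (an assumed on-board IMU), and the boundary angles are all invented for illustration.

```python
# Hedged sketch: classify a TFOV region from head pitch (degrees);
# region names and thresholds are illustrative assumptions.

def classify_tfov(pitch_deg):
    """Map head pitch to a TFOV region for virtual-image placement."""
    if pitch_deg > 20:
        return "sky"      # above the wearer: ambient/peripheral images
    if pitch_deg < -20:
        return "body"     # toward the wearer's body: persistent items
    return "primary"      # straight ahead: task-critical images

regions = [classify_tfov(p) for p in (30, 0, -45)]
# -> ["sky", "primary", "body"]
```

A fuller version would also use head yaw and body-relative rotation, as the claim's "position and rotation relative to the HMD wearer's body" suggests.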

Publication date: 10-01-2013

PHYSICAL CHARACTERISTICS BASED USER IDENTIFICATION FOR MATCHMAKING

Number: US20130013093A1
Assignee: MICROSOFT CORPORATION

One or more physical characteristics of each of multiple users are detected. These physical characteristics of a user can include physical attributes of the user (e.g., the user's height, length of the user's legs) and/or physical skills of the user (e.g., how high the user can jump). Based on these detected one or more physical characteristics of the users, two or more of the multiple users to share an online experience (e.g., play a multi-player game) are identified.

1-9. (canceled)
10. A method comprising: identifying a user of an online service; identifying multiple additional users that are friends of the user; and identifying, by one or more devices and based at least in part on one or more physical characteristics of the user and one or more physical characteristics of at least one of the multiple additional users, at least one of the multiple additional users with which to share an online experience with the user.
11. A method as recited in claim 10, the one or more physical characteristics of the user including at least one physical characteristic that is detected during an initialization process and stored as associated with the user.
12. A method as recited in claim 11, the at least one physical characteristic that is detected during the initialization process being stored based on a user id used by the user with the online service.
13. A method as recited in claim 10, the shared online experience comprising playing a multi-player game.
14. A method as recited in claim 13, the multiple additional users comprising friends of the user that are already playing the multi-player game.
15. A method as recited in claim 10, the multiple additional users comprising friends of the user that are currently logged into the online service.
16. A method as recited in claim 10, the one or more physical characteristics of the user including one or more physical attributes of the user, and the one or more physical characteristics of each of the at least one of the ...

Publication date: 17-01-2013

PROVIDING ELECTRONIC COMMUNICATIONS IN A PHYSICAL WORLD

Number: US20130016033A1
Assignee:

Techniques are provided for displaying electronic communications using a head mounted display (HMD). Each electronic communication may be displayed to represent a physical object that identifies it as a specific type or nature of electronic communication. Therefore, the user is able to process the electronic communications more efficiently. In some aspects, computer vision allows a user to interact with the representation of the physical objects. One embodiment includes accessing electronic communications, and determining physical objects that are representative of at least a subset of the electronic communications. A head mounted display (HMD) is instructed how to display a representation of the physical objects in this embodiment.

1. A method comprising: accessing electronic communications; determining a plurality of physical objects that are representative of at least a subset of the electronic communications; and instructing a head mounted display (HMD) to display a representation of the physical objects.
2. The method of claim 1, further comprising: tracking actions of a user wearing the HMD; determining how the user is intending to interact with a first of the physical objects that corresponds to a first of the electronic communications based on the actions; and determining how to alter presentation of the first physical object on the HMD based on the actions in order to accomplish the user's intent to interact.
3. The method of claim 1, wherein the determining a plurality of physical objects includes determining a first of the physical objects that reflects content within a first of the electronic communications.
4. The method of claim 1, wherein the determining a plurality of physical objects includes determining a first of the physical objects based on a source of a first of the electronic communications.
5. The method of claim 1, further comprising: receiving a selection of a first of the physical objects; and providing content of a first of the electronic ...

Publication date: 14-02-2013

PHYSICAL INTERACTION WITH VIRTUAL OBJECTS FOR DRM

Number: US20130042296A1
Assignee:

Technology is provided for transferring a right to a digital content item based on one or more physical actions detected in data captured by a see-through, augmented reality display device system. A digital content item may be represented by a three-dimensional (3D) virtual object displayed by the device system. A user can hold the virtual object in some examples, and transfer a right to the content item the object represents by handing the object to another user within a defined distance, who indicates acceptance of the right based upon one or more physical actions including taking hold of the transferred object. Other examples of physical actions performed by a body part of a user may also indicate offer and acceptance in the right transfer. Content may be transferred from display device to display device while rights data is communicated via a network with a service application executing remotely.

1. An augmented reality display device system for transferring a right to a digital content item comprising: an image generation unit for displaying an image of a virtual object representing a digital content item in a user field of view of the display system of a first user; one or more sensors of the display device system of the first user for generating data from which one or more physical actions of the first user is detected; one or more software controlled processors for receiving the data and identifying one or more physical actions performed by a body part of the first user with respect to the virtual object indicating a transfer request of a right to the digital content item; a memory accessible by the one or more processors for storing software, and transfer rules for the digital content item; and the one or more processors determining whether to transfer the right to the digital content item based on the transfer rules for the item.
2. The system of further comprising one or more network interfaces for communicating with another augmented reality display device ...

Publication date: 21-02-2013

Location based skins for mixed reality displays

Number: US20130044129A1
Assignee: Individual

The technology provides embodiments for providing a location-based skin for a see-through, mixed reality display device system. In many embodiments, a location-based skin includes a virtual object viewable by a see-through, mixed reality display device system which has been detected in a specific location. Some location-based skins implement an ambient effect. The see-through, mixed reality display device system is detected to be present in a location and receives and displays a skin while in the location in accordance with user settings. User data may be uploaded and displayed in a skin in accordance with user settings. A location may be a physical space at a fixed position and may also be a space defined relative to a position of a real object, for example, another see-through, mixed reality display device system. Furthermore, a location may be a location within another location.

Publication date: 21-02-2013

PROVIDING CONTEXTUAL PERSONAL INFORMATION BY A MIXED REALITY DEVICE

Number: US20130044130A1
Assignee:

The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in a location of the user who satisfies the person selection criteria to a cloud based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view. An identifier and a position indicator of the person in the location are output if not. Directional sensors on the display device may also be used for determining a position of the person. Cloud based executing software can identify and track the positions of people based on image and non-image data from display devices in the location.

1. One or more processor-readable storage devices having instructions encoded thereon for causing one or more software controlled processors to execute a method for providing contextual personal information by a mixed reality display device system, the method comprising: receiving and storing person selection criteria associated with a user wearing the mixed reality display device system; sending a request including a location of the user and the person selection criteria to a personal information service engine executing on one or more remote computer systems for a personal identification data set for each person sharing the location and satisfying the person selection criteria; receiving at least one personal identification data set from the personal identification service engine for a person sharing the location; determining whether the person associated with the at least one personal identification data set is in the field of view of the mixed reality display device system; and responsive to the person associated with the at least one personal identification data set being in the field of view, ...

Publication date: 21-03-2013

Recognizing User Intent In Motion Capture System

Number: US20130074002A1
Assignee: MICROSOFT CORPORATION

Techniques for facilitating interaction with an application in a motion capture system allow a person to easily begin interacting without manual setup. A depth camera system tracks a person in physical space and determines a probabilistic measure of the person's intent to engage or disengage with the application based on location, stance and movement. Absolute location in a field of view of the depth camera, and location relative to another person, can be evaluated. Stance can include facing a depth camera, indicating a willingness to interact. Movements can include moving toward or away from a central area in the physical space, walking through the field of view, and movements which occur while standing generally in one location, such as moving one's arms around, gesturing, or shifting weight from one foot to another.

1. Tangible computer readable storage device having computer readable software embodied thereon for programming a processor to perform a method for recognizing an intent of a person to engage with an application in a motion capture system, the method comprising: receiving images of a field of view of the motion capture system; based on the images, distinguishing a person's body; based on the distinguishing, determining a probabilistic measure of an intent by the person to engage with the application; based on the probabilistic measure of the intent by the person to engage with the application, determining that the person does not intend to engage with the application at a first time and determining that the person intends to engage with the application at a second time; in response to determining that the person intends to engage with the application, allowing the person to engage with the application by automatically associating a profile and an avatar with the person in the application, and displaying the avatar in a virtual space on a display; and updating the display by controlling the avatar as the person engages with the application by moving the ...
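The probabilistic engagement measure described above can be sketched by combining evidence from location, stance, and movement into a single 0..1 score, with a threshold separating "intends to engage" from "does not". The weights and threshold here are assumptions for illustration, not the patent's measure.

```python
# Hedged sketch of a probabilistic intent-to-engage measure;
# the evidence weights and threshold are illustrative assumptions.

def engagement_probability(in_central_area, facing_camera, moving_toward):
    score = 0.0
    if in_central_area:
        score += 0.4   # standing in the interaction zone
    if facing_camera:
        score += 0.4   # stance indicates willingness to interact
    if moving_toward:
        score += 0.2   # approaching the central area
    return score

def intends_to_engage(p, threshold=0.6):
    return p >= threshold

p = engagement_probability(True, True, False)  # person stands facing camera
```

Evaluated at two times, such a score can yield "does not intend" at a first time (e.g. walking through the field of view) and "intends" at a second, as the claim requires.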

Publication date: 04-04-2013

PERSONAL AUDIO/VISUAL SYSTEM

Number: US20130083003A1
Assignee:

The technology described herein includes a see-through, near-eye, mixed reality display device for providing customized experiences for a user. The system can be used in various entertainment, sports, shopping and theme-park situations to provide a mixed reality experience.

1. A method for presenting a personalized experience using a personal A/V apparatus, comprising: automatically determining a three dimensional location of the personal A/V apparatus, the personal A/V apparatus includes one or more sensors and a see-through display; automatically determining an orientation of the personal A/V apparatus; automatically determining a gaze of a user looking through the see-through display of the personal A/V apparatus; automatically determining a three dimensional location of a movable object in the field of view of the user through the see-through display, the determining of the three dimensional location of the movable object is performed using the one or more sensors; transmitting the three dimensional location of the personal A/V apparatus, the orientation, the gaze and the three dimensional location of the movable object to a server system; accessing weather data at the server system and automatically determining the effects of weather on the movement of the movable object; accessing course data at the server system; accessing the user's profile at the server system, the user's profile including information about the user's skill and past performance; automatically determining a recommended action on the movable object based on the three dimensional location of the movable object, the weather data and the course data; automatically adjusting the recommendation based on the user's skill and past performance; transmitting the adjusted recommendation to the personal A/V apparatus; and displaying the adjusted recommendation in the see-through display of the personal A/V apparatus.
2. The method of claim 1, further comprising: automatically tracking the movable object after the user ...

Publication date: 04-04-2013

CHANGING EXPERIENCE USING PERSONAL A/V SYSTEM

Number: US20130083007A1
Assignee:

A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction. 1. A method for generating an augmented reality environment using a mobile device , comprising:detecting a user within a particular area;acquiring a user profile associated with the user;determining an enhancement package based on the user profile, the enhancement package includes one or more virtual objects that have not been previously viewed by the user;determining that the user is in a particular physiological state;adapting the one or more virtual objects based on the particular physiological state; anddisplaying on the mobile device one or more images associated with the one or more virtual objects, the one or more images are displayed such that the one or more virtual objects are perceived to exist within the particular area.2. The method of claim 1 , further comprising:receiving and storing feedback from the user regarding the enhancement package, the user profile is updated to reflect the feedback from the user.3. 
The method of claim 1 , wherein:the adapting the one or more virtual objects includes substituting the one or more virtual objects with one or more different ...
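The selection-and-adaptation flow this abstract describes (pick virtual objects the user has not previously viewed, then adapt them to the user's physiological state, e.g. a scared child) might be sketched as follows. All field names and the intensity heuristic are illustrative assumptions, not the patent's actual data format:

```python
# Hypothetical sketch of enhancement-package selection and adaptation.
# Object fields ("id", "intensity") and the "scared" state are assumptions.

def select_enhancement_package(catalog, profile):
    """Return virtual objects the user has not previously viewed."""
    seen = set(profile.get("viewed_objects", []))
    return [obj for obj in catalog if obj["id"] not in seen]

def adapt_to_state(objects, physiological_state):
    """Tone down object intensity when the user is in a scared state."""
    if physiological_state == "scared":
        return [dict(obj, intensity=min(obj.get("intensity", 1.0), 0.3))
                for obj in objects]
    return objects

catalog = [{"id": "ghost", "intensity": 0.9}, {"id": "balloon", "intensity": 0.2}]
profile = {"viewed_objects": ["balloon"]}
package = adapt_to_state(select_enhancement_package(catalog, profile), "scared")
```

Under these assumptions, the previously viewed "balloon" is filtered out and the remaining "ghost" object is softened for the scared user.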

More
Publication date: 04-04-2013

ENRICHED EXPERIENCE USING PERSONAL A/V SYSTEM

Number: US20130083008A1
Assignee:

A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction. 1. A method for generating an augmented reality environment using a mobile device , comprising:detecting a user within a particular area;acquiring a user profile associated with the user;determining an enhancement package based on the user profile, the enhancement package includes one or more virtual objects that have not been previously viewed by the user;determining that the user is in a particular physiological state;adapting the one or more virtual objects based on the particular physiological state; anddisplaying on the mobile device one or more images associated with the one or more virtual objects, the one or more images are displayed such that the one or more virtual objects are perceived to exist within the particular area.2. The method of claim 1 , further comprising:receiving and storing feedback from the user regarding the enhancement package, the user profile is updated to reflect the feedback from the user.3. 
The method of claim 1 , wherein:the adapting the one or more virtual objects includes substituting the one or more virtual objects with one or more different ...

More
Publication date: 04-04-2013

EXERCISING APPLICATIONS FOR PERSONAL AUDIO/VISUAL SYSTEM

Number: US20130083009A1
Assignee:

The technology described herein includes a see-through, near-eye, mixed reality display device for providing customized experiences for a user. The personal A/V apparatus serves as an exercise program that is always with the user, provides motivation for the user, visually tells the user how to exercise, and lets the user exercise with other people who are not present. 1. A method for presenting a personalized experience using a personal see-through A/V apparatus , comprising:accessing a location of the personal see-through A/V apparatus;automatically determining an exercise routine for a user based on the location; andpresenting a virtual image in the personal see-through A/V apparatus based on the exercise routine.2. The method of claim 1 , wherein:the presenting a virtual image in the personal see-through A/V apparatus includes presenting an image of someone performing the exercise routine based on data for a past performance of the exercise routine so that the user can see the virtual image inserted into a real scene viewed through the personal see-through A/V apparatus as the user performs the exercise routine.3. The method of claim 1 , wherein the presenting a virtual image in the personal see-through A/V apparatus based on the exercise routine includes:augmenting scenery on a route of the exercise routine so that the user can see additional scenery inserted into real scenery viewed through the personal see-through A/V apparatus.4. The method of claim 1 , further comprising recording data for a user wearing the personal see-through A/V apparatus for a period of time in which the user is not exercising claim 1 , wherein:the automatically determining an exercise routine for a user further includes:accessing a fitness goal for the user for the period of time including the time during which the user actions were recorded:determining the exercise routine based on the recorded user actions for the user to meet the fitness goal.5. 
The method of claim 4 , wherein the ...

More
Publication date: 04-04-2013

REPRESENTING A LOCATION AT A PREVIOUS TIME PERIOD USING AN AUGMENTED REALITY DISPLAY

Number: US20130083011A1
Assignee:

Technology is described for representing a physical location at a previous time period with three dimensional (3D) virtual data displayed by a near-eye, augmented reality display of a personal audiovisual (A/V) apparatus. The personal A/V apparatus is identified as being within the physical location, and one or more objects in a display field of view of the near-eye, augmented reality display are automatically identified based on a three dimensional mapping of objects in the physical location. User input, which may be natural user interface (NUI) input, indicates a previous time period, and one or more 3D virtual objects associated with the previous time period are displayed from a user perspective associated with the display field of view. An object may be erased from the display field of view, and a camera effect may be applied when changing between display fields of view. 1. A method for representing a physical location at a previous time period with three dimensional (3D) virtual data displayed by a near-eye , augmented reality (AR) display of a personal audiovisual (A/V) apparatus comprising:automatically identifying the personal A/V apparatus is within the physical location based on location data detected by the personal A/V apparatus;automatically identifying one or more objects in a display field of view of the near-eye, augmented reality display based on a three dimensional mapping of objects in the physical location;identifying user input indicating selection of a previous time period; anddisplaying three-dimensional (3D) virtual data associated with the previous time period based on the one or more objects in the display field of view and based on a user perspective associated with the display field of view.2. The method of further comprising:identifying a change in the display field of view; andupdating the displaying of the 3D virtual data associated with the previous time period based on the change in the display field of view.3. 
The method of further ...

More
Publication date: 04-04-2013

PERSONAL AUDIO/VISUAL SYSTEM WITH HOLOGRAPHIC OBJECTS

Number: US20130083018A1
Assignee:

A system for generating an augmented reality environment using state-based virtual objects is described. A state-based virtual object may be associated with a plurality of different states. Each state of the plurality of different states may correspond with a unique set of triggering events different from those of any other state. The set of triggering events associated with a particular state may be used to determine when a state change from the particular state is required. In some cases, each state of the plurality of different states may be associated with a different 3-D model or shape. The plurality of different states may be defined using a predetermined and standardized file format that supports state-based virtual objects. In some embodiments, one or more potential state changes from a particular state may be predicted based on one or more triggering probabilities associated with the set of triggering events. 1. A method for generating an augmented reality environment using a mobile device , comprising:acquiring a particular file of a predetermined file format, the particular file includes information associated with one or more virtual objects, the particular file includes state information for each virtual object of the one or more virtual objects, the one or more virtual objects include a first virtual object, the first virtual object is associated with a first state and a second state different from the first state, the first state is associated with one or more triggering events, a first triggering event of the one or more triggering events is associated with the second state;setting the first virtual object into the first state;detecting the first triggering event;setting the first virtual object into the second state in response to the detecting the first triggering event, the setting the first virtual object into the second state includes acquiring one or more new triggering events different from the one or more triggering events; andgenerating and 
...
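The state-based virtual object this abstract describes, where each state carries its own unique set of triggering events and a detected trigger swaps in a new state with new triggers, might be modeled roughly like this. The class shape, state names, and event names are assumptions for illustration, not the patent's file format:

```python
# Rough model of a state-based virtual object: each state has its own set
# of triggering events; detecting a trigger changes state, which also
# acquires that state's (different) trigger set. Names are assumptions.

class StateBasedVirtualObject:
    def __init__(self, states, initial):
        # states: {state_name: {triggering_event: next_state}}
        self.states = states
        self.state = initial

    @property
    def triggers(self):
        # the set of triggering events associated with the current state
        return set(self.states[self.state])

    def on_event(self, event):
        # a triggering event for the current state causes a state change
        nxt = self.states[self.state].get(event)
        if nxt is not None:
            self.state = nxt
        return self.state

door = StateBasedVirtualObject(
    {"closed": {"user_touch": "open"}, "open": {"timeout": "closed"}},
    initial="closed")
door.on_event("user_touch")
```

After the `user_touch` trigger, the object is in the "open" state and its trigger set has been replaced by that state's own events.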

More
Publication date: 04-04-2013

PERSONAL A/V SYSTEM WITH CONTEXT RELEVANT INFORMATION

Number: US20130083062A1
Assignee:

A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction. 1. A method for generating an augmented reality environment using a mobile device , comprising:detecting a user of the mobile device within a particular waiting area of an attraction;acquiring virtual object information associated with the attraction, the virtual object information includes one or more virtual objects; andgenerating and displaying on the mobile device one or more images associated with the one or more virtual objects, the one or more images are displayed such that the one or more virtual objects are perceived to exist within the particular waiting area;detecting the user exiting the particular waiting area; anddisabling the one or more virtual objects in response to the detecting the user exiting the particular waiting area.2. The method of claim 1 , further comprising:identifying an age associated with the user, the acquiring virtual object information includes acquiring virtual object information associated with the attraction based on the age of the user.3. The method of claim 2 , further comprising:acquiring an attraction placement test, the attraction ...

More
Publication date: 04-04-2013

Service Provision Using Personal Audio/Visual System

Number: US20130083063A1
Assignee:

A collaborative on-demand system allows a user of a head-mounted display device (HMDD) to obtain assistance with an activity from a qualified service provider. In a session, the user and service provider exchange camera-captured images and augmented reality images. A gaze-detection capability of the HMDD allows the user to mark areas of interest in a scene. The service provider can similarly mark areas of the scene, as well as provide camera-captured images of the service provider's hand or arm pointing to or touching an object of the scene. The service provider can also select an animation or text to be displayed on the HMDD. A server can match user requests with qualified service providers which meet parameters regarding fee, location, rating and other preferences. Or, service providers can review open requests and self-select appropriate requests, initiating contact with a user. 1. A method for use of head-mounted display device worn by a service consumer , the method comprising:receiving image data of a scene from at least one forward-facing camera;communicating the image data of the scene to a computing device of a service provider, the service provider generating data based on the image data of the scene, to assist the service consumer in performing an activity in the scene;receiving the data generated by the service provider; andcontrolling an augmented reality projection system based on the data generated by the service provider to project at least one augmented reality image to the service consumer, to assist the service consumer in performing the activity.2. The method of claim 1 , further comprising:obtaining gaze direction data, the gaze detection data indicating an area of the scene at which the service consumer gazes; andcommunicating the gaze direction data to the computing device of the service provider, to identify, at the computing device of the service provider, the area of the scene at which the service consumer gazes.3. 
The method of claim 1 , ...

More
Publication date: 04-04-2013

PERSONAL AUDIO/VISUAL APPARATUS PROVIDING RESOURCE MANAGEMENT

Number: US20130083064A1
Assignee:

Technology is described for resource management based on data including image data of a resource captured by at least one capture device of at least one personal audiovisual (A/V) apparatus including a near-eye, augmented reality (AR) display. A resource is automatically identified from image data captured by at least one capture device of at least one personal A/V apparatus and object reference data. A location in which the resource is situated and a 3D space position or volume of the resource in the location is tracked. A property of the resource is also determined from the image data and tracked. A function of a resource may also be stored for determining whether the resource is usable for a task. Responsive to notification criteria for the resource being satisfied, image data related to the resource is displayed on the near-eye AR display. 1. A method for providing resource management using one or more personal audiovisual (A/V) apparatus including a near-eye , augmented reality (AR) display comprising:automatically identifying a resource based on image data of the resource captured by at least one capture device of at least one personal A/V apparatus and object reference data;automatically tracking a three dimensional (3D) space position of the resource in a location identified based on location data detected by the at least one personal A/V apparatus;automatically determining a property of the resource based on the image data of the resource;automatically tracking the property of the resource; andautomatically causing display of image data related to the resource in the near-eye, augmented reality display based on a notification criteria for the property associated with the resource.2. The method of wherein the property associated with the resource comprises at least one of the following:a quantity;an expiration date;a physical damage indicator;a quality control indicator; anda nutritional value.3. The method of further comprising:generating and storing a ...
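The notification step this abstract describes (a tracked property of a resource, such as quantity or expiration date, is checked against notification criteria, and matching resources trigger a display) could be sketched as below. The property names, criteria keys, and sample data are assumptions:

```python
# Hedged sketch of the notification-criteria check for a tracked resource.
# Dictionary keys ("quantity", "expiration_date", ...) are assumptions.

import datetime

def needs_notification(resource, criteria):
    """Return True if any tracked property of the resource meets a criterion."""
    if resource.get("quantity", 0) <= criteria.get("min_quantity", 0):
        return True
    exp = resource.get("expiration_date")
    if exp is not None and exp <= criteria.get("expires_before", datetime.date.min):
        return True
    return False

milk = {"name": "milk", "quantity": 1,
        "expiration_date": datetime.date(2013, 4, 1)}
criteria = {"min_quantity": 0, "expires_before": datetime.date(2013, 4, 4)}
flagged = needs_notification(milk, criteria)
```

Here the milk's expiration date falls inside the notification window, so image data related to it would be displayed on the near-eye AR display.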

More
Publication date: 04-04-2013

VIRTUAL SPECTATOR EXPERIENCE WITH A PERSONAL AUDIO/VISUAL APPARATUS

Number: US20130083173A1
Assignee:

Technology is described for providing a virtual spectator experience for a user of a personal A/V apparatus including a near-eye, augmented reality (AR) display. A position volume of an event object participating in an event in a first 3D coordinate system for a first location is received and mapped to a second position volume in a second 3D coordinate system at a second location remote from where the event is occurring. A display field of view of the near-eye AR display at the second location is determined, and real-time 3D virtual data representing the one or more event objects which are positioned within the display field of view are displayed in the near-eye AR display. A user may select a viewing position from which to view the event. Additionally, virtual data of a second user may be displayed at a position relative to a first user. 1. A method for providing a virtual spectator experience of an event for viewing with a near-eye , augmented reality display of a personal audiovisual (A/V) apparatus comprising:receiving in real time one or more positions of one or more event objects participating in the event occurring at a first location remote from a second location;mapping the one or more positions of the one or more event objects in the first 3D coordinate system for the first location to a second 3D coordinate system for a second location remote from the first location;determining a display field of view of a near-eye, augmented reality display of a personal A/V apparatus being worn by a user at the second location; andsending in real time 3D virtual data representing the one or more event objects which are within the display field of view to the personal A/V apparatus at the second location.2. The method of wherein the near-eye claim 1 , augmented reality display is a near-eye claim 1 , see-through claim 1 , augmented reality display.3. The method of further comprising receiving in real time 3D virtual data of the one or more event objects which include ...
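The core mapping step (positions of event objects in the venue's 3D coordinate system mapped into a second 3D coordinate system at the remote viewing location) can be sketched under a simplifying assumption that the two systems differ only by a known origin offset and a uniform scale; the patent itself does not specify this transform:

```python
# Illustrative mapping of an event object's position from the first 3D
# coordinate system (the venue) into the second (the remote location),
# assuming only an origin offset and uniform scale relate the two systems.

def map_position(pos, origin_offset, scale):
    """Map a 3D point from the venue's coordinate system to the viewer's."""
    return tuple((p - o) * scale for p, o in zip(pos, origin_offset))

# ball at (10, 0, 5) metres on the real court; the remote living-room
# rendering is a 1:20 scale model sharing the court's origin
remote = map_position((10.0, 0.0, 5.0), (0.0, 0.0, 0.0), 1 / 20)
```

The mapped position would then be tested against the display field of view before sending 3D virtual data to the remote personal A/V apparatus.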

More
Publication date: 04-04-2013

Sharing Games Using Personal Audio/Visual Apparatus

Number: US20130084970A1
Assignee:

A game can be created, shared and played using a personal audio/visual apparatus such as a head-mounted display device (HMDD). Rules of the game, and a configuration of the game space, can be standard or custom. Boundary points of the game can be defined by a gaze direction of the HMDD, by the user's location, by a model of a physical game space such as an instrumented court or by a template. Players can be identified and notified of the availability of a game using a server push technology. For example, a user in a particular location may be notified of the availability of a game at that location. A server manages the game, including storing the rules, boundaries and a game state. The game state can identify players and their scores. Real world objects can be imaged and provided as virtual objects in the game space. 1. A method for sharing a game , comprising:defining a characteristic of a game using a sensor of a first head-mounted display device, the characteristic is defined with respect to a physical environment of a user of the first head-mounted display device; andsharing the game, including the characteristic of the game, with at least a user of a second head-mounted display device via a network.2. The method of claim 1 , wherein:the sensor captures an image of the physical environment; andthe image of the physical environment is used to provide a model of a game space of the game.3. The method of claim 1 , further comprising:identifying one or more other selected users with whom the game is to be shared, the sharing is responsive to the identifying.4. The method of claim 1 , wherein:the characteristic comprises a location of the user in the physical environment, a game space of the game is linked to the location.5. 
The method of claim 1 , wherein:the characteristic comprises a desired size of a game space of the game;the sensor determines a size of the physical environment of the user; andthe method performed further comprises determining whether the size ...

More
Publication date: 04-04-2013

Personal Audio/Visual System Providing Allergy Awareness

Number: US20130085345A1
Assignee: Individual

A system provides a recommendation of food items to a user based on nutritional preferences of the user, using a head-mounted display device (HMDD) worn by the user. In a store, a forward-facing camera of the HMDD captures an image of a food item. The food item can be identified by the image, such as based on packaging of the food item. Nutritional parameters of the food item are compared to nutritional preferences of the user to determine whether the food item is recommended. The HMDD displays an augmented reality image to the user indicating whether the food item is recommended. If the food item is not recommended, a substitute food item can be identified. The nutritional preferences can indicate food allergies, preferences for low calorie foods and so forth. In a restaurant, the HMDD can recommend menu selections for a user.
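The recommendation check this abstract describes (compare a scanned food item's nutritional parameters against the user's preferences, and propose a substitute on failure) might look roughly like this. Field names, the preference keys, and the sample items are assumptions, not the patent's data model:

```python
# Illustrative sketch: a food item identified from image data is checked
# against nutritional preferences (allergies, calorie limit); if it fails,
# a substitute passing the same checks is proposed. Names are assumptions.

def recommend(item, prefs, substitutes):
    allergens = set(item.get("allergens", [])) & set(prefs.get("allergies", []))
    too_caloric = item.get("calories", 0) > prefs.get("max_calories", float("inf"))
    if not allergens and not too_caloric:
        return {"recommended": True, "item": item["name"]}
    # look for a substitute food item that passes the same checks
    for sub in substitutes:
        if recommend(sub, prefs, [])["recommended"]:
            return {"recommended": False, "item": item["name"],
                    "substitute": sub["name"]}
    return {"recommended": False, "item": item["name"], "substitute": None}

prefs = {"allergies": ["peanut"], "max_calories": 300}
cookie = {"name": "peanut cookie", "allergens": ["peanut"], "calories": 250}
oat_bar = {"name": "oat bar", "allergens": [], "calories": 190}
result = recommend(cookie, prefs, [oat_bar])
```

The HMDD would then render an augmented reality image indicating the cookie is not recommended and pointing at the substitute.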

More
Publication date: 18-04-2013

USER CONTROLLED REAL OBJECT DISAPPEARANCE IN A MIXED REALITY DISPLAY

Number: US20130093788A1
Assignee:

The technology causes disappearance of a real object in a field of view of a see-through, mixed reality display device system based on user disappearance criteria. Image data is tracked to the real object in the field of view of the see-through display for implementing an alteration technique on the real object causing its disappearance from the display. A real object may satisfy user disappearance criteria by being associated with subject matter that the user does not wish to see or by not satisfying relevance criteria for a current subject matter of interest to the user. In some embodiments, based on a 3D model of a location of the display device system, an alteration technique may be selected for a real object based on a visibility level associated with the position within the location. Image data for alteration may be prefetched based on a location of the display device system. 1. One or more processor-readable storage devices having instructions encoded thereon for causing one or more processors to execute a method for causing disappearance of a real object in a see-through display of a see-through, mixed reality display device system, the method comprising: receiving metadata identifying one or more real objects in a field of view of the see-through display; determining whether any of the one or more real objects satisfies user disappearance criteria; and responsive to determining a first real object satisfies the user disappearance criteria, tracking image data to the first real object in the see-through display for causing disappearance of the first real object in the field of view of the see-through display. 2.
The one or more processor-readable storage devices of further comprisingdetecting another see-through display device system within a predetermined distance of the see-through, mixed reality display device system;based on being within the predetermined distance of the see-through, mixed reality display device system, receiving an identifier of a ...
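The disappearance check this abstract describes (each real object in the field of view is tested against the user's criteria, by blocked subject matter or by failing relevance to the current subject of interest) could be sketched as follows; the object and criteria structures are assumptions:

```python
# Minimal sketch of selecting real objects that satisfy user disappearance
# criteria, to be targeted by an alteration technique (e.g. painting over
# with background image data). All dictionary structures are assumptions.

def objects_to_erase(objects, criteria):
    blocked = set(criteria.get("blocked_subjects", []))
    relevant = criteria.get("relevant_subject")  # current subject of interest
    erase = []
    for obj in objects:
        subjects = set(obj.get("subjects", []))
        if subjects & blocked:
            erase.append(obj["id"])          # unwanted subject matter
        elif relevant is not None and relevant not in subjects:
            erase.append(obj["id"])          # fails relevance criteria
    return erase

objs = [{"id": "billboard", "subjects": ["ads"]},
        {"id": "signpost", "subjects": ["navigation"]}]
criteria = {"blocked_subjects": ["ads"], "relevant_subject": "navigation"}
erased = objects_to_erase(objs, criteria)
```

Image data would then be tracked to each listed object so it disappears from the see-through display.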

More
Publication date: 18-04-2013

TOTAL FIELD OF VIEW CLASSIFICATION FOR HEAD-MOUNTED DISPLAY

Number: US20130093789A1
Assignee:

Virtual images are located for display in a head-mounted display (HMD) to provide an augmented reality view to an HMD wearer. Sensor data may be collected from on-board sensors provided on an HMD. Additionally, other data may be collected from external sources. Based on the collected sensor data and other data, the position and rotation of the HMD wearer's head relative to the HMD wearer's body and surrounding environment may be determined. After resolving the HMD wearer's head position, the HMD wearer's total field of view (TFOV) may be classified into regions. Virtual images may then be located in the classified TFOV regions to locate the virtual images relative to the HMD wearer's body and surrounding environment. 1. One or more computer storage media storing computer-useable instructions that , when used by one or more computing devices , cause the one or more computing device to perform a method , the method comprising:receiving sensor data from one or more head-mounted display (HMD) on-board sensors;using the sensor data to determine an HMD wearer's head position and rotation relative to the HMD wearer's body and an environment surrounding the HMD wearer;classifying two or more regions within the HMD wearer's total field of view (TFOV) based on one or more pre-determined rules and the HMD wearer's head position and rotation relative to the HMD wearer's body and an environment surrounding the HMD wearer; andlocating virtual images to be displayed by the HMD based on classifying the two or more regions within the HMD wearer's TFOV.2. The one or more computer storage media of claim 1 , wherein the one or more HMD on-board sensors comprise one or more selected from the following: a GPS sensor claim 1 , an inertial measurement unit sensor claim 1 , a depth sensor claim 1 , a camera claim 1 , an eye tracking sensor claim 1 , a microphone claim 1 , and a biometric sensor.3. The one or more computer storage media of claim 1 , wherein the method further comprises: ...
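A toy version of the rule-based TFOV classification: head pitch relative to the body decides the region, and the region in turn decides where virtual images may be located. The thresholds, region names, and their uses are illustrative assumptions rather than the patent's pre-determined rules:

```python
# Hedged sketch: classify TFOV regions from head pitch relative to the
# wearer's body. Thresholds and region names are illustrative assumptions.

def classify_tfov(head_pitch_deg):
    if head_pitch_deg < -30:
        return "body"         # e.g. dock persistent virtual controls near the torso
    if head_pitch_deg <= 20:
        return "primary"      # keep the wearer's working view unobstructed
    return "peripheral"       # overhead region for notifications

regions = [classify_tfov(p) for p in (-45, 0, 35)]
```

Looking down at the torso, straight ahead, and upward would thus yield three differently classified regions for locating virtual images.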

More
Publication date: 18-04-2013

ENHANCING A SPORT USING AN AUGMENTED REALITY DISPLAY

Number: US20130095924A1
Assignee:

Technology is described for providing a personalized sport performance experience with three dimensional (3D) virtual data displayed by a near-eye, augmented reality display of a personal audiovisual (A/V) apparatus. A physical movement recommendation is determined for the user performing a sport based on skills data for the user for the sport, physical characteristics of the user, and 3D space positions for at least one or more sport objects. 3D virtual data depicting one or more visual guides for assisting the user in performing the physical movement recommendation may be displayed from a user perspective associated with a display field of view of the near-eye AR display. An avatar may also be displayed by the near-eye AR display performing a sport. The avatar may perform the sport interactively with the user or be displayed performing a prior performance of an individual represented by the avatar. 1. A method for providing a personalized sport performance experience with three dimensional (3D) virtual data being displayed by a near-eye , augmented reality (AR) display of a personal audiovisual (A/V) apparatus comprising:automatically identifying a physical location which the personal A/V apparatus is within based on location data detected by the personal A/V apparatus;automatically identifying one or more 3D space positions of at least one or more sport objects in a sport performance area associated with the physical location based on a three dimensional mapping of objects in the sport performance area;accessing a memory for physical characteristics of a user and skills data for a sport stored for the user in user profile data;determining a physical movement recommendation by a processor for the user performing the sport based on the skills data for the sport, the physical characteristics of the user, and 3D space positions for at least the one or more sport objects; anddisplaying three-dimensional (3D) virtual data depicting one or more visual guides for ...

More
Publication date: 09-05-2013

SEE-THROUGH DISPLAY BRIGHTNESS CONTROL

Number: US20130114043A1
Assignee:

The technology provides various embodiments for controlling brightness of a see-through, near-eye mixed display device based on light intensity of what the user is gazing at. The opacity of the display can be altered, such that external light is reduced if the wearer is looking at a bright object. The wearer's pupil size may be determined and used to adjust the brightness used to display images, as well as the opacity of the display. A suitable balance between opacity and brightness used to display images may be determined that allows real and virtual objects to be seen clearly, while not causing damage or discomfort to the wearer's eyes. 1. A method comprising:estimating a region at which a wearer of a see-through display is gazing using an eye-tracking camera;determining light intensity of the region at which the user is gazing; andadjusting brightness of the see-through display based on the light intensity of the region.2. The method of claim 1 , wherein the adjusting brightness of the see-through display based on the light intensity of the region includes:adjusting the opacity of the see-through display.3. The method of claim 1 , wherein the adjusting brightness of the see-through display based on the light intensity of the region includes:adjusting the intensity of light projected by the see-through display.4. The method of claim 1 , further comprising:determining a pupil size of the wearer, the adjusting brightness of the see-through display based on the light intensity of the region is further based on the pupil size of the wearer.5. The method of claim 4 , wherein the determining a pupil size of the wearer is performed using 3D imaging.6. The method of claim 1 , further comprising:determining a distance between the wearer's eyes and the see-through display based on 3D imaging, the adjusting brightness of the see-through display based is further based on the distance.7. The method of claim 1 , wherein the adjusting brightness of the see-through display is ...
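The balance this abstract describes (raise display opacity to cut external light when the gazed-at region is bright, raise image brightness so virtual objects stay visible, and use pupil size to avoid discomfort) might be sketched with a simple linear heuristic. The formula and thresholds are illustrative assumptions, not the patented method:

```python
# Sketch of the opacity/brightness balance for a see-through display.
# region_intensity comes from the gaze-estimated region; the linear
# mapping and the 3 mm pupil threshold are illustrative assumptions.

def adjust_display(region_intensity, pupil_mm=None):
    """region_intensity in [0, 1]; returns (opacity, image_brightness) in [0, 1]."""
    opacity = min(1.0, 0.2 + 0.8 * region_intensity)
    brightness = min(1.0, 0.3 + 0.7 * region_intensity)
    if pupil_mm is not None and pupil_mm < 3.0:   # constricted pupil: bright scene
        brightness = min(brightness, 0.8)         # back off to avoid discomfort
    return round(opacity, 2), round(brightness, 2)

# wearer gazing at a very bright object with a constricted pupil
settings = adjust_display(region_intensity=1.0, pupil_mm=2.5)
```

With a bright gaze region, opacity is driven to maximum while the constricted pupil caps the projected image brightness.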

More
Publication date: 23-05-2013

VIDEO COMPRESSION USING VIRTUAL SKELETON

Number: US20130127994A1
Assignee:

Optical sensor information captured via one or more optical sensors imaging a scene that includes a human subject is received by a computing device. The optical sensor information is processed by the computing device to model the human subject with a virtual skeleton, and to obtain surface information representing the human subject. The virtual skeleton is transmitted by the computing device to a remote computing device at a higher frame rate than the surface information. Virtual skeleton frames are used by the remote computing device to estimate surface information for frames that have not been transmitted by the computing device. 1. A method for a computing system , comprising:receiving optical sensor information captured via one or more optical sensors, the optical sensor information imaging a scene including a human subject;processing the optical sensor information to model the human subject with a virtual skeleton;processing the optical sensor information to obtain surface information representing the human subject;transmitting the virtual skeleton to a remote computing device at a first frame rate; andtransmitting the surface information to the remote computing device at a second frame rate that is less than the first frame rate.2. The method of claim 1 , wherein the surface information includes visible spectrum information and/or depth information.3. The method of claim 1 , further comprising:identifying a high-interest region of the human subject; andprocessing the optical sensor information to obtain high-interest surface information representing the high-interest region of the human subject; andtransmitting the high-interest surface information to the remote computing device at a third frame rate that is greater than the second frame rate.4. The method of claim 3 , wherein the high-interest region of the human subject corresponds to a facial region of the human subject.5. The method of claim 3 , wherein the high-interest region of the human subject ...
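The dual-rate scheme in this abstract, where virtual skeleton frames are transmitted at a higher frame rate than surface frames so the receiver can re-pose the last surface frame with newer skeletons, can be sketched as a transmission schedule. The payload labels and rates are illustrative assumptions:

```python
# Sketch of the dual-rate transmission: a skeleton goes out every frame
# (first, higher rate); full surface data goes out every Nth frame
# (second, lower rate). Labels and rates are illustrative assumptions.

def schedule_transmission(num_frames, surface_every):
    """Return per-frame payload labels for the given frame count."""
    payloads = []
    for i in range(num_frames):
        frame = ["skeleton"]                  # transmitted at the first frame rate
        if i % surface_every == 0:
            frame.append("surface")           # transmitted at the second, lower rate
        payloads.append(frame)
    return payloads

# e.g. skeleton at 30 fps, surface at 10 fps -> surface on every 3rd frame
plan = schedule_transmission(num_frames=6, surface_every=3)
```

Between surface frames, the remote device would estimate surface information for the skeleton-only frames, which is the bandwidth saving the claims describe.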

More
Publication date: 30-05-2013

SHARED COLLABORATION USING HEAD-MOUNTED DISPLAY

Number: US20130135180A1
Assignee:

Various embodiments are provided for a shared collaboration system and related methods for enabling an active user to interact with one or more additional users and with collaboration items. In one embodiment a head-mounted display device is operatively connected to a computing device that includes a collaboration engine program. The program receives observation information of a physical space from the head-mounted display device along with a collaboration item. The program visually augments an appearance of the physical space as seen through the head-mounted display device to include an active user collaboration item representation of the collaboration item. The program populates the active user collaboration item representation with additional user collaboration item input from an additional user. 1. A shared collaboration system including a head-mounted display device operatively connected to a computing device , the head-mounted display device including a transparent display screen through which an active user may view a physical space , the shared collaboration system enabling the active user to interact with at least one additional user and with at least one collaboration item , the shared collaboration system comprising: a collaboration engine program executed by a processor of the computing device, the collaboration engine program configured to: receive observation information data representing the physical space from the head-mounted display device; receive the at least one collaboration item; receive additional user collaboration item input from the at least one additional user; visually augment an appearance of the physical space as seen through the transparent display screen of the head-mounted display device to include an active user collaboration item representation; and populate the active user collaboration item representation with the additional user collaboration item input from the at least one additional user. 2. The shared ...

More details
Publication date: 06-06-2013

AUGMENTED REALITY WITH REALISTIC OCCLUSION

Number: US20130141419A1
Assignee:

A head-mounted display device is configured to visually augment an observed physical space to a user. The head-mounted display device includes a see-through display and is configured to receive augmented display information, such as a virtual object with occlusion relative to a real world object from a perspective of the see-through display.
1. A method of augmenting reality, the method comprising: receiving first observation information of a first physical space from a first head-mounted display device, the first head-mounted display device including a first see-through display configured to visually augment an appearance of the first physical space to a user viewing the first physical space through the first see-through display; receiving second observation information of a second physical space from a second head-mounted display device, the second head-mounted display device including a second see-through display configured to visually augment an appearance of the second physical space to a user viewing the second physical space through the second see-through display; mapping a shared virtual reality environment to the first physical space and the second physical space based on the first observation information and the second observation information, the shared virtual reality environment including a virtual object; and sending first augmented reality display information to the first head-mounted display, the first augmented reality display information configured to display the virtual object via the first see-through display with occlusion relative to a real world object from a perspective of the first see-through display.
2. The method of claim 1, where the first physical space and the second physical space are congruent, and where the first observation information is from a first perspective of the first see-through display and the second observation information is from a second perspective of the second see-through display, the first perspective ...
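The occlusion behavior this entry claims, drawing a virtual object only where it lies in front of the real-world surface, reduces to a per-pixel depth comparison. A minimal sketch (function names and the 2x2 frame are invented for illustration, not taken from the patent):

```python
INF = float("inf")

def occlusion_mask(real_depth, virtual_depth):
    """Per-pixel depth test: the virtual pixel is shown only where the
    virtual surface is closer to the viewer than the real-world surface."""
    return [[v < r for r, v in zip(rrow, vrow)]
            for rrow, vrow in zip(real_depth, virtual_depth)]

# Hypothetical 2x2 frame: a real wall at 2.0 m, a virtual cube whose left
# column is 1.5 m away (in front) and right column 3.0 m (behind the wall).
real = [[2.0, 2.0], [2.0, 2.0]]
virt = [[1.5, 3.0], [1.5, INF]]   # INF marks "no virtual content here"
mask = occlusion_mask(real, virt)  # [[True, False], [True, False]]
```

The cube is drawn only in the left column, where it is in front of the wall; the right column stays real.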

More details
Publication date: 06-06-2013

AUGMENTED REALITY VIRTUAL MONITOR

Number: US20130141421A1
Assignee:

A head-mounted display includes a see-through display and a virtual reality engine. The see-through display is configured to visually augment an appearance of a physical space to a user viewing the physical space through the see-through display. The virtual reality engine is configured to cause the see-through display to visually present a virtual monitor that appears to be integrated with the physical space to a user viewing the physical space through the see-through display.
1. A head-mounted display, comprising: a see-through display configured to visually augment an appearance of a physical space to a user viewing the physical space through the see-through display; and a virtual reality engine configured to cause the see-through display to visually present a virtual monitor that appears to be integrated with the physical space to a user viewing the physical space through the see-through display.
2. The head-mounted display of claim 1, where the virtual reality engine is further configured to play a video stream on the virtual monitor.
3. The head-mounted display of claim 2, further comprising a speaker, and where the virtual reality engine is further configured to cause the speaker to play an audio stream synced to the video stream.
4. The head-mounted display of claim 3, where the virtual reality engine is further configured to modulate a volume of the audio stream inversely proportional to a distance between the see-through display and a physical-space location at which the virtual monitor appears to be located to the user viewing the physical space through the see-through display.
5. The head-mounted display of claim 3, where the virtual reality engine is further configured to modulate a volume of the audio stream in proportion to a directness that the see-through display is viewing a physical-space location at which the virtual monitor appears to be located to the user viewing the physical space through the see-through display.
6. The head-mounted ...
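Claims 4 and 5 state concrete volume rules: attenuation inversely proportional to the distance to the virtual monitor's apparent location, and in proportion to how directly the display is viewing it. A hedged sketch of such a rule (the clamping and the minimum-distance floor are my assumptions, not the patent's):

```python
import math

def monitor_volume(base_volume, distance_m, gaze_dir, monitor_dir, min_distance=0.5):
    """Attenuate a virtual monitor's audio: inversely proportional to the
    wearer's distance from the monitor's apparent location, and in
    proportion to how directly the display faces it. gaze_dir and
    monitor_dir are unit 3-vectors; their dot product, clamped at zero,
    is the 'directness' term."""
    directness = max(0.0, sum(g * m for g, m in zip(gaze_dir, monitor_dir)))
    attenuation = min_distance / max(distance_m, min_distance)
    return base_volume * directness * attenuation

# Looking straight at a monitor 1 m away, vs. 45 degrees off-axis at 2 m.
straight = monitor_volume(1.0, 1.0, (0, 0, 1), (0, 0, 1))   # 0.5
off_axis = monitor_volume(1.0, 2.0, (0, 0, 1),
                          (math.sin(math.pi / 4), 0, math.cos(math.pi / 4)))
```

Turning away or stepping back both lower the volume, which matches the claimed behavior.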

More details
Publication date: 13-06-2013

Connecting Head Mounted Displays To External Displays And Other Communication Networks

Number: US20130147686A1
Assignee:

An audio and/or visual experience of a see-through head-mounted display (HMD) device, e.g., in the form of glasses, can be moved to a target computing device such as a television, cell phone, or computer monitor to allow the user to seamlessly transition the content to the target computing device. For example, when the user enters a room in the home with a television, a movie which is playing on the HMD device can be transferred to the television and begin playing there without substantially interrupting the flow of the movie. The HMD device can inform the television of a network address for accessing the movie, for instance, and provide a current status in the form of a time stamp or packet identifier. Content can also be transferred in the reverse direction, to the HMD device. A transfer can occur based on location, preconfigured settings and user commands.
1. A head-mounted display device, comprising: at least one see-through lens; at least one image projection source associated with the at least one see-through lens; and at least one control circuit in communication with the at least one image projection source, the at least one control circuit: provides an experience comprising at least one of audio and visual content at the head-mounted display device; determines if a condition is met to provide a continuation of at least part of the experience at a target computing device; and, if the condition is met, communicates data to the target computing device to allow the target computing device to provide the continuation of the at least part of the experience, the continuation of the at least part of the experience comprising at least one of the audio and the visual content.
2. The head-mounted display device of claim 1, wherein: the at least one control circuit determines that a condition is met to provide a continuation of the visual content at one target computing device and a continuation of the audio content at another computing device.
3. The head-mounted display ...
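The handoff described here amounts to sending the target device a network address plus a current status, either a time stamp or a packet identifier. A sketch of what such a message might carry (all class and field names are illustrative, not from the patent):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HandoffMessage:
    """Data an HMD might send a target device so playback can continue
    without interrupting the flow of the content: where to fetch the
    stream and how far playback has progressed."""
    content_url: str          # network address for accessing the movie
    position_seconds: float   # current status as a time stamp...
    packet_id: Optional[int] = None  # ...or as a packet identifier

def resume_position(msg: HandoffMessage, startup_delay: float = 0.0) -> float:
    """The target device may resume slightly ahead of the reported
    position to mask its own startup delay (an invented refinement)."""
    return msg.position_seconds + startup_delay

msg = HandoffMessage("rtsp://example.local/movie", 1234.5)
resume = resume_position(msg, startup_delay=0.5)   # 1235.0
```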

More details
Publication date: 04-07-2013

Touch and social cues as inputs into a computer

Number: US20130169682A1
Assignee: Individual

A system for automatically displaying virtual objects within a mixed reality environment is described. In some embodiments, a see-through head-mounted display device (HMD) identifies a real object (e.g., a person or book) within a field of view of the HMD, detects one or more interactions associated with the real object, and automatically displays virtual objects associated with the real object if the one or more interactions involve touching or satisfy one or more social rules stored in a social rules database. The one or more social rules may be used to infer a particular social relationship by considering the distance to another person, the type of environment (e.g., at home or work), and particular physical interactions (e.g., handshakes or hugs). The virtual objects displayed on the HMD may depend on the particular social relationship inferred (e.g., a friend or acquaintance).
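The social-rules inference, deriving a relationship from distance, environment type, and physical interaction, and then choosing which virtual objects to surface, can be sketched as a small rule table. All thresholds, labels, and object names below are invented for illustration:

```python
def infer_relationship(distance_m, environment, interaction):
    """Toy social-rules lookup: infer a relationship from proximity,
    setting, and physical interaction. Thresholds are invented."""
    if interaction == "hug" or (environment == "home" and distance_m < 1.0):
        return "friend"
    if interaction == "handshake" and environment == "work":
        return "colleague"
    if distance_m < 2.0:
        return "acquaintance"
    return "stranger"

# Which overlays each inferred relationship unlocks (illustrative).
VISIBLE_OBJECTS = {
    "friend": ["shared_photos", "status"],
    "colleague": ["business_card"],
    "acquaintance": ["public_profile"],
    "stranger": [],
}

rel = infer_relationship(0.8, "work", "handshake")   # 'colleague'
shown = VISIBLE_OBJECTS[rel]                         # ['business_card']
```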

More details
Publication date: 04-07-2013

IMPLICIT SHARING AND PRIVACY CONTROL THROUGH PHYSICAL BEHAVIORS USING SENSOR-RICH DEVICES

Number: US20130174213A1
Assignee:

A system for automatically sharing virtual objects between different mixed reality environments is described. In some embodiments, a see-through head-mounted display device (HMD) automatically determines a privacy setting associated with another HMD by inferring a particular social relationship with a person associated with the other HMD (e.g., inferring that the person is a friend or acquaintance). The particular social relationship may be inferred by considering the distance to the person associated with the other HMD, the type of environment (e.g., at home or work), and particular physical interactions involving the person (e.g., handshakes or hugs). The HMD may subsequently transmit one or more virtual objects associated with the privacy setting to the other HMD. The HMD may also receive and display one or more other virtual objects from the other HMD based on the privacy setting.
1. A method for automatically sharing virtual objects, comprising: generating one or more virtual objects associated with a first mixed reality environment, the first mixed reality environment is associated with a first computing device; detecting a second computing device; identifying a first person associated with the second computing device; automatically determining a privacy setting associated with the second computing device, the automatically determining a privacy setting includes inferring a particular social relationship with the first person; determining whether to receive one or more other virtual objects from the second computing device based on the privacy setting; receiving the one or more other virtual objects from the second computing device; and displaying one or more virtual images associated with the one or more virtual objects and at least a subset of the one or more other virtual objects.
2. The method of claim 1, wherein: each of the one or more virtual objects is associated with one or more output privacy settings.
3. The method of claim 2, further comprising: ...
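Given an inferred relationship, the sharing step reduces to filtering virtual objects by their output privacy settings (claim 2). A toy sketch, where the privacy labels, their ranking, and the object names are all assumptions:

```python
# Illustrative privacy ranks: higher means more restricted.
PRIVACY_RANK = {"public": 0, "acquaintance": 1, "friend": 2, "private": 3}

def shareable(objects, inferred_relation):
    """Return the virtual objects the other HMD may receive, given the
    social relationship inferred for its wearer. Unknown relationships
    fall back to public-only sharing."""
    allowed = {"friend": 2, "acquaintance": 1}.get(inferred_relation, 0)
    return [name for name, setting in objects
            if PRIVACY_RANK[setting] <= allowed]

objs = [("weather_widget", "public"),
        ("family_album", "friend"),
        ("diary", "private")]
sent = shareable(objs, "friend")     # ['weather_widget', 'family_album']
to_stranger = shareable(objs, "stranger")   # ['weather_widget']
```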

More details
Publication date: 11-07-2013

GENERATING METADATA FOR USER EXPERIENCES

Number: US20130177296A1
Assignee:

A system and method for efficiently managing life experiences captured by one or more sensors (e.g., video or still camera, image sensors including RGB sensors and depth sensors). A "life recorder" is a recording device that continuously captures life experiences, including unanticipated life experiences, in image, video, and/or audio recordings. In some embodiments, video and/or audio recordings captured by a life recorder are automatically analyzed, tagged with a set of one or more metadata, indexed, and stored for future use. By tagging and indexing life recordings, a life recorder may search for and acquire life recordings generated by itself or another life recorder, thereby allowing life experiences to be shared minutes or even years later.
1. A method for managing data captured by a recording device, comprising: acquiring a recording of user experiences captured throughout one or more days by the recording device; generating context information, the context information including information associated with a user of the recording device, the context information including information associated with the recording device, the context information generated by one or more sensors; identifying a particular situation from the recording; detecting a tag event, the step of detecting includes automatically determining whether one or more rules associated with the recording device are satisfied by the context information and the particular situation, said one or more rules are configured for determining when to generate a set of one or more metadata tags for the recording; and automatically generating a set of one or more metadata tags for the recording responsive to the step of detecting, each of the one or more metadata tags including one or more keywords that describe the recording related to a location associated with the recording device, a timestamp associated with the recording, an event associated with the user, and/or a situation associated with the recording, the set ...
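The tag-event step, checking whether rules are satisfied by the context information and then emitting keyword tags, can be sketched as a tiny rule engine. The context fields and the rules themselves are invented examples:

```python
def generate_tags(context, rules):
    """Tag-event sketch: each rule is (predicate, tags). When the
    recording's context satisfies a predicate, that rule's keyword tags
    are emitted; duplicates are collapsed."""
    tags = []
    for predicate, rule_tags in rules:
        if predicate(context):
            tags.extend(rule_tags)
    return sorted(set(tags))

rules = [
    (lambda c: c["location"] == "beach",  ["beach", "outdoors"]),
    (lambda c: c["faces_detected"] >= 2,  ["group"]),
    (lambda c: c["hour"] >= 18,           ["evening"]),
]
ctx = {"location": "beach", "faces_detected": 3, "hour": 14}
tags = generate_tags(ctx, rules)   # ['beach', 'group', 'outdoors']
```

Indexing the recording under these keywords is what later makes it searchable by itself or another life recorder.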

More details
Publication date: 18-07-2013

STYLUS COMPUTING ENVIRONMENT

Number: US20130181953A1
Assignee: MICROSOFT CORPORATION

A stylus computing environment is described. In one or more implementations, one or more inputs are detected using one or more sensors of a stylus. A user that has grasped the stylus, using fingers of the user's hand, is identified from the received one or more inputs. One or more actions are performed based on the identification of the user that was performed using the one or more inputs received from the one or more sensors of the stylus.
1. A method implemented by one or more modules at least partially in hardware, the method comprising: receiving one or more inputs detected using one or more sensors of a stylus; identifying a user that has grasped the stylus, using fingers of the user's hand, from the received one or more inputs; and performing one or more actions based on the identification of the user that was performed using the one or more inputs received from the one or more sensors of the stylus.
2. A method as described in claim 1, wherein the receiving, the identifying, and the performing are performed by the one or more modules as part of a computing device that is communicatively coupled to the stylus.
3. A method as described in claim 1, wherein the receiving, the identifying, and the performing are performed by the one or more modules disposed within a housing of the stylus.
4. A method as described in claim 1, wherein the receiving includes detecting one or more biometric characteristics of the user using the sensors of the stylus.
5. A method as described in claim 1, wherein the receiving includes detecting handwriting of the user of the stylus using the one or more sensors.
6. A method as described in claim 5, wherein the detecting is performed by a computing device that is communicatively coupled to the stylus and upon which the handwriting is received through movement of the stylus.
7. A method as described in claim 1, wherein the receiving includes detecting one or more orientations of the stylus using the one or ...
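Identifying who grasped the stylus from sensor inputs is, in the simplest reading, a nearest-match problem over enrolled grip profiles. An illustrative sketch, where the sensor layout (three pressure pads) and the distance threshold are assumptions:

```python
import math

def identify_user(grip_sample, profiles, threshold=1.0):
    """Match a grip reading (e.g. normalized pressure at three sensor
    pads) against enrolled per-user profiles by Euclidean distance.
    Returns None when nobody is close enough to claim the match."""
    best_user, best_dist = None, threshold
    for user, profile in profiles.items():
        dist = math.dist(grip_sample, profile)
        if dist < best_dist:
            best_user, best_dist = user, dist
    return best_user

profiles = {"alice": (0.9, 0.4, 0.1), "bob": (0.2, 0.8, 0.6)}
user = identify_user((0.85, 0.45, 0.15), profiles)   # 'alice'
```

Once the user is known, the stylus (or a coupled computing device, per claims 2 and 3) can pick per-user actions such as loading that user's settings.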

More details
Publication date: 25-07-2013

RECOGNITION OF IMAGE ON EXTERNAL DISPLAY

Number: US20130187835A1
Assignee:

Embodiments are disclosed that relate to the recognition via a see-through display system of an object displayed on an external display device at which a user of the see-through display system is gazing. For example, one embodiment provides a method of operating a see-through display system comprising acquiring an image of an external display screen located in the background scene via an outward facing image sensor, determining via a gaze detection subsystem a location on the external display screen at which the user is gazing, obtaining an identity of an object displayed on the external display screen at the location determined, and performing an action based upon the identity of the object.
1. A method of operating a see-through display system, the see-through display system comprising a see-through display screen, a gaze detection subsystem configured to determine a direction of gaze of each eye of the user, and an outward facing image sensor configured to acquire images of a background scene relative to a user of the see-through display system, the method comprising: acquiring an image of an external display screen located in the background scene via the outward facing image sensor; determining via the gaze detection subsystem a location on the external display screen at which the user is gazing; obtaining an identity of an object displayed on the external display screen at the location determined; and performing an action based upon the identity of the object.
2. The method of claim 1, wherein obtaining the identity of the object comprises sending image information regarding the object to a remote computing device, and receiving the identity from the remote computing device.
3. The method of claim 2, wherein the remote computing device is not in control of the external display screen.
4. The method of claim 3, wherein performing an action comprises displaying contextual information related to the object on the see-through display.
5. The method of claim ...
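Determining the on-screen location the user is gazing at combines a gaze ray with the known geometry of the external display. A simplified sketch assuming an axis-aligned screen plane at a fixed z, which is a modelling convenience, not the patent's actual geometry:

```python
def gaze_to_screen_px(eye, gaze_dir, screen_origin, screen_size_m, screen_res):
    """Intersect a gaze ray with an external display modelled as the
    z = screen_origin[2] plane, then map the hit point to pixels.
    Returns None when the gaze misses the screen entirely."""
    t = (screen_origin[2] - eye[2]) / gaze_dir[2]
    if t <= 0:
        return None                      # gazing away from the screen plane
    hx = eye[0] + t * gaze_dir[0] - screen_origin[0]
    hy = eye[1] + t * gaze_dir[1] - screen_origin[1]
    if not (0 <= hx <= screen_size_m[0] and 0 <= hy <= screen_size_m[1]):
        return None                      # hit point is off the screen
    px = int(hx / screen_size_m[0] * screen_res[0])
    py = int(hy / screen_size_m[1] * screen_res[1])
    return px, py

# Eye 2 m from a 1 m x 0.6 m, 1920x1080 screen, gazing at its centre.
hit = gaze_to_screen_px((0.5, 0.3, 0.0), (0.0, 0.0, 1.0),
                        (0.0, 0.0, 2.0), (1.0, 0.6), (1920, 1080))
```

The resulting pixel location is what gets sent (with the captured image) to identify the object shown there.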

More details
Publication date: 25-07-2013

WEARABLE DISPLAY DEVICE CALIBRATION

Number: US20130187943A1
Assignee:

In embodiments of wearable display device calibration, a first display lens system forms an image of an environment viewed through the first display lens system. A second display lens system also forms the image of the environment viewed through the second display lens system. The first display lens system emits a first reference beam and the second display lens system emits a second reference beam. The first display lens system then captures a reflection image of the first and second reference beams. The second display lens system also captures a reflection image of the first and second reference beams. An imaging application is implemented to compare the reflection images to determine a misalignment between the first and second display lens systems, and then apply an alignment adjustment to align the image of the environment formed by each of the first and second display lens systems.
1. A system, comprising: a first display lens system configured to form an image of an environment viewed through the first display lens system, the first display lens system further configured to emit a first reference beam and capture a first reflection image of the first reference beam; a second display lens system configured to form the image of the environment viewed through the second display lens system, the second display lens system further configured to emit a second reference beam and capture a second reflection image of the second reference beam; and an imaging application configured to: compare the first and second reflection images to determine a misalignment between the first and second display lens systems; and apply an alignment adjustment to align the image of the environment formed by each of the first and second display lens systems.
2. A system as recited in claim 1, wherein: the first and second reflection images include both the reflection of the first and second reference beams; and the imaging application is configured to compare the reflections of both the first and ...
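Comparing the two reflection images to recover a misalignment can be illustrated by locating the reference-beam spot in each image and taking the offset between them. This is a deliberately simplified stand-in for the claimed imaging application; the threshold and tiny images are invented:

```python
def bright_centroid(img):
    """Centroid of above-threshold pixels in a grayscale image (rows of
    floats 0..1); stands in for locating a reference-beam spot."""
    pts = [(x, y) for y, row in enumerate(img)
                  for x, v in enumerate(row) if v > 0.5]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def misalignment(left_img, right_img):
    """Offset between the beam spot seen by each lens system; applying
    the negated offset to one display is the alignment-adjustment step."""
    lx, ly = bright_centroid(left_img)
    rx, ry = bright_centroid(right_img)
    return rx - lx, ry - ly

left  = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]   # spot at (1, 0)
right = [[0, 0, 0], [0, 0, 1], [0, 0, 0]]   # spot at (2, 1)
dx, dy = misalignment(left, right)           # (1.0, 1.0)
```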

More details
Publication date: 01-08-2013

Executable virtual objects associated with real objects

Number: US20130194164A1
Assignee: Individual

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.

More details
Publication date: 01-08-2013

Virtual environment generating system

Number: US20130194259A1
Assignee: Individual

A system and related methods for visually augmenting an appearance of a physical environment as seen by a user through a head-mounted display device are provided. In one embodiment, a virtual environment generating program receives eye-tracking information, lighting information, and depth information from the head-mounted display. The program generates a virtual environment that models the physical environment and is based on the lighting information and the distance of a real-world object from the head-mounted display. The program visually augments a virtual object representation in the virtual environment based on the eye-tracking information, and renders the virtual object representation on a transparent display of the head-mounted display device.

More details
Publication date: 01-08-2013

Coordinate-system sharing for augmented reality

Number: US20130194304A1
Assignee: Individual

A method for presenting real and virtual images correctly positioned with respect to each other. The method includes, in a first field of view, receiving a first real image of an object and displaying a first virtual image. The method also includes, in a second field of view oriented independently relative to the first field of view, receiving a second real image of the object and displaying a second virtual image, the first and second virtual images positioned coincidently within a coordinate system.
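Positioning the first and second virtual images coincidently within one coordinate system means each display renders the same world-space anchor through its own, independently oriented viewer transform. A 2-D, yaw-only sketch (the geometry is simplified for illustration):

```python
import math

def world_to_viewer(point, viewer_pos, viewer_yaw):
    """Express a shared-world point in one viewer's frame (2-D ground
    plane, yaw-only rotation). Both displays rendering the same world
    point through their own transform keeps the two virtual images
    coincident in the shared coordinate system."""
    dx = point[0] - viewer_pos[0]
    dy = point[1] - viewer_pos[1]
    c, s = math.cos(-viewer_yaw), math.sin(-viewer_yaw)
    return (c * dx - s * dy, s * dx + c * dy)

anchor = (2.0, 0.0)   # shared world position of the virtual image
# Viewer A at the origin facing +x; viewer B elsewhere, rotated 90 degrees.
a = world_to_viewer(anchor, (0.0, 0.0), 0.0)            # (2.0, 0.0)
b = world_to_viewer(anchor, (2.0, 2.0), -math.pi / 2)   # also 2 m straight ahead
```

Both viewers see the anchor 2 m straight ahead in their own frames, even though their fields of view are oriented independently.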

More details
Publication date: 01-08-2013

HEAD-MOUNTED DISPLAY DEVICE TO MEASURE ATTENTIVENESS

Number: US20130194389A1
Assignee:

A method for assessing attentiveness to visual stimuli received through a head-mounted display device. The method employs first and second detectors arranged in the head-mounted display device. An ocular state of the wearer of the head-mounted display device is detected with the first detector while the wearer is receiving a visual stimulus. With the second detector, the visual stimulus received by the wearer is detected. The ocular state is then correlated to the wearer's attentiveness to the visual stimulus.
1. A method for assessing attentiveness to visual stimuli, comprising: with a first detector arranged in a head-mounted display device, detecting an ocular state of the wearer of the head-mounted display device while the wearer is receiving a visual stimulus; with a second detector arranged in the head-mounted display device, detecting the visual stimulus; and correlating the ocular state to the wearer's attentiveness to the visual stimulus.
2. The method of further comprising reporting the wearer's attentiveness to the stimulus.
3. The method of wherein detecting the ocular state includes imaging the wearer's eye 240 or more times per second.
4. The method of wherein the visual stimulus includes real imagery in the wearer's field of view.
5. The method of wherein the visual stimulus includes virtual imagery added to the wearer's field of view via the head-mounted display device.
6. The method of wherein detecting the visual stimulus includes depth sensing.
7. The method of claim 1, wherein the visual stimulus includes imagery mapped to a model accessible by the head-mounted display device, and wherein detecting the visual stimulus includes: locating the wearer's line of sight within that model; and subscribing to the model to identify the imagery that the wearer is sighting.
8. The method of wherein the wearer's line of sight is located within the model based partly on positional data from one or more sensors arranged within the head-mounted display device.
9. The ...

More details
Publication date: 01-08-2013

MULTIPLAYER GAMING WITH HEAD-MOUNTED DISPLAY

Number: US20130196757A1
Assignee: MICROSOFT CORPORATION

A system and related methods for inviting a potential player to participate in a multiplayer game via a user head-mounted display device are provided. In one example, a potential player invitation program receives user voice data and determines that the user voice data is an invitation to participate in a multiplayer game. The program receives eye-tracking information, depth information, facial recognition information, potential player head-mounted display device information, and/or potential player voice data. The program associates the invitation with the potential player using the eye-tracking information, the depth information, the facial recognition information, the potential player head-mounted display device information, and/or the potential player voice data. The program matches a potential player account with the potential player. The program receives an acceptance response from the potential player, and joins the potential player account with a user account in participating in the multiplayer game.
1. A method for inviting a potential player to participate in a multiplayer game with a user, the multiplayer game displayed by a display of a user head-mounted display device, comprising: receiving user voice data from the user; determining that the user voice data is an invitation to participate in the multiplayer game; receiving eye-tracking information, depth information, facial recognition information, potential player head-mounted display device information, and/or potential player voice data; associating the invitation with the potential player using the eye-tracking information, the depth information, the facial recognition information, the potential player head-mounted display device information, and/or the potential player voice data; matching a potential player account with the potential player; receiving an acceptance response from the potential player; and joining the potential player account with a user account associated with the user in participating ...

More details
Publication date: 01-08-2013

MATCHING PHYSICAL LOCATIONS FOR SHARED VIRTUAL EXPERIENCE

Number: US20130196772A1
Assignee:

Embodiments for matching participants in a virtual multiplayer entertainment experience are provided. For example, one embodiment provides a method including receiving from each user of a plurality of users a request to join the virtual multiplayer entertainment experience, receiving from each user of the plurality of users information regarding characteristics of a physical space in which each user is located, and matching two or more users of the plurality of users for participation in the virtual multiplayer entertainment experience based on the characteristics of the physical space of each of the two or more users.
1. A method for matching participants in a virtual multiplayer entertainment experience, the method comprising: receiving from each user of a plurality of users a request to join the virtual multiplayer entertainment experience; receiving from each user of the plurality of users information regarding characteristics of a physical space in which each user is located; and matching two or more users of the plurality of users for participation in the virtual multiplayer entertainment experience based on the characteristics of the physical space of each of the two or more users.
2. The method of claim 1, further comprising sending additional instructions to at least one of the two or more users to alter one or more characteristics of that user's physical space.
3. The method of claim 2, wherein sending additional instructions to at least one of the two or more users to alter one or more characteristics of that user's physical space further comprises sending instructions to at least one of the two or more users to move to a different physical space.
4. The method of claim 1, wherein each of the plurality of users is located in different physical spaces, and wherein matching two or more users of the plurality of users further comprises matching two or more users based on a degree of similarity of characteristics of the physical spaces of the users.
5. ...
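The matching by degree of similarity of space characteristics (claim 4) can be sketched as a weighted similarity score over reported room properties. The fields, weights, and threshold below are all invented:

```python
def space_similarity(a, b):
    """Score in 0..1 comparing two users' room characteristics:
    relative floor-area difference plus a matching obstacle count."""
    area = 1 - abs(a["area_m2"] - b["area_m2"]) / max(a["area_m2"], b["area_m2"])
    obstacles = 1.0 if a["obstacles"] == b["obstacles"] else 0.5
    return 0.7 * area + 0.3 * obstacles

def best_match(user, candidates, min_score=0.8):
    """Pair the user with the candidate whose space is most similar;
    returns None when nobody clears the threshold."""
    scored = [(space_similarity(user, c), name) for name, c in candidates.items()]
    score, name = max(scored)
    return name if score >= min_score else None

me = {"area_m2": 20.0, "obstacles": 2}
others = {"p1": {"area_m2": 18.0, "obstacles": 2},
          "p2": {"area_m2": 40.0, "obstacles": 0}}
partner = best_match(me, others)   # 'p1'
```

A user with no close match could then receive the claim 2 style instruction to alter the space or move to a different one.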

More details
Publication date: 08-08-2013

PRESENTATION TECHNIQUES

Number: US20130201095A1
Assignee: MICROSOFT CORPORATION

Techniques involving presentations are described. In one or more implementations, a user interface is output by a computing device that includes a slide of a presentation, the slide having an object that is output for display in three dimensions. Responsive to receipt of one or more inputs by the computing device, how the object in the slide is output for display in the three dimensions is altered.
1. A method comprising: outputting a user interface by a computing device that includes a slide of a presentation, the slide having an object that is configured for output in three dimensions; and responsive to receipt of one or more inputs by the computing device, altering how the object in the slide is output for display in the three dimensions.
2. A method as described in claim 1, wherein the object is output for display as a three-dimensional object in the three dimensions or output for display as a two-dimensional perspective of the three dimensions.
3. A method as described in claim 1, wherein the one or more inputs are received by the computing device from a controller that supports user interaction.
4. A method as described in claim 3, wherein the altering includes display of one or more indications in the user interface as part of the presentation, the one or more indications describing which gestures were identified from the one or more inputs to perform the altering.
5. A method as described in claim 3, wherein the controller is configured as a mobile communications device having a touchscreen and one or more sensors configured to detect movement in three dimensions, the one or more inputs provided by the one or more sensors that describe movement in the three dimensions.
6. A method as described in claim 5, wherein the one or more sensors are configured to detect movement in at least one of the three dimensions using pressure.
7. A method as described in claim 3, wherein the controller leverages one or more cameras such that a user is permitted ...

More details
Publication date: 08-08-2013

INTEGRATED INTERACTIVE SPACE

Number: US20130201276A1
Assignee: MICROSOFT CORPORATION

Techniques for implementing an integrated interactive space are described. In implementations, video cameras that are positioned to capture video at different locations are synchronized such that aspects of the different locations can be used to generate an integrated interactive space. The integrated interactive space can enable users at the different locations to interact, such as via video interaction, audio interaction, and so on. In at least some embodiments, techniques can be implemented to adjust an image of a participant during a video session such that the participant appears to maintain eye contact with other video session participants at other locations. Techniques can also be implemented to provide a virtual shared space that can enable users to interact with the space, and can also enable users to interact with one another and/or objects that are displayed in the virtual shared space.
1. A computer-implemented method, comprising: synchronizing a first camera at a first location and a second camera at a second location into a common reference system; generating an integrated interactive space using video data from the first camera and the second camera and based on the common reference system; and presenting at least a portion of the integrated interactive space for display at one or more of the first location or the second location.
2. A method as described in claim 1, wherein the common reference system comprises a three-dimensional coordinate system in which images from the first location and the second location can be positioned.
3. A method as described in claim 1, wherein said synchronizing comprises: capturing, using the first camera and the second camera, images of fiducial markers placed at the first location and the second location; and determining a position and orientation of the first camera and a position and orientation of the second camera by comparing attributes of the images of fiducial markers to known attributes of the fiducial markers; ...
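The fiducial-based synchronization of claim 3, recovering each camera's pose from observed markers and then expressing imagery in a common reference system, can be illustrated in a translation-only toy version. Orientation is assumed to be already known (identity), which the real method would also estimate from the marker images:

```python
def camera_position(marker_world, marker_in_camera):
    """With orientation assumed identity for brevity, a camera's position
    in the common reference system is the known world position of a
    fiducial marker minus where the camera observes it."""
    return tuple(w - c for w, c in zip(marker_world, marker_in_camera))

def to_common_frame(point_in_camera, cam_pos):
    """Re-express a point seen by one camera in the shared coordinate
    system, so imagery from both locations can be positioned together."""
    return tuple(p + o for p, o in zip(point_in_camera, cam_pos))

marker = (1.0, 0.0, 3.0)                           # surveyed marker position
cam_a = camera_position(marker, (0.0, 0.0, 2.0))   # camera A at (1, 0, 1)
cam_b = camera_position(marker, (1.0, 0.0, 1.0))   # camera B at (0, 0, 2)
# A point 1 m in front of camera A, expressed in the common frame:
p = to_common_frame((0.0, 0.0, 1.0), cam_a)        # (1.0, 0.0, 2.0)
```

With both poses known, video from either location can be placed consistently inside the integrated interactive space.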

More details
Publication date: 15-08-2013

DISPLAY WITH BLOCKING IMAGE GENERATION

Number: US20130208014A1
Assignee:

A blocking image generating system including a head-mounted display device having an opacity layer and related methods are disclosed. A method may include receiving a virtual image to be presented by display optics in the head-mounted display device. Lighting information and an eye-position parameter may be received from an optical sensor system in the head-mounted display device. A blocking image may be generated in the opacity layer of the head-mounted display device based on the lighting information and the virtual image. The location of the blocking image in the opacity layer may be adjusted based on the eye-position parameter.
1. A method for adjusting a location of a blocking image in an opacity layer, the opacity layer located in a head-mounted display device worn by a user, the head-mounted display device including display optics positioned between the opacity layer and an eye of the user, the blocking image preventing a portion of real-world light from reaching the eye of the user, comprising: receiving a virtual image to be presented by the display optics; receiving lighting information from an optical sensor system in the head-mounted display device; receiving an eye-position parameter; generating the blocking image in the opacity layer based on the lighting information and the virtual image; and adjusting the location of the blocking image in the opacity layer based on the eye-position parameter.
2. The method of claim 1, wherein the eye-position parameter comprises an estimated interpupillary distance and/or an estimated line of sight of the user.
3. The method of claim 1, wherein the eye-position parameter comprises a measured interpupillary distance and/or a measured line of sight of the user.
4. The method of claim 1, wherein the eye-position parameter comprises a position of the eye of the user within an eyebox formed by the display optics.
5. The method of claim 1, wherein adjusting the location of the blocking image in the opacity layer further ...
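Adjusting the blocking image's location from an eye-position parameter is, at heart, a parallax correction between the eye, the display optics, and the opacity layer behind them. A similar-triangles sketch, where the distances and the sign convention are illustrative rather than the patent's:

```python
def blocking_image_shift(eye_offset_mm, eye_to_optics_mm, optics_to_opacity_mm):
    """Parallax sketch: when the eye sits off-centre in the eyebox by
    eye_offset_mm, a blocking image in the opacity layer (which lies
    beyond the display optics) must shift the opposite way by similar
    triangles so it still occludes the same real-world direction."""
    scale = optics_to_opacity_mm / eye_to_optics_mm
    return -eye_offset_mm * scale

# Eye 2 mm off-centre, 18 mm from the optics, opacity layer 9 mm beyond:
shift = blocking_image_shift(2.0, 18.0, 9.0)   # -1.0 mm, opposite the eye
```

The eye-position parameter from claims 2 to 4 (line of sight, interpupillary distance, or eyebox position) would feed the `eye_offset_mm` term here.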

More
22-08-2013 publication date

THREE-DIMENSIONAL PRINTING

Number: US20130215454A1
Assignee: MICROSOFT CORPORATION

Three-dimensional printing techniques are described. In one or more implementations, a system includes a three-dimensional printer and a computing device. The three-dimensional printer has a three-dimensional printing mechanism that is configured to form a physical object in three dimensions. The computing device is communicatively coupled to the three-dimensional printer and includes a three-dimensional printing module implemented at least partially in hardware to cause the three-dimensional printer to form the physical object in three dimensions as having functionality configured to communicate with a computing device. 1. A system comprising:a three-dimensional printer having a three-dimensional printing mechanism that is configured to form a physical object in three dimensions; anda computing device communicatively coupled to the three-dimensional printer, the computing device including a three-dimensional printing module implemented at least partially in hardware to cause the three-dimensional printer to form the physical object in three dimensions as having functionality configured to communicate with a computing device.2. A system as described in claim 1 , wherein the three-dimensional printing mechanism is configured to place preconfigured components within the object as part of forming the object.3. A system as described in claim 2 , wherein the preconfigured component is a processing system and the three-dimensional printing module is configured to program the processing system to perform one or more operations.4. A system as described in claim 3 , wherein the processing system of the object is configured to communicate a result of performance of the one or more operations to the computing device for further processing by the computing device.5. A system as described in claim 4 , wherein the processing system is programming to process signals received from one or more other preconfigured components of the object that are configured as sensors.6. 
A system as ...

More
31-10-2013 publication date

DISPLAYING A COLLISION BETWEEN REAL AND VIRTUAL OBJECTS

Number: US20130286004A1
Assignee:

Technology is described for displaying a collision between objects by an augmented reality display device system. A collision between a real object and a virtual object is identified based on three dimensional space position data of the objects. At least one effect on at least one physical property of the real object is determined based on physical properties of the real object, like a change in surface shape, and physical interaction characteristics of the collision. Simulation image data is generated and displayed simulating the effect on the real object by the augmented reality display. Virtual objects under control of different executing applications can also interact with one another in collisions. 1. A method for displaying a collision between a real object and a virtual object by an augmented reality display device system comprising: identifying a collision between a real object and a virtual object in a display field of view of an augmented reality display based on a respective three dimensional (3D) space position associated with each object in the display field of view; determining at least one effect on at least one physical property of the real object due to the collision based on one or more physical properties of the real object and physical interaction characteristics for the collision; generating image data of the real object simulating the at least one effect on the at least one physical property of the real object; and displaying the image data of the real object registered to the real object. 2. The method of claim 1, the physical interaction characteristics including a velocity of at least one of the real object and the virtual object in the display field of view. 3. The method of further comprising: determining at least one effect on at least one physical property of the virtual object due to the collision based on its physical properties and the physical interaction characteristics for the collision; modifying image data of the virtual object for ...
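The identification step above — detecting a collision from the objects' 3D space positions — can be sketched with bounding spheres, using the penetration depth as a crude proxy for the change in surface shape. The bounding-sphere model and all names are illustrative assumptions, not the patented method.

```python
import math

# Hypothetical sketch: identify a collision between a real and a virtual
# object from their 3D space positions using bounding spheres. The
# penetration depth can then drive a simulated deformation of the real
# object's surface in the generated image data.

def identify_collision(pos_a, radius_a, pos_b, radius_b):
    """Return (collided, penetration_depth) for two bounding spheres."""
    d = math.dist(pos_a, pos_b)
    penetration = max(0.0, radius_a + radius_b - d)
    return penetration > 0.0, penetration
```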

More
31-10-2013 publication date

PROXIMITY AND CONNECTION BASED PHOTO SHARING

Number: US20130286223A1
Assignee: MICROSOFT CORPORATION

Photos are shared among devices that are in close proximity to one another and for which there is a connection among the devices. The photos can be shared automatically, or alternatively based on various user inputs. Various different controls can also be placed on sharing photos to restrict the other devices with which photos can be shared, the manner in which photos can be shared, and/or how the photos are shared. 1. One or more computer-readable storage media having stored thereon multiple instructions that , when executed by one or more processors of a device , cause the one or more processors to:receive a photo captured at the device;determine one or more other devices in close proximity to the device;determine a connection between the device and at least one of the one or more other devices; andautomatically share the photo with the at least one of the one or more other devices.2. One or more computer-readable storage media as recited in claim 1 , the connection comprising claim 1 , for each of the one or more other devices claim 1 , a user of the other device being included in a social network of a user of the device.3. One or more computer-readable storage media as recited in claim 1 , the multiple instructions further causing the one or more processors to:receive, from one of the one or more other devices, an indication that a user of the other device has rejected the photo; andshare, in response to the indication, the photo with no other of the one or more other devices.4. One or more computer-readable storage media as recited in claim 1 , the multiple instructions further causing the one or more processors to associate one or more controls with the photo claim 1 , the one or more controls restricting how the photo is shared.5. One or more computer-readable storage media as recited in claim 4 , the controls indicating properties and/or securities that the device is to have in order for the photo to be shared with the device.6. 
One or more computer-readable ...
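The sharing rule described above — share only with devices that are both in close proximity and connected (e.g. the other device's user is in the sharer's social network), and stop sharing with devices whose users rejected the photo — can be sketched as a simple filter. The data shapes and names here are illustrative assumptions.

```python
# Hypothetical sketch: pick the devices eligible to receive a photo.
# nearby_owners maps each in-proximity device id to its owner; the
# connection is modeled as membership in the sharer's social network.

def eligible_recipients(nearby_owners, social_network, rejected=()):
    """Return sorted device ids that are nearby, connected, and not rejected."""
    return sorted(
        device for device, owner in nearby_owners.items()
        if owner in social_network and device not in rejected
    )
```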

More
07-11-2013 publication date

COLLABORATION ENVIRONMENT USING SEE THROUGH DISPLAYS

Number: US20130293468A1
Assignee:

A see-through, near-eye, mixed reality display device and system for collaboration amongst various users of other such devices and personal audio/visual devices of more limited capabilities. One or more wearers of a see through head mounted display apparatus define a collaboration environment. For the collaboration environment, a selection of collaboration data and the scope of the environment are determined. Virtual representations of the collaboration data are rendered in the field of view of the wearer and other device users. Persons in the wearer's field of view to be included in the collaboration environment, and who are entitled to share information in the collaboration environment, are defined by the wearer. If allowed, input from other users in the collaboration environment on the virtual object may be received and allowed to manipulate a change in the virtual object. 1. A method for presenting a collaboration experience using a see through head mounted display apparatus, comprising: determining a three dimensional location of the apparatus, the apparatus includes one or more sensors and a see-through display; determining an orientation of the apparatus; determining a gaze of a wearer looking through the see-through display of the apparatus; determining a three dimensional location of one or more users in the field of view of the wearer through the see-through display, the determining of the three dimensional location of the one or more users is performed using the one or more sensors; receiving a selection of collaboration data and a selection of a collaboration environment within the field of view from the wearer; rendering virtual representations of the collaboration data in the field of view; determining persons in the wearer's field of view to be included in the collaboration environment and who are entitled to share information in the collaboration environment; outputting shared collaboration data in the form of virtual objects to users in the collaboration environment
...

More
07-11-2013 publication date

PRODUCT AUGMENTATION AND ADVERTISING IN SEE THROUGH DISPLAYS

Number: US20130293530A1
Assignee:

An augmented reality system that provides augmented product and environment information to a wearer of a see through head mounted display. The augmentation information may include advertising, inventory, pricing and other information about products a wearer may be interested in. Interest is determined from wearer actions and a wearer profile. The information may be used to incentivize purchases of real world products by a wearer, or allow the wearer to make better purchasing decisions. The augmentation information may enhance a wearer's shopping experience by allowing the wearer easy access to important product information while the wearer is shopping in a retail establishment. Through virtual rendering, a wearer may be provided with feedback on how an item would appear in a wearer environment, such as the wearer's home. 1. A method providing augmentation information to a wearer for a product in the field of view of a wearer , comprising:receiving input data from a wearer of a see through head mounted display device;determining a gaze direction in a field of view of the wearer from the input data;determining a location of the wearer;retrieving personal information of the wearer;identifying real world objects in the field of view of a wearer in the see through head mounted display device;retrieving augmentation data for the real world objects and matching objects in the field of view of the wearer to the augmentation data provided by a third party data source;presenting the augmentation information to a wearer associated with the identified products in the field of view.2. The method of wherein the augmentation information is advertising presented to the wearer as visual information in the field of view or as audible information.3. The method of wherein the augmentation information is targeted to the wearer based on the personal information of the wearer.4. The method of wherein the augmentation information is rendered to a wearer when the wearer is gazing at the ...

More
07-11-2013 publication date

INTELLIGENT TRANSLATIONS IN PERSONAL SEE THROUGH DISPLAY

Number: US20130293577A1
Assignee:

A see-through, near-eye, mixed reality display apparatus for providing translations of real world data for a user. A wearer's location and orientation with the apparatus is determined and input data for translation is selected using sensors of the apparatus. Input data can be audio or visual in nature, and selected by reference to the gaze of a wearer. The input data is translated for the user relative to user profile information bearing on accuracy of a translation and determining from the input data whether a linguistic translation, knowledge addition translation or context translation is useful. 1. A method for presenting a translation of a real world expression to a wearer of a see through head mounted display apparatus , comprising:determining a gaze of a wearer looking through the see-through display of the apparatus;determining a three dimensional location of one or more objects in the field of view of the user through the see-through display, the determining of the three dimensional location of the object is performed using the one or more sensors;receiving a selection of data for translation in the field of view of the wearer by reference to the gaze of the wearer at one of the objects;analyzing the data for translation to provide input data; translating the input data into a translated form for the user; andrendering the translation in an audio or visual format in the see through head mounted display.2. The method of further including accessing a user profile for user information bearing on accuracy of the translation and wherein translating the data comprises evaluating input data for translation against the user information.3. The method of wherein the step of translating comprises converting the input data from a first language to a second language.4. The method of wherein the step of translating comprises providing supplemental knowledge for the input data on a subject matter identified in the input data.5. The method of wherein the supplemental ...

More
05-12-2013 publication date

NAVIGATING CONTENT IN AN HMD USING A PHYSICAL OBJECT

Number: US20130321255A1
Assignee:

Technology is disclosed herein to help a user navigate through large amounts of content while wearing a see-through, near-eye, mixed reality display device such as a head mounted display (HMD). The user can use a physical object such as a book to navigate through content being presented in the HMD. In one embodiment, a book has markers on the pages that allow the system to organize the content. The book could have real content, but it could be blank other than the markers. As the user flips through the book, the system recognizes the markers and presents content associated with the respective marker in the HMD. 1. A method for navigating content , comprising:receiving input that specifies what content is to be navigated by a user wearing a see-through, near-eye, mixed reality display;identifying markers in a physical object using a camera as the user manipulates the physical object;determining what portions of the content are associated with the identified markers; andpresenting images representing the portions of the content in the see-through, near-eye, mixed reality display device.2. The method of claim 1 , further comprising:presenting a navigation aid to the user in the see-through, near-eye, mixed reality display as the user manipulates the physical object.3. The method of claim 2 , wherein the physical object includes a sequence of ordered pages claim 2 , the markers are on the pages claim 2 , the presenting a navigation aid to the user in the see-through claim 2 , near-eye claim 2 , mixed reality display device as the user manipulates the physical object includes:presenting a table of contents in the see-through, near-eye, mixed reality display device, the table of contents defines at which page of the ordered sequence of pages various portions of the content can be accessed.4. 
The method of claim 2 , wherein the physical object includes a sequence of ordered pages claim 2 , the markers are on the pages claim 2 , the presenting a navigation aid to the user ...
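The core lookup the abstract describes — recognize markers on the pages being flipped, then present the content portions associated with them — can be sketched as an ordered index lookup. The dictionary-based content index is an illustrative assumption.

```python
# Hypothetical sketch: map the page markers recognized by the camera to
# portions of the content selected for navigation, preserving the order
# in which pages were seen and skipping unknown markers.

def portions_for_markers(identified_markers, content_index):
    """content_index: {marker_id: content_portion}."""
    return [content_index[m] for m in identified_markers if m in content_index]
```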

More
05-12-2013 publication date

AUGMENTED BOOKS IN A MIXED REALITY ENVIRONMENT

Number: US20130321390A1
Assignee:

A system and method are disclosed for augmenting a reading experience in a mixed reality environment. In response to predefined verbal or physical gestures, the mixed reality system is able to answer a user's questions or provide additional information relating to what the user is reading. Responses may be displayed to the user on virtual display slates in a border or around the reading material without obscuring text or interfering with the user's reading experience. 1. A system for presenting a mixed reality experience to one or more users , the system comprising:a display device for a user of the one or more users, the display device including a display unit for displaying a virtual image to the user of the display device; anda computing system operatively coupled to the display device, the computing system generating the virtual image for display on the display device, the virtual image added in relation to reading material the user is reading or an image the user is viewing.2. The system of claim 1 , the computing system comprises at least one of a hub computing system and one or more processing units.3. The system of claim 1 , the virtual image added in relation to reading material including a response to a query from the user relating to the reading material.4. The system of claim 3 , wherein the response is one of text claim 3 , an image and a video.5. The system of claim 1 , the virtual image added in relation to reading material including an annotation with user-defined content.6. The system of claim 5 , wherein the annotation is one of text claim 5 , an image claim 5 , a video claim 5 , a data file claim 5 , and audio file and an executable software application file.7. The system of claim 1 , wherein the reading material or image is one of a tangible reading material or image claim 1 , an electronic reading material or image claim 1 , or a virtual reading material or image.8. 
A method of presenting a mixed reality experience to a user viewing a reading ...

More
05-12-2013 publication date

POSITION RELATIVE HOLOGRAM INTERACTIONS

Number: US20130326364A1
Assignee:

A system and method are disclosed for positioning and sizing virtual objects in a mixed reality environment in a way that is optimal and most comfortable for a user to interact with the virtual objects. 1. A system for presenting a mixed reality experience to one or more users , the system comprising:one or more display devices for the one or more users, each display device including a display unit for displaying a virtual image to the user of the display device; anda computing system operatively coupled to the one or more display devices, the computing system generating the virtual image for display on the one or more display devices, the computing system displaying the virtual image to a user of the one or more users at positions where the virtual object remains accessible to the user for interaction with the virtual object by the user as the user's head position changes.2. The system of claim 1 , the computing system comprises at least one of a hub computing system and one or more processing units.3. The system of claim 1 , the computing system displays the virtual object at a fixed distance from the user within the user's field of view as the user's head position changes.4. The system of claim 1 , the computing system displays the virtual object at a fixed rotational orientation with respect to the user within the user's field of view as the user's head position changes.5. The system of claim 1 , the virtual object is displayed at a fixed rotational orientation with respect to the user's face.6. The system of claim 1 , the virtual object is displayed at a fixed rotational orientation with respect to the user's eyes.7. The system of claim 1 , wherein the virtual object remains accessible to the user upon the user selecting the virtual object for interaction with the virtual object.8. The system of claim 7 , wherein the virtual object is selected by the user performing a gesture with the user's hands claim 7 , body or eyes.9. 
The system of claim 1 , wherein the ...

More
12-12-2013 publication date

MULTIPLE SENSOR GESTURE RECOGNITION

Number: US20130328763A1
Assignee:

Methods for recognizing gestures using adaptive multi-sensor gesture recognition are described. In some embodiments, a gesture recognition system receives a plurality of sensor inputs from a plurality of sensor devices and a plurality of confidence thresholds associated with the plurality of sensor inputs. A confidence threshold specifies a minimum confidence value for which it is deemed that a particular gesture has occurred. Upon detection of a compensating event, such as excessive motion involving one of the plurality of sensor devices, the gesture recognition system may modify the plurality of confidence thresholds based on the compensating event. Subsequently, the gesture recognition system generates a multi-sensor confidence value based on whether at least a subset of the plurality of confidence thresholds has been satisfied. The gesture recognition system may also modify the plurality of confidence thresholds based on the plugging and unplugging of sensor inputs from the gesture recognition system. 1. A method for recognizing a particular gesture, comprising: receiving a plurality of sensor inputs; acquiring one or more multi-sensor gesture filters associated with the particular gesture, the one or more multi-sensor gesture filters include a particular filter; detecting that a new sensor input is available for recognizing the particular gesture; adding the new sensor input to the plurality of sensor inputs; updating the one or more multi-sensor gesture filters in response to the new sensor input; generating a plurality of single-sensor gesture recognition results based on the plurality of sensor inputs, the generating a plurality of single-sensor gesture recognition results is performed subsequent to the adding the new sensor input to the plurality of sensor inputs; determining that the particular filter is satisfied based on the plurality of single-sensor gesture recognition results; and executing a command on a computing system in response to the determining that ...
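The threshold logic the abstract describes — a compensating event such as excessive motion raises the affected sensor's confidence threshold, and a fused multi-sensor confidence value reflects how many thresholds are satisfied — can be sketched as below. The 1.5x penalty factor and the fraction-satisfied fusion rule are illustrative assumptions, not the patented formula.

```python
# Hypothetical sketch: fuse per-sensor gesture confidences against
# per-sensor thresholds, penalizing sensors affected by a compensating
# event (e.g. excessive motion) by raising their thresholds.

def multi_sensor_confidence(confidences, thresholds, excessive_motion=()):
    """confidences/thresholds: {sensor: value}; returns fraction satisfied."""
    adjusted = {s: t * 1.5 if s in excessive_motion else t
                for s, t in thresholds.items()}
    satisfied = [s for s, c in confidences.items() if c >= adjusted[s]]
    return len(satisfied) / len(thresholds)
```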

More
12-12-2013 publication date

OBJECT FOCUS IN A MIXED REALITY ENVIRONMENT

Number: US20130328925A1
Assignee:

A system and method are disclosed for interpreting user focus on virtual objects in a mixed reality environment. Using inference, express gestures and heuristic rules, the present system determines which of the virtual objects the user is likely focused on and interacting with. At that point, the present system may emphasize the selected virtual object over other virtual objects, and interact with the selected virtual object in a variety of ways. 1. A system for presenting a mixed reality experience to one or more users , the system comprising:a display device for a user, the display device including a display unit for displaying one or more virtual objects to the user of the display device; anda computing system operatively coupled to the display device, the computing system generating the one or more virtual objects for display on the display device, the computing system determining selection of a virtual object from the one or more virtual objects by inferring interaction of the user with the virtual object based on at least one of determining a position of the user's head with respect to the virtual object, determining a position of the user's eyes with respect to the virtual image, determining a position of the user's hand with respect to the virtual object, and determining a movement of the user's hand with respect to the virtual object.2. The system of claim 1 , the computing system comprises at least one of a hub computing system and one or more processing units.3. The system of claim 1 , the computing system determining whether the user's hand is pointing at the virtual object in basing the inference on a position of the user's hand with respect to the virtual object.4. The system of claim 1 , the computing system using a vector straight out from the user's face in basing the inference on determining a position of the user's head with respect to the virtual object.5. The system of claim 1 , the computing system giving greater weight to a position of the ...
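The inference step above — combining head position, eye position, and hand position cues, with some cues weighted more heavily — can be sketched as a weighted score over candidate virtual objects. The inverse-distance scoring and the weights are illustrative assumptions, not the heuristic rules claimed.

```python
import math

# Hypothetical sketch: infer which virtual object the user is focused on
# by scoring each object against several positional cues (e.g. eye gaze
# point, hand position), with per-cue weights.

def infer_focused_object(objects, cues, weights):
    """objects/cues: {name: (x, y, z)}; returns the best-scoring object name."""
    def score(pos):
        return sum(weights.get(cue, 1.0) / (1.0 + math.dist(pos, cue_pos))
                   for cue, cue_pos in cues.items())
    return max(objects, key=lambda name: score(objects[name]))
```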

More
12-12-2013 publication date

AUGMENTED REALITY PLAYSPACES WITH ADAPTIVE GAME RULES

Number: US20130328927A1
Assignee:

A system for generating a virtual gaming environment based on features identified within a real-world environment, and adapting the virtual gaming environment over time as the features identified within the real-world environment change is described. Utilizing the technology described, a person wearing a head-mounted display device (HMD) may walk around a real-world environment and play a virtual game that is adapted to that real-world environment. For example, the HMD may identify environmental features within a real-world environment such as five grassy areas and two cars, and then spawn virtual monsters based on the location and type of the environmental features identified. The location and type of the environmental features identified may vary depending on the particular real-world environment in which the HMD exists and therefore each virtual game may look different depending on the particular real-world environment. 1. A method for generating an augmented reality environment , comprising:determining one or more environmental requirements associated with a particular computing application;generating one or more virtual objects associated with the particular computing application;identifying one or more environmental features within a first real-world environment;determining if the one or more environmental requirements are not satisfied based on the one or more environmental features;adjusting the one or more virtual objects such that a particular degree of difficulty of the particular computing application is achieved in response to the determining if the one or more environmental requirements are not satisfied; anddisplaying on a mobile device one or more images associated with the one or more virtual objects, the one or more images are displayed such that the one or more virtual objects are perceived to exist within the first real-world environment.2. 
The method of claim 1 , wherein:the one or more virtual objects include stationary virtual obstacles and ...
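The adjustment step above — scaling the virtual objects when the identified environmental features fall short of the game's requirements, so a particular degree of difficulty is preserved — can be sketched as a proportional rule. The shortfall-based scaling is an illustrative assumption, not the claimed adjustment.

```python
# Hypothetical sketch: when the real-world environment lacks required
# features (e.g. fewer grassy areas or cars than the game expects),
# reduce the number of spawned virtual objects proportionally so the
# difficulty stays roughly constant.

def adjust_spawn_count(required, found, base_count):
    """required/found: {feature: count}; returns an adjusted spawn count."""
    shortfall = sum(max(0, n - found.get(f, 0)) for f, n in required.items())
    total = sum(required.values())
    if shortfall == 0:
        return base_count
    return max(1, round(base_count * (1 - shortfall / total)))
```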

More
19-12-2013 publication date

DEPTH OF FIELD CONTROL FOR SEE-THRU DISPLAY

Number: US20130335404A1
Assignee:

One embodiment provides a method for controlling a virtual depth of field perceived by a wearer of a see-thru display device. The method includes estimating the ocular depth of field of the wearer and projecting virtual imagery with a specified amount of blur. The amount of blur is determined as a function of the ocular depth of field. Another embodiment provides a method for controlling an ocular depth of field of a wearer of a see-thru display device. This method includes computing a target value for the depth of field and increasing the pixel brightness of the virtual imagery presented to the wearer. The increase in pixel brightness contracts the wearer's pupils and thereby deepens the depth of field to the target value. 1. A method for controlling an ocular depth of field of a wearer of a see-thru display device , the method comprising:computing a target value for the depth of field; andincreasing a pixel brightness of virtual imagery presented to the wearer to contract the wearer's pupils and thereby deepen the depth of field to the target value.2. The method of further comprising decreasing a transmittance of the see-thru display device to real imagery presented to the wearer to dilate the wearer's pupils and thereby contract the depth of field to the target value.3. The method of wherein the pixel brightness is increased and the transmittance decreased by such amounts as to maintain a desired brightness ratio between the real and virtual imagery presented to the wearer.4. The method of further comprising estimating the depth of field claim 1 , wherein the pixel brightness is increased in a closed-loop manner to bring the estimated depth of field to the desired value.5. The method of further comprising locating a focal plane and/or focal point of the wearer claim 1 , wherein the pixel brightness is increased predominately at the focal plane and/or focal point to deepen the depth of field with reduced power consumption in the see-thru display device.6. 
A method ...
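The closed-loop control the claims describe — increase pixel brightness, which contracts the wearer's pupils and deepens the estimated depth of field, until the target value is reached — can be sketched as a simple feedback loop. `estimate_dof` stands in for whatever sensor-based estimator the device provides; the step size and cap are illustrative assumptions.

```python
# Hypothetical sketch: closed-loop brightness control. Brighter pixels
# contract the pupils, which deepens the wearer's ocular depth of field;
# the loop raises brightness until the estimate reaches the target.

def brightness_for_dof(estimate_dof, target_dof, brightness,
                       step=0.05, max_brightness=1.0):
    """estimate_dof(brightness) -> estimated depth of field for that level."""
    while estimate_dof(brightness) < target_dof and brightness < max_brightness:
        brightness = min(max_brightness, brightness + step)
    return brightness
```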

More
19-12-2013 publication date

VIRTUAL OBJECT GENERATION WITHIN A VIRTUAL ENVIRONMENT

Number: US20130335405A1
Assignee:

A system and method are disclosed for building and experiencing three-dimensional virtual objects from within a virtual environment in which they will be viewed upon completion. A virtual object may be created, edited and animated using a natural user interface while the object is displayed to the user in a three-dimensional virtual environment. 1. A system for presenting a virtual environment to one or more users , the virtual environment being coextensive with a real-world space , the system comprising:a display device for a user, the display device including a display unit for displaying one or more virtual objects in the virtual environment to the user of the display device; anda computing system operatively coupled to the display device, the computing system generating the one or more virtual objects in the virtual environment based on input from the user, the one or more virtual objects displayed via the display device as the one or more virtual objects are generated in the virtual environment.2. The system of claim 1 , wherein the computing system generates a virtual object by creating the virtual object in the virtual environment in response to gestures from the user indicating the type of virtual object to be created in the virtual environment.3. The system of claim 1 , wherein the computing system generates a virtual object by creating the virtual object in the virtual environment in response to gestures from the user indicating at least one of a position of the virtual object in the virtual environment and a size of the object in the virtual environment.4. 
The system of claim 3 , wherein the computing system receives gestures indicating at least one of the position and size of the object within the virtual environment by the user performing at least one of the following gestures: i) pulling up the virtual object from a floor of the virtual environment at the position and to the size desired by the user; ii) a throwing motion claim 3 , a trajectory of an ...

More
19-12-2013 publication date

COLOR VISION DEFICIT CORRECTION

Number: US20130335435A1
Assignee:

Embodiments related to improving a color-resolving ability of a user of a see-thru display device are disclosed. For example, one disclosed embodiment includes, on a see-thru display device, constructing and displaying virtual imagery to superpose onto real imagery sighted by the user through the see-thru display device. The virtual imagery is configured to accentuate a locus of the real imagery of a color poorly distinguishable by the user. Such virtual imagery is then displayed by superposing it onto the real imagery, in registry with the real imagery, in a field of view of the user. 1. In a see-thru display device , a method to improve a color-resolving ability of a user of the see-thru display device based upon a color vision deficiency of the user , the method comprising:constructing virtual imagery to superpose onto real imagery viewable through the see-thru display device, the virtual imagery configured to accentuate a locus of the real imagery of a color poorly distinguishable based upon the color vision deficiency; anddisplaying the virtual imagery such that the virtual imagery is superposed onto the real imagery, in spatial registry with the real imagery, in a field of view of the see-thru display device.2. The method of wherein the virtual imagery is configured to shift the color of the locus.3. The method of wherein the virtual imagery is configured to increase a brightness of the locus.4. The method of wherein the virtual imagery is configured to delineate a perimeter of the locus.5. The method of wherein the virtual imagery is configured to overwrite the locus with text.6. The method of wherein the virtual imagery is configured to overwrite the locus with one or more of a symbol and a segmentation pattern.7. The method of wherein the virtual imagery is configured to write one or more of text and a symbol adjacent the locus.8. The method of further comprising acquiring an image of the real imagery with a front-facing camera of the see-thru display ...
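One of the accentuation strategies the claims list is shifting the color of the poorly distinguishable locus. A minimal sketch, assuming per-pixel RGB input and a hue-rotation rule of my own choosing (the claims do not specify the transform):

```python
import colorsys

# Hypothetical sketch: accentuate pixels whose hue is close to a hue the
# user distinguishes poorly by rotating that hue. The circular hue-distance
# test and the shift amount are illustrative assumptions.

def accentuate_color(pixels, target_hue, tol, hue_shift):
    """pixels: list of (r, g, b) floats in [0, 1]; hues in [0, 1)."""
    out = []
    for r, g, b in pixels:
        h, l, s = colorsys.rgb_to_hls(r, g, b)
        # circular distance between hues on the [0, 1) hue wheel
        if min(abs(h - target_hue), 1.0 - abs(h - target_hue)) <= tol:
            h = (h + hue_shift) % 1.0
        out.append(colorsys.hls_to_rgb(h, l, s))
    return out
```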

Подробнее
19-12-2013 дата публикации

LOCAL RENDERING OF TEXT IN IMAGE

Number: US20130335442A1
Assignee:

Various embodiments are disclosed that relate to enhancing the display of images comprising text on various computing device displays. For example, one disclosed embodiment provides, on a computing device, a method of displaying an image, the method including receiving from a remote computing device image data representing a non-text portion of the image, receiving from the remote computing device unrendered text data representing a text portion of the image, rendering the unrendered text data based upon local contextual rendering information to form locally rendered text data, compositing the locally rendered text data and the image data to form a composited image, and providing the composited image to a display. 1. On a computing device , a method of displaying an image , the method comprising:receiving from a remote computing device image data representing a non-text portion of the image;receiving from the remote computing device unrendered text data representing a text portion of the image;rendering the unrendered text data based upon local contextual rendering information to form locally rendered text data;compositing the locally rendered text data and the image data to form a composited image; andproviding the composited image to a display.2. The method of claim 1 , wherein the local contextual rendering information comprises information regarding a capability of one or more of the computing device and the display.3. The method of claim 2 , wherein the local contextual rendering information comprises information regarding one or more of a color space and a display technology utilized by the display.4. The method of claim 1 , wherein the local contextual rendering information comprises information regarding a time-dependent context of one or more of the computing device and the display claim 1 , and also comprises a rule set to be applied to the time-dependent context.5. The method of claim 4 , wherein the information regarding the time-dependent context ...
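The split-rendering flow this abstract describes can be sketched roughly as follows; the `TextRun` structure, the `low_contrast` capability flag, and the character-grid "image" are invented stand-ins for real raster data and display capabilities, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class TextRun:
    text: str
    x: int   # column where the run starts
    y: int   # row where the run is drawn

def render_text_locally(runs, display_caps):
    # Local contextual rendering: e.g. upper-case text on low-contrast displays.
    transform = str.upper if display_caps.get("low_contrast") else str
    return [TextRun(transform(r.text), r.x, r.y) for r in runs]

def composite(background_rows, rendered_runs):
    # Overwrite background characters with the locally rendered glyphs.
    rows = [list(row) for row in background_rows]
    for run in rendered_runs:
        for i, ch in enumerate(run.text):
            rows[run.y][run.x + i] = ch
    return ["".join(row) for row in rows]
```

The point of the split is that the server never commits to a rendering: the same `TextRun` data can be rasterized differently on each client according to its own capabilities.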

More details
19-12-2013 publication date

ENHANCING CAPTURED DATA

Number: US20130335594A1
Assignee: MICROSOFT CORPORATION

Captured data is obtained, including various types of captured or recorded data (e.g., image data, audio data, video data, etc.) and/or metadata describing various aspects of the capture device and/or the manner in which the data is captured. One or more elements of the captured data that can be replaced by one or more substitute elements are determined, the replaceable elements are removed from the captured data, and links to the substitute elements are associated with the captured data. Links to additional elements to enhance the captured data are also associated with the captured data. Enhanced content can subsequently be constructed based on the captured data as well as the links to the substitute elements and additional elements. 1. A method comprising:obtaining captured data regarding an environment;determining, based at least in part on the captured data, one or more additional elements;adding, as associated with the captured data, one or more links to the one or more additional elements; andenabling enhanced content to be constructed using the one or more additional elements and at least part of the captured data.2. A method as recited in claim 1 , further comprising:determining one or more elements of the captured data that can be replaced by one or more substitute elements;removing the one or more elements from the captured data; andadding, as associated with the captured data, links to the one or more substitute elements.3. A method as recited in claim 1 , the captured data comprising an image.4. A method as recited in claim 3 , the one or more additional elements including audio data regarding the environment.5. A method as recited in claim 1 , the captured data comprising audio data.6. A method as recited in claim 5 , the one or more additional elements including image data regarding the environment.7. 
A method as recited in claim 1 , the captured data comprising metadata describing a geographic location of a device when the captured data was captured ...

More details
26-12-2013 publication date

LOW LIGHT SCENE AUGMENTATION

Number: US20130342568A1
Assignee:

Embodiments related to providing low light scene augmentation are disclosed. One embodiment provides, on a computing device comprising a see-through display device, a method including recognizing, from image data received from an image sensor, a background scene of an environment viewable through the see-through display device, the environment comprising a physical object. The method further includes identifying one or more geometrical features of the physical object and displaying, on the see-through display device, an image augmenting the one or more geometrical features. 1. On a computing device comprising a see-through display device , a method comprising:recognizing, from image data received from an image sensor, a background scene of an environment viewable through the see-through display device, the environment comprising a physical object;identifying one or more geometrical features of the physical object; anddisplaying, on the see-through display device, an image augmenting the one or more geometrical features.2. The method of claim 1 , wherein recognizing the background scene comprises:receiving image data from the image sensor,detecting one or more feature points in the environment from the image data, andobtaining information regarding a layout of the environment based upon the one or more feature points;wherein the one or more geometrical features are identified from the information regarding the layout.3. The method of claim 2 , further comprising determining a location of the see-through display device within the environment via the feature points.4. The method of claim 2 , wherein obtaining information regarding a layout of the environment comprises obtaining a surface map of the environment.5.
The method of claim 2 , wherein identifying the one or more geometrical features comprises identifying claim 2 , from the information regarding the layout of the environment and for each geometrical feature claim 2 , one or more of a discontinuity associated ...

More details
26-12-2013 publication date

Control of displayed content in virtual environments

Number: US20130342572A1
Assignee: Individual

A system and method are disclosed for controlling content displayed to a user in a virtual environment. The virtual environment may include virtual controls with which a user may interact using predefined gestures. Interacting with a virtual control may adjust an aspect of the displayed content, including for example one or more of fast forwarding of the content, rewinding of the content, pausing of the content, stopping the content, changing a volume of content, recording the content, changing a brightness of the content, changing a contrast of the content and changing the content from a first still image to a second still image.
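As a rough illustration of the idea (not the patent's implementation), a table of virtual controls can map predefined gestures to adjustments of the displayed content's state; the control names and state fields here are assumptions.

```python
class PlaybackState:
    def __init__(self):
        self.playing = True
        self.volume = 5
        self.position = 0.0   # seconds into the content

def interact(state, control):
    # Each virtual control adjusts one aspect of the displayed content.
    actions = {
        "pause":       lambda s: setattr(s, "playing", False),
        "play":        lambda s: setattr(s, "playing", True),
        "volume_up":   lambda s: setattr(s, "volume", min(10, s.volume + 1)),
        "volume_down": lambda s: setattr(s, "volume", max(0, s.volume - 1)),
        "rewind":      lambda s: setattr(s, "position", max(0.0, s.position - 10.0)),
    }
    actions[control](state)
    return state
```

A gesture recognizer would sit in front of this table, resolving which virtual control the user's predefined gesture was directed at before calling `interact`.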

More details
13-02-2014 publication date

AUGMENTED REALITY DISPLAY OF SCENE BEHIND SURFACE

Number: US20140043433A1
Assignee:

Embodiments are disclosed that relate to augmenting an appearance of a surface via a see-through display device. For example, one disclosed embodiment provides, on a computing device comprising a see-through display device, a method of augmenting an appearance of a surface. The method includes acquiring, via an outward-facing image sensor, image data of a first scene viewable through the display. The method further includes recognizing a surface viewable through the display based on the image data and, in response to recognizing the surface, acquiring a representation of a second scene comprising one or more of a scene located physically behind the surface viewable through the display and a scene located behind a surface contextually related to the surface viewable through the display. The method further includes displaying the representation via the see-through display. 1. On a computing device comprising an outward-facing image sensor , a method comprising:acquiring, via the outward-facing image sensor, image data of a first scene;recognizing a surface based on the image data;in response to recognizing the surface, acquiring a representation of a second scene comprising one or more of a scene located physically behind the surface and a scene located behind a surface contextually related to the surface; anddisplaying the representation via a display device.2. The method of claim 1 , wherein recognizing the surface comprises identifying a location of the computing device based on one or more of location data from a location sensor and the image data from the outward-facing image sensor claim 1 , and recognizing the surface based upon the location of the computing device.3. The method of claim 1 , wherein recognizing the surface comprises recognizing whether the surface is a movable surface or an unmovable surface claim 1 , and displaying the representation only if the surface is a movable surface.4.
The method of claim 1 , wherein the second scene is located behind the ...

More details
13-02-2014 publication date

OBJECT TRACKING

Number: US20140044305A1
Assignee:

Embodiments are disclosed herein that relate to the automatic tracking of objects. For example, one disclosed embodiment provides a method of operating a mobile computing device having an image sensor. The method includes acquiring image data, identifying an inanimate moveable object in the image data, determining whether the inanimate moveable object is a tracked object, and, if the inanimate moveable object is a tracked object, then storing information regarding a state of the inanimate moveable object, detecting a trigger to provide a notification of the state of the inanimate moveable object, and providing an output of the notification of the state of the inanimate moveable object. 1. A method of operating a mobile computing device , the computing device comprising an image sensor , the method comprising:acquiring image data;identifying an inanimate moveable object in the image data;determining whether the inanimate moveable object is a tracked object;if the inanimate moveable object is a tracked object, then storing information regarding a state of the inanimate moveable object;detecting a trigger to provide a notification of the state of the inanimate moveable object; andproviding an output of the notification of the state of the inanimate moveable object.2. The method of claim 1 , further comprising determining whether the inanimate moveable object is a tracked object before storing information regarding the state of the inanimate moveable object.3. The method of claim 2 , further comprising receiving a user input assigning the inanimate moveable object as a tracked object and/or assigning a user-selected importance score to the inanimate moveable object.4. The method of claim 2 , further comprising assigning the inanimate moveable object an importance score based upon user interactions with the inanimate moveable object claim 2 , and designating the inanimate moveable object as a tracked object based upon the importance score meeting a threshold importance score.5.
The method ...
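The acquire/identify/store/notify loop in the abstract might be sketched like this; the registry of tracked objects, the state store, and the query-style trigger are illustrative assumptions.

```python
tracked_objects = {"keys"}   # objects the user has designated as tracked
last_seen = {}               # object id -> last known state (here, a location)

def observe(obj_id, location):
    # Store state only for objects designated as tracked.
    if obj_id in tracked_objects:
        last_seen[obj_id] = location

def notify(obj_id):
    # Trigger: answer a "where is X?" query from the stored state.
    if obj_id in last_seen:
        return f"{obj_id} last seen at {last_seen[obj_id]}"
    return f"{obj_id} is not being tracked"
```

In the claimed method the trigger need not be an explicit query; it could equally be the user leaving the location without the tracked object.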

More details
20-02-2014 publication date

MIXED REALITY HOLOGRAPHIC OBJECT DEVELOPMENT

Number: US20140049559A1
Assignee:

Systems and related methods for presenting a holographic object that self-adapts to a mixed reality environment are provided. In one example, a holographic object presentation program captures physical environment data from a destination physical environment and creates a model of the environment including physical objects having associated properties. The program identifies a holographic object for display on a display of a display device, the holographic object including one or more rules linking a detected environmental condition and/or properties of the physical objects with a display mode of the holographic object. The program applies the one or more rules to select the display mode for the holographic object based on the detected environmental condition and/or the properties of the physical objects. 1. A self-adapting holographic object presentation system for presenting a holographic object that self-adapts to a mixed reality environment including a destination physical environment and a virtual environment , the self-adapting holographic object presentation system comprising:a display device including an associated processor and memory; capture physical environment data from the destination physical environment using one or more sensors;', 'create a model of the destination physical environment based on the captured physical environment data, the model including identified physical objects in the destination physical environment having associated physical object properties; and', 'identify a holographic object for display on the display device, wherein the holographic object includes one or more rules linking a detected environmental condition and/or the physical object properties of the identified physical objects with a display mode of the holographic object., 'a holographic object presentation program executed by the processor using portions of the memory, the holographic object presentation program configured to2. The self-adapting holographic object ...

More details
08-01-2015 publication date

Gesture recognizer system architecture

Number: US20150009135A1
Assignee: Microsoft Technology Licensing LLC

Systems, methods and computer readable media are disclosed for a gesture recognizer system architecture. A recognizer engine is provided, which receives user motion data and provides that data to a plurality of filters. A filter corresponds to a gesture, that may then be tuned by an application receiving information from the gesture recognizer so that the specific parameters of the gesture—such as an arm acceleration for a throwing gesture—may be set on a per-application level, or multiple times within a single application. Each filter may output to an application using it a confidence level that the corresponding gesture occurred, as well as further details about the user motion data.
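A minimal sketch of this architecture, with invented names and thresholds: an engine forwards user motion data to per-gesture filters whose parameters an application can tune, and each filter reports a confidence level that its gesture occurred.

```python
class ThrowFilter:
    def __init__(self, min_speed=3.0):
        self.min_speed = min_speed   # app-tunable parameter, e.g. arm speed

    def evaluate(self, motion):
        # Confidence grows with hand speed up to the tuned threshold.
        speed = motion.get("hand_speed", 0.0)
        return min(1.0, speed / self.min_speed)

class RecognizerEngine:
    def __init__(self):
        self.filters = {}

    def register(self, name, gesture_filter):
        self.filters[name] = gesture_filter

    def process(self, motion):
        # Forward the motion data to every filter; return per-gesture confidence.
        return {name: f.evaluate(motion) for name, f in self.filters.items()}
```

Because the filter object holds the parameters, one application can register the same gesture twice with different tunings, matching the per-application (or per-context) tuning the abstract describes.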

More details
11-01-2018 publication date

MIXED REALITY INTERACTIONS

Number: US20180012412A1
Assignee: Microsoft Technology Licensing, LLC

Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. A selected interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display. 1. A mixed reality interaction system comprising:a head-mounted display device including a display system, and a camera; and identify a physical object in a mixed reality environment based on an image captured by the camera;', 'determine an interaction context for the identified physical object based on one or more aspects of the mixed reality environment;', 'programmatically select an interaction mode for the identified physical object based on the interaction context and a stored profile for the physical object;', 'interpret a user input directed at the physical object to correspond to a virtual action based on the selected interaction mode;', 'execute the virtual action to modify an appearance of a virtual object associated with the physical object; and', 'display the virtual object via the head-mounted display device with the modified appearance., 'a processor configured to2. The mixed reality interaction system of claim 1 , wherein the one or more aspects of the mixed reality environment include temporal data.3.
The mixed reality interaction system of claim 1 , wherein ...
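One way to picture the programmatic mode selection, under assumed profile contents and an invented temporal rule (the claims only say the context may include temporal data; the hour-of-day logic here is illustrative):

```python
# Hypothetical stored profiles: each physical object lists the interaction
# modes it supports plus a default.
PROFILES = {
    "desk_clock": {"modes": ["alarm_set", "world_time"], "default": "world_time"},
}

def select_mode(object_id, hour):
    profile = PROFILES[object_id]
    # Interaction context from temporal data: evenings favour alarm setting.
    if hour >= 20 and "alarm_set" in profile["modes"]:
        return "alarm_set"
    return profile["default"]
```

The selected mode then governs how a subsequent user input at the object is interpreted as a virtual action.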

More details
19-02-2015 publication date

EXERCISING APPLICATIONS FOR PERSONAL AUDIO/VISUAL SYSTEM

Number: US20150049114A1
Assignee:

The technology described herein includes a see-through, near-eye, mixed reality display device for providing customized experiences for a user. The personal A/V apparatus serves as an exercise program that is always with the user, provides motivation for the user, visually tells the user how to exercise, and lets the user exercise with other people who are not present. 1. A method for presenting a personalized experience using a personal see-through A/V apparatus , comprising:accessing a first exercise routine for a first person;accessing data for a second person for a second exercise routine different from the first exercise routine;estimating a performance of how the second person would perform the first exercise routine based on the data; andpresenting a virtual image of someone performing the first exercise routine based on the estimated performance so that the first person can see the virtual image inserted into a real scene viewed through the personal see-through A/V apparatus as the first person performs the first exercise routine.2. The method of claim 1 , wherein:the estimated performance is based on past performance of the second exercise routine.3. The method of claim 1 , wherein:the estimated performance is based on a live performance of the second exercise routine.4. The method of claim 1 , wherein the presenting a virtual image of someone performing the first exercise routine based on the estimated performance includes:presenting an avatar of the second person that integrates the second person into an environment of the first person.5. The method of claim 4 , wherein the second person is exercising at a remote location from the first person.6. The method of claim 1 , wherein the accessing data for the second person for the second exercise routine different from the first exercise routine includes:accessing real time exercise data for the second person at a location that is remote from the personal see-through A/V apparatus.7. 
The method of claim 6 , ...

More details
08-05-2014 publication date

CROSS-PLATFORM AUGMENTED REALITY EXPERIENCE

Number: US20140128161A1
Assignee:

A plurality of game sessions are hosted at a server system. A first computing device of a first user is joined to a first multiplayer gaming session, the first computing device including a see-through display. Augmentation information is sent to the first computing device for the first multiplayer gaming session to provide an augmented reality experience to the first user. A second computing device of a second user is joined to the first multiplayer gaming session. Experience information is sent to the second computing device for the first multiplayer gaming session to provide a cross-platform representation of the augmented reality experience to the second user. 1. A method for hosting a plurality of game sessions at a server system , the method comprising:joining a first computing device of a first user to a first multiplayer gaming session, the first computing device including a see-through display;sending augmentation information to the first computing device for the first multiplayer gaming session to provide an augmented reality experience to the first user;joining a second computing device of a second user to the first multiplayer gaming session; andsending experience information to the second computing device for the first multiplayer gaming session to provide a cross-platform representation of the augmented reality experience to the second user.2. The method of claim 1 , wherein the cross-platform representation of the augmented reality experience is configured for visual presentation via a display device connected to the second computing device.3. The method of claim 1 , wherein the cross-platform representation of the augmented reality experience is presented to the second user in a first-person mode.4. The method of claim 1 , wherein the cross-platform representation of the augmented reality experience is presented to the second user in a third-person mode.5. The method of claim 1 , wherein the experience information includes aspects of a physical ...

More details
09-03-2017 publication date

CHAINING ANIMATIONS

Number: US20170069125A1
Assignee:

In applications that display a representation of a user, it may be reasonable to insert a pre-canned animation rather than animating a user's captured motion. For example, in a tennis swing, the ball toss and take back in a serve could be a pre-canned animation, whereas the actual forward swing may be mapped from the user's gestures. An animation of a user's gestures can be chained together into sequences with pre-canned animations, where animation blending techniques can provide for a smoother transition between the animation types. Techniques for blending animations, that may comprise determining boundaries and transition points between pre-canned animations and animations based on captured motion, may improve animation efficiency. Gesture history, including joint position, velocity, and acceleration, can be used to determine user intent, seed parameters for subsequent animations and game control, and determine the subsequent gestures to initiate. 1. A method for chaining animations , the method comprising:receiving image data that is representative of captured motion;selecting a pre-canned animation; at least a first portion of the captured motion is represented by the pre-canned animation that replaces the first portion of the captured motion; and', 'at least a second portion of the captured motion is represented by an animation that corresponds to the captured motion; and, 'based at least in part on a transition point, generating a chained animation whereinrendering the chained animation, wherein the chained animation comprises a blending of the first and second portions.2. The method of claim 1 , wherein a parameter of the transition point is set based at least in part on a gesture difficulty.3. The method of claim 2 , wherein the rendering the chained animation is triggered in response to determining that the at least one parameter is satisfied.4. The method of claim 1 , wherein selecting the pre-canned animation comprises selecting a pre-canned animation ...
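The transition-point blending could look roughly like the following, with single joint angles standing in for full poses and a fixed linear cross-fade window; the window size and blend curve are assumptions, not the patent's.

```python
def blend_chain(precanned, captured, transition, window=2):
    # precanned/captured: per-frame joint values; transition: frame index at
    # which control passes from the pre-canned animation to captured motion.
    frames = []
    for t in range(len(precanned)):
        if t < transition - window:
            frames.append(precanned[t])            # pure pre-canned segment
        elif t < transition:
            # Linear cross-fade inside the blend window before the boundary.
            w = (t - (transition - window)) / window
            frames.append((1 - w) * precanned[t] + w * captured[t])
        else:
            frames.append(captured[t])             # motion-mapped segment
    return frames
```

In the tennis-serve example, `precanned` would cover the ball toss and take-back while `captured` supplies the forward swing mapped from the user's gestures.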

More details
09-03-2017 publication date

INDICATING OUT-OF-VIEW AUGMENTED REALITY IMAGES

Number: US20170069143A1
Assignee: Microsoft Technology Licensing, LLC

Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object. 1. An augmented reality computing device comprising:a display system;a logic device; anda storage device comprising instructions executable by the logic device to identify a virtual object located outside a field of view available for a display of augmented reality information, the virtual object being world-locked and related to a real-world object,display, within the field of view adjacent to a periphery of the field of view, a first marker providing an indication of positional information associated with the virtual object, the first marker comprising a first set of information associated with the real-world object,detect a change in position of the augmented reality computing device that brings the virtual object into the field of view, anddisplay a second marker within the field of view comprising a second set of information displayed regarding the real-world object.2. The augmented reality computing device of claim 1 , wherein the instructions are executable to display the first marker based upon a query provided by user input to the computing device.3. The augmented reality computing device of claim 1 , wherein the instructions are further executable to display the first marker based upon a location of the computing device and a location of the real-world object.4. The augmented reality computing device of claim 1 , wherein the instructions are further executable to display the first marker with an appearance that varies based upon a quantity of objects associated with the first marker.5.
The augmented reality computing device of claim 1 , wherein the ...
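A toy version of the peripheral-marker geometry, under the simplifying assumption of a single horizontal view angle: an in-view object gets the full marker at its own position, while an out-of-view object gets a reduced marker clamped to the nearest edge of the field of view.

```python
def marker_for(object_angle_deg, fov_deg=60.0):
    # object_angle_deg: object bearing relative to the view direction.
    half = fov_deg / 2.0
    if abs(object_angle_deg) <= half:
        # In view: show the full-information marker at the object itself.
        return ("full", object_angle_deg)
    # Out of view: pin a reduced marker to the periphery on the object's side,
    # indicating which way the user should turn to bring it into view.
    edge = half if object_angle_deg > 0 else -half
    return ("peripheral", edge)
```

Re-evaluating this as the device turns reproduces the claimed behaviour of the first (peripheral) marker being replaced by the second (in-view) marker once the object enters the field of view.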

More details
17-03-2016 publication date

EXECUTABLE VIRTUAL OBJECTS ASSOCIATED WITH REAL OBJECTS

Number: US20160077785A1
Assignee: Microsoft Technology Licensing, LLC

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object. 1. A display device , comprising:one or more sensors;a logic device; and receive an input of an identity of a selected real object based on one or more of input received from one or more sensors of the display device and a selection of a location on a map,', 'receive a request to link a user-specified executable virtual object with the selected real object such that the virtual object is executable by a selected user in proximity to the selected real object;', 'link the virtual object with the selected real object; and', 'send information regarding the virtual object and the linked real object to a remote service., 'a storage device holding instructions executable by the logic device to'}2. The display device of claim 1 , wherein the instructions are executable by the logic device to receive the request to link the user-specified executable virtual object with the selected real object by receiving a voice command from the user.3. The display device of claim 1 , wherein the instructions are executable by the logic device to receive the input of the identity of the real object by receiving image data of a background scene from an image sensor and determining which real object from a plurality of real objects in the background scene is the selected real object.4. The display ...

More details
15-03-2018 publication date

Constructing augmented reality environment with pre-computed lighting

Number: US20180075663A1
Assignee: Microsoft Technology Licensing LLC

Embodiments related to efficiently constructing an augmented reality environment with global illumination effects are disclosed. For example, one disclosed embodiment provides a method of displaying an augmented reality image via a display device. The method includes receiving image data, the image data capturing an image of a local environment of the display device, and identifying a physical feature of the local environment via the image data. The method further includes constructing an augmented reality image of a virtual structure for display over the physical feature in spatial registration with the physical feature from a viewpoint of a user, the augmented reality image comprising a plurality of modular virtual structure segments arranged in adjacent locations to form the virtual structure feature, each modular virtual structure segment comprising a pre-computed global illumination effect, and outputting the augmented reality image to the display device.
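Assembling a structure from segments with baked lighting might be sketched as follows; the segment table, lightmap ids, and window-placement rule are invented for illustration. The key property is that display-time work is only placement, since each modular segment's global illumination was pre-computed offline.

```python
BAKED_SEGMENTS = {
    # segment type -> (width in meters, pre-computed lightmap id)
    "wall_plain":  (1.0, "lm_plain_01"),
    "wall_window": (1.0, "lm_window_01"),
}

def build_wall(feature_length, window_every=3):
    # Tile the detected physical feature with adjacent modular segments.
    placements, x, i = [], 0.0, 0
    while x < feature_length:
        kind = "wall_window" if i % window_every == window_every - 1 else "wall_plain"
        width, lightmap = BAKED_SEGMENTS[kind]
        placements.append({"kind": kind, "x": x, "lightmap": lightmap})
        x += width
        i += 1
    return placements
```

A renderer would draw each placement in spatial registration with the physical feature, sampling the segment's baked lightmap instead of evaluating light transport per frame.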

More details
24-03-2016 publication date

PROVIDING LOCATION OCCUPANCY ANALYSIS VIA A MIXED REALITY DEVICE

Number: US20160086382A1
Assignee:

The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in a location of the user who satisfy the person selection criteria to a cloud based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view. An identifier and a position indicator of the person in the location is output if not. Directional sensors on the display device may also be used for determining a position of the person. Cloud based executing software can identify and track the positions of people based on image and non-image data from display devices in the location. 1.-20. (canceled) 21. A machine-system implemented method of determining that a first among a plurality of areas is not occupied by one or more persons of interest , the method comprising:receiving person selection criteria from a user;automatically determining a current location of the user;automatically identifying a first among a plurality of automatically searchable areas each capable of containing one or more persons and each capable of automated detecting of identities of one or more persons in that area, the identified first area being most proximate to the user;based on use of one or more persons identifying services, automatically determining whether the identified first area contains any persons satisfying the person selection criteria received from the user; andif the one or more persons identifying services fail to indicate presence of at least one person satisfying the person selection criteria within the identified first area, indicating to the user that the first area does not contain any persons satisfying the person selection criteria.22. The method of wherein the used one or more ...

More details
23-04-2015 publication date

Isolate Extraneous Motions

Number: US20150110354A1
Assignee:

A system may receive image data, capture motion with respect to a target in a physical space, and recognize a gesture from the captured motion. It may be desirable to isolate aspects of the captured motion to differentiate random and extraneous motions. For example, a gesture may comprise motion of a user's right arm, and it may be desirable to isolate the motion of the right arm and exclude any other motion from interpretation. Thus, the isolated aspect may be the focus of the received data for gesture recognition. Alternatively, the isolated aspect may be a part of the captured motion that is removed from consideration when identifying a gesture. For example, gesture filters may be modified to correspond to the user's natural lean, eliminating the effect the lean has on how a motion registers against a gesture filter.

1. A method for applying a filter representing an intended gesture, comprising:
receiving data captured by a camera, wherein the data is representative of a user's motion in a physical space;
predicting the intended gesture from the data;
selecting a first portion of the data that is applicable to the intended gesture; and
applying the filter representing the intended gesture to the first portion of the data and determining an output from base information representing the intended gesture, wherein the filter comprises the base information representing the intended gesture.

2. The method of claim 1, further comprising applying a plurality of filters to the data, wherein the intended gesture is a gesture corresponding to at least one of the plurality of filters having base information that corresponds to the data.

3. The method of claim 2, further comprising generating a model of the user from the data, wherein the model maps to the first portion of the data that is applicable to the intended gesture and comprises a pre-authored animation that represents a second portion of the data that is not applicable to ...
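The isolate-then-filter idea in the claims can be sketched as below, assuming each frame of captured motion is a mapping from joint name to a one-dimensional coordinate. `isolate`, `apply_filter`, and the tolerance value are illustrative inventions for this sketch, not Microsoft's implementation.

```python
def isolate(frames, joints_of_interest):
    # Keep only the joints relevant to the predicted gesture (the claim's
    # "first portion of the data"); everything else is extraneous motion.
    return [{j: p for j, p in frame.items() if j in joints_of_interest}
            for frame in frames]

def apply_filter(frames, base_motion, tolerance):
    # Compare the isolated motion against the filter's base information;
    # the gesture registers when every frame stays within tolerance of
    # the corresponding template frame.
    return all(
        all(abs(frame[j] - ref[j]) <= tolerance for j in ref)
        for frame, ref in zip(frames, base_motion)
    )

# Hypothetical capture: the right hand sweeps while the hip drifts (a lean).
captured = [{"right_hand": 0.1, "hip": 0.40},
            {"right_hand": 0.5, "hip": 0.41}]
template = [{"right_hand": 0.1}, {"right_hand": 0.5}]  # filter's base information

arm_only = isolate(captured, {"right_hand"})
recognized = apply_filter(arm_only, template, tolerance=0.05)
```

Because the hip joint is discarded before filtering, the lean never reaches the comparison, which is the effect the abstract describes for extraneous motion.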

More
28-04-2016 publication date

User controlled real object disappearance in a mixed reality display

Number: US20160117861A1
Assignee: Microsoft Technology Licensing LLC

The technology causes disappearance of a real object in a field of view of a see-through, mixed reality display device system based on user disappearance criteria. Image data is tracked to the real object in the field of view of the see-through display for implementing an alteration technique on the real object causing its disappearance from the display. A real object may satisfy user disappearance criteria by being associated with subject matter that the user does not wish to see or by not satisfying relevance criteria for a current subject matter of interest to the user. In some embodiments, based on a 3D model of a location of the display device system, an alteration technique may be selected for a real object based on a visibility level associated with the position within the location. Image data for alteration may be prefetched based on a location of the display device system.
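The visibility-driven choice of alteration technique might look like the sketch below. The thresholds and the technique names (`blur`, `redact`, and the prefetched-image replacement) are assumptions for illustration; the abstract only says a technique may be selected based on a visibility level associated with the object's position.

```python
def select_alteration(visibility_level):
    # Assumed mapping from visibility level (0.0-1.0) to technique:
    # prominent objects get a full prefetched replacement image,
    # peripheral ones a cheaper blur or redaction.
    if visibility_level >= 0.7:
        return "replace_with_prefetched_image"
    if visibility_level >= 0.3:
        return "blur"
    return "redact"

def plan_disappearance(tracked_objects, disappearance_criteria):
    # For each real object tracked in the see-through display's field of
    # view that satisfies the user's disappearance criteria, choose an
    # alteration technique for the display to apply.
    return {obj["id"]: select_alteration(obj["visibility"])
            for obj in tracked_objects
            if disappearance_criteria(obj)}

objects = [{"id": "billboard1", "subject": "ads", "visibility": 0.8},
           {"id": "tree1", "subject": "nature", "visibility": 0.9}]
plan = plan_disappearance(objects, lambda o: o["subject"] == "ads")
# plan == {"billboard1": "replace_with_prefetched_image"}
```

Here only the billboard matches the user's criteria ("I don't want to see ads"), so the tree is left untouched even though it is more visible.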

More
14-05-2015 publication date

EXECUTABLE VIRTUAL OBJECTS ASSOCIATED WITH REAL OBJECTS

Number: US20150130689A1
Assignee:

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to a portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, interacting with the executable object.

1. A portable see-through display device, comprising:
one or more sensors;
a logic subsystem; and
a data-holding subsystem holding instructions executable by the logic subsystem to:
receive an input of an identity of a selected real object based on one or more of input received from one or more sensors of the see-through display device and a selection of a location on a map;
receive a request to link a user-specified executable virtual object with the selected real object such that the virtual object is executable by a selected user in proximity to the selected real object;
link the virtual object with the selected real object; and
send information regarding the virtual object and the linked real object to a remote service.

2. The display device of claim 1, wherein the instructions are executable to receive the request to link the user-specified executable virtual object with the selected real object by receiving a voice command from the user.

3. The display device of claim 1, wherein the instructions are executable to receive the input of the identity of the real object by receiving image data of a background scene from an image sensor and determining which real object from a plurality of real objects in the background scene is the selected real object.

4. The display device ...
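The link/execute flow in claim 1 can be sketched as a small registry. `VirtualObjectRegistry`, the string object IDs, and the boolean intent flag are hypothetical simplifications: the patent leaves intent detection and the remote service unspecified, and a virtual object is modeled here as any callable.

```python
class VirtualObjectRegistry:
    """Hypothetical stand-in for the remote service that stores links
    between real objects and user-specified executable virtual objects."""

    def __init__(self):
        self._links = {}

    def link(self, real_object_id, virtual_object):
        # Claimed step: link the virtual object with the selected real
        # object (here, any callable) and record the pair.
        self._links[real_object_id] = virtual_object

    def interact(self, real_object_id, intent_detected):
        # Execute the virtual object only when the user is at a linked
        # real object and an intent to interact has been determined.
        virtual_object = self._links.get(real_object_id)
        if virtual_object is not None and intent_detected:
            return virtual_object()
        return None

registry = VirtualObjectRegistry()
registry.link("cafe_door", lambda: "show today's menu")
triggered = registry.interact("cafe_door", intent_detected=True)
ignored = registry.interact("cafe_door", intent_detected=False)
```

Gating execution on the intent check mirrors the abstract: merely being near the real object is not enough; the virtual object runs only once intent to interact is determined.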

More