Total found: 128. Displayed: 128.
Publication date: 08-09-2015

Indicating out-of-view augmented reality images

Number: US0009129430B2

Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object.
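
The abstract above describes indicating where off-screen objects lie relative to the user's current view. As a rough illustration of the geometry involved (not code from the patent), the following Python sketch assumes a yaw-only camera model; the helper name edge_indicator and the 60-degree field of view are illustrative assumptions. It reports which edge of the display a periphery marker should sit on and how far the user would have to turn to bring the object into view.

    import math

    def edge_indicator(user_pos, user_yaw_deg, obj_pos, fov_deg=60.0):
        """Return None if the object is inside the horizontal field of view,
        otherwise a dict describing an on-screen periphery marker."""
        dx = obj_pos[0] - user_pos[0]
        dz = obj_pos[1] - user_pos[1]
        obj_bearing = math.degrees(math.atan2(dx, dz))       # world-space bearing of the object
        # signed angle between gaze direction and object, normalised to (-180, 180]
        rel = (obj_bearing - user_yaw_deg + 180.0) % 360.0 - 180.0
        if abs(rel) <= fov_deg / 2.0:
            return None                                       # visible: no indicator needed
        return {
            "edge": "right" if rel > 0 else "left",           # periphery edge for the marker
            "turn_degrees": abs(rel) - fov_deg / 2.0,         # how far past the view edge it lies
        }

    # Example: object 90 degrees to the user's right, outside a 60 degree FOV
    print(edge_indicator((0.0, 0.0), 0.0, (5.0, 0.0)))        # -> {'edge': 'right', 'turn_degrees': 60.0}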

Publication date: 12-05-2020

Ground plane adjustment in a virtual reality environment

Number: US0010649212B2

An HMD device is configured to vertically adjust the ground plane of a rendered virtual reality environment that has varying elevations to match the flat real world floor so that the device user can move around to navigate and explore the environment and always be properly located on the virtual ground and not be above it or underneath it. Rather than continuously adjust the virtual reality ground plane, which can introduce cognitive dissonance discomfort to the user, when the user is not engaged in some form of locomotion (e.g., walking), the HMD device establishes a threshold radius around the user within which virtual ground plane adjustment is not performed. The user can make movements within the threshold radius without the HMD device shifting the virtual terrain. When the user moves past the threshold radius, the device will perform an adjustment as needed to match the ground plane of the virtual reality environment to the real world floor.
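
The threshold-radius behaviour described above can be summarised with a small sketch. This Python fragment is illustrative only (the class name GroundPlaneAdjuster, the terrain_height callback, and the one-metre radius are assumptions, not details from the patent): the vertical offset applied to the virtual world is recomputed only when the user has moved beyond the radius since the last adjustment.

    import math

    class GroundPlaneAdjuster:
        def __init__(self, terrain_height, threshold_radius=1.0):
            self.terrain_height = terrain_height   # callable: virtual terrain elevation at (x, z)
            self.threshold_radius = threshold_radius
            self.anchor = None                     # (x, z) where the last adjustment happened
            self.vertical_offset = 0.0             # added to the virtual world so the user stands on ground

        def update(self, user_x, user_z):
            """Re-align the virtual ground to the flat real floor only after the user
            has walked beyond the threshold radius from the last adjustment point."""
            if self.anchor is None:
                self.anchor = (user_x, user_z)
                self.vertical_offset = -self.terrain_height(user_x, user_z)
                return self.vertical_offset
            dist = math.hypot(user_x - self.anchor[0], user_z - self.anchor[1])
            if dist > self.threshold_radius:       # the user locomoted far enough: adjust once
                self.anchor = (user_x, user_z)
                self.vertical_offset = -self.terrain_height(user_x, user_z)
            return self.vertical_offset            # unchanged for small movements (no terrain shifting)

    adjuster = GroundPlaneAdjuster(lambda x, z: 0.1 * x)   # gently sloping virtual terrain
    print(adjuster.update(0.0, 0.0))   # initial alignment at the start position
    print(adjuster.update(0.5, 0.0))   # inside the radius: offset unchanged
    print(adjuster.update(2.0, 0.0))   # outside the radius: re-aligned (about -0.2 here)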

Publication date: 28-01-2016

MOUSE SHARING BETWEEN A DESKTOP AND A VIRTUAL WORLD

Number: US20160027214A1

A mixed-reality head mounted display (HMD) device supports a three dimensional (3D) virtual world application with which a real world desktop displayed on a monitor coupled to a personal computer (PC) may interact and share mouse input. A mouse input server executing on the PC tracks mouse movements on the desktop displayed on a monitor. When movement of the mouse takes it beyond the edge of the monitor screen, the mouse input server takes control of the mouse and stops mouse messages from propagating through the PC's system. The mouse input server communicates over a network connection to a mouse input client exposed by the application to inform the client that the mouse has transitioned to operating in the virtual world and passes mouse messages describing movements and control operation such as button presses.

Claims (excerpt):
1. A head mounted display (HMD) device operable by a user in a physical environment, comprising: one or more processors; a see-through display configured for rendering a mixed reality environment to the user, a view position of the user for the rendered mixed reality environment being variable depending at least in part on a pose of the user's head in the physical environment; and rendering the mixed reality environment within a field of view of the HMD device, the mixed reality environment including objects supported in a virtual world and objects supported in a real world; receiving mouse messages over a network connection from a mouse input server running on a remote computing device, the mouse messages describing movements of a mouse that is operatively connected to the computing device, the mouse controlling a cursor displayable in the virtual world and on a monitor in the real world; when movement of the mouse causes the cursor to move beyond a border of the monitor, calculating an initial position of the cursor in the virtual world; using the mouse messages to calculate subsequent positions of the cursor in the virtual world; and rendering ...
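
One way to picture the hand-off step in the abstract above is a small server-side sketch. Everything here is assumed for illustration (the JSON message format, the in-memory queue standing in for the network connection, the 1920x1080 monitor size); a real mouse input server would hook OS mouse events and stop their propagation itself.

    import json
    from queue import Queue

    MONITOR_W, MONITOR_H = 1920, 1080

    class MouseInputServer:
        def __init__(self, transport: Queue):
            self.transport = transport
            self.in_virtual_world = False          # True once the cursor has left the desktop

        def on_mouse_move(self, x, y, dx, dy):
            if not self.in_virtual_world:
                if x < 0 or x >= MONITOR_W or y < 0 or y >= MONITOR_H:
                    self.in_virtual_world = True   # take control: stop desktop propagation
                    self.transport.put(json.dumps({"type": "enter_virtual", "x": x, "y": y}))
                return                             # otherwise the cursor is still on the desktop
            # already in the virtual world: forward relative movement to the HMD client
            self.transport.put(json.dumps({"type": "move", "dx": dx, "dy": dy}))

        def on_button(self, button, pressed):
            if self.in_virtual_world:
                self.transport.put(json.dumps({"type": "button", "button": button, "down": pressed}))

    q = Queue()
    server = MouseInputServer(q)
    server.on_mouse_move(1919, 500, 5, 0)   # still on the monitor: nothing sent
    server.on_mouse_move(1925, 500, 6, 0)   # crossed the right edge -> hand off to the virtual world
    server.on_mouse_move(1930, 500, 5, 0)   # forwarded as a relative move
    while not q.empty():
        print(q.get())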

Publication date: 12-11-2015

INDICATING OUT-OF-VIEW AUGMENTED REALITY IMAGES

Number: US20150325054A1

Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object.

Claims (excerpt):
1. An augmented reality computing device comprising: a display system; a logic device; and a storage device comprising instructions executable by the logic device to identify one or more virtual objects located outside a field of view available for a display of augmented reality information, and, for each object of the one or more virtual objects, provide to the user an indication of positional information associated with the virtual object within the field of view adjacent to a periphery of the field of view.
2. The augmented reality computing device of claim 1, wherein the indication of positional information associated with the virtual object includes an indication of a position of the virtual object relative to a position of the user.
3. The augmented reality computing device of claim 2, wherein the instructions are executable to display the indication of positional information associated with the virtual object on the display system.
4. The augmented reality computing device of claim 2, wherein the instructions are executable to display the indication of positional information associated with the virtual object on the display system within the field of view of a user as a marker indicating a presence of the virtual object and a direction to turn to view the virtual object.
5. The augmented reality computing device of claim 4, wherein the instructions are executable to display the marker as a tendril extending from a location within the field of view to the virtual object.
6. The augmented reality computing device of claim 4, wherein the instructions are further ...

Publication date: 25-07-2013

RECOGNITION OF IMAGE ON EXTERNAL DISPLAY

Number: US20130187835A1

Embodiments are disclosed that relate to the recognition via a see-through display system of an object displayed on an external display device at which a user of the see-through display system is gazing. For example, one embodiment provides a method of operating a see-through display system comprising acquiring an image of an external display screen located in the background scene via an outward facing image sensor, determining via a gaze detection subsystem a location on the external display screen at which the user is gazing, obtaining an identity of an object displayed on the external display screen at the location determined, and performing an action based upon the identity of the object.

Claims (excerpt):
1. A method of operating a see-through display system, the see-through display system comprising a see-through display screen, a gaze detection subsystem configured to determine a direction of gaze of each eye of the user, and an outward facing image sensor configured to acquire images of a background scene relative to a user of the see-through display system, the method comprising: acquiring an image of an external display screen located in the background scene via the outward facing image sensor; determining via the gaze detection subsystem a location on the external display screen at which the user is gazing; obtaining an identity of an object displayed on the external display screen at the location determined; and performing an action based upon the identity of the object.
2. The method of claim 1, wherein obtaining the identity of the object comprises sending image information regarding the object to a remote computing device, and receiving the identity from the remote computing device.
3. The method of claim 2, wherein the remote computing device is not in control of the external display screen.
4. The method of claim 3, wherein performing an action comprises displaying contextual information related to the object on the see-through display.
5. The method of claim ...

Publication date: 28-01-2016

ANTI-TRIP WHEN IMMERSED IN A VIRTUAL REALITY ENVIRONMENT

Number: US20160027212A1
Assignee: Microsoft Technology Licensing LLC

An HMD device with a see-through display and depth sensing capability is configured to selectively dim or fade out a display of a virtual reality environment to enable a user to see the real world without obstruction by the virtual world when a distance between the user and a real world object is determined to be less than a threshold distance. The current height of the user's head (i.e., the distance from head to ground) may be utilized when performing the dimming/fading so that different threshold distances can be used depending on whether the user is standing or seated.
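
The distance-based dimming described above comes down to mapping distance and posture to an opacity value. A minimal sketch follows; the threshold distances, fade band, and the 1.2 m head-height cut-off used to guess whether the user is seated are illustrative assumptions, not figures from the patent.

    def display_opacity(distance_to_object_m, head_height_m,
                        seated_threshold_m=0.75, standing_threshold_m=1.5, fade_band_m=0.3):
        """Return the opacity (0..1) to apply to the virtual environment so the real
        world shows through when the user gets too close to a physical object."""
        seated = head_height_m < 1.2                       # crude posture guess from head height
        threshold = seated_threshold_m if seated else standing_threshold_m
        if distance_to_object_m >= threshold + fade_band_m:
            return 1.0                                     # fully rendered virtual world
        if distance_to_object_m <= threshold:
            return 0.0                                     # faded out: see through to the real world
        return (distance_to_object_m - threshold) / fade_band_m

    print(display_opacity(2.0, 1.7))   # standing, far away       -> 1.0
    print(display_opacity(1.6, 1.7))   # standing, inside the fade band
    print(display_opacity(0.9, 1.0))   # seated, inside the fade band -> 0.5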

Publication date: 01-08-2013

HEAD-MOUNTED DISPLAY DEVICE TO MEASURE ATTENTIVENESS

Number: US20130194389A1

A method for assessing attentiveness to visual stimuli received through a head-mounted display device. The method employs first and second detectors arranged in the head-mounted display device. An ocular state of the wearer of the head-mounted display device is detected with the first detector while the wearer is receiving a visual stimulus. With the second detector, the visual stimulus received by the wearer is detected. The ocular state is then correlated to the wearer's attentiveness to the visual stimulus.

Claims (excerpt):
1. A method for assessing attentiveness to visual stimuli, comprising: with a first detector arranged in a head-mounted display device, detecting an ocular state of the wearer of the head-mounted display device while the wearer is receiving a visual stimulus; with a second detector arranged in the head-mounted display device, detecting the visual stimulus; and correlating the ocular state to the wearer's attentiveness to the visual stimulus.
2. The method of further comprising reporting the wearer's attentiveness to the stimulus.
3. The method of wherein detecting the ocular state includes imaging the wearer's eye 240 or more times per second.
4. The method of wherein the visual stimulus includes real imagery in the wearer's field of view.
5. The method of wherein the visual stimulus includes virtual imagery added to the wearer's field of view via the head-mounted display device.
6. The method of wherein detecting the visual stimulus includes depth sensing.
7. The method of wherein the visual stimulus includes imagery mapped to a model accessible by the head-mounted display device, and wherein detecting the visual stimulus includes: locating the wearer's line of sight within that model; and subscribing to the model to identify the imagery that the wearer is sighting.
8. The method of wherein the wearer's line of sight is located within the model based partly on positional data from one or more sensors arranged within the head-mounted display device.
9. The ...

Publication date: 28-10-2014

Virtual light in augmented reality

Number: US0008872853B2

A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment.

Publication date: 10-03-2015

Mesh generation from depth images

Number: US0008976168B2
Assignee: Microsoft Technology Licensing, LLC

Systems and methods for mesh generation from depth images are provided. According to one aspect, a method executable by a compression device for sending compressed depth information is provided. The method may comprise, at a compression module executed on the compression device, receiving a depth image of a scene from a depth camera. The depth image may include a matrix of pixels, each pixel in the matrix including a depth value indicating a depth of an object in the scene observed at that pixel. The method may further comprise compressing the depth image into a tree data structure, and sending the tree data structure via a communication path to a rendering device for generating a mesh of the scene at the rendering device.
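
The abstract leaves the tree structure unspecified; a quadtree is one plausible choice, so the toy compressor below should be read as an assumption rather than the patented encoding. Regions whose depth values vary by less than a tolerance collapse into single leaf nodes, which is where the compression comes from.

    def build_quadtree(depth, x0, y0, size, tol=0.01):
        """Recursively compress a square block of a depth image into a quadtree."""
        vals = [depth[y][x] for y in range(y0, y0 + size) for x in range(x0, x0 + size)]
        lo, hi = min(vals), max(vals)
        if hi - lo <= tol or size == 1:
            return {"leaf": True, "depth": (lo + hi) / 2.0}      # one value stands in for the block
        half = size // 2
        return {"leaf": False, "children": [
            build_quadtree(depth, x0,        y0,        half, tol),
            build_quadtree(depth, x0 + half, y0,        half, tol),
            build_quadtree(depth, x0,        y0 + half, half, tol),
            build_quadtree(depth, x0 + half, y0 + half, half, tol),
        ]}

    def count_nodes(node):
        return 1 if node["leaf"] else 1 + sum(count_nodes(c) for c in node["children"])

    # 4x4 depth image: a flat wall at 2.0 m with one nearer object in the corner
    img = [[2.0, 2.0, 2.0, 2.0],
           [2.0, 2.0, 2.0, 2.0],
           [2.0, 2.0, 1.0, 1.0],
           [2.0, 2.0, 1.0, 1.0]]
    tree = build_quadtree(img, 0, 0, 4)
    print(count_nodes(tree), "nodes instead of", 4 * 4, "pixels")   # 5 nodes instead of 16 pixels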

Publication date: 21-08-2018

Producing and consuming metadata within multi-dimensional data

Number: US0010055888B2

A computing system and method for producing and consuming metadata within multi-dimensional data is provided. The computing system comprising a see-through display, a sensor system, and a processor configured to: in a recording phase, generate an annotation at a location in a three dimensional environment, receive, via the sensor system, a stream of telemetry data recording movement of a first user in the three dimensional environment, receive a message to be recorded from the first user, and store, in memory as annotation data for the annotation, the stream of telemetry data and the message, and in a playback phase, display a visual indicator of the annotation at the location, receive a selection of the visual indicator by a second user, display a simulacrum superimposed onto the three dimensional environment and animated according to the telemetry data, and present the message via the animated simulacrum.

Publication date: 24-02-2015

Executable virtual objects associated with real objects

Number: US0008963805B2

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.

Publication date: 02-05-2013

MESH GENERATION FROM DEPTH IMAGES

Number: US20130106852A1
Assignee: Individual

Systems and methods for mesh generation from depth images are provided. According to one aspect, a method executable by a compression device for sending compressed depth information is provided. The method may comprise, at a compression module executed on the compression device, receiving a depth image of a scene from a depth camera. The depth image may include a matrix of pixels, each pixel in the matrix including a depth value indicating a depth of an object in the scene observed at that pixel. The method may further comprise compressing the depth image into a tree data structure, and sending the tree data structure via a communication path to a rendering device for generating a mesh of the scene at the rendering device.

Publication date: 26-12-2017

Display resource management

Number: US0009851787B2

A system and related methods for a resource management in a head-mounted display device are provided. In one example, the head-mounted display device includes a plurality of sensors and a display system for presenting holographic objects. A resource management program is configured to operate a selected sensor in a default power mode to achieve a selected fidelity. The program receives user-related information from one or more of the sensors, and determines whether target information is detected. Where target information is detected, the program adjusts the selected sensor to operate in a reduced power mode that uses less power than the default power mode.

Publication date: 02-07-2019

Gaze-based object placement within a virtual reality environment

Number: US0010338676B2
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC

A head mounted display (HMD) device operating in a real world physical environment is configured with a sensor package that enables determination of an intersection of a device user's projected gaze with a location in a virtual reality environment so that virtual objects can be placed into the environment with high precision. Surface reconstruction of the physical environment can be applied using data from the sensor package to determine the user's view position in the virtual world. A gaze ray originating from the view position is projected outward and a cursor or similar indicator is rendered on the HMD display at the ray's closest intersection with the virtual world such as a virtual object, floor/ground, etc. In response to user input, such as a gesture, voice interaction, or control manipulation, a virtual object is placed at the point of intersection between the projected gaze ray and the virtual reality environment.
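
The placement step reduces to a ray cast from the view position along the gaze direction. The sketch below makes the simplifying assumption that the virtual world is just a flat ground plane at y = 0 (a real scene would intersect the ray with reconstructed or virtual geometry); the function name place_at_gaze is illustrative.

    def place_at_gaze(view_pos, gaze_dir, ground_y=0.0):
        """Project a gaze ray from the view position and return the point where it first
        hits the ground plane, or None if the user is looking up or parallel to the floor."""
        px, py, pz = view_pos
        dx, dy, dz = gaze_dir
        if dy >= 0:
            return None                      # the ray never reaches the floor
        t = (ground_y - py) / dy             # ray parameter at the intersection
        return (px + t * dx, ground_y, pz + t * dz)

    # Eyes at 1.8 m, looking 45 degrees downward and forward (+z)
    cursor = place_at_gaze((0.0, 1.8, 0.0), (0.0, -0.7071, 0.7071))
    print(cursor)    # roughly (0.0, 0.0, 1.8): the virtual object lands 1.8 m ahead on the ground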

Publication date: 09-05-2017

Use of surface reconstruction data to identify real world floor

Number: US0009645397B2

In a virtual reality or mixed reality environment, an HMD device is configured to use surface reconstruction data points obtained with a sensor package to identify a location of a floor of a real world environment in which the device operates by sorting the data points by height into respective buckets where each bucket holds a different range of heights. A bucket having the greatest number of data points that are below the height of a user of the HMD device is used to identify the height of the real world floor, for example, by calculating an average of height values of data points in that bucket. A floor for the virtual reality environment may then be aligned to the identified height of the real world floor.
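
The bucketing approach described above is easy to sketch. In the fragment below the 5 cm bucket size is an illustrative assumption; the floor estimate is the mean height of the most heavily populated bucket that lies below the user's head.

    from collections import defaultdict

    def estimate_floor_height(points, head_height, bucket_size=0.05):
        """Sort surface-reconstruction points (x, y, z) into height buckets and return the
        average height of the fullest bucket below the user's head, or None if no data."""
        buckets = defaultdict(list)
        for x, y, z in points:
            if y < head_height:                              # ignore points above the head
                buckets[int(y // bucket_size)].append(y)
        if not buckets:
            return None
        best = max(buckets.values(), key=len)                # bucket with the most data points
        return sum(best) / len(best)                         # average height of that bucket

    pts = [(0, 0.01, 1), (1, 0.02, 2), (2, 0.03, 0), (0, 0.00, 3),   # floor hits
           (1, 0.74, 1), (2, 0.76, 2),                               # a table top
           (0, 2.50, 1)]                                             # ceiling (ignored)
    print(round(estimate_floor_height(pts, head_height=1.7), 3))     # about 0.015, i.e. floor near y = 0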

Publication date: 15-08-2017

Mixed reality graduated information delivery

Number: US0009734636B2

Embodiments that relate to presenting a plurality of visual information density levels for a plurality of geo-located data items in a mixed reality environment are disclosed. For example, in one disclosed embodiment a graduated information delivery program receives information for a first geo-located data item and provides a first visual information density level for the item to a head-mounted display device. When a spatial information density of geo-located data item information is below a threshold, the program provides a second visual information density level greater than the first level for a second geo-located data item displayed.

Publication date: 23-03-2021

Gaze-based object placement within a virtual reality environment

Number: US0010955914B2

A head mounted display (HMD) device operating in a real world physical environment is configured with a sensor package that enables determination of an intersection of a device user's projected gaze with a location in a virtual reality environment so that virtual objects can be placed into the environment with high precision. Surface reconstruction of the physical environment can be applied using data from the sensor package to determine the user's view position in the virtual world. A gaze ray originating from the view position is projected outward and a cursor or similar indicator is rendered on the HMD display at the ray's closest intersection with the virtual world such as a virtual object, floor/ground, etc. In response to user input, such as a gesture, voice interaction, or control manipulation, a virtual object is placed at the point of intersection between the projected gaze ray and the virtual reality environment.

Publication date: 01-12-2015

Executable virtual objects associated with real objects

Number: US0009201243B2

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.

Publication date: 06-06-2013

AUGMENTED REALITY CAMERA REGISTRATION

Number: US20130141461A1

A system and method executable by a computing device of an augmented reality system for registering a camera in a physical space is provided. The method may include identifying an origin marker in a series of images of a physical space captured by a camera of an augmented reality system, and defining a marker graph having an origin marker node. The method may further include analyzing in real-time the series of images to identify a plurality of expansion markers with locations defined relative to previously imaged markers, and defining corresponding expansion marker nodes in the marker graph. The method may further include calculating a current position of the camera of the augmented reality system in the physical space based on a location of a node in the marker graph corresponding to a most recently imaged marker, relative to the origin marker and any intermediate markers.

Claims (excerpt):
1. A method executable by a computing device of an augmented reality system, for registering a camera in a physical space, the method comprising, at a registration module executed on the computing device: receiving a registration image of both an origin marker and an expansion marker, the location of the origin marker in the physical space being stored in a data store of the computing device, the expansion marker having a location in the physical space that is unknown to the computing device; calculating a location in the physical world of the expansion marker relative to the location in the physical world of the origin marker based on a position of each of the expansion marker and the origin marker in the registration image; defining a marker graph having an origin marker node at a location corresponding to the known location of the origin marker in the physical space; adding an expansion marker node to the marker graph as a child node of the origin marker node at a location in the marker graph that corresponds to the relative location of the expansion marker; and calculating, based the location ...
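
A translation-only toy version of the marker-graph bookkeeping may help make the chaining idea concrete. Real registration tracks full 6-DoF poses; the 2D offsets, class name MarkerGraph, and method names below are illustrative assumptions.

    class MarkerGraph:
        def __init__(self, origin_id, origin_pos):
            self.pos = {origin_id: origin_pos}            # marker id -> position in world space

        def add_expansion_marker(self, marker_id, parent_id, offset_from_parent):
            """Register a marker whose location was measured relative to an already-known one."""
            px, py = self.pos[parent_id]
            ox, oy = offset_from_parent
            self.pos[marker_id] = (px + ox, py + oy)      # chain offsets back to the origin

        def camera_position(self, seen_marker_id, camera_offset_from_marker):
            """Locate the camera from the most recently imaged marker."""
            mx, my = self.pos[seen_marker_id]
            cx, cy = camera_offset_from_marker
            return (mx + cx, my + cy)

    g = MarkerGraph("origin", (0.0, 0.0))
    g.add_expansion_marker("m1", "origin", (2.0, 0.0))    # imaged together with the origin marker
    g.add_expansion_marker("m2", "m1", (1.5, 1.0))        # imaged together with m1 only
    print(g.camera_position("m2", (0.2, -0.3)))           # -> (3.7, 0.7)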

Publication date: 22-11-2016

Indicating out-of-view augmented reality images

Number: US0009501873B2

Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object.

Publication date: 05-12-2017

Executable virtual objects associated with real objects

Number: US0009836889B2

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.

Publication date: 22-06-2017

MIXED REALITY GRADUATED INFORMATION DELIVERY

Number: US20170178412A1
Assignee: Microsoft Technology Licensing, LLC

Embodiments that relate to presenting a plurality of visual information density levels for a plurality of geo-located data items in a mixed reality environment are disclosed. For example, in one disclosed embodiment a graduated information delivery program receives information for a first geo-located data item and provides a first visual information density level for the item to a head-mounted display device. When a spatial information density of geo-located data item information is below a threshold, the program provides a second visual information density level greater than the first level for a second geo-located data item displayed.

Claims (excerpt):
1. A mixed reality system for presenting a plurality of visual information density levels for a plurality of geo-located data items in a mixed reality environment, the mixed reality system comprising: a head-mounted display device operatively connected to a computing device, the head-mounted display device including a display system for presenting the plurality of visual information density levels within the mixed reality environment; a memory device; and a graduated information delivery program stored in the memory device and executed by a processor of the computing device, the graduated information delivery program configured to: provide a first visual information density level for a first geo-located data item displayed by the head-mounted display device within the mixed reality environment; and when a spatial information density of geo-located data item information in the mixed reality environment is below a predetermined threshold, provide a second visual information density level greater than the first visual information density level for a second geo-located data item displayed by the head-mounted display device, wherein the spatial information density comprises a percentage of a unit area of a display of the head-mounted display device that is occupied by displayed geo-located data item information.
2. The mixed reality ...

Publication date: 29-11-2016

Management of content in a 3D holographic environment

Number: US0009508195B2

Methods for managing content within an interactive augmented reality environment are described. An augmented reality environment may be provided to an end user of a head-mounted display device (HMD) in which content (e.g., webpages) may be displayed to the end user using one or more curved slates that are positioned on a virtual cylinder that appears body-locked to the end user. The virtual cylinder may be located around the end user with the end user positioned in the middle of the virtual cylinder such that the one or more curved slates appear to be displayed at the same distance from the end user. The position and size of each of the one or more curved slates may be controlled by the end user using head gestures and a virtual pointer projected onto the virtual cylinder.
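
The body-locked cylinder layout can be illustrated with a few lines of trigonometry. The radius, slate count, angular spacing, and height below are arbitrary illustrative values; the point is simply that every slate centre ends up at the same distance from the user.

    import math

    def slate_centers(user_pos, radius_m=2.0, count=3, spacing_deg=40.0, height_m=1.5):
        """Return world-space centers of `count` slates spread around the user on a cylinder."""
        ux, _, uz = user_pos
        start = -spacing_deg * (count - 1) / 2.0          # center the group in front of the user
        centers = []
        for i in range(count):
            a = math.radians(start + i * spacing_deg)
            centers.append((ux + radius_m * math.sin(a), height_m, uz + radius_m * math.cos(a)))
        return centers

    # Each centre is exactly radius_m from the user, so the slates appear equidistant
    for c in slate_centers((0.0, 0.0, 0.0)):
        print(tuple(round(v, 2) for v in c))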

Publication date: 27-02-2018

Smart placement of virtual objects to stay in the field of view of a head mounted display

Number: US0009904055B2

An HMD device is configured to check the placement of newly introduced objects in a virtual reality environment such as interactive elements like menus, widgets, and notifications to confirm that the objects are significantly present within the user's field of view. If the intended original placement would locate the object outside the field of view, the HMD device relocates the object so that a portion of the object is viewable at the edge of the HMD display closest to its original placement. Such smart placement of virtual objects enables the user to readily discover new objects when they are introduced into the virtual reality environment, and then interact with the objects within a range of motions and/or head positions that is comfortable to support a more optimal interaction and user experience.
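
The relocation rule can be sketched as clamping the object's angular offset from the gaze direction into the field of view. The angles-only treatment, FOV numbers, and margin below are simplifying assumptions, not parameters from the patent.

    def relocate_into_view(yaw_deg, pitch_deg, hfov_deg=60.0, vfov_deg=35.0, margin_deg=2.0):
        """Return the (possibly adjusted) angular placement of a newly introduced object so
        that at least part of it stays inside the current field of view."""
        half_h = hfov_deg / 2.0 - margin_deg
        half_v = vfov_deg / 2.0 - margin_deg
        if abs(yaw_deg) <= half_h and abs(pitch_deg) <= half_v:
            return yaw_deg, pitch_deg                 # original placement is already visible
        clamped_yaw = max(-half_h, min(half_h, yaw_deg))
        clamped_pitch = max(-half_v, min(half_v, pitch_deg))
        return clamped_yaw, clamped_pitch             # moved to the nearest edge of the view

    print(relocate_into_view(10.0, 0.0))      # already visible -> unchanged
    print(relocate_into_view(80.0, -40.0))    # off to the right and below -> (28.0, -15.5)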

Publication date: 09-10-2018

Three-dimensional mixed-reality viewport

Number: US0010096168B2

An application running on a computing platform that employs three-dimensional (3D) modeling is extended using a virtual viewport into which 3D holograms are rendered by a mixed-reality head mounted display (HMD) device. The HMD device user can position the viewport to be rendered next to a real world 2D monitor and use it as a natural extension of the 3D modeling application. For example, the user can interact with modeled objects in mixed-reality and move objects between the monitor and the viewport. The 3D modeling application and HMD device are configured to exchange scene data for modeled objects (such as geometry, lighting, rotation, scale) and user interface parameters (such as mouse and keyboard inputs). The HMD device implements head tracking to determine where the user is looking so that user inputs are appropriately directed to the monitor or viewport.

Publication date: 28-01-2016

VIRTUAL REALITY ENVIRONMENT WITH REAL WORLD OBJECTS

Number: US20160027215A1

An HMD device renders a virtual reality environment in which areas of the real world are masked out so that real world objects such as computer monitors, doors, people, faces, and the like appear visible to the device user and no holographic or virtual reality content is rendered over the visible objects. The HMD device includes a sensor package to support application of surface reconstruction techniques to dynamically detect edges and surfaces of the real world objects and keep objects visible on the display as the user changes position or head pose or when the real world objects move or their positions are changed. The HMD device can expose controls to enable the user to select which real world objects are visible in the virtual reality environment.

Claims (excerpt):
1. A method performed by a head mounted display (HMD) device that supports rendering of a mixed-reality environment including holographic content from a virtual world and objects from a real world, comprising: obtaining sensor data for one or more objects in the real world included within a physical environment that adjoins a user of the HMD device; reconstructing a geometry for the one or more objects from the sensor data; using the reconstructed geometry, masking out areas of the real world for inclusion in the mixed-reality environment in which masked out areas contain no holographic content from the virtual world; and showing the mixed-reality environment on a display in the HMD device including portions of the virtual world and the masked out areas of the real world.
2. The method of in which the sensor data includes depth data and further including generating the sensor data using a depth sensor and applying surface reconstruction techniques to reconstruct the geometry.
3. The method of further including generating depth data using depth-from-stereo imaging analyses.
4. The method of further including tracking the user's head in the physical environment using reconstructed geometry of the physical environment to ...

Publication date: 11-04-2017

Mixed reality graduated information delivery

Number: US0009619939B2

Embodiments that relate to presenting a plurality of visual information density levels for a plurality of geo-located data items in a mixed reality environment are disclosed. For example, in one disclosed embodiment a graduated information delivery program receives information for a selected geo-located data item and provides a minimum visual information density level for the item to a head-mounted display device. The program receives via the head-mounted display device a user input corresponding to the selected geo-located data item. Based on the input, the program provides an increasing visual information density level for the selected item to the head-mounted display device for display within the mixed reality environment.

Publication date: 08-06-2017

VIRTUAL LIGHT IN AUGMENTED REALITY

Number: US20170161939A1
Assignee: Microsoft Technology Licensing, LLC

A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment.

Claims (excerpt):
1. A method for a head mounted display (HMD), comprising: observing ambient lighting conditions of a physical environment; assessing a perspective of the HMD; visually augmenting an appearance of the physical environment with a virtual object; and creating an illusion of a virtual shadow of the virtual object by virtually illuminating a non-virtual-object, non-shadow region bordering the virtual shadow, an appearance of the non-virtual-object, non-shadow region being based on the observed ambient lighting conditions.
2. The method of claim 1, wherein the virtual shadow is rendered with one or more omitted pixels.
3. The method of claim 1, wherein the non-shadow region is displayed with relatively more see-through lighting than the virtual shadow.
4. The method of claim 1, wherein the virtual shadow is positioned in accordance with an ambient lighting model describing ambient lighting conditions of the physical environment.
5. A method for a head mounted display (HMD), comprising: observing ambient lighting conditions of a physical environment; and visually augmenting an appearance of the physical environment with a virtual object and a graphical representation of a non-virtual-object, non-shadow region bordering a virtual shadow of the virtual object, an appearance of the graphical representation of the non-virtual-object, non-shadow region being based on the observed ambient lighting conditions.
6. The method of claim 5, wherein the virtual shadow is positioned in accordance with an ambient lighting model describing ambient lighting conditions of the physical environment.
7. The method of claim 6, wherein the ...

Publication date: 22-06-2017

EXECUTABLE VIRTUAL OBJECTS ASSOCIATED WITH REAL OBJECTS

Number: US20170178410A1
Assignee: Microsoft Technology Licensing, LLC

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.

Claims (excerpt):
1. A portable display device, comprising: one or more sensors including an image sensor; a logic subsystem; and a data-holding subsystem holding instructions executable by the logic subsystem to: receive sensor input from the image sensor; determine whether a field of view includes a real object comprising an associated executable virtual object based at least on the sensor input; based at least on a determination that the field of view includes the real object comprising the associated executable virtual object, determine an intent of a user to interact with the associated executable virtual object; and based at least on a determination of the intent of the user to interact with the associated executable virtual object, launch the executable object.
2. The portable display device of claim 1, where the executable virtual object is executable to display an image on the portable display device.
3. The portable display device of claim 1, where the executable virtual object is executable to present an audio content item.
4. The portable display device of claim 1, where the executable virtual object is executable to present an invitation to join an activity.
5. The portable display device of claim 1, wherein the image sensor is an outward-facing image sensor, and ...

Publication date: 24-01-2017

Virtual light in augmented reality

Number: US0009551871B2

A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment.

Publication date: 04-01-2018

GROUND PLANE ADJUSTMENT IN A VIRTUAL REALITY ENVIRONMENT

Number: US20180003982A1

An HMD device is configured to vertically adjust the ground plane of a rendered virtual reality environment that has varying elevations to match the flat real world floor so that the device user can move around to navigate and explore the environment and always be properly located on the virtual ground and not be above it or underneath it. Rather than continuously adjust the virtual reality ground plane, which can introduce cognitive dissonance discomfort to the user, when the user is not engaged in some form of locomotion (e.g., walking), the HMD device establishes a threshold radius around the user within which virtual ground plane adjustment is not performed. The user can make movements within the threshold radius without the HMD device shifting the virtual terrain. When the user moves past the threshold radius, the device will perform an adjustment as needed to match the ground plane of the virtual reality environment to the real world floor.

Claims (excerpt):
1. A method for rendering a virtual reality environment having a virtual ground plane with variable elevations on a head mounted display (HMD) device located in a physical space having a floor, comprising: rendering, to a user of the HMD device, one or more virtual images in a stream of frames in which the virtual images are rendered with respect to the virtual ground plane; using a depth sensor to generate surface reconstruction data of the physical space; determining a height of the user's head from the floor of the physical space using the surface reconstruction data; and based on the determined height, dynamically adjusting the virtual ground plane to align with the floor of the physical space on a frame-by-frame basis, or group of frames basis.
2. The method of including filtering or smoothing the adjustment of the virtual ground plane to align with the floor.
3. The method of including using the surface reconstruction data to determine a location of the user's head in the physical space, establishing a threshold ...

Publication date: 22-10-2019

Smart transparency for virtual objects

Number: US0010451875B2

A head mounted display (HMD) device is configured with a sensor package that enables head tracking to determine the device user's proximity to virtual objects in a mixed reality or virtual reality environment. A fade volume including concentrically-arranged volumetric shells is placed around the user including a near shell that is closest to the user, and a far shell that is farthest from the user. When a virtual object is beyond the far shell, the HMD device renders the object with full opacity (i.e., with no transparency). As the user moves towards a virtual object and it intersects the far shell, its opacity begins to fade out with increasing transparency to reveal the background behind it. The transparency of the virtual object increases as the object gets closer to the near shell and the object becomes fully transparent when the near shell reaches it so that the background becomes fully visible.

Publication date: 29-05-2014

HEAD-MOUNTED DISPLAY RESOURCE MANAGEMENT

Number: US20140145914A1

A system and related methods for a resource management in a head-mounted display device are provided. In one example, the head-mounted display device includes a plurality of sensors and a display system for presenting holographic objects. A resource management program is configured to operate a selected sensor in a default power mode to achieve a selected fidelity. The program receives user-related information from one or more of the sensors, and determines whether target information is detected. Where target information is detected, the program adjusts the selected sensor to operate in a reduced power mode that uses less power than the default power mode.

Claims (excerpt):
1. A resource management system, comprising: a head-mounted display device configured to be worn by a user and operatively connected to a computing device, the head-mounted display device including a plurality of sensors and a display system for presenting holographic objects; and a resource management program executed by a processor of the computing device, the resource management program configured to: operate a selected sensor of the plurality of sensors in a default power mode to achieve a selected level of sensor fidelity; receive user-related information from one or more of the plurality of sensors, the user-related information selected from the group consisting of audio information, user gaze information, user location information, user movement information, user image information, and user physiological information; determine whether target information is detected in the user-related information; and where the target information is detected, adjust the selected sensor to operate in a reduced power mode that uses less power than the default power mode, thereby achieving a reduced level of sensor fidelity.
2. The resource management system of claim 1, wherein the target information is selected from the group consisting of context-identifying audio information, a user gaze that is fixed on ...

Publication date: 17-09-2013

Environmental-light filter for see-through head-mounted display device

Number: US0008537075B2

An environmental-light filter removably coupled to an optical see-through head-mounted display (HMD) device is disclosed. The environmental-light filter couples to the HMD device between a display component and a real-world scene. Coupling features are provided to allow the filter to be easily and removably attached to the HMD device when desired by a user. The filter increases the primacy of a provided augmented-reality image with respect to a real-world scene and reduces brightness and power consumption requirements for presenting the augmented-reality image. A plurality of filters of varied light transmissivity may be provided from which to select a desired filter based on environmental lighting conditions and user preference. The light transmissivity of the filter may be about 70% light transmissive to substantially or completely opaque.

Publication date: 14-03-2017

Executable virtual objects associated with real objects

Number: US0009594537B2

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.

Publication date: 12-09-2017

Indicating out-of-view augmented reality images

Number: US0009761057B2

Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object.

Publication date: 05-05-2015

Recognition of image on external display

Number: US0009024844B2

Embodiments are disclosed that relate to the recognition via a see-through display system of an object displayed on an external display device at which a user of the see-through display system is gazing. For example, one embodiment provides a method of operating a see-through display system comprising acquiring an image of an external display screen located in the background scene via an outward facing image sensor, determining via a gaze detection subsystem a location on the external display screen at which the user is gazing, obtaining an identity of an object displayed on the external display screen at the location determined, and performing an action based upon the identity of the object.

Publication date: 17-09-2019

Gaze-based object placement within a virtual reality environment

Number: US0010416760B2

A head mounted display (HMD) device operating in a real world physical environment is configured with a sensor package that enables determination of an intersection of a device user's projected gaze with a location in a virtual reality environment so that virtual objects can be placed into the environment with high precision. Surface reconstruction of the physical environment can be applied using data from the sensor package to determine the user's view position in the virtual world. A gaze ray originating from the view position is projected outward and a cursor or similar indicator is rendered on the HMD display at the ray's closest intersection with the virtual world such as a virtual object, floor/ground, etc. In response to user input, such as a gesture, voice interaction, or control manipulation, a virtual object is placed at the point of intersection between the projected gaze ray and the virtual reality environment.

Publication date: 13-06-2013

Connecting Head Mounted Displays To External Displays And Other Communication Networks

Number: US20130147686A1

An audio and/or visual experience of a see-through head-mounted display (HMD) device, e.g., in the form of glasses, can be moved to a target computing device such as a television, cell phone, or computer monitor to allow the user to seamlessly transition the content to the target computing device. For example, when the user enters a room in the home with a television, a movie which is playing on the HMD device can be transferred to the television and begin playing there without substantially interrupting the flow of the movie. The HMD device can inform the television of a network address for accessing the movie, for instance, and provide a current status in the form of a time stamp or packet identifier. Content can also be transferred in the reverse direction, to the HMD device. A transfer can occur based on location, preconfigured settings and user commands.

Claims (excerpt):
1. A head-mounted display device, comprising: at least one see-through lens; at least one image projection source associated with the at least one see-through lens; and at least one control circuit in communication with the at least one image projection source, the at least one control circuit: provides an experience comprising at least one of audio and visual content at the head-mounted display device; determines if a condition is met to provide a continuation of at least part of the experience at a target computing device; and if the condition is met, communicates data to the target computing device to allow the target computing device to provide the continuation of the at least part of the experience, the continuation of the at least part of the experience comprises at least one of the audio and the visual content.
2. The head-mounted display device of claim 1, wherein: the at least one control circuit determines that a condition is met to provide a continuation of the visual content at one target computing device and a continuation of the audio content at another computing device.
3. The head-mounted display ...

Publication date: 03-11-2016

PRODUCING AND CONSUMING METADATA WITHIN MULTI-DIMENSIONAL DATA

Number: US20160321841A1

A computing system and method for producing and consuming metadata within multi-dimensional data is provided. The computing system comprising a see-through display, a sensor system, and a processor configured to: in a recording phase, generate an annotation at a location in a three dimensional environment, receive, via the sensor system, a stream of telemetry data recording movement of a first user in the three dimensional environment, receive a message to be recorded from the first user, and store, in memory as annotation data for the annotation, the stream of telemetry data and the message, and in a playback phase, display a visual indicator of the annotation at the location, receive a selection of the visual indicator by a second user, display a simulacrum superimposed onto the three dimensional environment and animated according to the telemetry data, and present the message via the animated simulacrum.

Claims (excerpt):
1. A computing system for producing and consuming metadata within multi-dimensional data, comprising: a see-through display; a sensor system; and a processor configured to: in a recording phase, receive, from a first user, a user command to generate an annotation at a location in a three dimensional environment; receive, via the sensor system, a stream of telemetry data recording movement of the first user in the three dimensional environment; receive a message to be recorded from the first user; and store, in memory as annotation data for the annotation, the stream of telemetry data and the message; and in a playback phase, display, to a second user via the see-through display, a visual indicator of the annotation at the location; receive a selection of the visual indicator by the second user; display, to the second user via the see-through display, a simulacrum superimposed onto the three dimensional environment and animated according to the telemetry data; and present, to the second user, the message via the animated simulacrum.
2. The ...

Publication date: 04-06-2019

Anti-trip when immersed in a virtual reality environment

Number: US0010311638B2

An HMD device with a see-through display and depth sensing capability is configured to selectively dim or fade out a display of a virtual reality environment to enable a user to see the real world without obstruction by the virtual world when a distance between the user and a real world object is determined to be less than a threshold distance. The current height of the user's head (i.e., the distance from head to ground) may be utilized when performing the dimming/fading so that different threshold distances can be used depending on whether the user is standing or seated.

Publication date: 28-01-2016

SMART TRANSPARENCY FOR HOLOGRAPHIC OBJECTS

Number: US20160025982A1

A head mounted display (HMD) device is configured with a sensor package that enables head tracking to determine the device user's proximity to holographic objects in mixed reality or virtual reality environments. A fade volume including concentrically-arranged volumetric shells is placed around the user including a near shell that is closest to the user, and a far shell that is farthest from the user. When a holographic object is beyond the far shell, the HMD device renders the object with full opacity (i.e., with no transparency). As the user moves towards a holographic object and it intersects the far shell, its opacity begins to fade out with increasing transparency to reveal the background behind it. The transparency of the holographic object increases as the object gets closer to the near shell and the object becomes fully transparent when the near shell reaches it so that the background becomes fully visible.

Claims (excerpt):
1. A method performed by a head mounted display (HMD) device employed by a user occupying a physical environment, the HMD device supporting rendering of a mixed reality or virtual reality environment that includes holographic objects, comprising: placing a fade volume around the user, the fade volume having a near shell that is proximate to the user and a far shell that is distal to the user; determining a location of a holographic object with respect to the near and far shells by tracking the user's location within the mixed reality or virtual reality environment; and rendering the holographic object with transparency when it is located between the near and far shells, the transparency increasing as the object becomes closer to the near shell.
2. The method of further including rendering the holographic object with full opacity when the holographic object is located beyond the far shell of the fade volume.
3. The method of further including rendering the holographic object with full transparency when the holographic object intersects the near shell of the ...

Publication date: 28-01-2016

GAZE-BASED OBJECT PLACEMENT WITHIN A VIRTUAL REALITY ENVIRONMENT

Number: US20160026242A1

A head mounted display (HMD) device operating in a real world physical environment is configured with a sensor package that enables determination of an intersection of a device user's projected gaze with a location in a virtual reality environment so that virtual objects can be placed into the environment with high precision. Surface reconstruction of the physical environment can be applied using data from the sensor package to determine the user's view position in the virtual world. A gaze ray originating from the view position is projected outward and a cursor or similar indicator is rendered on the HMD display at the ray's closest intersection with the virtual world such as a virtual object, floor/ground, etc. In response to user input, such as a gesture, voice interaction, or control manipulation, a virtual object is placed at the point of intersection between the projected gaze ray and the virtual reality environment.

Claims (excerpt):
1. A method performed by a head mounted display (HMD) device that supports rendering of a virtual reality environment, comprising: obtaining sensor data describing a real world physical environment adjoining a user of the HMD device; using the sensor data, reconstructing a geometry of the physical environment; tracking the user's head and gaze in the physical environment using the reconstructed geometry to determine a field of view and view position; projecting a gaze ray outward from the view position; identifying an intersection between the projected gaze ray and the virtual reality environment; and placing a virtual object at the intersection within the current field in response to user input.
2. The method of in which the sensor data includes depth data and further including generating the sensor data using a depth sensor and applying surface reconstruction techniques to reconstruct the physical environment geometry.
3. The method of further including generating depth data using depth-from-stereo imaging analyses.
4. The method of further identifying ...

Publication date: 03-03-2016

MANAGEMENT OF CONTENT IN A 3D HOLOGRAPHIC ENVIRONMENT

Number: US20160063762A1

Methods for managing content within an interactive augmented reality environment are described. An augmented reality environment may be provided to an end user of a head-mounted display device (HMD) in which content (e.g., webpages) may be displayed to the end user using one or more curved slates that are positioned on a virtual cylinder that appears body-locked to the end user. The virtual cylinder may be located around the end user with the end user positioned in the middle of the virtual cylinder such that the one or more curved slates appear to be displayed at the same distance from the end user. The position and size of each of the one or more curved slates may be controlled by the end user using head gestures and a virtual pointer projected onto the virtual cylinder.

Claims (excerpt):
1. An electronic device that provides an interactive augmented reality environment, comprising: a see-through display; and one or more processors in communication with the see-through display, the one or more processors acquire a first set of content, the one or more processors determine an axis for a first virtual cylinder that surrounds an end user of the electronic device and determine a radius for the first virtual cylinder, the one or more processors generate one or more images associated with a first curved slate using the first set of content, the one or more processors cause the one or more images to be displayed using the see-through display such that the first curved slate appears to be located within a portion of the first virtual cylinder and the first curved slate appears to be body-locked to the end user of the electronic device.
2. The electronic device of claim 1, wherein: the one or more processors determine a location of the electronic device and determine the radius for the first virtual cylinder based on the location.
3. The electronic device of claim 1, wherein: the one or more processors acquire a second set of content, the one or more processors determine a second radius for a ...

11-06-2013 publication date

Mega-mesh sculpting for environments

Number: US0008462147B2
Author: Ben Sugden

A method for sculpting a three-dimensional, graphical environment. The method comprises receiving structure data that structurally defines the graphical environment at a first resolution, and storing composite data based on the structure data received. The composite data includes a first subset defining the graphical environment at the first resolution. The method further comprises exporting section-localized data based on the composite data, the section-localized data defining a section of the graphical environment at least structurally, and receiving refined section-localized data defining a section of the graphical environment at a second resolution finer than the first resolution. The method further comprises augmenting the composite data to include a second subset, which, in combination with the first subset, defines at least the section at the second resolution, according to the refined section-localized data received.

02-01-2018 publication date

Three-dimensional mixed-reality viewport

Number: US0009858720B2

An application running on a computing platform that employs three-dimensional (3D) modeling is extended using a virtual viewport into which 3D holograms are rendered by a mixed-reality head mounted display (HMD) device. The HMD device user can position the viewport to be rendered next to a real world 2D monitor and use it as a natural extension of the 3D modeling application. For example, the user can interact with modeled objects in mixed-reality and move objects between the monitor and the viewport. The 3D modeling application and HMD device are configured to exchange scene data for modeled objects (such as geometry, lighting, rotation, scale) and user interface parameters (such as mouse and keyboard inputs). The HMD device implements head tracking to determine where the user is looking so that user inputs are appropriately directed to the monitor or viewport.
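
The abstract describes the application and the HMD exchanging scene data and user-interface parameters. A hedged sketch of what such an extensibility payload might look like follows; the message fields and helper names are assumptions for illustration, not the actual protocol.

# Hypothetical serialization of scene data and input events exchanged between
# the 3D modeling application and the HMD.
import json

def make_scene_update(model_id, vertices, rotation, scale, lighting):
    """Serialize scene data for a modeled object so the HMD can render it in the viewport."""
    return json.dumps({
        "type": "scene_update",
        "model_id": model_id,
        "geometry": {"vertices": vertices},
        "transform": {"rotation": rotation, "scale": scale},
        "lighting": lighting,
    })

def make_input_event(device, payload):
    """Serialize a user-interface event (e.g., mouse or keyboard input)."""
    return json.dumps({"type": "input", "device": device, "payload": payload})

message = make_scene_update("cube-01", [[0, 0, 0], [1, 0, 0], [0, 1, 0]],
                            rotation=[0, 45, 0], scale=1.0,
                            lighting={"ambient": 0.3})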

28-01-2016 publication date

SMART PLACEMENT OF VIRTUAL OBJECTS TO STAY IN THE FIELD OF VIEW OF A HEAD MOUNTED DISPLAY

Number: US20160025981A1
Assignee:

An HMD device is configured to check the placement of newly introduced objects in a virtual reality environment such as interactive elements like menus, widgets, and notifications to confirm that the objects are significantly present within the user's field of view. If the intended original placement would locate the object outside the field of view, the HMD device relocates the object so that a portion of the object is viewable at the edge of the HMD display closest to its original placement. Such smart placement of virtual objects enables the user to readily discover new objects when they are introduced into the virtual reality environment, and then interact with the objects within a range of motions and/or head positions that is comfortable to support a more optimal interaction and user experience. 1. A method performed by a head mounted display (HMD) device that supports rendering of a virtual reality environment within a field of view , comprising:obtaining sensor data describing a physical space adjoining a user of the HMD device;using the sensor data, reconstructing a geometry of the physical space;tracking the user's head in the physical space using the reconstructed geometry to determine a current field of view;when a new virtual object is introduced into the virtual reality environment, checking its original location; andrelocating the new virtual object if the original location is outside the current field of view so that at least a portion of the new virtual object is within the current field of view when relocated.2. The method of in which the sensor data includes depth data and further including generating the sensor data using a depth sensor and applying surface reconstruction techniques to reconstruct the physical space geometry.3. The method of further including generating depth data using depth-from-stereo imaging analyses.4. The method of further including relocating the new virtual object along an edge of the field of view in a position that is ...
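
A minimal sketch of the relocation check, assuming object placement is expressed as angular offsets (azimuth, elevation) from the current view direction and the field of view as angular half-extents; the names and the clamping strategy are illustrative rather than the patented algorithm.

def relocate_into_view(azimuth, elevation, half_fov_h, half_fov_v):
    """Clamp an object's angular position so at least part of it sits at the nearest
    edge of the field of view when its original placement falls outside it."""
    inside = abs(azimuth) <= half_fov_h and abs(elevation) <= half_fov_v
    if inside:
        return azimuth, elevation            # original placement is fine
    clamped_az = max(-half_fov_h, min(half_fov_h, azimuth))
    clamped_el = max(-half_fov_v, min(half_fov_v, elevation))
    return clamped_az, clamped_el            # edge closest to the original placement

# A menu intended at 50 degrees to the right ends up pinned at the right edge (30 degrees).
print(relocate_into_view(50.0, 0.0, half_fov_h=30.0, half_fov_v=17.5))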

14-05-2015 publication date

EXECUTABLE VIRTUAL OBJECTS ASSOCIATED WITH REAL OBJECTS

Number: US20150130689A1
Assignee:

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object. 1. A portable see-through display device , comprising:one or more sensors;a logic subsystem; and receive an input of an identity of a selected real object based on one or more of input received from one or more sensors of the see-through display device and a selection of a location on a map,', 'receive a request to link a user-specified executable virtual object with the selected real object such that the virtual object is executable by a selected user in proximity to the selected real object;, 'a data-holding subsystem holding instructions executable by the logic subsystem to'}link the virtual object with the selected real object; andsend information regarding the virtual object and the linked real object to a remote service.2. The display device of claim 1 , wherein the instructions are executable to receive the request to link the user-specified executable virtual object with the selected real object by receiving a voice command from the user.3. The display device of claim 1 , wherein the instructions are executable to receive the input of the identity of the real object by receiving image data of a background scene from an image sensor and determining which real object from a plurality of real objects in the background scene is the selected real object.4. The display device ...

28-01-2016 publication date

MULTI-USER GAZE PROJECTION USING HEAD MOUNTED DISPLAY DEVICES

Number: US20160027218A1
Assignee:

A head mounted display (HMD) device operating in a real world physical environment is configured with a sensor package that enables determination of an intersection of a projection of the device user's gaze with a location in a mixed or virtual reality environment. When a projected gaze ray is visibly rendered on other HMD devices (where all the devices are operatively coupled), users of those devices can see what the user is looking at in the environment. In multi-user settings, each HMD device user can see each other's projected gaze rays which can facilitate collaboration in a commonly-shared and experienced mixed or virtual reality environment. The gaze projection can be used much like a finger to point at an object, or to indicate a location on a surface with precision and accuracy. 1. One or more computer readable memories storing computer-executable instructions which , when executed by one or more processors in a local head mounted display (HMD) device located in a physical environment , perform:using data from a sensor package incorporated into the HMD device to dynamically perform head tracking of the user within the physical environment;responsively to the head tracking, determining a field of view of a mixed reality or virtual reality environment that is renderable by the local HMD device, the field of view being variable depending at least in part on a pose of the user's head in the physical environment;receiving data from a remote HMD device including origin and intercept coordinates of a gaze ray that is projected from an origin at a view position of the remote HMD device and terminates at an intercept at a point of intersection between the projected ray and the mixed reality or virtual reality environment; andvisibly rendering a gaze ray on the local HMD device using the received data within the field of view.2. The one or more computer readable memories of further including rendering a cursor within the field of view at the intercept coordinate.3. ...
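
A small sketch of the data exchange implied above, under the assumption that both HMDs share a common spatial frame so a gaze ray can be passed as origin and intercept coordinates; the message shape and the shared_to_local helper are hypothetical.

# Exchange a gaze ray between HMDs as origin/intercept coordinates in a shared frame.
import json

def encode_gaze_ray(origin, intercept):
    return json.dumps({"type": "gaze_ray", "origin": origin, "intercept": intercept})

def decode_and_localize(message, shared_to_local):
    """Convert a received gaze ray into the local device's coordinate frame for rendering."""
    data = json.loads(message)
    start = shared_to_local(data["origin"])
    end = shared_to_local(data["intercept"])
    return start, end    # draw a visible ray from start to end, with a cursor at end

msg = encode_gaze_ray([0.0, 1.6, 0.0], [1.2, 0.9, 3.4])
start, end = decode_and_localize(msg, shared_to_local=lambda p: p)  # identity transform here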

28-01-2016 publication date

THREE-DIMENSIONAL MIXED-REALITY VIEWPORT

Number: US20160027216A1
Assignee:

An application running on a computing platform that employs three-dimensional (3D) modeling is extended using a virtual viewport into which 3D holograms are rendered by a mixed-reality head mounted display (HMD) device. The HMD device user can position the viewport to be rendered next to a real world 2D monitor and use it as a natural extension of the 3D modeling application. For example, the user can interact with modeled objects in mixed-reality and move objects between the monitor and the viewport. The 3D modeling application and HMD device are configured to exchange scene data for modeled objects (such as geometry, lighting, rotation, scale) and user interface parameters (such as mouse and keyboard inputs). The HMD device implements head tracking to determine where the user is looking so that user inputs are appropriately directed to the monitor or viewport. 1. A head mounted display (HMD) device operable by a user in a physical environment , comprising:one or more processors;a sensor package;a display configured for rendering a mixed reality environment to the user, a view position of the user for the rendered mixed-reality environment being variable depending at least in part on a pose of the user's head in the physical environment; and implementing a three-dimensional (3D) virtual viewport on the display,', 'supporting extensibility to a 3D modeling application executing on a remote computing platform, the application supporting a 3D model, and', 'rendering the 3D model as a hologram in the viewport., 'one or more memory devices storing computer-readable instructions which, when executed by the one or more processors, perform a method comprising the steps of2. The HMD of further including a network interface and receiving extensibility data from the remote computing platform over the network interface claim 1 , the extensibility data describing the 3D model and user inputs at the remote computing platform.3. The HMD of further including dynamically updating ...

09-01-2018 publication date

Virtual reality environment with real world objects

Number: US0009865089B2

An HMD device renders a virtual reality environment in which areas of the real world are masked out so that real world objects such as computer monitors, doors, people, faces, and the like appear visible to the device user and no holographic or virtual reality content is rendered over the visible objects. The HMD device includes a sensor package to support application of surface reconstruction techniques to dynamically detect edges and surfaces of the real world objects and keep objects visible on the display as the user changes position or head pose or when the real world objects move or their positions are changed. The HMD device can expose controls to enable the user to select which real world objects are visible in the virtual reality environment.
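
An illustrative sketch of the masking idea for a video-passthrough style composite (on a true see-through display, simply not drawing virtual content over the masked region has the same effect); array shapes and names are assumptions.

# Suppress virtual content wherever detected real-world objects should remain visible.
import numpy as np

def composite(virtual_rgb, passthrough_rgb, real_object_mask):
    """Where the mask marks a real-world object, show the passthrough pixel
    instead of the rendered virtual pixel."""
    mask = real_object_mask[..., None].astype(bool)      # HxWx1 boolean
    return np.where(mask, passthrough_rgb, virtual_rgb)

frame = composite(np.full((480, 640, 3), 40, dtype=np.uint8),    # virtual scene
                  np.full((480, 640, 3), 200, dtype=np.uint8),   # real-world view
                  np.zeros((480, 640), dtype=np.uint8))          # no objects masked here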

25-12-2014 publication date

INDICATING OUT-OF-VIEW AUGMENTED REALITY IMAGES

Number: US20140375683A1
Assignee:

Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object. 1. On an augmented reality computing device comprising a display system , a method for operating a user interface , the method comprising:identifying one or more virtual objects located outside a field of view available for a display of augmented reality information; andfor each object of the one or more virtual objects, providing to the user an indication of positional information associated with the virtual object.2. The method of claim 1 , wherein the indication of positional information associated with the virtual object includes an indication of a position of the virtual object relative to a position of the user.3. The method of claim 2 , wherein the indication of positional information associated with the virtual object is displayed on a see-through display system within the field of view of a user adjacent to a periphery of the field of view.4. The method of claim 2 , wherein the indication of positional information associated with the virtual object is displayed on a see-through display system within the field of view of a user as a marker indicating a presence of the virtual object and a direction to turn to view the virtual object.5. The method of claim 4 , wherein the marker comprises a tendril extending from a location within the field of view to the virtual object.6. The method of claim 4 , further comprising varying a display of the marker based on one or more of a property of the virtual object claim 4 , a distance from the virtual object to the user claim 4 , and an orientation of the virtual object relative to the user.7. The method of claim 6 , ...
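
A hedged sketch of one way to derive such an indication: compute the object's angular offset from the view direction, and if it falls outside the field of view, pin a marker to the nearest edge together with the direction to turn. Names and angle conventions are assumptions, not the disclosed embodiment.

import math

def edge_indicator(view_yaw, view_pitch, object_yaw, object_pitch, half_fov_h, half_fov_v):
    """Return (marker azimuth, marker elevation, heading to turn) for an out-of-view
    object, or None if the object is already within the field of view."""
    d_yaw = object_yaw - view_yaw
    d_pitch = object_pitch - view_pitch
    if abs(d_yaw) <= half_fov_h and abs(d_pitch) <= half_fov_v:
        return None
    marker_az = max(-half_fov_h, min(half_fov_h, d_yaw))       # pin to the nearest edge
    marker_el = max(-half_fov_v, min(half_fov_v, d_pitch))
    heading = math.degrees(math.atan2(d_pitch, d_yaw))         # direction to turn, degrees
    return marker_az, marker_el, heading

print(edge_indicator(0.0, 0.0, 45.0, 5.0, half_fov_h=30.0, half_fov_v=17.5))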

28-01-2016 publication date

GROUND PLANE ADJUSTMENT IN A VIRTUAL REALITY ENVIRONMENT

Number: US20160027213A1
Assignee:

An HMD device is configured to vertically adjust the ground plane of a rendered virtual reality environment that has varying elevations to match the flat real world floor so that the device user can move around to navigate and explore the environment and always be properly located on the virtual ground and not be above it or underneath it. Rather than continuously adjust the virtual reality ground plane, which can introduce cognitive dissonance discomfort to the user, when the user is not engaged in some form of locomotion (e.g., walking), the HMD device establishes a threshold radius around the user within which virtual ground plane adjustment is not performed. The user can make movements within the threshold radius without the HMD device shifting the virtual terrain. When the user moves past the threshold radius, the device will perform an adjustment as needed to match the ground plane of the virtual reality environment to the real world floor. 1. A method performed by a head mounted display (HMD) device supporting rendering of a virtual reality environment , comprising:obtaining sensor data describing a physical space adjoining a user of the HMD device;using the sensor data, reconstructing a geometry of the physical space including a real world floor;using the reconstructed geometry, determining a location of the user's head in the physical space including a height of the user's head from the real world floor;establishing a threshold radius around the location;maintaining a ground plane for the virtual reality environment when the location is within the threshold radius; andadjusting the ground plane as needed to match the real world floor when the location is outside the threshold radius.2. The method of in which the sensor data includes depth data and further including generating the sensor data using a depth sensor and applying surface reconstruction techniques to reconstruct the physical space geometry.3. The method of further including generating depth data ...
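
A minimal sketch of the threshold-radius behaviour described above, with assumed helper names: the virtual ground is only re-aligned once the user's head moves outside a radius around the point where the last adjustment was made.

import math

class GroundPlaneAdjuster:
    def __init__(self, threshold_radius=1.0):
        self.threshold_radius = threshold_radius
        self.anchor_xz = None          # horizontal position at the last adjustment

    def update(self, head_xz, real_floor_height, virtual_terrain_height_at):
        """Return the vertical offset to apply to the virtual world, if any."""
        if self.anchor_xz is None:
            self.anchor_xz = head_xz
        if math.dist(head_xz, self.anchor_xz) <= self.threshold_radius:
            return 0.0                 # within the radius: leave the terrain alone
        self.anchor_xz = head_xz       # re-anchor and align virtual ground to real floor
        return real_floor_height - virtual_terrain_height_at(head_xz)

adjuster = GroundPlaneAdjuster(threshold_radius=1.5)
offset = adjuster.update((0.2, 0.1), real_floor_height=0.0,
                         virtual_terrain_height_at=lambda xz: 0.3)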

01-08-2013 publication date

EXECUTABLE VIRTUAL OBJECTS ASSOCIATED WITH REAL OBJECTS

Number: US20130194164A1
Assignee: Individual

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.

09-03-2017 publication date

INDICATING OUT-OF-VIEW AUGMENTED REALITY IMAGES

Number: US20170069143A1
Assignee: Microsoft Technology Licensing, LLC

Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object. 1. An augmented reality computing device comprising:a display system;a logic device; anda storage device comprising instructions executable by the logic device toidentify a virtual object located outside a field of view available for a display of augmented reality information, the virtual object being world-locked and related to a real-world object,display, within the field of view adjacent to a periphery of the field of view, a first marker providing an indication of positional information associated with the virtual object, the first marker comprising a first set of information associated with the real-world object,detecting a change in position of the augmented reality computing device that brings the virtual object into the field of view, anddisplaying a second marker within the field of view comprising a second set of information displayed regarding the real-world object.2. The augmented reality computing device of claim 1 , wherein the instructions are executable to display the first marker based upon a query provided by user input to the computing device.3. The augmented reality computing device of claim 1 , wherein the instructions are further executable to display the first maker based upon a location of the computing device and a location of the real-world object.4. The augmented reality computing device of claim 1 , wherein the instructions are further executable to display the first marker with an appearance that varies based upon a quantity of objects associated with the first marker.5. The augmented reality computing device of claim 1 , wherein the ...

13-08-2019 publication date

Smart transparency for virtual objects

Number: US0010379347B2
Assignee: Microsoft Technology Licensing, LLC

A head mounted display (HMD) device is configured with a sensor package that enables head tracking to determine the device user's proximity to virtual objects in a mixed reality or virtual reality environment. A fade volume including concentrically-arranged volumetric shells is placed around the user including a near shell that is closest to the user, and a far shell that is farthest from the user. When a virtual object is beyond the far shell, the HMD device renders the object with full opacity (i.e., with no transparency). As the user moves towards a virtual object and it intersects the far shell, its opacity begins to fade out with increasing transparency to reveal the background behind it. The transparency of the virtual object increases as the object gets closer to the near shell and the object becomes fully transparent when the near shell reaches it so that the background becomes fully visible.
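
A minimal sketch of the fade-volume idea: opacity is full beyond the far shell, zero inside the near shell, and interpolated in between. The shell radii and the linear interpolation are assumptions for illustration.

def object_opacity(distance_to_user, near_radius=0.5, far_radius=1.0):
    """Map the distance between the user and a virtual object to an opacity in [0, 1]."""
    if distance_to_user >= far_radius:
        return 1.0                      # fully opaque outside the far shell
    if distance_to_user <= near_radius:
        return 0.0                      # fully transparent inside the near shell
    return (distance_to_user - near_radius) / (far_radius - near_radius)

print(object_opacity(0.75))             # halfway between the shells -> 0.5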

28-01-2016 publication date

USE OF SURFACE RECONSTRUCTION DATA TO IDENTIFY REAL WORLD FLOOR

Number: US20160027217A1
Assignee:

In a virtual reality or mixed reality environment, an HMD device is configured to use surface reconstruction data points obtained with a sensor package to identify a location of a floor of a real world environment in which the device operates by sorting the data points by height into respective buckets where each bucket holds a different range of heights. A bucket having the greatest number of data points that are below the height of a user of the HMD device is used to identify the height of the real world floor, for example, by calculating an average of height values of data points in that bucket. A floor for the virtual reality environment may then be aligned to the identified height of the real world floor. 1. A method performed by a head mounted display (HMD) device to identify a height of a real world floor , the HMD device supporting rendering of a virtual or mixed reality environment , the method comprising:obtaining surface reconstruction data associated with a real world environment adjoining a user of the HMD device;classifying surface reconstruction data points by height;sorting the classified data points into respective buckets;selecting a bucket having a greatest number of data points that are below a height of the user; andidentifying a height of the real world floor relative to the user based on the data points in the selected bucket.2. The method of in which the surface reconstruction data includes depth data and further including generating the surface reconstruction data using a depth sensor and applying surface reconstruction techniques to reconstruct the real world environment geometry.3. The method of further including generating depth data using one or more depth-from-stereo imaging analyses.4. The method of further including aligning a height of a virtual world floor with the height of the real world floor.5. The method of further including aligning a virtual world object to the real world floor.6. The method of in which the surface ...
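
A compact sketch of the bucketing approach, assuming fixed-size height buckets and taking the mean of the most populated bucket below the user's head as the floor height; the bucket size and names are assumptions.

from collections import defaultdict

def estimate_floor_height(point_heights, user_head_height, bucket_size=0.05):
    """Sort surface-reconstruction point heights into buckets, pick the most populated
    bucket below the user's head, and average its members as the floor height."""
    buckets = defaultdict(list)
    for h in point_heights:
        if h < user_head_height:                    # only consider points below the user
            buckets[int(h // bucket_size)].append(h)
    if not buckets:
        return None
    best = max(buckets.values(), key=len)           # bucket with the most data points
    return sum(best) / len(best)

floor = estimate_floor_height([0.01, 0.02, -0.01, 0.0, 1.3, 0.8], user_head_height=1.7)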

20-12-2016 publication date

Augmented reality camera registration

Number: US0009524436B2

A system and method executable by a computing device of an augmented reality system for registering a camera in a physical space is provided. The method may include identifying an origin marker in a series of images of a physical space captured by a camera of an augmented reality system, and defining a marker graph having an origin marker node. The method may further include analyzing in real-time the series of images to identify a plurality of expansion markers with locations defined relative to previously imaged markers, and defining corresponding expansion marker nodes in the marker graph. The method may further include calculating a current position of the camera of the augmented reality system in the physical space based on a location of a node in the marker graph corresponding to a most recently imaged marker, relative to the origin marker and any intermediate markers.
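
A rough sketch of the marker-graph idea: each expansion marker stores its pose relative to a previously imaged marker, so any marker (and hence the camera) can be located by accumulating offsets back to the origin marker. For brevity this uses 2D translation-only poses; a real system would use full 6-DoF transforms, and all names are illustrative.

class MarkerGraph:
    def __init__(self):
        self.parent = {"origin": None}
        self.offset = {"origin": (0.0, 0.0)}        # position relative to the parent marker

    def add_expansion_marker(self, marker_id, parent_id, offset_from_parent):
        self.parent[marker_id] = parent_id
        self.offset[marker_id] = offset_from_parent

    def marker_position(self, marker_id):
        """Accumulate offsets along the chain of previously imaged markers to the origin."""
        x = y = 0.0
        node = marker_id
        while node is not None:
            dx, dy = self.offset[node]
            x, y = x + dx, y + dy
            node = self.parent[node]
        return x, y

graph = MarkerGraph()
graph.add_expansion_marker("m1", "origin", (2.0, 0.0))
graph.add_expansion_marker("m2", "m1", (1.0, 1.5))
# Camera position ~= position of the most recently imaged marker plus the camera's
# offset from that marker (omitted here).
print(graph.marker_position("m2"))                   # (3.0, 1.5)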

06-06-2013 publication date

VIRTUAL LIGHT IN AUGMENTED REALITY

Number: US20130141434A1
Assignee:

A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment. 1. A method for a computing device , comprising:receiving optical sensor information output by an optical sensor system observing a physical environment, the optical sensor system forming a sensory component of a head-mounted display system;receiving position sensor information output by a position sensor system indicating a perspective of the optical sensor system within the physical environment, the position sensor system forming another sensory component of the head-mounted display system;creating an ambient lighting model from the optical sensor information and the position sensor information, the ambient lighting model describing ambient lighting conditions of the physical environment;modeling the physical environment from the optical sensor information and the position sensor information to create a virtual environment;applying the ambient lighting model to the virtual environment including an added virtual object not present in the physical environment to obtain an illuminated virtual object; andrendering a graphical representation of the illuminated virtual object for presentation via a see-through display of the head-mounted display system, the see-through display configured to visually augment an appearance of the physical environment to a user viewing the physical environment through the see-through display.2. The method of claim 1 , further comprising:applying the ambient lighting model to the virtual environment to obtain a virtual shadow of the illuminated virtual object projected on a virtual surface within the virtual environment; andrendering a graphical representation of a non-shadow region of the ...
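
An illustrative sketch of applying an estimated ambient lighting model to an added virtual object, reduced here to a single ambient colour plus one dominant directional light with Lambert shading; this lighting representation is an assumption, not the patent's model.

import numpy as np

def shade(albedo, normal, ambient_rgb, light_dir, light_rgb):
    """Simple Lambert shading under the estimated lighting of the physical environment."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    l = -np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    diffuse = max(np.dot(n, l), 0.0)
    return np.clip(np.asarray(albedo) * (np.asarray(ambient_rgb) + diffuse * np.asarray(light_rgb)),
                   0.0, 1.0)

colour = shade(albedo=[0.8, 0.2, 0.2], normal=[0.0, 1.0, 0.0],
               ambient_rgb=[0.2, 0.2, 0.25], light_dir=[0.0, -1.0, 0.0],
               light_rgb=[0.9, 0.85, 0.8])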

17-03-2016 publication date

EXECUTABLE VIRTUAL OBJECTS ASSOCIATED WITH REAL OBJECTS

Number: US20160077785A1
Assignee: Microsoft Technology Licensing, LLC

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object. 1. A display device , comprising:one or more sensors;a logic device; and receive an input of an identity of a selected real object based on one or more of input received from one or more sensors of the display device and a selection of a location on a map,', 'receive a request to link a user-specified executable virtual object with the selected real object such that the virtual object is executable by a selected user in proximity to the selected real object;', 'link the virtual object with the selected real object; and', 'send information regarding the virtual object and the linked real object to a remote service., 'a storage device holding instructions executable by the logic device to'}2. The display device of claim 1 , wherein the instructions are executable by the logic device to receive the request to link the user-specified executable virtual object with the selected real object by receiving a voice command from the user.3. The display device of claim 1 , wherein the instructions are executable by the logic device to receive the input of the identity of the real object by receiving image data of a background scene from an image sensor and determining which real object from a plurality of real objects in the background scene is the selected real object.4. The display ...

25-09-2018 publication date

Virtual light in augmented reality

Number: US0010083540B2

A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment.

01-08-2013 publication date

COORDINATE-SYSTEM SHARING FOR AUGMENTED REALITY

Number: US20130194304A1
Assignee: Individual

A method for presenting real and virtual images correctly positioned with respect to each other. The method includes, in a first field of view, receiving a first real image of an object and displaying a first virtual image. The method also includes, in a second field of view oriented independently relative to the first field of view, receiving a second real image of the object and displaying a second virtual image, the first and second virtual images positioned coincidently within a coordinate system.

17-03-2011 publication date

MEGA-MESH SCULPTING FOR ENVIRONMENTS

Number: US20110065506A1
Author: Ben Sugden
Assignee: MICROSOFT CORPORATION

A method for sculpting a three-dimensional, graphical environment. The method comprises receiving structure data that structurally defines the graphical environment at a first resolution, and storing composite data based on the structure data received. The composite data includes a first subset defining the graphical environment at the first resolution. The method further comprises exporting section-localized data based on the composite data, the section-localized data defining a section of the graphical environment at least structurally, and receiving refined section-localized data defining a section of the graphical environment at a second resolution finer than the first resolution. The method further comprises augmenting the composite data to include a second subset, which, in combination with the first subset, defines at least the section at the second resolution, according to the refined section-localized data received.

27-12-2012 publication date

ENVIRONMENTAL-LIGHT FILTER FOR SEE-THROUGH HEAD-MOUNTED DISPLAY DEVICE

Number: US20120326948A1
Assignee: MICROSOFT CORPORATION

An environmental-light filter removably coupled to an optical see-through head-mounted display (HMD) device is disclosed. The environmental-light filter couples to the HMD device between a display component and a real-world scene. Coupling features are provided to allow the filter to be easily and removably attached to the HMD device when desired by a user. The filter increases the primacy of a provided augmented-reality image with respect to a real-world scene and reduces brightness and power consumption requirements for presenting the augmented-reality image. A plurality of filters of varied light transmissivity may be provided from which to select a desired filter based on environmental lighting conditions and user preference. The light transmissivity of the filter may be about 70% light transmissive to substantially or completely opaque. 1. An environmental-light filter lens for a head-mounted display device , comprising:a filter lens configured to at least partially filter environmental light received by a user's eye, and removeably coupled to a head-mounted display device to cause an augmented-reality image to appear less transparent than when the filter lens is not coupled to the head-mounted device, the head mounted device including a see-through lens extending between the user's eye and a real-world scene when the head-mounted device is worn by the user, a display component, and an augmented-reality emitter which emits light to the user's eye using the display component to provide the augmented reality image.2. The environmental-light filter lens of claim 1 , wherein the head-mounted display device includes a frame and the filter lens includes one or more features configured to couple to the frame.3. The environmental-light filter lens of claim 2 , wherein the features include one or more of clips claim 2 , clasps claim 2 , hooks claim 2 , tabs claim 2 , flanges claim 2 , latches claim 2 , or lugs for removeably coupling the filter lens to the frame.4. The ...

04-04-2013 publication date

PERSONAL AUDIO/VISUAL SYSTEM

Number: US20130083003A1
Assignee:

The technology described herein includes a see-through, near-eye, mixed reality display device for providing customized experiences for a user. The system can be used in various entertainment, sports, shopping and theme-park situations to provide a mixed reality experience. 1. A method for presenting a personalized experience using a personal A/V apparatus , comprising:automatically determining a three dimensional location of the personal A/V apparatus, the personal A/V apparatus includes one or more sensors and a see-through display;automatically determining an orientation of the personal A/V apparatus;automatically determining a gaze of a user looking through the see-through display of the personal A/V apparatus;automatically determining a three dimensional location of a movable object in the field of view of the user through the see-through display, the determining of the three dimensional location of the movable object is performed using the one or more sensors;transmitting the three dimensional location of the personal A/V apparatus, the orientation, the gaze and the three dimensional location of the movable object to a server system;accessing weather data at the server system and automatically determining the effects of weather on the movement of the movable object;accessing course data at the server system;accessing the user's profile at the server system, the user's profile including information about the user's skill and past performance;automatically determining a recommended action on the movable object based on the three dimensional location of the movable object, the weather data and the course data;automatically adjusting the recommendation based on the user's skill and past performance;transmitting the adjusted recommendation to the personal A/V apparatus; anddisplaying the adjusted recommendation in the see-through display of the personal A/V apparatus.2. The method of claim 1 , further comprising:automatically tracking the movable object after the user ...

04-04-2013 publication date

CHANGING EXPERIENCE USING PERSONAL A/V SYSTEM

Number: US20130083007A1
Assignee:

A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction. 1. A method for generating an augmented reality environment using a mobile device , comprising:detecting a user within a particular area;acquiring a user profile associated with the user;determining an enhancement package based on the user profile, the enhancement package includes one or more virtual objects that have not been previously viewed by the user;determining that the user is in a particular physiological state;adapting the one or more virtual objects based on the particular physiological state; anddisplaying on the mobile device one or more images associated with the one or more virtual objects, the one or more images are displayed such that the one or more virtual objects are perceived to exist within the particular area.2. The method of claim 1 , further comprising:receiving and storing feedback from the user regarding the enhancement package, the user profile is updated to reflect the feedback from the user.3. The method of claim 1 , wherein:the adapting the one or more virtual objects includes substituting the one or more virtual objects with one or more different ...

04-04-2013 publication date

ENRICHED EXPERIENCE USING PERSONAL A/V SYSTEM

Number: US20130083008A1
Assignee:

A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction. 1. A method for generating an augmented reality environment using a mobile device , comprising:detecting a user within a particular area;acquiring a user profile associated with the user;determining an enhancement package based on the user profile, the enhancement package includes one or more virtual objects that have not been previously viewed by the user;determining that the user is in a particular physiological state;adapting the one or more virtual objects based on the particular physiological state; anddisplaying on the mobile device one or more images associated with the one or more virtual objects, the one or more images are displayed such that the one or more virtual objects are perceived to exist within the particular area.2. The method of claim 1 , further comprising:receiving and storing feedback from the user regarding the enhancement package, the user profile is updated to reflect the feedback from the user.3. The method of claim 1 , wherein:the adapting the one or more virtual objects includes substituting the one or more virtual objects with one or more different ...

04-04-2013 publication date

EXERCISING APPLICATIONS FOR PERSONAL AUDIO/VISUAL SYSTEM

Number: US20130083009A1
Assignee:

The technology described herein includes a see-through, near-eye, mixed reality display device for providing customized experiences for a user. The personal A/V apparatus serves as an exercise program that is always with the user, provides motivation for the user, visually tells the user how to exercise, and lets the user exercise with other people who are not present. 1. A method for presenting a personalized experience using a personal see-through A/V apparatus , comprising:accessing a location of the personal see-through A/V apparatus;automatically determining an exercise routine for a user based on the location; andpresenting a virtual image in the personal see-through A/V apparatus based on the exercise routine.2. The method of claim 1 , wherein:the presenting a virtual image in the personal see-through A/V apparatus includes presenting an image of someone performing the exercise routine based on data for a past performance of the exercise routine so that the user can see the virtual image inserted into a real scene viewed through the personal see-through A/V apparatus as the user performs the exercise routine.3. The method of claim 1 , wherein the presenting a virtual image in the personal see-through A/V apparatus based on the exercise routine includes:augmenting scenery on a route of the exercise routine so that the user can see additional scenery inserted into real scenery viewed through the personal see-through A/V apparatus.4. The method of claim 1 , further comprising recording data for a user wearing the personal see-through A/V apparatus for a period of time in which the user is not exercising claim 1 , wherein:the automatically determining an exercise routine for a user further includes:accessing a fitness goal for the user for the period of time including the time during which the user actions were recorded:determining the exercise routine based on the recorded user actions for the user to meet the fitness goal.5. The method of claim 4 , wherein the ...

04-04-2013 publication date

REPRESENTING A LOCATION AT A PREVIOUS TIME PERIOD USING AN AUGMENTED REALITY DISPLAY

Number: US20130083011A1
Assignee:

Technology is described for representing a physical location at a previous time period with three dimensional (3D) virtual data displayed by a near-eye, augmented reality display of a personal audiovisual (A/V) apparatus. The personal A/V apparatus is identified as being within the physical location, and one or more objects in a display field of view of the near-eye, augmented reality display are automatically identified based on a three dimensional mapping of objects in the physical location. User input, which may be natural user interface (NUI) input, indicates a previous time period, and one or more 3D virtual objects associated with the previous time period are displayed from a user perspective associated with the display field of view. An object may be erased from the display field of view, and a camera effect may be applied when changing between display fields of view. 1. A method for representing a physical location at a previous time period with three dimensional (3D) virtual data displayed by a near-eye , augmented reality (AR) display of a personal audiovisual (A/V) apparatus comprising:automatically identifying the personal A/V apparatus is within the physical location based on location data detected by the personal A/V apparatus;automatically identifying one or more objects in a display field of view of the near-eye, augmented reality display based on a three dimensional mapping of objects in the physical location;identifying user input indicating selection of a previous time period; anddisplaying three-dimensional (3D) virtual data associated with the previous time period based on the one or more objects in the display field of view and based on a user perspective associated with the display field of view.2. The method of further comprising:identifying a change in the display field of view; andupdating the displaying of the 3D virtual data associated with the previous time period based on the change in the display field of view.3. The method of further ...

04-04-2013 publication date

PERSONAL AUDIO/VISUAL SYSTEM WITH HOLOGRAPHIC OBJECTS

Number: US20130083018A1
Assignee:

A system for generating an augmented reality environment using state-based virtual objects is described. A state-based virtual object may be associated with a plurality of different states. Each state of the plurality of different states may correspond with a unique set of triggering events different from those of any other state. The set of triggering events associated with a particular state may be used to determine when a state change from the particular state is required. In some cases, each state of the plurality of different states may be associated with a different 3-D model or shape. The plurality of different states may be defined using a predetermined and standardized file format that supports state-based virtual objects. In some embodiments, one or more potential state changes from a particular state may be predicted based on one or more triggering probabilities associated with the set of triggering events. 1. A method for generating an augmented reality environment using a mobile device , comprising:acquiring a particular file of a predetermined file format, the particular file includes information associated with one or more virtual objects, the particular file includes state information for each virtual object of the one or more virtual objects, the one or more virtual objects include a first virtual object, the first virtual object is associated with a first state and a second state different from the first state, the first state is associated with one or more triggering events, a first triggering event of the one or more triggering events is associated with the second state;setting the first virtual object into the first state;detecting the first triggering event;setting the first virtual object into the second state in response to the detecting the first triggering event, the setting the first virtual object into the second state includes acquiring one or more new triggering events different from the one or more triggering events; andgenerating and ...
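
A small sketch of a state-based virtual object whose current state determines which triggering events it listens for; the transition-table representation is an assumption and does not reproduce the standardized file format mentioned above.

class StateBasedVirtualObject:
    def __init__(self, initial_state, transitions):
        # transitions: {state: {triggering_event: next_state}}
        self.state = initial_state
        self.transitions = transitions

    def triggering_events(self):
        return set(self.transitions.get(self.state, {}))

    def on_event(self, event):
        next_state = self.transitions.get(self.state, {}).get(event)
        if next_state is not None:
            self.state = next_state          # new state brings a new set of triggering events
        return self.state

door = StateBasedVirtualObject("closed", {
    "closed": {"user_gazes_at_door": "highlighted"},
    "highlighted": {"user_says_open": "open", "user_looks_away": "closed"},
})
door.on_event("user_gazes_at_door")
print(door.state, door.triggering_events())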

04-04-2013 publication date

PERSONAL A/V SYSTEM WITH CONTEXT RELEVANT INFORMATION

Number: US20130083062A1
Assignee:

A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction. 1. A method for generating an augmented reality environment using a mobile device , comprising:detecting a user of the mobile device within a particular waiting area of an attraction;acquiring virtual object information associated with the attraction, the virtual object information includes one or more virtual objects; andgenerating and displaying on the mobile device one or more images associated with the one or more virtual objects, the one or more images are displayed such that the one or more virtual objects are perceived to exist within the particular waiting area;detecting the user exiting the particular waiting area; anddisabling the one or more virtual objects in response to the detecting the user exiting the particular waiting area.2. The method of claim 1 , further comprising:identifying an age associated with the user, the acquiring virtual object information includes acquiring virtual object information associated with the attraction based on the age of the user.3. The method of claim 2 , further comprising:acquiring an attraction placement test, the attraction ...

04-04-2013 publication date

Service Provision Using Personal Audio/Visual System

Number: US20130083063A1
Assignee:

A collaborative on-demand system allows a user of a head-mounted display device (HMDD) to obtain assistance with an activity from a qualified service provider. In a session, the user and service provider exchange camera-captured images and augmented reality images. A gaze-detection capability of the HMDD allows the user to mark areas of interest in a scene. The service provider can similarly mark areas of the scene, as well as provide camera-captured images of the service provider's hand or arm pointing to or touching an object of the scene. The service provider can also select an animation or text to be displayed on the HMDD. A server can match user requests with qualified service providers which meet parameters regarding fee, location, rating and other preferences. Or, service providers can review open requests and self-select appropriate requests, initiating contact with a user. 1. A method for use of head-mounted display device worn by a service consumer , the method comprising:receiving image data of a scene from at least one forward-facing camera;communicating the image data of the scene to a computing device of a service provider, the service provider generating data based on the image data of the scene, to assist the service consumer in performing an activity in the scene;receiving the data generated by the service provider; andcontrolling an augmented reality projection system based on the data generated by the service provider to project at least one augmented reality image to the service consumer, to assist the service consumer in performing the activity.2. The method of claim 1 , further comprising:obtaining gaze direction data, the gaze detection data indicating an area of the scene at which the service consumer gazes; andcommunicating the gaze direction data to the computing device of the service provider, to identify, at the computing device of the service provider, the area of the scene at which the service consumer gazes.3. The method of claim 1 , ...

04-04-2013 publication date

PERSONAL AUDIO/VISUAL APPARATUS PROVIDING RESOURCE MANAGEMENT

Number: US20130083064A1
Assignee:

Technology is described for resource management based on data including image data of a resource captured by at least one capture device of at least one personal audiovisual (A/V) apparatus including a near-eye, augmented reality (AR) display. A resource is automatically identified from image data captured by at least one capture device of at least one personal A/V apparatus and object reference data. A location in which the resource is situated and a 3D space position or volume of the resource in the location is tracked. A property of the resource is also determined from the image data and tracked. A function of a resource may also be stored for determining whether the resource is usable for a task. Responsive to notification criteria for the resource being satisfied, image data related to the resource is displayed on the near-eye AR display. 1. A method for providing resource management using one or more personal audiovisual (A/V) apparatus including a near-eye , augmented reality (AR) display comprising:automatically identifying a resource based on image data of the resource captured by at least one capture device of at least one personal A/V apparatus and object reference data;automatically tracking a three dimensional (3D) space position of the resource in a location identified based on location data detected by the at least one personal A/V apparatus;automatically determining a property of the resource based on the image data of the resource;automatically tracking the property of the resource; andautomatically causing display of image data related to the resource in the near-eye, augmented reality display based on a notification criteria for the property associated with the resource.2. The method of wherein the property associated with the resource comprises at least one of the following:a quantity;an expiration date;a physical damage indicator;a quality control indicator; anda nutritional value.3. The method of further comprising:generating and storing a ...

04-04-2013 publication date

VIRTUAL SPECTATOR EXPERIENCE WITH A PERSONAL AUDIO/VISUAL APPARATUS

Number: US20130083173A1
Assignee:

Technology is described for providing a virtual spectator experience for a user of a personal A/V apparatus including a near-eye, augmented reality (AR) display. A position volume of an event object participating in an event in a first 3D coordinate system for a first location is received and mapped to a second position volume in a second 3D coordinate system at a second location remote from where the event is occurring. A display field of view of the near-eye AR display at the second location is determined, and real-time 3D virtual data representing the one or more event objects which are positioned within the display field of view are displayed in the near-eye AR display. A user may select a viewing position from which to view the event. Additionally, virtual data of a second user may be displayed at a position relative to a first user. 1. A method for providing a virtual spectator experience of an event for viewing with a near-eye , augmented reality display of a personal audiovisual (A/V) apparatus comprising:receiving in real time one or more positions of one or more event objects participating in the event occurring at a first location remote from a second location;mapping the one or more positions of the one or more event objects in the first 3D coordinate system for the first location to a second 3D coordinate system for a second location remote from the first location;determining a display field of view of a near-eye, augmented reality display of a personal A/V apparatus being worn by a user at the second location; andsending in real time 3D virtual data representing the one or more event objects which are within the display field of view to the personal A/V apparatus at the second location.2. The method of wherein the near-eye claim 1 , augmented reality display is a near-eye claim 1 , see-through claim 1 , augmented reality display.3. The method of further comprising receiving in real time 3D virtual data of the one or more event objects which include ...
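
A minimal sketch of remapping an event object's position from the venue's coordinate system into a spectator's local coordinate system, assuming the mapping can be expressed as a uniform scale, a rotation about the vertical axis and a translation; the names and the single-axis rotation are illustrative simplifications.

import math

def map_event_to_local(p_event, scale, yaw_radians, local_origin):
    """Scale, rotate about the vertical axis, then translate a 3D point."""
    x, y, z = (c * scale for c in p_event)
    cos_a, sin_a = math.cos(yaw_radians), math.sin(yaw_radians)
    x, z = x * cos_a - z * sin_a, x * sin_a + z * cos_a
    ox, oy, oz = local_origin
    return (x + ox, y + oy, z + oz)

# A player at midfield in the stadium frame appears on the user's coffee table.
print(map_event_to_local((50.0, 0.0, 25.0), scale=0.01, yaw_radians=math.pi / 2,
                         local_origin=(0.0, 0.75, 1.0)))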

04-04-2013 publication date

Sharing Games Using Personal Audio/Visual Apparatus

Number: US20130084970A1
Assignee:

A game can be created, shared and played using a personal audio/visual apparatus such as a head-mounted display device (HMDD). Rules of the game, and a configuration of the game space, can be standard or custom. Boundary points of the game can be defined by a gaze direction of the HMDD, by the user's location, by a model of a physical game space such as an instrumented court or by a template. Players can be identified and notified of the availability of a game using a server push technology. For example, a user in a particular location may be notified of the availability of a game at that location. A server manages the game, including storing the rules, boundaries and a game state. The game state can identify players and their scores. Real world objects can be imaged and provided as virtual objects in the game space. 1. A method for sharing a game , comprising:defining a characteristic of a game using a sensor of a first head-mounted display device, the characteristic is defined with respect to a physical environment of a user of the first head-mounted display device; andsharing the game, including the characteristic of the game, with at least a user of a second head-mounted display device via a network.2. The method of claim 1 , wherein:the sensor captures an image of the physical environment; andthe image of the physical environment is used to provide a model of a game space of the game.3. The method of claim 1 , further comprising:identifying one or more other selected users with whom the game is to be shared, the sharing is responsive to the identifying.4. The method of claim 1 , wherein:the characteristic comprises a location of the user in the physical environment, a game space of the game is linked to the location.5. The method of claim 1 , wherein:the characteristic comprises a desired size of a game space of the game;the sensor determines a size of the physical environment of the user; andthe method performed further comprises determining whether the size ...

04-04-2013 publication date

Personal Audio/Visual System Providing Allergy Awareness

Number: US20130085345A1
Assignee: Individual

A system provides a recommendation of food items to a user based on nutritional preferences of the user, using a head-mounted display device (HMDD) worn by the user. In a store, a forward-facing camera of the HMDD captures an image of a food item. The food item can be identified by the image, such as based on packaging of the food item. Nutritional parameters of the food item are compared to nutritional preferences of the user to determine whether the food item is recommended. The HMDD displays an augmented reality image to the user indicating whether the food item is recommended. If the food item is not recommended, a substitute food item can be identified. The nutritional preferences can indicate food allergies, preferences for low calorie foods and so forth. In a restaurant, the HMDD can recommend menu selections for a user.
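
A hedged sketch of the recommendation check: a food item's parameters are compared against the user's nutritional preferences (allergens, calorie limits). The field names and rules are assumptions for illustration only.

def recommend(food_item, preferences):
    """Return (recommended, reasons) for a food item identified from its packaging."""
    reasons = []
    for allergen in preferences.get("allergens", []):
        if allergen in food_item.get("ingredients", []):
            reasons.append(f"contains {allergen}")
    max_calories = preferences.get("max_calories_per_serving")
    if max_calories is not None and food_item.get("calories", 0) > max_calories:
        reasons.append("exceeds calorie preference")
    return (not reasons, reasons)

ok, why = recommend({"name": "Peanut bar", "ingredients": ["peanuts", "sugar"], "calories": 250},
                    {"allergens": ["peanuts"], "max_calories_per_serving": 200})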

18-04-2013 publication date

ENHANCING A SPORT USING AN AUGMENTED REALITY DISPLAY

Number: US20130095924A1
Assignee:

Technology is described for providing a personalized sport performance experience with three dimensional (3D) virtual data displayed by a near-eye, augmented reality display of a personal audiovisual (A/V) apparatus. A physical movement recommendation is determined for the user performing a sport based on skills data for the user for the sport, physical characteristics of the user, and 3D space positions for at least one or more sport objects. 3D virtual data depicting one or more visual guides for assisting the user in performing the physical movement recommendation may be displayed from a user perspective associated with a display field of view of the near-eye AR display. An avatar may also be displayed by the near-eye AR display performing a sport. The avatar may perform the sport interactively with the user or be displayed performing a prior performance of an individual represented by the avatar. 1. A method for providing a personalized sport performance experience with three dimensional (3D) virtual data being displayed by a near-eye , augmented reality (AR) display of a personal audiovisual (A/V) apparatus comprising:automatically identifying a physical location which the personal A/V apparatus is within based on location data detected by the personal A/V apparatus;automatically identifying one or more 3D space positions of at least one or more sport objects in a sport performance area associated with the physical location based on a three dimensional mapping of objects in the sport performance area;accessing a memory for physical characteristics of a user and skills data for a sport stored for the user in user profile data;determining a physical movement recommendation by a processor for the user performing the sport based on the skills data for the sport, the physical characteristics of the user, and 3D space positions for at least the one or more sport objects; anddisplaying three-dimensional (3D) virtual data depicting one or more visual guides for ...

More details
30-05-2013 publication date

HEAD-MOUNTED DISPLAY BASED EDUCATION AND INSTRUCTION

Number: US20130137076A1
Assignee:

Technology disclosed herein provides for use of HMDs in a classroom setting. Technology disclosed herein provides for HMD use for holographic instruction. In one embodiment, the HMD is used for social coaching. User profile information may be used to tailor instruction to a specific user based on known skills, learning styles, and/or characteristics. One or more individuals may be monitored based on sensor data. The sensor data may come from an HMD. The monitoring may be analyzed to determine how to enhance an experience. The experience may be enhanced by presenting an image in at least one head mounted display worn by the one or more individuals. 1. A method comprising:monitoring one or more individuals engaged in an experience, the monitoring is based on sensor data from one or more sensors;analyzing the monitoring to determine how to enhance the experience; andenhancing the experience based on the analyzing, the enhancing includes presenting a signal to at least one see-through head mounted display worn by the one or more individuals.2. The method of claim 1 , wherein the monitoring claim 1 , the analyzing claim 1 , and the enhancing include:detecting an eye gaze of a teacher using at least one of the sensors, the at least one sensor is part of a see-through HMD worn by the teacher;determining which of the one or more individuals the teacher is gazing at;determining information regarding the individual the teacher is gazing at; andproviding the information to a see-through HMD worn by the teacher.3. The method of claim 2 , wherein the monitoring claim 2 , the analyzing claim 2 , and the providing include:collecting biometric information about the individual the teacher is gazing at using at least one of the sensors;determining success, comprehension and/or attention of the individual the teacher is gazing at based on the biometric information; andreporting the success, comprehension and/or the attention of individual the teacher is gazing in the see-through HMD ...

More details
27-06-2013 publication date

ENVIRONMENTAL-LIGHT FILTER FOR SEE-THROUGH HEAD-MOUNTED DISPLAY DEVICE

Number: US20130162505A1
Assignee:

An environmental-light filter removably coupled to an optical see-through head-mounted display (HMD) device is disclosed. The environmental-light filter couples to the HMD device between a display component and a real-world scene. Coupling features are provided to allow the filter to be easily and removably attached to the HMD device when desired by a user. The filter increases the primacy of a provided augmented-reality image with respect to a real-world scene and reduces brightness and power consumption requirements for presenting the augmented-reality image. A plurality of filters of varied light transmissivity may be provided from which to select a desired filter based on environmental lighting conditions and user preference. The light transmissivity of the filter may be about 70% light transmissive to substantially or completely opaque. 1. An environmental-light filter lens for a head-mounted display device , comprising:a filter lens configured to at least partially filter environmental light received by a user's eye, and removeably coupled to a head-mounted display device to cause an augmented-reality image to appear less transparent than when the filter lens is not coupled to the head-mounted device, the head mounted device including a see-through lens extending between the user's eye and a real-world scene when the head-mounted device is worn by the user, a display component, and an augmented-reality emitter which emits light to the user's eye using the display component to provide the augmented reality image.2. The environmental-light filter lens of claim 1 , wherein the head-mounted display device includes a frame and the filter lens includes one or more features configured to couple to the frame.3. The environmental-light filter lens of claim 2 , wherein the features include one or more of clips claim 2 , clasps claim 2 , hooks claim 2 , tabs claim 2 , flanges claim 2 , latches claim 2 , or lugs for removeably coupling the filter lens to the frame.4. The ...

More details
31-10-2013 publication date

DISPLAYING A COLLISION BETWEEN REAL AND VIRTUAL OBJECTS

Number: US20130286004A1
Assignee:

Technology is described for displaying a collision between objects by an augmented reality display device system. A collision between a real object and a virtual object is identified based on three dimensional space position data of the objects. At least one effect on at least one physical property of the real object is determined based on physical properties of the real object, like a change in surface shape, and physical interaction characteristics of the collision. Simulation image data is generated and displayed simulating the effect on the real object by the augmented reality display. Virtual objects under control of different executing applications can also interact with one another in collisions. 1. A method for displaying a collision between a real object and a virtual object by an augmented reality display device system comprising:identifying a collision between a real object and a virtual object in a display field of view of an, augmented reality display based on a respective three dimensional (3D) space position associated with each object in the display field of view;determining at least one effect on at least one physical property of the real object due to the collision based on one or more physical properties of the real object and physical interaction characteristics for the collision;generating image data of the real object simulating the at least one effect on the at least one physical property of the real object; anddisplaying the image data of the real object registered to the real object.2. The method of claim 1 , the physical interaction characteristics including a velocity of at least one of the real object and the virtual object in the display field of view.3. The method of further comprising:determining at least one effect on at least one physical property of the virtual object due to the collision based on its physical properties and the physical interaction characteristics for the collision;modifying image data of the virtual object for ...
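
The following is a toy sketch (assumed names and physics, not the described system) of the two steps the abstract outlines: detecting a collision from the objects' 3D space positions, here modeled as bounding spheres, and deriving a simple "dent depth" effect from closing speed and the real object's stiffness, which a renderer could then use to draw simulated image data registered to the real object.

```python
# Hypothetical sphere-sphere collision test plus a toy deformation effect.
import math
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Body:
    center: Vec3
    radius: float
    velocity: Vec3 = (0.0, 0.0, 0.0)
    stiffness: float = 1.0      # higher = harder surface, smaller dent

def _sub(a: Vec3, b: Vec3) -> Vec3:
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _norm(v: Vec3) -> float:
    return math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)

def colliding(a: Body, b: Body) -> bool:
    return _norm(_sub(a.center, b.center)) <= a.radius + b.radius

def dent_depth(real: Body, virtual: Body) -> float:
    """Toy effect: dent grows with closing speed and shrinks with stiffness."""
    if not colliding(real, virtual):
        return 0.0
    closing_speed = _norm(_sub(virtual.velocity, real.velocity))
    return closing_speed / (1.0 + real.stiffness)

if __name__ == "__main__":
    cushion = Body(center=(0, 0, 0), radius=0.3, stiffness=0.5)
    ball = Body(center=(0.25, 0, 0), radius=0.1, velocity=(-2.0, 0, 0))
    print(colliding(cushion, ball), round(dent_depth(cushion, ball), 3))
```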

More details
21-11-2013 publication date

HOLOGRAPHIC STORY TELLING

Number: US20130307855A1
Assignee:

A system for generating and displaying holographic visual aids associated with a story to an end user of a head-mounted display device while the end user is reading the story or perceiving the story being read aloud is described. The story may be embodied within a reading object (e.g., a book) in which words of the story may be displayed to the end user. The holographic visual aids may include a predefined character animation that is synchronized to a portion of the story corresponding with the character being animated. A reading pace of a portion of the story may be used to control the playback speed of the predefined character animation in real-time such that the character is perceived to be lip-syncing the story being read aloud. In some cases, an existing book without predetermined AR tags may be augmented with holographic visual aids. 1. A method for generating and displaying holographic visual aids associated with a work of literature , comprising:identifying a reading object within a reading distance of a mobile device;acquiring a first virtual object associated with the reading object, the first virtual object is associated with a first triggering event;detecting a portion of the reading object being read by an end user of the mobile device;determining whether the first triggering event has been satisfied based on the detecting a portion of the reading object being read;determining a reading pace associated with the portion of the reading object read by the end user in response to the first triggering event being satisfied;generating a holographic animation associated with the first virtual object based on the reading pace; anddisplaying at the mobile device the holographic animation.2. The method of claim 1 , wherein:the reading object comprises a book; andthe portion of the reading object comprises a sentence from the book.3. The method of claim 1 , wherein:the detecting a portion of the reading object being read includes acquiring one or more images ...
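
As a hedged sketch of the pacing idea only, one plausible way to turn a measured reading pace into a playback-speed multiplier for the predefined character animation is to scale by the ratio of the measured to a nominal words-per-minute rate; the nominal pace and clamping limits below are invented for illustration.

```python
# Hypothetical reading-pace to animation-speed mapping.
def playback_speed(words_read: int, elapsed_seconds: float,
                   nominal_wpm: float = 150.0,
                   min_speed: float = 0.5, max_speed: float = 2.0) -> float:
    """Scale animation playback by measured-vs-nominal words per minute, clamped."""
    if elapsed_seconds <= 0:
        return 1.0
    measured_wpm = words_read / (elapsed_seconds / 60.0)
    return max(min_speed, min(max_speed, measured_wpm / nominal_wpm))

# Example: 40 words read aloud in 12 seconds -> 200 wpm -> animation plays ~1.33x.
print(round(playback_speed(40, 12.0), 2))
```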

More details
21-11-2013 publication date

SYNCHRONIZING VIRTUAL ACTOR'S PERFORMANCES TO A SPEAKER'S VOICE

Number: US20130307856A1
Assignee:

A system for generating and displaying holographic visual aids associated with a story to an end user of a head-mounted display device while the end user is reading the story or perceiving the story being read aloud is described. The story may be embodied within a reading object (e.g., a book) in which words of the story may be displayed to the end user. The holographic visual aids may include a predefined character animation that is synchronized to a portion of the story corresponding with the character being animated. A reading pace of a portion of the story may be used to control the playback speed of the predefined character animation in real-time such that the character is perceived to be lip-syncing the story being read aloud. In some cases, an existing book without predetermined AR tags may be augmented with holographic visual aids. 1. A method for generating and displaying holographic visual aids , comprising:identifying a reading object at a mobile device, the reading object is associated with one or more holographic animations corresponding with one or more phrases viewable from the reading object;detecting at the mobile device that a first phrase of the one or more phrases has been spoken by a particular person;determining a reading pace corresponding with the first phrase;detecting at the mobile device that a portion of a second phrase of the one or more phrases has been spoken by the particular person; anddisplaying at the mobile device a second holographic animation corresponding with the second phrase based on the reading pace.2. The method of claim 1 , wherein:the reading object comprises a book; andthe first phrase comprises a sentence from the book.3. The method of claim 1 , further comprising:detecting a failure of the second phrase being spoken; anddisplaying an idling holographic animation of the one or more holographic animations in response to the failure of the second phrase being spoken.4. The method of claim 1 , wherein:the detecting that a ...

More details
05-12-2013 publication date

NAVIGATING CONTENT IN AN HMD USING A PHYSICAL OBJECT

Number: US20130321255A1
Assignee:

Technology is disclosed herein to help a user navigate through large amounts of content while wearing a see-through, near-eye, mixed reality display device such as a head mounted display (HMD). The user can use a physical object such as a book to navigate through content being presented in the HMD. In one embodiment, a book has markers on the pages that allow the system to organize the content. The book could have real content, but it could be blank other than the markers. As the user flips through the book, the system recognizes the markers and presents content associated with the respective marker in the HMD. 1. A method for navigating content , comprising:receiving input that specifies what content is to be navigated by a user wearing a see-through, near-eye, mixed reality display;identifying markers in a physical object using a camera as the user manipulates the physical object;determining what portions of the content are associated with the identified markers; andpresenting images representing the portions of the content in the see-through, near-eye, mixed reality display device.2. The method of claim 1 , further comprising:presenting a navigation aid to the user in the see-through, near-eye, mixed reality display as the user manipulates the physical object.3. The method of claim 2 , wherein the physical object includes a sequence of ordered pages claim 2 , the markers are on the pages claim 2 , the presenting a navigation aid to the user in the see-through claim 2 , near-eye claim 2 , mixed reality display device as the user manipulates the physical object includes:presenting a table of contents in the see-through, near-eye, mixed reality display device, the table of contents defines at which page of the ordered sequence of pages various portions of the content can be accessed.4. The method of claim 2 , wherein the physical object includes a sequence of ordered pages claim 2 , the markers are on the pages claim 2 , the presenting a navigation aid to the user ...
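
A minimal illustrative sketch, not the patent's code: a marker-to-content index is built from the content the user specifies, then looked up as pages (and their markers) are recognized by the HMD camera. The function names and marker strings are assumptions.

```python
# Hypothetical marker-to-content index for book-based navigation.
from typing import Dict, List, Optional

def build_marker_index(content_sections: List[str],
                       page_markers: List[str]) -> Dict[str, str]:
    """Associate each page marker with a portion of the content, in order."""
    return {marker: section for marker, section in zip(page_markers, content_sections)}

def content_for_marker(index: Dict[str, str], marker: str) -> Optional[str]:
    return index.get(marker)

if __name__ == "__main__":
    index = build_marker_index(
        ["Chapter 1 overview", "Chapter 2 overview", "Appendix"],
        ["marker-001", "marker-002", "marker-003"])
    # As the user flips to the page carrying marker-002, the HMD presents Chapter 2.
    print(content_for_marker(index, "marker-002"))
```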

More details
05-12-2013 publication date

AUGMENTED BOOKS IN A MIXED REALITY ENVIRONMENT

Number: US20130321390A1
Assignee:

A system and method are disclosed for augmenting a reading experience in a mixed reality environment. In response to predefined verbal or physical gestures, the mixed reality system is able to answer a user's questions or provide additional information relating to what the user is reading. Responses may be displayed to the user on virtual display slates in a border or around the reading material without obscuring text or interfering with the user's reading experience. 1. A system for presenting a mixed reality experience to one or more users , the system comprising:a display device for a user of the one or more users, the display device including a display unit for displaying a virtual image to the user of the display device; anda computing system operatively coupled to the display device, the computing system generating the virtual image for display on the display device, the virtual image added in relation to reading material the user is reading or an image the user is viewing.2. The system of claim 1 , the computing system comprises at least one of a hub computing system and one or more processing units.3. The system of claim 1 , the virtual image added in relation to reading material including a response to a query from the user relating to the reading material.4. The system of claim 3 , wherein the response is one of text claim 3 , an image and a video.5. The system of claim 1 , the virtual image added in relation to reading material including an annotation with user-defined content.6. The system of claim 5 , wherein the annotation is one of text claim 5 , an image claim 5 , a video claim 5 , a data file claim 5 , and audio file and an executable software application file.7. The system of claim 1 , wherein the reading material or image is one of a tangible reading material or image claim 1 , an electronic reading material or image claim 1 , or a virtual reading material or image.8. A method of presenting a mixed reality experience to a user viewing a reading ...

More details
05-12-2013 publication date

GESTURE BASED REGION IDENTIFICATION FOR HOLOGRAMS

Number: US20130321462A1
Assignee:

Techniques are provided for allowing a user to select a region within virtual imagery, such as a hologram, being presented in an HMD. The user could select the region by using their hands to form a closed loop such that from the perspective of the user, the closed loop corresponds to the region the user wishes to select. The user could select the region by using a prop, such as a picture frame. In response to the selection, the selected region could be presented using a different rendering technique than other regions of the virtual imagery. Various rendering techniques such as zooming, filtering, etc. could be applied to the selected region. The identification of the region by the user could also serve as a selection of an element in that portion of the virtual image. 1. A method comprising:presenting virtual imagery in a see-through, near-eye display device;accessing image data of an environment that coincides with the virtual imagery from the perspective of a user wearing the see-through, near-eye display device;analyzing the image data to identify a region select symbol being made by the user;identifying a region in the image data that corresponds to the region select symbol; anddetermining a portion of the virtual imagery that corresponds to the region in the image data from the perspective of the user.2. The method of claim 1 , further comprising:modifying the virtual imagery in the see-through, near-eye display device based on the portion of the virtual imagery that is determined to correspond to the region in the image data from the perspective of the user.3. The method of claim 2 , wherein the modifying the virtual imagery includes:presenting, in the see-through, near-eye display device, the portion of the virtual imagery that corresponds to the region in the image data from the perspective of the user with a first display property and other portions of the virtual imagery with a second display property.4. The method of claim 1 , further comprising: ...

More details
02-01-2014 publication date

MECHANISM TO GIVE HOLOGRAPHIC OBJECTS SALIENCY IN MULTIPLE SPACES

Number: US20140002442A1
Assignee:

A system for allowing a virtual object to interact with other virtual objects across different spaces within an augmented reality (AR) environment and to transition between the different spaces is described. An AR environment may include a plurality of spaces, each comprising a bounded area or volume within the AR environment. In one example, an AR environment may be associated with a three-dimensional world space and a two-dimensional object space corresponding with a page of a book within the AR environment. A virtual object within the AR environment may be assigned to the object space and transition from the two-dimensional object space to the three-dimensional world space upon the detection of a space transition event. In some cases, a dual representation of the virtual object may be used to detect interactions between the virtual object and other virtual objects in both the world space and the object space. 1. A method for generating and displaying one or more virtual objects , comprising:identifying one or more real objects within an environment, the identifying is performed by a mobile device, the environment is associated with a world space, the one or more real objects include a first real object associated with a first space different from the world space;acquiring a virtual object associated with the environment;assigning the virtual object to the first space;detecting a space transition event for the virtual object;assigning the virtual object to the world space in response to the detecting a space transition event;determining a location for the virtual object relative to the world space; anddisplaying at the mobile device the virtual object such that the virtual object is perceived to exist at a point in space corresponding with the location.2. The method of claim 1 , wherein:the first space comprises a two-dimensional space; andthe world space comprises a three-dimensional space.3. The method of claim 1 , further comprising:detecting an interaction ...

More details
02-01-2014 publication date

DEEP AUGMENTED REALITY TAGS FOR HEAD MOUNTED DISPLAYS

Number: US20140002491A1
Assignee:

Techniques are provided for rendering, in a see-through, near-eye mixed reality display, a virtual object within a virtual hole, window or cutout. The virtual hole, window or cutout may appear to be within some real world physical object such as a book, table, etc. The virtual object may appear to be just below the surface of the physical object. In a sense, the virtual world could be considered to be a virtual container that provides developers with additional locations for presenting virtual objects. For example, rather than rendering a virtual object, such as a lamp, in a mixed reality display such that appears to sit on top of a real world desk, the virtual object is rendered such that it appears to be located below the surface of the desk. 1. A method comprising:determining a location for a virtual hole to be rendered in a see-through, near-eye, mixed-reality display device with respect to a real world environment;determining what portions of a virtual object within the virtual hole should be visible from the perspective of a user wearing the see-through, near-eye, mixed-reality display device; andrendering the virtual object within the virtual hole in the see-through, near-eye, mixed-reality display device to present the illusion that the virtual object is within the virtual hole.2. The method of claim 1 , wherein the rendering the virtual object within the virtual hole in the see-through claim 1 , near-eye claim 1 , mixed-reality display device to present the illusion that the virtual object is within the virtual hole includes:occluding portions of the virtual object that should not be visible from a perspective of the user wearing the see-through, near-eye, mixed-reality display device looking into the virtual hole.3. The method of claim 1 , further comprising:identifying a tag in the real world environment using sensor data, the determining a location for a virtual hole with respect to a real world environment includes determining a location for the virtual ...

More details
02-01-2014 publication date

PROPAGATION OF REAL WORLD PROPERTIES INTO AUGMENTED REALITY IMAGES

Number: US20140002492A1
Assignee:

Techniques are provided for propagating real world properties into mixed reality images in a see-through, near-eye mixed reality display device. A physical property from the real world may be propagated into a virtual image to be rendered in the display device. Thus, the physics depicted in the mixed reality images may be influenced by a physical property in the environment. Therefore, the user wearing the mixed reality display device is provided a better sense that it is mixed reality, as opposed to simply virtual reality. The mixed reality image may be linked to a real world physical object. This physical object can be movable such as a book, paper, cellular telephone, etc. Forces on the physical object may be propagated into the virtual image. 1. A method comprising:determining a physical property based on sensor data;applying the physical property to a virtual image;modifying the virtual image in response to applying the physical property; andrendering the modified virtual image in a see-through, near-eye, mixed-reality display device.2. The method of claim 1 , further comprising:associating the virtual image with a real world object, the determining a physical property based on sensor data includes determining the physical property with respect to the real world object, the applying the physical property to a virtual image includes propagating the physical property with respect to the real world object to the virtual image.3. The method of claim 2 , wherein the associating the virtual image with a real world object includes:linking the virtual image to an element of the real world object.4. The method of claim 2 , wherein the determining the physical property based on sensor data includes determining gravity and movement forces acting on the real world object.5. The method of claim 1 , wherein the virtual image includes a storyline having branches claim 1 , the modifying the virtual image in response to applying the physical property includes:determining which ...

More details
02-01-2014 publication date

MULTI-NODE POSTER LOCATION

Number: US20140002495A1
Assignee:

A system for identifying an AR tag and determining a location for a virtual object within an augmented reality environment corresponding with the AR tag is described. In some environments, including those with viewing obstructions, the identity of the AR tag and the location of a corresponding virtual object may be determined by aggregating individual identity and location determinations from a plurality of head-mounted display devices (HMDs). The virtual object may comprise a shared virtual object that is viewable from each of the plurality of HMDs as existing at a shared location within the augmented reality environment. The shared location may comprise a weighted average of individual location determinations from each of the plurality of HMDs. By aggregating and analyzing individual identity and location determinations, a particular HMD of the plurality of HMDs may display a virtual object without having to identify a corresponding AR tag directly. 1. A method for generating and displaying one or more virtual objects , comprising:identifying a particular tag within an environment, the identifying is performed by a first mobile device;determining a first location associated with the particular tag in response to the identifying a particular tag, the determining a first location is performed by the first mobile device;acquiring a second location associated with the particular tag from a second mobile device different from the first mobile device;determining a shared location associated with the particular tag based on the first location and the second location;acquiring a virtual object associated with the particular tag; anddisplaying at the first mobile device the virtual object such that the virtual object is perceived to exist at a point in space corresponding with the shared location.2. The method of claim 1 , wherein:the determining a shared location includes determining a weighted average of the first location and the second location.3. The method of claim 1 ...
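
The aggregation step can be sketched as a simple weighted average of the per-HMD location estimates for the same AR tag; the weights (for example, per-device confidence) are an assumption for illustration, not the patent's weighting scheme.

```python
# Hypothetical weighted-average fusion of per-HMD location estimates.
from typing import List, Sequence, Tuple

Vec3 = Tuple[float, float, float]

def shared_location(estimates: List[Vec3], weights: Sequence[float]) -> Vec3:
    """Weighted average of individual location determinations for one AR tag."""
    total = sum(weights)
    if total == 0:
        raise ValueError("at least one non-zero weight is required")
    return tuple(sum(w * p[axis] for w, p in zip(weights, estimates)) / total
                 for axis in range(3))

# Two devices agree closely; a third, partially occluded device is down-weighted.
print(shared_location([(1.0, 0.0, 2.0), (1.1, 0.0, 2.1), (1.6, 0.2, 2.4)],
                      [1.0, 1.0, 0.25]))
```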

More details
02-01-2014 publication date

CONSTRAINT BASED INFORMATION INFERENCE

Number: US20140002496A1
Assignee:

A system for identifying an AR tag and determining a location for a virtual object within an augmented reality environment corresponding with the AR tag is described. In some environments, the location of a virtual object corresponding with a particular AR tag may be determined by identifying a predefined object, determining an orientation and a scale of the predefined object relative to a head-mounted display device (HMD) based on a model of the predefined object, and inferring the location of the virtual object based on the orientation and the scale of the predefined object. In some cases, an identification of the particular AR tag corresponding with the virtual object may be acquired by aggregating and analyzing individual identity determinations from a plurality of HMDs within an augmented reality environment. 1. A method for generating and displaying one or more virtual objects , comprising:identifying a particular object within an environment, the identifying is performed by a first mobile device;acquiring a 3D model of the particular object, the particular object is associated with one or more tags;acquiring an identification of a first tag of the one or more tags from a second mobile device different from the first mobile device;determining a virtual object corresponding with the first tag based on the identification of the first tag;determining a first location associated with the virtual object relative to the 3D model; anddisplaying at the first mobile device the virtual object such that the virtual object is perceived to exist at a point in space corresponding with the first location.2. The method of claim 1 , wherein:the acquiring an identification of a first tag includes acquiring a poster index corresponding with the first tag from the second mobile device.3. The method of claim 1 , wherein:the acquiring an identification of a first tag includes aggregating a plurality of individual identity determinations from a plurality of mobile devices within the ...

More details
02-01-2014 publication date

CONTEXTUAL AUDIO DUCKING WITH SITUATION AWARE DEVICES

Number: US20140006026A1
Assignee:

A system for generating one or more enhanced audio signals such that one or more sound levels corresponding with sounds received from one or more sources of sound within an environment may be dynamically adjusted based on contextual information is described. The one or more enhanced audio signals may be generated by a head-mounted display device (HMD) worn by an end user within the environment and outputted to earphones associated with the HMD such that the end user may listen to the one or more enhanced audio signals in real-time. In some cases, each of the one or more sources of sound may correspond with a priority level. The priority level may be dynamically assigned depending on whether the end user of the HMD is focusing on a particular source of sound or has specified a predetermined level of importance corresponding with the particular source of sound. 1. A method for adjusting sound levels based on contextual information , comprising:capturing at a mobile device one or more sounds from an environment;identifying one or more sources of sound within the environment based on the one or more sounds, the one or more sources of sound include a first sound source and a second sound source different from the first sound source;assigning a first priority level to the first sound source;assigning a second priority level to the second sound source;determining a first sound level associated with the first sound source;determining a second sound level associated with the second sound source;determining one or more weighting coefficients based on the first priority level and the second priority level;generating one or more enhanced audio signals based on the one or more weighting coefficients; andoutputting the one or more enhanced audio signals from the mobile device.2. The method of claim 1 , wherein:the assigning a first priority level to the first sound source includes determining whether an end user of the mobile device is focusing on the first sound source.3. The ...
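
A hedged sketch of the mixing idea follows: per-source priority levels are converted into weighting coefficients and each source's measured level is scaled accordingly before output, which ducks low-priority sources. The softmax-style weighting is an assumption, not the patent's formula.

```python
# Hypothetical priority-to-weight conversion and level scaling for audio ducking.
import math
from typing import Dict

def weighting_coefficients(priorities: Dict[str, float]) -> Dict[str, float]:
    """Higher priority -> larger weight; weights sum to 1 (softmax-style, assumed)."""
    exp = {src: math.exp(p) for src, p in priorities.items()}
    total = sum(exp.values())
    return {src: v / total for src, v in exp.items()}

def enhanced_levels(levels: Dict[str, float],
                    priorities: Dict[str, float]) -> Dict[str, float]:
    """Scale each source's level by its weight, ducking low-priority sources."""
    weights = weighting_coefficients(priorities)
    return {src: levels[src] * weights[src] for src in levels}

if __name__ == "__main__":
    levels = {"speaker_in_focus": 0.8, "background_tv": 0.9}
    priorities = {"speaker_in_focus": 2.0, "background_tv": 0.0}  # user gazes at the speaker
    print(enhanced_levels(levels, priorities))
```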

More details
05-02-2015 publication date

Virtual light in augmented reality

Number: US20150035832A1
Assignee: Microsoft Technology Licensing LLC

A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment.

More details
05-02-2015 publication date

MIXED REALITY GRADUATED INFORMATION DELIVERY

Number: US20150035861A1
Assignee:

Embodiments that relate to presenting a plurality of visual information density levels for a plurality of geo-located data items in a mixed reality environment are disclosed. For example, in one disclosed embodiment a graduated information delivery program receives information for a selected geo-located data item and provides a minimum visual information density level for the item to a head-mounted display device. The program receives via the head-mounted display device a user input corresponding to the selected geo-located data item. Based on the input, the program provides an increasing visual information density level for the selected item to the head-mounted display device for display within the mixed reality environment. 1. A mixed reality system for presenting a plurality of visual information density levels for a plurality of geo-located data items in a mixed reality environment , the visual information density levels comprising a minimum visual information density level and a plurality of increasing visual information density levels , the mixed reality system comprising:a head-mounted display device operatively connected to a computing device, the head-mounted display device including a display system for presenting the plurality of visual information density levels within the mixed reality environment; anda graduated information delivery program executed by a processor of the computing device, the graduated information delivery program configured to:receive information for a selected geo-located data item;provide the minimum visual information density level for the selected geo-located data item to the display system for display by the head-mounted display device within the mixed reality environment;receive via the head-mounted display device a user input corresponding to the selected geo-located data item; andbased on the user input, provide one of the increasing visual information density levels for the selected geo-located data item to the display system ...

More details
19-02-2015 publication date

EXERCISING APPLICATIONS FOR PERSONAL AUDIO/VISUAL SYSTEM

Number: US20150049114A1
Assignee:

The technology described herein includes a see-through, near-eye, mixed reality display device for providing customized experiences for a user. The personal A/V apparatus serves as an exercise program that is always with the user, provides motivation for the user, visually tells the user how to exercise, and lets the user exercise with other people who are not present. 1. A method for presenting a personalized experience using a personal see-through A/V apparatus , comprising:accessing a first exercise routine for a first person;accessing data for a second person for a second exercise routine different from the first exercise routine;estimating a performance of how the second person would perform the first exercise routine based on the data; andpresenting a virtual image of someone performing the first exercise routine based on the estimated performance so that the first person can see the virtual image inserted into a real scene viewed through the personal see-through A/V apparatus as the first person performs the first exercise routine.2. The method of claim 1 , wherein:the estimated performance is based on past performance of the second exercise routine.3. The method of claim 1 , wherein:the estimated performance is based on a live performance of the second exercise routine.4. The method of claim 1 , wherein the presenting a virtual image of someone performing the first exercise routine based on the estimated performance includes:presenting an avatar of the second person that integrates the second person into an environment of the first person.5. The method of claim 4 , wherein the second person is exercising at a remote location from the first person.6. The method of claim 1 , wherein the accessing data for the second person for the second exercise routine different from the first exercise routine includes:accessing real time exercise data for the second person at a location that is remote from the personal see-through A/V apparatus.7. The method of claim 6 , ...

More details
05-06-2014 publication date

DIRECT HOLOGRAM MANIPULATION USING IMU

Number: US20140152558A1
Assignee:

Methods for controlling an augmented reality environment associated with a head-mounted display device (HMD) are described. In some embodiments, a virtual pointer may be displayed to an end user of the HMD and controlled by the end user using motion and/or orientation information associated with a secondary device (e.g., a mobile phone). Using the virtual pointer, the end user may select and manipulate virtual objects within the augmented reality environment, select real-world objects within the augmented reality environment, and/or control a graphical user interface of the HMD. In some cases, the initial position of the virtual pointer within the augmented reality environment may be determined based on a particular direction in which the end user is gazing and/or a particular object at which the end user is currently focusing on or has recently focused on. 1. A method for controlling an augmented reality environment associated with an HMD , comprising:detecting a triggering event corresponding with a virtual pointer mode of the HMD;determining an initial virtual pointer location in response to the detecting a triggering event;acquiring orientation information from a secondary device in communication with the HMD;updating the virtual pointer location based on the orientation information; anddisplaying a virtual pointer within the augmented reality environment corresponding with the virtual pointer location.2. The method of claim 1 , wherein:the determining an initial virtual pointer location includes determining a gaze direction associated with an end user of the HMD and setting the initial virtual pointer location based on the gaze direction.3. The method of claim 1 , wherein:the determining an initial virtual pointer location includes determining a gaze direction associated with an end user of the HMD, identifying one or more selectable objects within a field of view of the HMD, determining a selectable object of the one or more selectable objects closest to the ...
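
The following is an illustrative sketch (assumed math, not the HMD's API) of updating the virtual pointer from the secondary device's orientation: a yaw/pitch reading from the phone's IMU is turned into a direction, and the pointer is placed a fixed distance along that direction from the user's head position.

```python
# Hypothetical orientation-to-pointer-position conversion.
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]

def pointer_from_orientation(head_pos: Vec3, yaw_deg: float, pitch_deg: float,
                             distance: float = 2.0) -> Vec3:
    """Project a ray from the head along the phone-reported yaw/pitch direction."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    direction = (math.cos(pitch) * math.sin(yaw),
                 math.sin(pitch),
                 math.cos(pitch) * math.cos(yaw))
    return tuple(head_pos[i] + distance * direction[i] for i in range(3))

# Initial pointer straight ahead, then nudged right and slightly up by the phone.
print(pointer_from_orientation((0, 1.6, 0), 0.0, 0.0))
print(pointer_from_orientation((0, 1.6, 0), 15.0, 5.0))
```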

More details
12-04-2018 publication date

Three-dimensional mixed-reality viewport

Number: US20180101994A1
Assignee: Microsoft Technology Licensing LLC

An application running on a computing platform that employs three-dimensional (3D) modeling is extended using a virtual viewport into which 3D holograms are rendered by a mixed-reality head mounted display (HMD) device. The HMD device user can position the viewport to be rendered next to a real world 2D monitor and use it as a natural extension of the 3D modeling application. For example, the user can interact with modeled objects in mixed-reality and move objects between the monitor and the viewport. The 3D modeling application and HMD device are configured to exchange scene data for modeled objects (such as geometry, lighting, rotation, scale) and user interface parameters (such as mouse and keyboard inputs). The HMD device implements head tracking to determine where the user is looking so that user inputs are appropriately directed to the monitor or viewport.

More details
21-07-2016 publication date

Mechanism to give holographic objects saliency in multiple spaces

Number: US20160210789A1
Assignee: Individual

A system for allowing a virtual object to interact with other virtual objects across different spaces within an augmented reality (AR) environment and to transition between the different spaces is described. An AR environment may include a plurality of spaces, each comprising a bounded area or volume within the AR environment. In one example, an AR environment may be associated with a three-dimensional world space and a two-dimensional object space corresponding with a page of a book within the AR environment. A virtual object within the AR environment may be assigned to the object space and transition from the two-dimensional object space to the three-dimensional world space upon the detection of a space transition event. In some cases, a dual representation of the virtual object may be used to detect interactions between the virtual object and other virtual objects in both the world space and the object space.

More details
06-08-2015 publication date

Synchronizing virtual actor's performances to a speaker's voice

Number: US20150220231A1
Assignee: Individual

A system for generating and displaying holographic visual aids associated with a story to an end user of a head-mounted display device while the end user is reading the story or perceiving the story being read aloud is described. The story may be embodied within a reading object (e.g., a book) in which words of the story may be displayed to the end user. The holographic visual aids may include a predefined character animation that is synchronized to a portion of the story corresponding with the character being animated. A reading pace of a portion of the story may be used to control the playback speed of the predefined character animation in real-time such that the character is perceived to be lip-syncing the story being read aloud. In some cases, an existing book without predetermined AR tags may be augmented with holographic visual aids.

More details
16-10-2014 publication date

HOLOGRAPHIC SNAP GRID

Number: US20140306993A1
Assignee:

Methods for positioning virtual objects within an augmented reality environment using snap grid spaces associated with real-world environments, real-world objects, and/or virtual objects within the augmented reality environment are described. A snap grid space may comprise a two-dimensional or three-dimensional virtual space within an augmented reality environment in which one or more virtual objects may be positioned. In some embodiments, a head-mounted display device (HMD) may identify one or more grid spaces within an augmented reality environment, detect a positioning of a virtual object within the augmented reality environment, determine a target grid space of the one or more grid spaces in which to position the virtual object, determine a position of the virtual object within the target grid space, and display the virtual object within the augmented reality environment based on the position of the virtual object within the target grid space. 1. An electronic device for generating an augmented reality environment , comprising:one or more processors, the one or more processors acquire one or more virtual objects associated with the augmented reality environment, the one or more virtual objects include a first virtual object, the one or more processors identify one or more snap grid spaces within the augmented reality environment, the one or more snap grid spaces include a first snap grid space, the one or more processors determine a grid spacing associated with the first snap grid space based on one or more properties of the first virtual object, the one or more processors assign the first virtual object to a position within the first snap grid space based on the grid spacing; anda see-through display in communication with the one or more processors, the see-through display displays one or more images such that the first virtual object is perceived to exist within the augmented reality environment at the position within the first snap grid space.2. The ...
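
A minimal sketch of the snapping step: given a grid spacing derived from the virtual object's size (the heuristic is an assumption), a requested placement is quantized to the nearest grid position of the target snap grid space.

```python
# Hypothetical grid-spacing heuristic and snap-to-grid quantization.
from typing import Tuple

Vec3 = Tuple[float, float, float]

def grid_spacing_for(object_extent: float, cells_per_extent: int = 2) -> float:
    """Assumed heuristic: smaller objects get a finer grid."""
    return object_extent / cells_per_extent

def snap_to_grid(position: Vec3, origin: Vec3, spacing: float) -> Vec3:
    """Quantize a position to the nearest point of a grid anchored at `origin`."""
    return tuple(origin[i] + round((position[i] - origin[i]) / spacing) * spacing
                 for i in range(3))

if __name__ == "__main__":
    spacing = grid_spacing_for(0.4)           # a 40 cm virtual picture frame
    dropped_at = (1.07, 1.52, -0.23)          # where the user released the object
    print(snap_to_grid(dropped_at, origin=(0.0, 0.0, 0.0), spacing=spacing))
```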

More details
27-11-2014 publication date

HOLOGRAM ANCHORING AND DYNAMIC POSITIONING

Number: US20140347391A1
Assignee:

A system and method are disclosed for displaying virtual objects in a mixed reality environment in a way that is optimal and most comfortable for a user to interact with the virtual objects. When a user is moving through the mixed reality environment, the virtual objects may remain world-locked, so that the user can move around and explore the virtual objects from different perspectives. When the user is motionless in the mixed reality environment, the virtual objects may rotate to face the user so that the user can easily view and interact with the virtual objects. 1. A system for presenting a mixed reality experience to one or more users , the system comprising:one or more display devices for the one or more users, each display device including a display unit for displaying a virtual object to the user of the display device; anda computing system operatively coupled to the one or more display devices, the computing system generating the virtual object for display on the one or more display devices, the computing system displaying the virtual object to a user of the one or more users at a first position when the user is moving, and the computing system displaying the virtual object to the user at a second position rotated to face the user when the user is motionless.2. The system of claim 1 , wherein the computing system comprises at least one of a hub computing system and one or more processing units.3. The system of claim 1 , wherein the computing system displays the virtual object as rotating between the first and second positions at a predetermined angular velocity.4. The system of claim 1 , wherein the computing system displays the virtual object as rotating between the first and second positions upon the user being motionless for a predetermined period of time.5. The system of claim 1 , wherein the user is motionless and computing system displays the virtual object at the second position when the user's head is categorized as being motionless.6. The system of ...
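
As a hedged sketch of that behavior, the object's position can stay world-locked while the user moves, and once the user has been nearly motionless for a while the yaw that turns the object to face the user is computed. The speed threshold and motionless timer below are assumptions.

```python
# Hypothetical "face the user when motionless" orientation rule.
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]

def is_motionless(speed_m_per_s: float, still_seconds: float,
                  speed_threshold: float = 0.05, hold_time: float = 1.0) -> bool:
    return speed_m_per_s < speed_threshold and still_seconds >= hold_time

def facing_yaw_deg(object_pos: Vec3, user_pos: Vec3) -> float:
    """Yaw about the vertical axis that points the object's front at the user."""
    dx, dz = user_pos[0] - object_pos[0], user_pos[2] - object_pos[2]
    return math.degrees(math.atan2(dx, dz))

def target_yaw(current_yaw: float, object_pos: Vec3, user_pos: Vec3,
               speed: float, still_seconds: float) -> float:
    if is_motionless(speed, still_seconds):
        return facing_yaw_deg(object_pos, user_pos)   # rotate to face the user
    return current_yaw                                # stay world-locked while moving

print(round(target_yaw(0.0, (0, 1, 2), (1, 1.6, 0), speed=0.01, still_seconds=1.5), 1))
```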

More details
18-12-2014 publication date

Virtual object orientation and visualization

Number: US20140368532A1
Assignee: Individual

A method and apparatus for the creation of a perspective-locked virtual object in world space. The virtual object may be consumed by another user with a consumption device at a location, position, and orientation which is the same as, or proximate to, the location, position, and orientation where the virtual object is created. Objects may have one, few or many allowable consumption locations, positions, and orientations defined by their creator.

More details
18-12-2014 publication date

Multi-space connected virtual data objects

Number: US20140368533A1
Assignee: Individual

A see-through head mounted display apparatus includes a display and a processor. The processor determines geo-located positions of points of interest within a field of view and generates markers indicating that information regarding an associated real world object is available to the user. Markers are rendered in the display relative to the geo-located position and the field of view of the user. When a user selects a marker through a user gesture, the device displays a near-field virtual object having a visual tether to the marker simultaneously with the marker. The user may interact with the marker to view, add or delete information associated with the point of interest.

More details
18-12-2014 publication date

CONCURRENT OPTIMAL VIEWING OF VIRTUAL OBJECTS

Number: US20140368534A1
Assignee:

A see-through head mounted display apparatus includes code performing a method of choosing an optimal viewing location and perspective for shared-view virtual objects rendered for multiple users in a common environment. Multiple objects and multiple users are taken into account in determining the optimal, common viewing location. The technology allows each user to have a common view of the relative position of the object in the environment. 1. A method for rendering a virtual object in a see-through head mounted display , comprisingdetermining position, orientation and field of view for at least a first user and a second user in a common environment;determining a common optimal positioning of at least one commonly viewed virtual object relative to the first user and the second user, the optimal positioning comprising a location defined by a local coordinate system; andsharing the object and position data in the common environment between the at least one user and the second user to allow rendering of the commonly viewed virtual object at the location.2. The method of wherein determining a common optimal positioning includes the step of:determining a common location between the first user and the second user;determining, from the object, orientation and position rules governing the positioning of the virtual object relative to a user; andcalculating an object position based on a relative angle to and distance from each user.3. The method of wherein the method includesdetermining a distance from at least the first user and the second user for the common optimal positioning;determining at least an acceptable viewing perspectives for the object determining;determining a maximum angle of movement between a common point and the common optimal positioning; andbased on said distance, acceptable viewing perspectives and maximum angle of movement, determining the location.4. The method of wherein each object includes an object definition including at least an optimal viewing ...

More details
18-12-2014 publication date

HYBRID WORLD/BODY LOCKED HUD ON AN HMD

Number: US20140368535A1
Assignee:

A system and method are disclosed for displaying virtual objects in a mixed reality environment in a way that is optimal and most comfortable for a user to interact with the virtual objects. When a user is not focused on the virtual object, which may be a heads-up display, or HUD, the HUD may remain body locked to the user. As such, the user may explore and interact with a mixed reality environment presented by the head mounted display device without interference from the HUD. When a user wishes to view and/or interact with the HUD, the user may look at the HUD. At this point, the HUD may change from a body locked virtual object to a world locked virtual object. The user is then able to view and interact with the HUD from different positions and perspectives of the HUD. 1. A system for presenting a mixed reality experience to one or more users , the system comprising:a display device including a display unit for displaying a virtual object; anda computing system operatively coupled to the display device, the computing system generating the virtual object for display on the display device, the computing system positioning the virtual object as being body locked with respect to the display device when it is determined the virtual object is not in a field of view of the display device and the computing system positioning the virtual object as being world locked when it is determined the virtual object is in the field of view of the display device.2. The system of claim 1 , wherein the computing system comprises at least one of a hub computing system or one or more processing units.3. The system of claim 1 , wherein the computing system switches the position of the virtual object from a body locked position to a world locked position when it is determined that the virtual object remains within the field of view of the display device for a predetermined period of time.4. The system of claim 1 , wherein the computing system switches the position of the virtual object from ...
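
An illustrative sketch of the switching rule only: the HUD stays body-locked while it is outside the display's field of view and becomes world-locked once the user looks at it for a short dwell. The state names and dwell time are assumptions for illustration.

```python
# Hypothetical body-locked / world-locked HUD state switch.
from dataclasses import dataclass

@dataclass
class HudState:
    mode: str = "body_locked"       # or "world_locked"
    dwell_seconds: float = 0.0

def update_hud(state: HudState, hud_in_fov: bool, dt: float,
               dwell_required: float = 0.5) -> HudState:
    if hud_in_fov:
        state.dwell_seconds += dt
        if state.dwell_seconds >= dwell_required:
            state.mode = "world_locked"    # user is looking at the HUD
    else:
        state.dwell_seconds = 0.0
        state.mode = "body_locked"         # HUD follows the user again
    return state

if __name__ == "__main__":
    s = HudState()
    for _ in range(10):                    # user keeps the HUD in view for one second
        s = update_hud(s, hud_in_fov=True, dt=0.1)
    print(s.mode)
```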

More details
18-12-2014 publication date

SHARED AND PRIVATE HOLOGRAPHIC OBJECTS

Number: US20140368537A1
Assignee:

A system and method are disclosed for displaying virtual objects in a mixed reality environment including shared virtual objects and private virtual objects. Multiple users can collaborate together in interacting with the shared virtual objects. A private virtual object may be visible to a single user. In examples, private virtual objects of respective users may facilitate the users' collaborative interaction with one or more shared virtual objects. 1. A system for presenting a mixed reality experience , the system comprising:a first display device including a display unit for displaying virtual objects including a shared virtual object and a private virtual object; anda computing system operatively coupled to the first display device and a second display device, the computing system generating the shared and private virtual objects for display on the first display device, and the computing system generating the shared but not the private virtual object for display on a second display device.2. The system of claim 1 , wherein the shared virtual object and private virtual object are part of a single hybrid virtual object.3. The system of claim 1 , wherein the shared virtual object and private virtual object are separate virtual objects.4. The system of claim 1 , wherein interaction with the private virtual object affects a change in the shared virtual object.5. The system of claim 4 , wherein the change is a change in the content provided by the shared virtual object.6. The system of claim 4 , wherein the change is a change in the position of the shared virtual object.7. The system of claim 4 , wherein the change is a change in the appearance of the shared virtual object.8. The system of claim 2 , wherein the shared virtual object includes a virtual display slate having content displayed on the head mounted display device.9. A system for presenting a mixed reality experience claim 2 , the system comprising:a first display device including a display unit for ...

More details
18-12-2014 publication date

MULTI-STEP VIRTUAL OBJECT SELECTION

Number: US20140372957A1
Assignee:

A head mounted display allows user selection of a virtual object through multi-step focusing by the user. Focus on the selectable object is determined and then a validation object is displayed. When user focus moves to the validation object, a timeout determines that a selection of the validation object, and thus the selectable object has occurred. The technology can be used in see through head mounted displays to allow a user to effectively navigate an environment with a multitude of virtual objects without unintended selections. 1. A method for selecting a virtual object rendered in a see-through head mounted display , comprising:rendering one or more virtual objects, at least one virtual object being a selectable virtual object by a user to engage a function with respect to the object;determining gaze, head position and field of view for a user, the gaze, head position and field of view determining a user focus;if the focus of the user intersects a selectable virtual object, determining an initial selection of the selectable virtual object;displaying a validation object proximate to the selectable virtual object; anddetecting whether the focus of the user intersects the validation object thereby initiating a selection of the selectable object.2. The method of wherein the method further includes claim 1 , prior to displaying a validation object claim 1 , determining if the user focus remains on the selectable virtual object for a first time period.3. The method of wherein said detecting includes determining if the user focus remains on the validation object for a second time period.4. The method of further including determining the focus of the user focus leaves the selectable virtual object and initiating an unselection of the object.5. The method of wherein the focus of the user includes determining whether a user gaze intersects one of the selectable virtual object or the validation object.6. The method of wherein the focus of the user includes determining ...
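
A minimal sketch of the two-step flow follows: dwelling on a selectable object reveals a validation object, and holding focus on the validation object past a timeout completes the selection; losing focus unselects. The specific timer values are assumptions.

```python
# Hypothetical multi-step (focus -> validation -> timeout) selection logic.
from dataclasses import dataclass

@dataclass
class Selector:
    show_validation_after: float = 0.4   # dwell on the object before the prompt appears
    confirm_after: float = 0.8           # dwell on the validation object to select
    object_dwell: float = 0.0
    validation_dwell: float = 0.0
    validation_visible: bool = False

    def update(self, focus_on_object: bool, focus_on_validation: bool, dt: float) -> bool:
        """Advance timers; return True on the frame the selection is confirmed."""
        self.object_dwell = self.object_dwell + dt if focus_on_object else 0.0
        if self.object_dwell >= self.show_validation_after:
            self.validation_visible = True
        if self.validation_visible and focus_on_validation:
            self.validation_dwell += dt
            if self.validation_dwell >= self.confirm_after:
                return True
        elif not focus_on_object:
            self.validation_visible, self.validation_dwell = False, 0.0   # unselect
        return False

sel = Selector()
for _ in range(5):                  # user looks at the object for 0.5 s
    sel.update(True, False, 0.1)
done = False
for _ in range(9):                  # then at the validation object for 0.9 s
    done = sel.update(False, True, 0.1) or done
print(done)
```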

More details
06-10-2016 publication date

PERSONAL AUDIO/VISUAL SYSTEM

Number: US20160292850A1
Assignee: Microsoft Technology Licensing, LLC

The technology described herein includes a see-through, near-eye, mixed reality display device for providing customized experiences for a user. The system can be used in various entertainment, sports, shopping and theme-park situations to provide a mixed reality experience. 1. A method for generating an augmented reality environment using a mobile device , comprising:capturing images of an environment using the mobile device;determining an exercise being performed by an end user of the mobile device using the captured images;determining a performance of the end user during the exercise using the captured images;detecting that the end user has completed the exercise using the captured images; andgenerating and displaying metrics for the performance of the end user compared with a prior exercise history for the end user in response to detecting that the end user has completed the exercise.2. The method of claim 1 , further comprising:determining a location of the mobile device, the determining an exercise being performed by the end user includes determining the exercise based on the location of the mobile device.3. The method of claim 1 , wherein:the determining an exercise being performed by an end user of the mobile device includes identifying an exercise machine being used by the end user using the captured images.4. The method of claim 3 , wherein:the determining a performance of the end user includes determining a number of repetitions performed by the end user using the exercise machine.5. The method of claim 1 , wherein:the determining a performance of the end user includes determining a distance traveled by the end user using the captured images.6. The method of claim 1 , wherein:the determining a performance of the end user includes estimating a number of calories burned by the end user using the captured images.7. The method of claim 1 , wherein:the mobile device comprises a head mounted display device.8. An electronic device for generating an augmented ...

More details
19-09-2019 publication date

GAZE-BASED OBJECT PLACEMENT WITHIN A VIRTUAL REALITY ENVIRONMENT

Number: US20190286231A1
Assignee:

A head mounted display (HMD) device operating in a real world physical environment is configured with a sensor package that enables determination of an intersection of a device user's projected gaze with a location in a virtual reality environment so that virtual objects can be placed into the environment with high precision. Surface reconstruction of the physical environment can be applied using data from the sensor package to determine the user's view position in the virtual world. A gaze ray originating from the view position is projected outward and a cursor or similar indicator is rendered on the HMD display at the ray's closest intersection with the virtual world such as a virtual object, floor/ground, etc. In response to user input, such as a gesture, voice interaction, or control manipulation, a virtual object is placed at the point of intersection between the projected gaze ray and the virtual reality environment. 1. A method performed by a head mounted display (HMD) device worn by a user in which the HMD device supports a virtual reality environment and is configured with one or more sensors , the method comprising:tracking the user's view position of the virtual reality environment in a field of view (FOV) using data from the one or more sensors that describes a real-world physical environment adjoining the user;projecting a gaze ray outward from the view position into the virtual reality environment;identifying a point of intersection between the projected gaze ray and the virtual reality environment that is nearest to the view position;receiving an input from the user that indicates an intended placement of a virtual object at the point of intersection;determining whether the intended placement causes an edge of the FOV to clip the virtual object;responsively to the determination, casting the gaze ray from the view position into the virtual reality environment through a portion of the FOV that is opposite to the clipping edge of the FOV; andrendering ...
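
The clipping check can be sketched in deliberately simplified, one-dimensional screen terms: if placing the object at the gaze intersection would be clipped by an edge of the field of view, the placement is shifted back toward the opposite side so the object fits. This is a toy analog of the re-cast step, with all geometry and names assumed.

```python
# Hypothetical 1D analog of the FOV-clipping check and adjusted placement.
from typing import Optional

def clipping_edge(center_x: float, half_width: float,
                  fov_left: float, fov_right: float) -> Optional[str]:
    """Return which horizontal FOV edge would clip an object of this width, if any."""
    if center_x - half_width < fov_left:
        return "left"
    if center_x + half_width > fov_right:
        return "right"
    return None

def adjusted_placement(intersection_x: float, half_width: float,
                       fov_left: float = -1.0, fov_right: float = 1.0) -> float:
    edge = clipping_edge(intersection_x, half_width, fov_left, fov_right)
    if edge is None:
        return intersection_x
    # Shift the placement toward the side opposite the clipping edge so the object fits.
    return fov_left + half_width if edge == "left" else fov_right - half_width

# The gaze hits near the right edge; the object is nudged back inside the FOV.
print(adjusted_placement(0.95, half_width=0.2))
```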

Подробнее
23-02-2016 дата публикации

Virtual spectator experience with a personal audio/visual apparatus

Номер: US9268406B2
Принадлежит: Microsoft Technology Licensing LLC

Technology is described for providing a virtual spectator experience for a user of a personal A/V apparatus including a near-eye, augmented reality (AR) display. A position volume of an event object participating in an event in a first 3D coordinate system for a first location is received and mapped to a second position volume in a second 3D coordinate system at a second location remote from where the event is occurring. A display field of view of the near-eye AR display at the second location is determined, and real-time 3D virtual data representing the one or more event objects which are positioned within the display field of view are displayed in the near-eye AR display. A user may select a viewing position from which to view the event. Additionally, virtual data of a second user may be displayed at a position relative to a first user.
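
As a simplified illustration of mapping an event object from the venue's coordinate system into a remote viewer's coordinate system, the sketch below applies a rigid transform (yaw rotation plus translation) to a tracked position; the function names and numbers are illustrative assumptions.

import math

# Hypothetical sketch: re-express a tracked position from the venue frame
# in a remote viewer's local frame using a yaw rotation and a translation.

def make_transform(yaw_degrees, translation):
    a = math.radians(yaw_degrees)
    return (math.cos(a), math.sin(a), translation)

def map_point(point, transform):
    cos_a, sin_a, (tx, ty, tz) = transform
    x, y, z = point
    # Rotate about the vertical axis, then translate into the second frame.
    return (cos_a * x - sin_a * z + tx, y + ty, sin_a * x + cos_a * z + tz)

# A player tracked at mid-court, re-expressed in the living-room frame so
# it can be rendered by the near-eye AR display.
venue_to_room = make_transform(90.0, (1.0, 0.0, -2.0))
print(map_point((3.0, 0.0, 5.0), venue_to_room))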

Подробнее
15-03-2016 дата публикации

Personal audio/visual system for providing an adaptable augmented reality environment

Номер: US9285871B2
Принадлежит: Microsoft Technology Licensing LLC

A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction.

Подробнее
15-03-2016 дата публикации

Representing a location at a previous time period using an augmented reality display

Номер: US9286711B2
Принадлежит: Microsoft Technology Licensing LLC

Technology is described for representing a physical location at a previous time period with three dimensional (3D) virtual data displayed by a near-eye, augmented reality display of a personal audiovisual (A/V) apparatus. The personal A/V apparatus is identified as being within the physical location, and one or more objects in a display field of view of the near-eye, augmented reality display are automatically identified based on a three dimensional mapping of objects in the physical location. User input, which may be natural user interface (NUI) input, indicates a previous time period, and one or more 3D virtual objects associated with the previous time period are displayed from a user perspective associated with the display field of view. An object may be erased from the display field of view, and a camera effect may be applied when changing between display fields of view.

Подробнее
28-03-2017 дата публикации

Personal audio/visual apparatus providing resource management

Номер: US9606992B2
Принадлежит: Microsoft Technology Licensing LLC

Technology is described for resource management based on data including image data of a resource captured by at least one capture device of at least one personal audiovisual (A/V) apparatus including a near-eye, augmented reality (AR) display. A resource is automatically identified from image data captured by at least one capture device of at least one personal A/V apparatus and object reference data. A location in which the resource is situated and a 3D space position or volume of the resource in the location is tracked. A property of the resource is also determined from the image data and tracked. A function of a resource may also be stored for determining whether the resource is usable for a task. Responsive to notification criteria for the resource being satisfied, image data related to the resource is displayed on the near-eye AR display.
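
The sketch below is one possible reading of the notification-criteria check: a tracked resource is tested against a task and a minimum quantity, and image data is surfaced when the criteria are met. The data model and names are assumptions made for illustration only.

from dataclasses import dataclass

# Hypothetical sketch of a notification-criteria check for a tracked resource.

@dataclass
class TrackedResource:
    name: str
    location: str
    quantity: int          # property derived from image data over time
    usable_for: set

def notification_due(resource, task, minimum_quantity=1):
    """Return True when the resource no longer satisfies the task."""
    if task not in resource.usable_for:
        return True
    return resource.quantity < minimum_quantity

paper = TrackedResource("copy paper", "supply room", 0, {"printing"})
if notification_due(paper, "printing"):
    print(f"Show image data for {paper.name} at {paper.location}")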

Подробнее
27-11-2018 дата публикации

Virtual object orientation and visualization

Номер: US10139623B2
Принадлежит: Microsoft Technology Licensing LLC

A method and apparatus for the creation of a perspective-locked virtual object in world space. The virtual object may be consumed by another user with a consumption device at a location, position, and orientation which is the same as, or proximate to, the location, position, and orientation where the virtual object is created. Objects may have one, few, or many allowable consumption locations, positions, and orientations defined by their creator.
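
One way to picture the "proximate to" test is sketched below: the consuming device's pose is accepted only if it lies within a distance and heading tolerance of an allowable consumption pose. The tolerances and names are illustrative assumptions, not the patent's criteria.

import math

# Hypothetical sketch: decide whether a consuming device's pose is close
# enough to an allowable consumption pose for a perspective-locked object.

def pose_allowed(consumer_pos, consumer_heading, allowed_pos, allowed_heading,
                 max_distance=1.5, max_heading_error=20.0):
    dx, dy, dz = (c - a for c, a in zip(consumer_pos, allowed_pos))
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Smallest angular difference between headings, in degrees.
    heading_error = abs((consumer_heading - allowed_heading + 180) % 360 - 180)
    return distance <= max_distance and heading_error <= max_heading_error

# Consumer stands about 1 m from the creation point, facing nearly the same way.
print(pose_allowed((1.0, 0.0, 0.2), 95.0, (0.0, 0.0, 0.0), 90.0))  # -> True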

Подробнее
24-05-2016 дата публикации

Enhancing a sport using an augmented reality display

Номер: US9345957B2
Принадлежит: Microsoft Technology Licensing LLC

Technology is described for providing a personalized sport performance experience with three dimensional (3D) virtual data displayed by a near-eye, augmented reality display of a personal audiovisual (A/V) apparatus. A physical movement recommendation is determined for the user performing a sport based on skills data for the user for the sport, physical characteristics of the user, and 3D space positions for at least one or more sport objects. 3D virtual data depicting one or more visual guides for assisting the user in performing the physical movement recommendation may be displayed from a user perspective associated with a display field of view of the near-eye AR display. An avatar may also be displayed by the near-eye AR display performing a sport. The avatar may perform the sport interactively with the user or be displayed performing a prior performance of an individual represented by the avatar.

Подробнее
21-11-2013 дата публикации

Synchronizing virtual actor's performances to a speaker's voice

Номер: WO2013173531A1

A system for generating and displaying holographic visual aids associated with a story to an end user of a head-mounted display device while the end user is reading the story or perceiving the story being read aloud is described. The story may be embodied within a reading object (e.g., a book) in which words of the story may be displayed to the end user. The holographic visual aids may include a predefined character animation that is synchronized to a portion of the story corresponding with the character being animated. A reading pace of a portion of the story may be used to control the playback speed of the predefined character animation in real-time such that the character is perceived to be lip-syncing the story being read aloud. In some cases, an existing book without predetermined AR tags may be augmented with holographic visual aids.
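
The pace-to-playback relationship described above can be pictured as a simple ratio, as in the sketch below; the baseline authored rate and function name are illustrative assumptions.

# Hypothetical sketch: scale a predefined character animation so the
# character appears to lip-sync the story at the observed reading pace.

def playback_speed(observed_words_per_minute, authored_words_per_minute=150.0):
    """Ratio applied to the animation clip's authored frame rate."""
    if observed_words_per_minute <= 0:
        return 0.0                      # pause the animation if reading stops
    return observed_words_per_minute / authored_words_per_minute

# The reader slows down; the character's mouth movement slows to match.
print(playback_speed(120.0))   # -> 0.8
print(playback_speed(180.0))   # -> 1.2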

Подробнее
08-01-2019 дата публикации

Hybrid world/body locked HUD on an HMD

Номер: US10175483B2
Принадлежит: Microsoft Technology Licensing LLC

A system and method are disclosed for displaying virtual objects in a mixed reality environment in a way that is optimal and most comfortable for a user to interact with the virtual objects. When a user is not focused on the virtual object, which may be a heads-up display, or HUD, the HUD may remain body locked to the user. As such, the user may explore and interact with a mixed reality environment presented by the head mounted display device without interference from the HUD. When a user wishes to view and/or interact with the HUD, the user may look at the HUD. At this point, the HUD may change from a body locked virtual object to a world locked virtual object. The user is then able to view and interact with the HUD from different positions and perspectives of the HUD.
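
A minimal sketch of the body-locked to world-locked switch follows: while the user's gaze is away from the HUD it follows the body, and once the user looks at it, its current world pose is frozen. The class and state names are assumptions for illustration.

# Hypothetical sketch of the hybrid body-locked / world-locked HUD behavior.

class Hud:
    def __init__(self):
        self.mode = "body_locked"
        self.world_position = None

    def update(self, user_gazing_at_hud, body_anchored_position):
        if user_gazing_at_hud and self.mode == "body_locked":
            # Freeze the HUD where it currently is so the user can walk
            # around it and inspect it from different perspectives.
            self.mode = "world_locked"
            self.world_position = body_anchored_position
        elif not user_gazing_at_hud and self.mode == "world_locked":
            self.mode = "body_locked"
            self.world_position = None
        return self.world_position or body_anchored_position

hud = Hud()
print(hud.update(False, (0.0, 1.6, 2.0)))  # follows the body
print(hud.update(True,  (0.3, 1.6, 2.1)))  # becomes world locked here
print(hud.update(True,  (0.9, 1.6, 2.8)))  # stays put while gazed at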

Подробнее
17-12-2015 дата публикации

Shared and private holographic objects

Номер: AU2014281863A1
Принадлежит: Microsoft Technology Licensing LLC

A system and method are disclosed for displaying virtual objects in a mixed reality environment including shared virtual objects and private virtual objects. Multiple users can collaborate together in interacting with the shared virtual objects. A private virtual object may be visible to a single user. In examples, private virtual objects of respective users may facilitate the users' collaborative interaction with one or more shared virtual objects.
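
The shared/private distinction can be pictured as a per-user visibility filter, as in the sketch below; the dictionary fields are illustrative assumptions about the data model rather than the patent's schema.

# Hypothetical sketch: filter which holographic objects a given user's HMD
# should render, based on a shared/private flag and an owner.

def visible_objects(objects, user_id):
    return [obj for obj in objects
            if obj["shared"] or obj["owner"] == user_id]

scene = [
    {"name": "design model", "shared": True,  "owner": "alice"},
    {"name": "alice notes",  "shared": False, "owner": "alice"},
    {"name": "bob notes",    "shared": False, "owner": "bob"},
]
print([o["name"] for o in visible_objects(scene, "bob")])
# -> ['design model', 'bob notes']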

Подробнее
27-04-2016 дата публикации

Shared and private holographic objects

Номер: EP3011382A1
Принадлежит: Microsoft Technology Licensing LLC

A system and method are disclosed for displaying virtual objects in a mixed reality environment including shared virtual objects and private virtual objects. Multiple users can collaborate together in interacting with the shared virtual objects. A private virtual object may be visible to a single user. In examples, private virtual objects of respective users may facilitate the users' collaborative interaction with one or more shared virtual objects.

Подробнее
19-05-2015 дата публикации

Synchronizing virtual actor's performances to a speaker's voice

Номер: US9035955B2
Принадлежит: Microsoft Technology Licensing LLC

A system for generating and displaying holographic visual aids associated with a story to an end user of a head-mounted display device while the end user is reading the story or perceiving the story being read aloud is described. The story may be embodied within a reading object (e.g., a book) in which words of the story may be displayed to the end user. The holographic visual aids may include a predefined character animation that is synchronized to a portion of the story corresponding with the character being animated. A reading pace of a portion of the story may be used to control the playback speed of the predefined character animation in real-time such that the character is perceived to be lip-syncing the story being read aloud. In some cases, an existing book without predetermined AR tags may be augmented with holographic visual aids.

Подробнее
20-10-2015 дата публикации

Augmented books in a mixed reality environment

Номер: US9165381B2
Принадлежит: Microsoft Technology Licensing LLC

A system and method are disclosed for augmenting a reading experience in a mixed reality environment. In response to predefined verbal or physical gestures, the mixed reality system is able to answer a user's questions or provide additional information relating to what the user is reading. Responses may be displayed to the user on virtual display slates in a border or around the reading material without obscuring text or interfering with the user's reading experience.

Подробнее
19-01-2021 дата публикации

Virtual object orientation and visualization

Номер: CA2913650C
Принадлежит: Microsoft Technology Licensing LLC

A method and apparatus for the creation of a perspective-locked virtual object in world space. The virtual object may be consumed by another user with a consumption device at a location, position, and orientation which is the same as, or proximate to, the location, position, and orientation where the virtual object is created. Objects may have one, few, or many allowable consumption locations, positions, and orientations defined by their creator.

Подробнее
22-11-2016 дата публикации

Sharing games using personal audio/visual apparatus

Номер: US9498720B2
Принадлежит: Microsoft Technology Licensing LLC

A game can be created, shared and played using a personal audio/visual apparatus such as a head-mounted display device (HMDD). Rules of the game, and a configuration of the game space, can be standard or custom. Boundary points of the game can be defined by a gaze direction of the HMDD, by the user's location, by a model of a physical game space such as an instrumented court or by a template. Players can be identified and notified of the availability of a game using a server push technology. For example, a user in a particular location may be notified of the availability of a game at that location. A server manages the game, including storing the rules, boundaries and a game state. The game state can identify players and their scores. Real world objects can be imaged and provided as virtual objects in the game space.
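
A possible shape for the server-side game record (rules, boundary points, and a game state with players and scores) is sketched below; the class and field names are assumptions made for illustration, not the patent's schema.

from dataclasses import dataclass, field

# Hypothetical sketch of the server-managed game record.

@dataclass
class SharedGame:
    rules: str
    boundary_points: list                      # e.g., gaze-defined corners
    scores: dict = field(default_factory=dict) # player id -> score

    def add_player(self, player_id):
        self.scores.setdefault(player_id, 0)

    def record_score(self, player_id, points):
        self.scores[player_id] = self.scores.get(player_id, 0) + points

game = SharedGame("first to 10 wins",
                  [(0, 0, 0), (5, 0, 0), (5, 0, 8), (0, 0, 8)])
game.add_player("user_a")
game.record_score("user_a", 2)
print(game.scores)   # -> {'user_a': 2}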

Подробнее
07-08-2019 дата публикации

Virtual object orientation and visualization

Номер: EP3521981A1
Принадлежит: Microsoft Technology Licensing LLC

A method and apparatus for the creation of a perspective-locked virtual object in world space. The virtual object may be consumed by another user with a consumption device at a location, position, and orientation which is the same as, or proximate to, the location, position, and orientation where the virtual object is created. Objects may have one, few, or many allowable consumption locations, positions, and orientations defined by their creator.

Подробнее
01-09-2015 дата публикации

Realistic occlusion for a head mounted augmented reality display

Номер: US9122053B2
Принадлежит: Microsoft Technology Licensing LLC

Technology is described for providing realistic occlusion between a virtual object displayed by a head mounted, augmented reality display system and a real object visible to the user's eyes through the display. A spatial occlusion in a user field of view of the display is typically a three dimensional occlusion determined based on a three dimensional space mapping of real and virtual objects. An occlusion interface between a real object and a virtual object can be modeled at a level of detail determined based on criteria such as distance within the field of view, display size or position with respect to a point of gaze. Technology is also described for providing three dimensional audio occlusion based on an occlusion between a real object and a virtual object in the user environment.
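
The level-of-detail selection for the occlusion interface might look like the sketch below, where distance from the user and angular offset from the point of gaze pick one of several models; the thresholds and level names are illustrative assumptions.

# Hypothetical sketch: pick a level of detail for modeling the occlusion
# interface between a real and a virtual object.

def occlusion_level_of_detail(distance_m, gaze_offset_degrees):
    if distance_m > 10.0 or gaze_offset_degrees > 30.0:
        return "bounding_box"      # cheap, coarse occlusion far from focus
    if distance_m > 3.0 or gaze_offset_degrees > 10.0:
        return "low_poly_mesh"
    return "detailed_mesh"         # precise contour near the point of gaze

print(occlusion_level_of_detail(1.2, 4.0))    # -> detailed_mesh
print(occlusion_level_of_detail(6.0, 4.0))    # -> low_poly_mesh
print(occlusion_level_of_detail(15.0, 4.0))   # -> bounding_box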

Подробнее
10-03-2016 дата публикации

Management of content in a 3d holographic environment

Номер: WO2016036625A1
Принадлежит: Microsoft Technology Licensing, LLC

Methods for managing content within an interactive augmented reality environment are described. An augmented reality environment may be provided to an end user of a head-mounted display device (HMD) in which content (e.g., webpages) may be displayed to the end user using one or more curved slates that are positioned on a virtual cylinder that appears body-locked to the end user. The virtual cylinder may be located around the end user with the end user positioned in the middle of the virtual cylinder such that the one or more curved slates appear to be displayed at the same distance from the end user. The position and size of each of the one or more curved slates may be controlled by the end user using head gestures and a virtual pointer projected onto the virtual cylinder.
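
A minimal sketch of placing slates on such a body-locked cylinder follows: each slate gets an angle around the user and a position at a fixed radius, so all appear at the same distance. The parameter names and values are illustrative assumptions.

import math

# Hypothetical sketch: place N curved slates on a virtual cylinder centered
# on the user so each appears at the same distance.

def slate_positions(user_position, radius=2.0, count=4, eye_height=1.6):
    ux, _, uz = user_position
    positions = []
    for i in range(count):
        angle = 2 * math.pi * i / count        # spread evenly around the user
        positions.append((ux + radius * math.cos(angle),
                          eye_height,
                          uz + radius * math.sin(angle)))
    return positions

# Four slates arranged around a user standing at the origin.
for p in slate_positions((0.0, 0.0, 0.0)):
    print(tuple(round(c, 2) for c in p))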

Подробнее
26-07-2017 дата публикации

Management of content in a 3d holographic environment

Номер: EP3195105A1
Принадлежит: Microsoft Technology Licensing LLC

Methods for managing content within an interactive augmented reality environment are described. An augmented reality environment may be provided to an end user of a head-mounted display device (HMD) in which content (e.g., webpages) may be displayed to the end user using one or more curved slates that are positioned on a virtual cylinder that appears body-locked to the end user. The virtual cylinder may be located around the end user with the end user positioned in the middle of the virtual cylinder such that the one or more curved slates appear to be displayed at the same distance from the end user. The position and size of each of the one or more curved slates may be controlled by the end user using head gestures and a virtual pointer projected onto the virtual cylinder.

Подробнее
05-02-2015 дата публикации

Mixed reality graduated information delivery

Номер: WO2015017292A1
Принадлежит: MICROSOFT CORPORATION

Embodiments that relate to presenting a plurality of visual information density levels for a plurality of geo-located data items in a mixed reality environment are disclosed. For example, in one disclosed embodiment a graduated information delivery program receives information for a selected geo-located data item and provides a minimum visual information density level for the item to a head-mounted display device. The program receives via the head-mounted display device a user input corresponding to the selected geo-located data item. Based on the input, the program provides an increasing visual information density level for the selected item to the head-mounted display device for display within the mixed reality environment.
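
One way to picture the graduated delivery is a stepped table of density levels that advances on each user input, as in the sketch below; the level contents and names are illustrative assumptions rather than the embodiment's actual data.

# Hypothetical sketch: step through increasing visual information density
# levels for a geo-located data item each time the user requests more.

DENSITY_LEVELS = [
    {"level": 0, "show": ["icon"]},                       # minimum density
    {"level": 1, "show": ["icon", "name"]},
    {"level": 2, "show": ["icon", "name", "summary"]},
    {"level": 3, "show": ["icon", "name", "summary", "full details"]},
]

def next_density(current_level):
    return DENSITY_LEVELS[min(current_level + 1, len(DENSITY_LEVELS) - 1)]

level = DENSITY_LEVELS[0]           # start with the minimum density level
for _ in range(2):                  # two user inputs (e.g., gaze + gesture)
    level = next_density(level["level"])
print(level["show"])                # -> ['icon', 'name', 'summary']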

Подробнее