Total found: 21596. Displayed: 100.

Publication date: 19-01-2012

Multi-resolution, multi-window disparity estimation in 3d video processing

Number: US20120014590A1
Assignee: Qualcomm Inc

A disparity value between corresponding pixels in a stereo pair of images, where the stereo pair of images includes a first view and a second view of a common scene, can be determined based on identifying a lowest aggregated matching cost for a plurality of support regions surrounding the pixel under evaluation. In response to the number of support regions having a same disparity value being greater than a threshold number, a disparity value indicator for the pixel under evaluation can be set to the same disparity value.
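The cost-aggregation-and-vote scheme this abstract describes can be sketched as follows. The SAD matching cost, the square window sizes, and the agreement threshold are stand-ins chosen for illustration; the patent does not fix them here:

```python
import numpy as np

def sad_cost(left, right, x, y, d, half):
    # Sum of absolute differences over a square support region
    # (a stand-in matching cost; the exact cost is not specified here).
    a = left[y - half:y + half + 1, x - half:x + half + 1].astype(int)
    b = right[y - half:y + half + 1, x - d - half:x - d + half + 1].astype(int)
    return int(np.abs(a - b).sum())

def disparity_indicator(left, right, x, y, max_disp, halves=(1, 2, 4), threshold=1):
    # For each support-region size, pick the disparity with the lowest
    # aggregated cost; commit to a disparity only when more than
    # `threshold` regions agree on it, otherwise return None.
    winners = []
    for half in halves:
        costs = [sad_cost(left, right, x, y, d, half) for d in range(max_disp + 1)]
        winners.append(int(np.argmin(costs)))
    values, counts = np.unique(winners, return_counts=True)
    best = int(values[np.argmax(counts)])
    return best if counts.max() > threshold else None
```

With a right view that is the left view shifted by two pixels, all three support regions agree on disparity 2 and the indicator is committed.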

Publication date: 29-03-2012

Actuated adaptive display systems

Number: US20120075166A1
Assignee: SAMSUNG ELECTRONICS CO LTD

An adjustable, adaptive display system having individual display elements is able to change its configuration based on a user's movements, position, and activities. In a method of adjusting a display system, a user is tracked using a camera or other tracking sensor, thereby creating user-tracking data. The user-tracking data is input to an actuator signal module which generates input signals for one or more actuators. The input signals are created, in part, from the user-tracking data. Two or more display elements are actuated using the one or more actuators based on the input signals. The display elements may be planar or curved. In this manner, the configuration of the display system adapts to user movements and adjusts systematically. This allows a greater amount of the user's human visual field (or user FOV) to be filled by the display system.

Publication date: 29-03-2012

Lapel microphone micro-display system incorporating mobile information access

Number: US20120075177A1
Assignee: Kopin Corp

A shoulder mounted lapel microphone housing that encloses a microdisplay, a computer, and other communication system components. A microdisplay element is located on or in the microphone housing. Other electronic circuits, such as a microcomputer, one or more wired and wireless interfaces, associated memory or storage devices, auxiliary device mounts and the like are packaged in the microphone housing and/or in an optional pager sized gateway device having a belt clip. Motion, gesture, and/or audio processing circuits in the system provide a way for the user to input commands to the system without a keyboard or mouse. The system provides connectivity to other computing devices such as cellular phones, smartphones, laptop computers, or the like.

Publication date: 19-04-2012

User Fatigue

Number: US20120092172A1
Assignee: Hewlett Packard Development Co LP

Detect a head position of a user viewing a display device with a sensor, determine a duration of the user viewing the display device, identify a user fatigue in response to the head position, and provide a response to the user with the display device based on the user fatigue and the duration of the user viewing the display device.
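A toy version of the fatigue-and-duration rule above might look like this; the pitch threshold, time limit, and response names are invented for illustration and are not taken from the patent:

```python
def fatigue_response(head_pitch_deg, viewing_minutes,
                     pitch_limit=15.0, time_limit=60.0):
    # Head droop beyond `pitch_limit` is read as a fatigue signal
    # (illustrative heuristic); the duration decides how strongly
    # the display device responds.
    fatigued = abs(head_pitch_deg) > pitch_limit
    if fatigued and viewing_minutes > time_limit:
        return "suggest_break"
    if fatigued:
        return "adjust_display"
    return "none"
```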

Publication date: 10-05-2012

Rotate and Hold and Scan (RAHAS) Structured Light Illumination Pattern Encoding and Decoding

Number: US20120113229A1

A unique computer-implemented process, system, and computer-readable storage medium having stored thereon executable program code and instructions for 3-dimensional (3-D) image acquisition of a contoured surface-of-interest under observation by at least one camera and employing a preselected SLI pattern. The system includes a 3-D video sequence capture unit having (a) one or more image-capture devices for acquiring video image data as well as color texture data of a 3-D surface-of-interest, and (b) a projector device for illuminating the surface-of-interest with a preselected SLI pattern, starting in an initial Epipolar Alignment, ending in alignment with an Orthogonal (i.e., phase) direction, and then shifting the preselected SLI pattern (‘translation’). A follow-up ‘post-processing’ stage of the technique includes analysis and processing of the captured 3-D video sequence, including the steps of: identification of Epipolar Alignment from the 3-D video sequence; tracking ‘snakes’/stripes from the initial Epipolar Alignment through alignment with the Orthogonal direction; tracking ‘snakes’/stripes through pattern shifting/translation; correcting for relative motion (object motion); determining phase and interpolating adjacent frames to achieve uniform phase shift; employing conventional PMP phase processing to obtain wrapped phase; unwrapping phase at each pixel using snake identity; and using conventional techniques to map phase to world coordinates.

Publication date: 24-05-2012

Web Camera Device and Operating Method thereof

Number: US20120127325A1
Author: Kun-Hui Lai
Assignee: Inventec Corp

A web camera device includes a web camera and a micro processing unit (MCU). The web camera is operable to capture an image of a user. The MCU is electrically connected to the web camera and includes a distance calculating module and a comparing module. The distance calculating module is operable to calculate a distance between the user and the web camera based on the captured image. The comparing module is operable to compare the distance with a predetermined distance range; when the distance is beyond the predetermined distance range, the comparing module outputs a warning signal to alert the user to keep a proper distance from the display. An operating method of the web camera device is also disclosed herein.
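The distance-from-image calculation plus range check can be sketched with a pinhole-camera model. The focal length, assumed face width, and permitted range below are illustrative constants, not values from the patent:

```python
def check_distance(face_width_px, real_face_width_cm=16.0,
                   focal_px=600.0, min_cm=40.0, max_cm=80.0):
    # Pinhole model: distance = focal_length * real_width / image_width.
    # Returns the estimated distance and whether a warning should fire
    # (all constants are illustrative).
    distance_cm = focal_px * real_face_width_cm / face_width_px
    warn = not (min_cm <= distance_cm <= max_cm)
    return distance_cm, warn
```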

Publication date: 31-05-2012

Human-computer interaction device and an apparatus and method for applying the device into a virtual world

Number: US20120133581A1
Assignee: International Business Machines Corp

A human-computer interaction device and an apparatus and method for applying the device into a virtual world. The human-computer interaction device is disposed with a sensing device thereon, the sensing device including a manipulation part and a distance sensor. The manipulation part receives a manipulation action of a user's finger, the distance sensor senses a distance of the manipulation part relative to a fixed location and generates a distance signal for characterizing the manipulation action. A virtual world assistant apparatus and a method corresponding to the assistant apparatus is also provided. With the invention, multiple signals of manipulation can be sensed and free control on actions of an avatar can be realized by using the multiple signals.

Publication date: 07-06-2012

Information display system, information display apparatus and non-transitory storage medium

Number: US20120139941A1
Assignee: Casio Computer Co Ltd

The matching processor acquires key information, such as position information and/or the like, by the key information acquirer, and notifies an external apparatus via the communicator. The matching processor stores the acquired key information in the storer in correspondence with the identification information acquired from the external apparatus. When the acquired identification information does not exist in an AR data management table, the matching processor acquires the AR data by notifying the identification information to the external apparatus. When there is no empty record in the key information management table, the key information whose usage date and time are oldest is treated as a deletion target. Likewise, when there is no empty record in the AR data management table, the AR data whose usage date and time are oldest is treated as a deletion target.

Publication date: 07-06-2012

Pattern projection and imaging using lens arrays

Number: US20120140094A1
Assignee: PRIMESENSE LTD

A method for projection includes generating a pattern of illumination, and positioning an array of lenses so as to project different, respective parts of the pattern onto a scene.

Publication date: 21-06-2012

User controlled device for sending control signals to an electric appliance, in particular user controlled pointing device such as mouse or joystick, with 3d-motion detection

Number: US20120154275A1

A user controlled device, movable into a plurality of positions of a three-dimensional space, includes a MEMS acceleration sensor to detect 3D movements of the user controlled device. The device, such as a mouse, sends control signals correlated to the detected positions to an electrical appliance, such as a computer system. A microcontroller processes the output signals of the MEMS acceleration sensor to generate the control signals, such as screen pointer position signals and “clicking” functions.

Publication date: 12-07-2012

Multi-sample resolving of re-projection of two-dimensional image

Number: US20120176368A1
Author: Barry M. GENOVA
Assignee: Sony Computer Entertainment America LLC

Multi-sample resolution of a re-projection of a two-dimensional image is disclosed. One or more samples of a two-dimensional image are identified for each pixel in the three-dimensional re-projection. One or more sample coverage amounts are determined for each pixel of the re-projection. Each coverage amount identifies an area of the pixel covered by the corresponding two-dimensional sample. A final value is resolved for each pixel of the re-projection by combining each two-dimensional sample associated with the pixel in accordance with its weighted sample coverage amount.
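The coverage-weighted resolve for one re-projected pixel can be sketched as a plain weighted average; the patent's exact resolve may differ, and the normalization for partially covered pixels is my own choice:

```python
def resolve_pixel(samples):
    # samples: list of (value, coverage) pairs, where coverage is the
    # fraction of the pixel's area covered by that 2D sample.
    # Combine samples weighted by coverage; an uncovered pixel resolves to 0.
    covered = sum(c for _, c in samples)
    if covered == 0:
        return 0.0
    return sum(v * c for v, c in samples) / covered
```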

Publication date: 02-08-2012

Method, system and controller for sharing data

Number: US20120194465A1
Assignee: Individual

A method is provided for a user of a communications device sharing data items with one or more of a plurality of data recipients, comprising the steps of: selecting one or more data items to share (1201); displaying symbols in a two-dimensional geometrical space on a display (1204), at least some of which represent individuals or groups of the data recipients; selecting one or more of the symbols as destinations for the data item(s) (1202); and sharing the data item(s) with the destination(s) (1203).

Publication date: 02-08-2012

Reducing Interference Between Multiple Infra-Red Depth Cameras

Number: US20120194650A1
Assignee: Microsoft Corp

Systems and methods for reducing interference between multiple infra-red depth cameras are described. In an embodiment, the system comprises multiple infra-red sources, each of which projects a structured light pattern into the environment. A controller is used to control the sources in order to reduce the interference caused by overlapping light patterns. Various methods are described including: cycling between the different sources, where the cycle used may be fixed or may change dynamically based on the scene detected using the cameras; setting the wavelength of each source so that overlapping patterns are at different wavelengths; moving source-camera pairs in independent motion patterns; and adjusting the shape of the projected light patterns to minimize overlap. These methods may also be combined in any way. In another embodiment, the system comprises a single source and a mirror system is used to cast the projected structured light pattern around the environment.
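The first method listed, cycling between sources, amounts to time-multiplexing so only one structured-light pattern is in the scene per frame. A fixed round-robin cycle can be sketched as below; a dynamic variant would reorder `source_ids` based on the detected scene:

```python
from itertools import cycle

def schedule_sources(source_ids, frames):
    # Activate exactly one IR source per frame in a fixed round-robin
    # cycle, so projected patterns never overlap in time.
    active = cycle(source_ids)
    return [next(active) for _ in range(frames)]
```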

Publication date: 02-08-2012

Correlating areas on the physical object to areas on the phone screen

Number: US20120195461A1
Assignee: Qualcomm Inc

A mobile platform renders an augmented reality graphic to indicate selectable regions of interest on a captured image or scene. The region of interest is an area that is defined on the image of a physical object, which when selected by the user can generate a specific action. The mobile platform captures and displays a scene that includes an object and detects the object in the scene. A coordinate system is defined within the scene and used to track the object. A selectable region of interest is associated with one or more areas on the object in the scene. An indicator graphic is rendered for the selectable region of interest, where the indicator graphic identifies the selectable region of interest.

Publication date: 09-08-2012

Autostereoscopic Rendering and Display Apparatus

Number: US20120200495A1
Assignee: Nokia Oyj

An apparatus comprising a sensor configured to detect the position and orientation of a user viewpoint with respect to an auto-stereoscopic display; a processor configured to determine a surface viewable from the user viewpoint of at least one three dimensional object; and an image generator configured to generate a left and right eye image for display on the auto-stereoscopic display dependent on the surface viewable from the user viewpoint.

Publication date: 23-08-2012

Computer-readable storage medium having display control program stored therein, display control apparatus, display control system, and display control method

Number: US20120212580A1
Assignee: Nintendo Co Ltd

A display control apparatus displays, by a virtual stereo camera taking an image of a virtual three-dimensional space in which a player object is positioned, a stereoscopically viewable image of the virtual three-dimensional space. At this time, when an object distance represents a distance from a point of view position of the virtual stereo camera to the player object, and a stereoscopic view reference distance represents a distance from the point of view position of the virtual stereo camera to a reference plane corresponding to a position at which a parallax is not generated when the image of the virtual three-dimensional space is taken by the virtual stereo camera, a camera parameter is set based on a stereoscopic view ratio, which is the ratio of the stereoscopic view reference distance to the object distance. The stereoscopically viewable image is generated based on the camera parameter.
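The stereoscopic view ratio defined above is just the reference distance divided by the camera-to-object distance. A minimal sketch, using plain Euclidean distance (the abstract does not specify the distance metric):

```python
def stereo_view_ratio(camera_pos, object_pos, reference_distance):
    # Ratio of the zero-parallax reference distance to the distance from
    # the virtual stereo camera's point of view to the player object.
    object_distance = sum((c - o) ** 2
                          for c, o in zip(camera_pos, object_pos)) ** 0.5
    return reference_distance / object_distance
```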

Publication date: 27-09-2012

Apparatus for obtaining 3-dimensional content

Number: US20120242829A1
Authors: Yungwoo Jung, Yunsup Shin
Assignee: LG ELECTRONICS INC

An apparatus for obtaining 3D content is provided. The apparatus comprises: a lighting unit for outputting a lighting pattern having coordinate information; a depth sensor for receiving a returning beam output from the lighting unit and reflected from an object; a 2D image capturing unit for obtaining a two-dimensional image; a data processor for calculating the depth of each region using the distribution of characteristics represented on an image obtained by the depth sensor, and processing the 2D image data obtained by the 2D image capturing unit and the calculated depth data and encoding the same according to a predetermined format; and a controller for controlling the lighting unit, the 2D image capturing unit, and the data processor. The lighting unit comprises a light source and an optical element comprising a plurality of sub-grid regions for modulating beams coming from the light source.

Publication date: 04-10-2012

Virtual pointer

Number: US20120249531A1
Author: Håkan Jonsson
Assignee: SONY MOBILE COMMUNICATIONS AB

An electronic device for enabling display of a virtual object representing a real object on a remote display may be placed in a first position and first orientation. The virtual object is displayed on the remote display. The electronic device generates a virtual beam, which is displayed on the remote display, and directs the virtual beam towards the virtual object by moving the electronic device from the first position/first orientation to a second position and second orientation. The electronic device determines the second position/second orientation in relation to the virtual beam. The electronic device selects the virtual object to which the virtual beam is directed based on the determined orientation and position. The electronic device transfers information about the selected virtual object to the remote display. The information enables display of the selected virtual object or a representation of the selected virtual object on the remote display.

Publication date: 25-10-2012

Wireless Head Set for Lingual Manipulation of an Object, and Method for Moving a Cursor on a Display

Number: US20120268370A1
Author: Youhanna Al-Tawil
Assignee: Individual

A head set is provided. The head set is beneficial for assisting an individual who is significantly impaired in the use of his or her upper extremities. The system enables this individual to move a cursor on a display of a computer or other processing device using lingual musculature. The head set includes a head piece. The head piece supports an articulating arm. The articulating arm supports a mouthpiece at a distal end. The mouthpiece has a plurality of cells embedded therein. The cells are configured to receive pressure applied by the tongue of the user. Movement of the tongue over and against the cells causes the cursor to be moved on the display. A method for moving a cursor on a display using a mouthpiece controlled through lingual movement is also provided. In addition, a method of typing characters on a virtual keyboard using lingual musculature is offered.

Publication date: 25-10-2012

Augmented reality extrapolation techniques

Number: US20120268490A1
Author: Benjamin J. Sugden
Assignee: Microsoft Corp

Augmented reality extrapolation techniques are described. In one or more implementations, an augmented-reality display is rendered based at least in part on a first basis that describes a likely orientation or position of at least a part of the computing device. The rendered augmented-reality display is updated based at least in part on data that describes a likely orientation or position of the part of the computing device that was assumed during the rendering of the augmented-reality display.
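One common way to obtain the "likely orientation or position" basis is to extrapolate the latest tracked pose forward to the expected display time. The constant-velocity sketch below is an illustrative stand-in; real trackers use richer motion models:

```python
def extrapolate_pose(prev_pose, prev_time, curr_pose, curr_time, render_time):
    # Linearly extrapolate each pose component to the time the frame will
    # actually be displayed (constant-velocity assumption).
    dt = curr_time - prev_time
    rate = [(c - p) / dt for p, c in zip(prev_pose, curr_pose)]
    lead = render_time - curr_time
    return [c + r * lead for c, r in zip(curr_pose, rate)]
```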

Publication date: 22-11-2012

Image processing system, image processing method, and program

Number: US20120293693A1
Author: Hironori Sumitomo
Assignee: KONICA MINOLTA INC

An objective of the present invention is to provide a technique capable of generating a virtual viewpoint image without causing any visually uncomfortable feeling. In order to achieve this objective, a first image obtained by being captured from a first viewpoint at a first image capture time, and a second image obtained by being captured at a second image capture time different from the first image capture time are acquired. To each of pixels in a non-image capture area corresponding to a portion of a subject not captured in the first image of a first virtual viewpoint image that is generated in a pseudo manner based upon the first image and can be acquired by being captured from a first virtual viewpoint different from the first viewpoint, a pixel value is added in accordance with the second image.

Publication date: 20-12-2012

Information processing apparatus, information processing method, and program

Number: US20120320088A1
Assignee: NS Solutions Corp

An information processing apparatus including an imaged image input unit inputting an imaged image of a facility imaged in an imaging device to a display control unit, a measurement information input unit inputting measurement information measured by a sensor provided in the facility from the sensor to a creation unit, a creation unit creating a virtual image representing a status of an outside or inside of the facility based on the measurement information input by the measurement information input unit, and a display control unit overlaying and displaying the virtual image created in the creation unit and the imaged image input by the imaged image input unit on a display device.

Publication date: 27-12-2012

Motion capture from body mounted cameras

Number: US20120327194A1

Body-mounted cameras are used to accurately reconstruct the motion of a subject. Outward-looking cameras are attached to the limbs of the subject, and the joint angles and root pose that define the subject's configuration are estimated through a non-linear optimization, which can incorporate image matching error and temporal continuity of motion. Instrumentation of the environment is not required, allowing for motion capture over extended areas and in outdoor settings.

Publication date: 10-01-2013

Three-dimensional image capturing apparatus and three-dimensional image capturing method

Number: US20130010077A1
Assignee: Panasonic Corp

A three-dimensional image capturing apparatus generates depth information to be used for generating a three-dimensional image from an input image, and includes: a capturing unit obtaining the input image in capturing; an object designating unit designating an object in the input image; a resolution setting unit setting depth values, each representing a different depth position, so that in a direction parallel to a depth direction of the input image, depth resolution near the object is higher than depth resolution positioned apart from the object, the object being designated by the object designating unit; and a depth map generating unit generating two-dimensional depth information corresponding to the input image by determining, for each of regions in the input image, a depth value, from among the depth values set by the resolution setting unit, indicating a depth position corresponding to one of the regions.
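The non-uniform depth resolution described above, denser near the designated object, can be sketched by warping uniform samples with a power curve. The warping function and its exponent are one illustrative choice, not the patent's method:

```python
def depth_levels(object_depth, near, far, n_levels, concentration=3.0):
    # Place depth values densely around `object_depth` and sparsely toward
    # `near` and `far` by raising a signed offset to a power > 1.
    levels = []
    for i in range(n_levels):
        t = i / (n_levels - 1)   # uniform in [0, 1]
        s = 2 * t - 1            # signed offset in [-1, 1] around the object
        span = (far - object_depth) if s >= 0 else (object_depth - near)
        sign = 1 if s >= 0 else -1
        levels.append(object_depth + sign * (abs(s) ** concentration) * span)
    return levels
```

With five levels between 0 and 10 around an object at depth 5, the inner levels cluster tightly around 5 while the outer ones stay at the range ends.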

Publication date: 10-01-2013

Imaging apparatus

Number: US20130010169A1
Authors: Eiji ANNO, Takayuki Tochio
Assignee: Panasonic Corp

An imaging apparatus includes an imaging sensor configured to capture a subject image to output image data, a display unit configured to display an image based on the output image data, a touch panel configured to receive an operation by user's touching, the touch panel arranged on the display unit, and a controller configured to control enabling/disabling of the touch operation on the touch panel based on an output from the imaging sensor.

Publication date: 14-02-2013

System, method, and recording medium for controlling an object in virtual world

Number: US20130038601A1
Assignee: SAMSUNG ELECTRONICS CO LTD

A system and method of controlling characteristics of an avatar in a virtual world may generate avatar control information based on avatar information of the virtual world and a sensor control command expressing a user intent using a sensor-based input device.

Publication date: 21-02-2013

Location based skins for mixed reality displays

Number: US20130044129A1
Assignee: Individual

The technology provides embodiments for providing a location-based skin for a see-through, mixed reality display device system. In many embodiments, a location-based skin includes a virtual object viewable by a see-through, mixed reality display device system which has been detected in a specific location. Some location-based skins implement an ambient effect. The see-through, mixed reality display device system is detected to be present in a location and receives and displays a skin while in the location in accordance with user settings. User data may be uploaded and displayed in a skin in accordance with user settings. A location may be a physical space at a fixed position and may also be a space defined relative to a position of a real object, for example, another see-through, mixed reality display device system. Furthermore, a location may be a location within another location.

Publication date: 07-03-2013

Information processing apparatus and method, information processing system, and providing medium

Number: US20130057584A1
Author: Junichi Rekimoto
Assignee: Sony Corp

The invention enables users to virtually attach information to situations in the real world, and also enables users to quickly and easily find out desired information. An IR sensor receives an IR signal transmitted from an IR beacon, and supplies the received signal to a sub-notebook PC. A CCD video camera takes in a visual ID from an object, and supplies the inputted visual ID to the sub-notebook PC. A user inputs, through a microphone, a voice to be attached to situations in the real world. The sub-notebook PC transmits position data, object data and voice data, which have been supplied to it, to a server through a communication unit. The transmitted data is received by the server via a wireless LAN. The server stores the received voice data in a database in correspondence to the position data and the object data.

Publication date: 14-03-2013

Apparatus and method for generating depth information

Number: US20130063430A1
Assignee: SAMSUNG ELECTRONICS CO LTD

An apparatus for generating depth information includes: a receiver which receives a two-dimensional (2D) image signal including a plurality of frames; a user input unit; a user interface (UI) generator which generates a tool UI to input guide information for generating depth information; a display unit which displays a frame for which depth information is generated among the plurality of frames, and the generated tool UI; and a depth information generator which generates depth information corresponding to the guide information input by the user input unit through the tool UI.

Publication date: 21-03-2013

Linking programmatic actions to user actions at different locations

Number: US20130073956A1
Assignee: Hewlett Packard Development Co LP

A method for operating a computing device is disclosed, where data that associates a user action at a predetermined location with a programmatic action is stored in memory. A user action being performed at the predetermined location is detected, and the corresponding programmatic action is performed in response to detecting the user action being performed at the predetermined location.

Publication date: 21-03-2013

Recognizing User Intent In Motion Capture System

Number: US20130074002A1
Assignee: MICROSOFT CORPORATION

Techniques for facilitating interaction with an application in a motion capture system allow a person to easily begin interacting without manual setup. A depth camera system tracks a person in physical space and determines a probabilistic measure of the person's intent to engage or disengage with the application based on location, stance and movement. Absolute location in a field of view of the depth camera, and location relative to another person, can be evaluated. Stance can include facing a depth camera, indicating a willingness to interact. Movements can include moving toward or away from a central area in the physical space, walking through the field of view, and movements which occur while standing generally in one location, such as moving one's arms around, gesturing, or shifting weight from one foot to another.

1. Tangible computer readable storage device having computer readable software embodied thereon for programming a processor to perform a method for recognizing an intent of a person to engage with an application in a motion capture system, the method comprising: receiving images of a field of view of the motion capture system; based on the images, distinguishing a person's body; based on the distinguishing, determining a probabilistic measure of an intent by the person to engage with the application; based on the probabilistic measure of the intent by the person to engage with the application, determining that the person does not intend to engage with the application at a first time and determining that the person intends to engage with the application at a second time; in response to determining that the person intends to engage with the application, allowing the person to engage with the application by automatically associating a profile and an avatar with the person in the application, and displaying the avatar in a virtual space on a display; and updating the display by controlling the avatar as the person engages with the application by moving the ...
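A toy version of the probabilistic engagement measure, folding stance, location and movement into one score, might look like this. The feature weights and the 0.5 decision point are invented for illustration:

```python
def engagement_probability(facing_camera, distance_from_center_m, speed_m_s):
    # Stance: facing the camera contributes most (illustrative weight).
    # Location: being near the central area adds up to 0.3.
    # Movement: standing roughly still adds up to 0.2.
    score = 0.5 if facing_camera else 0.0
    score += max(0.0, 0.3 - 0.1 * distance_from_center_m)
    score += max(0.0, 0.2 - 0.2 * speed_m_s)
    return min(1.0, score)

def intends_to_engage(facing_camera, distance_from_center_m, speed_m_s):
    # Engage when the probabilistic measure crosses an invented threshold.
    return engagement_probability(facing_camera, distance_from_center_m,
                                  speed_m_s) >= 0.5
```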

Publication date: 28-03-2013

Holographic Enterprise Network

Number: US20130076740A1
Assignee: International Business Machines Corp

A solution for implementing a holographic enterprise network is provided. The solution can provide an interface between an operations center and a three dimensional (3D) virtual simulator system capable of rendering holographic images of the operations center. A holographic enterprise interface can translate standard enterprise data associated with the operations center and 3D holographic data. Parallel communications between the holographic enterprise interface and a 3D data processing infrastructure having a holographic bus also can be managed.

Publication date: 28-03-2013

Electronic device capable of selecting and playing files based on facial expressions and method thereof

Number: US20130077834A1
Author: Young-Way Liu
Assignee: Hon Hai Precision Industry Co Ltd

An electronic device includes a storage unit, a capturing unit and a processing unit. The storage unit stores a plurality of files, and a relationship table between facial expressions and the plurality of files. The processing unit includes a controlling module, a facial image acquiring module, a facial expression identifying module, a file acquiring module and a file playing module. The controlling module controls the capturing unit to capture a predetermined number of facial images. The facial image acquiring module acquires a clear facial image from the predetermined number of facial images. The facial expression identifying module identifies a facial expression of the clear facial image. The file acquiring module identifies whether there is at least one of the files matching the identified facial expression, and acquires the at least one matching file. The file playing module opens and/or plays the at least one matching file.

Publication date: 04-04-2013

VIDEO GAME APPARATUS, VIDEO GAME CONTROLLING PROGRAM, AND VIDEO GAME CONTROLLING METHOD

Number: US20130084982A1
Author: SUZUKI Makoto

A video game apparatus includes a depth sensor configured to capture an area where a player exists and acquire depth information for each pixel of the image; and a gesture recognition unit configured to divide the image into a plurality of sections, to calculate statistics information of the depth information for each of the plurality of sections, and to recognize a gesture of the player based on the statistics information. 1. A video game apparatus comprising:a depth sensor configured to capture an area where a player exists and acquire depth information for each pixel of the image; anda gesture recognition unit configured to divide the image into a plurality of sections, to calculate statistics information of the depth information for each of the plurality of sections, and to recognize a gesture of the player based on the statistics information.2. The video game apparatus according to claim 1 ,wherein the gesture recognition unit calculates an area center of a silhouette of the player in the image and divides the image into the plurality of sections.3. The video game apparatus according to claim 1 ,wherein the gesture recognition unit prompts the player to take a plurality of postures and calculates a correction parameter for individual postures.4. A video game controlling program causing a computer to function steps of:capturing, by a depth sensor of a video game apparatus, an area where a player exists and acquiring, by the depth sensor, depth information for each pixel of the image; anddividing, by a gesture recognition unit of the video game apparatus, the image into a plurality of sections, calculating, by the gesture recognition unit, statistics information of the depth information for each of the plurality of sections, and recognizing, by the gesture recognition unit, a gesture of the player based on the statistics information.5. 
A video game controlling method comprising:capturing, by a depth sensor of a video game apparatus, an area where a player exists ...
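The per-section depth statistics described in the abstract can be sketched as follows; the 3x3 grid and the choice of mean and standard deviation as the statistics are illustrative assumptions, since the abstract does not fix them.

```python
import numpy as np

def section_statistics(depth, rows=3, cols=3):
    """Split a depth image into a rows x cols grid and return the
    (mean, standard deviation) of the depth values in each section."""
    h, w = depth.shape
    stats = []
    for r in range(rows):
        for c in range(cols):
            sec = depth[r * h // rows:(r + 1) * h // rows,
                        c * w // cols:(c + 1) * w // cols]
            stats.append((float(sec.mean()), float(sec.std())))
    return stats
```

A gesture recognition unit could then compare the vector of per-section statistics against stored templates for each known gesture, which keeps the comparison cheap relative to per-pixel matching.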

Подробнее
11-04-2013 дата публикации

MULTIMODAL COMMUNICATION SYSTEM

Номер: US20130090931A1
Принадлежит: GEORGIA TECH RESEARCH CORPORATION

The present invention, in various embodiments, comprises systems and methods for providing a communication system. In one embodiment, the system is an assistive technology (AT) in a single, highly integrated, multimodal, multifunctional, multipurpose, minimally invasive, unobtrusive, wireless, wearable, easy to use, low cost, and reliable AT that can potentially provide people with severe disabilities with flexible and effective computer access and environmental control in various conditions. In one embodiment, a multimodal Tongue Drive System (mTDS) is disclosed that uses tongue motion as its primary input modality. Secondary input modalities including speech, head motion, and diaphragm control are added to the tongue motion as additional input channels to enhance the system speed, accuracy, robustness, and flexibility, which are expected to address many of the aforementioned issues with traditional ATs that have a limited number of input channels/modalities and can only be used in certain conditions by a certain group of users. 1. A multi-modal communication system for use by a subject, the system comprising: a primary modality comprising a tongue tracking unit comprising: a tracer unit for use on a tongue of the subject; and a sensing unit comprising a primary sensor configured for placement in proximity to the tongue carrying the tracer unit, wherein the primary sensor detects a position of the tracer unit to output a first type of communication; and a plurality of secondary modalities comprising one or more secondary sensors to output a second type of communication. 2. The system of claim 1, wherein the first type of communication is proportional and the second type of communication is discrete. 3. The system of claim 1, wherein the first type of communication is discrete and the second type of communication is proportional. 4. The system of claim 1, wherein the tracer unit comprises a magnet. 5.
The system of claim 4 , wherein the magnet is coated with a ...

Подробнее
18-04-2013 дата публикации

METHOD AND APPARATUS FOR PROCESSING VIRTUAL WORLD

Номер: US20130093665A1
Принадлежит:

A virtual world processing apparatus and method. An angle value is obtained by measuring an angle of a body part of a user of a real world using sensor capability, which is information on capability of a bending sensor, and is transmitted to a virtual world, thereby achieving interaction between the real world and the virtual world. In addition, based on the sensor capability and the angle value denoting the angle of the body part, control information is generated to control a part of an avatar of the virtual world, corresponding to the body part, and then transmitted to the virtual world. Accordingly, interaction between the real world and the virtual world is achieved. 1. A virtual world processing apparatus comprising:a receiving unit to receive an angle value, sensed by a bending sensor, of at least one sensed location and sensor capability of the bending sensor;a processing unit to generate control information for controlling an object of a virtual world corresponding to the at least one sensed location, based on the received angle value and the sensor capability; anda transmission unit to transmit the control information to the virtual world.2. The virtual world processing apparatus of claim 1 , wherein the sensor capability comprises a maximum value and a minimum value of the angle value measurable by the bending sensor.3. The virtual world processing apparatus of claim 2 , wherein the processing unit generates the control information when the angle value is less than or equal to the maximum value and is greater than or equal to the minimum value.4. The virtual world processing apparatus of claim 1 , wherein the sensor capability comprises a number of the at least one sensed location.5. The virtual world processing apparatus of claim 1 , wherein the sensor capability comprises a distance between the at least one sensed location.6. 
The virtual world processing apparatus of claim 1 , wherein the received angle value is a sum total of a plurality of angle values ...

Подробнее
25-04-2013 дата публикации

METHOD OF DETECTING AND TRACKING MULTIPLE OBJECTS ON A TOUCHPAD USING A DATA COLLECTION ALGORITHM THAT ONLY DETECTS AN OUTER EDGE OF THE OBJECTS AND THEN ASSUMES THAT THE OUTER EDGES DEFINE A SINGLE LARGE OBJECT

Номер: US20130100056A1
Принадлежит: Cirque Corporation

A system and method for detecting and tracking multiple objects on a touchpad or touchscreen, wherein the method provides a new data collection algorithm, wherein the method reduces a calculation burden on a processor performing detection and tracking algorithms, wherein multiple objects are treated as elements of a single object and not as separate objects, wherein the locations of the objects are treated as end-points of a single object when two objects are detected, and treated as a perimeter or boundary when more than two objects are detected. 1-4. (canceled) 5. A system for detecting a plurality of objects on a touch sensitive surface, said system comprised of: a plurality of electrodes forming a sensor grid for detecting a presence of at least two objects on the touch sensitive surface, said plurality of electrodes forming a quadrilateral having four edges; means for detecting a presence of at least two objects on the touch sensitive surface; means for collecting data beginning from each of the four edges and moving towards an opposite edge and for stopping data collection when an object is detected; and means for determining the boundaries of a perimeter formed by the at least two objects by using the collected data from each of the four edges. 6. The system as defined in wherein the plurality of electrodes forming a sensor grid are further comprised of: a plurality of parallel X electrodes disposed in a first plane which can function as either drive or sense electrodes; a plurality of parallel Y electrodes disposed in a second plane, wherein the Y electrodes are co-planar with and orthogonal to the X electrodes, and which can function as either drive or sense electrodes; and wherein the system transmits a drive signal on the plurality of electrodes that are functioning as drive electrodes, and detects the presence of the at least two objects from the plurality of electrodes that are functioning as sense electrodes. 7.
A system for detecting a plurality of objects on a ...

Подробнее
25-04-2013 дата публикации

Depth Cursor and Depth Measurement in Images

Номер: US20130100114A1
Автор: James D. Lynch
Принадлежит: Here Global BV

One or more systems, devices, and/or methods for illustrating depth are disclosed. For example, a method includes receiving a depthmap generated from an optical distancing system. The depthmap includes depth data for each of a plurality of points, which are correlated to pixels of an image. Data indicative of a location on the image is received. Depth data correlated with the first point is compared to depth data correlated with pixels at surrounding points in the image. If the depth data correlated with the first point indicate a lesser distance from a viewer perspective of the image than the depth data of a pixel at the surrounding points in the image, the pixel is changed to a predetermined value. The comparison may be repeated at other pixels and a depth illustration may be drawn that relates the depth of the received location to other objects in the image.
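The occlusion test described above (overwriting only those cursor pixels whose depth indicates a greater distance from the viewer than the selected point) can be sketched as follows; the window radius and the predetermined value of 255 are illustrative assumptions.

```python
import numpy as np

def draw_depth_cursor(image, depthmap, x, y, radius=5, value=255):
    """Set pixels around (x, y) to `value` only where the scene lies
    farther from the viewer than the selected point, so the cursor
    appears occluded by nearer objects."""
    img = image.copy()
    d0 = depthmap[y, x]                      # depth at the selected point
    y0, y1 = max(0, y - radius), min(image.shape[0], y + radius + 1)
    x0, x1 = max(0, x - radius), min(image.shape[1], x + radius + 1)
    region = depthmap[y0:y1, x0:x1]
    img[y0:y1, x0:x1][region > d0] = value   # farther pixels take the cursor color
    return img
```

Repeating this comparison as the cursor moves yields a depth illustration that relates the selected location to other objects in the image.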

Подробнее
02-05-2013 дата публикации

Geolocation system and method for determining mammal locomotion movement

Номер: US20130110456A1
Автор: Solinsky James C.
Принадлежит:

An example geolocation system for mounting on a mammal incorporates simple sensing sleeves on the calves of the body support members, combined with an accelerometer based gravity direction and force sensing at the center of mass of the body. The example system is connected to a digital processing unit and a battery power supply to integrate the sensing to determine kinetic and potential energy of the body locomotion over time in a method that integrates out the aperiodic motion of the body about the center of mass, and uses the residual motion to measure the center of mass locomotion from a known point. 1. A geolocation system for mounting on a mammal , comprising:muscular force sensors for measuring muscular force exerted by support members of the mammal;movement sensors for sensing movement in the Earth's magnetic field;gravity sensors for sensing gravity forces at the center of mass of the body of the mammal; anda processing system for using outputs of the muscular force sensors, the movement sensors and the gravity sensors to determine movement of the mammal.2. The system according to claim 1 , wherein the muscular force sensors comprise calf muscle sensors provided as sleeves including interwoven claim 1 , elastic-resistive strips.3. The system according to claim 2 , wherein the movement sensors comprise magneto-resistive strips in the sleeves.4. The system according to claim 1 , wherein the force sensors comprise accelerometers disposed at the mass center of the mammal. This application is a divisional of application Ser. No. 12/581,875, filed Oct. 19, 2009, which is a divisional of Ser. No. 11/878,319, filed Jul. 23, 2007 now U.S. Pat. No. 7,610,166 and claims the benefit under 35 U.S.C. Section 119 of provisional application No. 60/832,129, filed Jul. 21, 2006. 
The contents of the provisional application are incorporated herein in their entirety.This application relates to self-locating the position of a mammal body in Earth-based coordinates, referenced to ...

Подробнее
09-05-2013 дата публикации

System and method for sensing human activity by monitoring impedance

Номер: US20130113506A1
Принадлежит: Disney Enterprises Inc

A system for sensing human activity by monitoring impedance includes a signal generator for generating an alternating current (AC) signal, the AC signal applied to an object, a reactance altering element coupled to the AC signal, an envelope generator for converting a returned AC signal to a time-varying direct current (DC) signal, and an analog-to-digital converter for determining a defined impedance parameter of the time-varying DC signal, where the defined impedance parameter defines an electromagnetic resonant attribute of the object.

Подробнее
09-05-2013 дата публикации

Maintenance of Three Dimensional Stereoscopic Effect Through Compensation for Parallax Setting

Номер: US20130113784A1
Автор: Anning Hu, Thomas White
Принадлежит: Autodesk Inc

Maintaining a three dimensional stereoscopic effect may include determining a distance between a position of a virtual camera and a first center of interest of a three dimensional image, calculating a scaling factor based on the distance, and compensating for a parallax setting associated with a second center of interest within the three dimensional image by applying the scaling factor when generating the three dimensional image to maintain the three dimensional effect.
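A minimal sketch of the distance-based compensation, assuming the scaling factor is simply the ratio of camera-to-center-of-interest distances (the abstract does not give the formula):

```python
def parallax_scale(camera_pos, first_coi, second_coi, base_parallax):
    """Scale the parallax tuned for `first_coi` by the ratio of camera
    distances, so the stereoscopic depth effect is preserved when the
    view centers on `second_coi`. The linear distance ratio is an
    illustrative assumption."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    scale = dist(camera_pos, second_coi) / dist(camera_pos, first_coi)
    return base_parallax * scale
```

For example, doubling the camera distance to the second center of interest doubles the applied parallax under this rule.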

Подробнее
09-05-2013 дата публикации

CAMERA AS INPUT INTERFACE

Номер: US20130116007A1
Принадлежит: Apple Inc.

A portable handheld electronic device contains a camera lens and accelerometer to allow a user to control voicemail and call features by swiping his finger across the camera lens and/or tapping the device. Therefore, the user can comfortably input commands into the device with a single hand and without needing to move the phone away from his ear to apply these inputs. In another embodiment, the camera lens can also be used to control navigation of the display screen or a displayed document of the device. For example, if a user wishes to shift a scrollbar for a page displayed on the screen downwards to view the bottom of the page, the user should move his finger over the camera lens in an analogous downward direction. 1. A portable electronic device comprising:a camera lens located on a rear face of the apparatus;a finger swipe detector to detect a finger swiping action across the camera lens;a display screen located on a front face of the apparatus; anda user interface component to cause the display screen to display a scrolling operation based on the finger swiping action.2. The device of claim 1 , wherein the scrolling operation to move a handle location of the display screen in any direction on the display screen.3. The device of claim 2 , wherein the handle location navigates a movement of the display screen.4. A cellular phone comprising:an accelerometer component to detect tapping;a tap component to determine a single tap and a double tap of the apparatus from the tapping;a gesture mapper to translate the single tap into a first command and to translate the double tap into a second command; anda telephone component to implement the first command and the second command.5. The cellular phone of claim 4 , wherein the first command is to merge a first call and a second call claim 4 , and wherein the second command is to put the first call on hold and to answer the second call.6. 
The cellular phone of claim 4 , wherein the first command and the second command are ...

Подробнее
16-05-2013 дата публикации

Hand-Location Post-Process Refinement In A Tracking System

Номер: US20130120244A1
Автор: Lee Johnny Chung
Принадлежит: MICROSOFT CORPORATION

A tracking system having a depth camera tracks a user's body in a physical space and derives a model of the body, including an initial estimate of a hand position. Temporal smoothing is performed in which some latency is imposed when the initial estimate moves by less than a threshold level from frame to frame, while little or no latency is imposed when the movement is more than the threshold. The smoothed estimate is used to define a local volume for searching for a hand extremity to define a new hand position. Another process generates stabilized upper body points that can be used as reliable reference positions, such as by detecting and accounting for occlusions. The upper body points and a prior estimated hand position are used to define an arm vector. A search is made along the vector to detect a hand extremity to define a new hand position. 1. Tangible computer readable storage device having computer readable software embodied thereon for programming a processor to perform a method for tracking user movement in a motion capture system, the method comprising: tracking a body in a field of view of the motion capture system, including obtaining a 3-D depth image and determining a 3-D skeletal model of the body; for one point in time, identifying a location of a hand of the 3-D skeletal model in the field of view; and identifying a reference point of the 3-D skeletal model; defining at least one vector from the reference point in the next point in time to the location of the hand in the one point in time; traversing the at least one vector to look for a most probable location of the hand in the next point in time, including scoring candidate locations which are part of the 3-D skeletal model based on their distance along the at least one vector and their distance perpendicularly from the at least one vector; based on the most probable location of the hand, defining a volume in the field of view; searching the 3-D depth image in the volume to determine
a ...
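The candidate-scoring step in claim 1 (favoring distance along the arm vector while penalizing perpendicular deviation from it) might look like this; the linear combination and its weight are assumptions not stated in the claim.

```python
import numpy as np

def score_candidates(reference, prev_hand, candidates, w_perp=2.0):
    """Score candidate 3-D points for the new hand position: reward
    progress along the arm vector (reference -> previous hand) and
    penalize perpendicular deviation. Weighting is an assumption."""
    ref = np.asarray(reference, float)
    v = np.asarray(prev_hand, float) - ref
    v_hat = v / np.linalg.norm(v)
    best, best_score = None, -np.inf
    for c in candidates:
        d = np.asarray(c, float) - ref
        along = d @ v_hat                         # progress along the arm vector
        perp = np.linalg.norm(d - along * v_hat)  # deviation from the vector
        score = along - w_perp * perp
        if score > best_score:
            best, best_score = c, score
    return best
```

The winning candidate then seeds the local search volume in which the depth image is searched for the hand extremity.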

Подробнее
16-05-2013 дата публикации

INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD AND PROGRAM

Номер: US20130120449A1
Принадлежит:

A failure analysis apparatus obtains information associated with an operational status of a data center, determines information regarding fault repair work for the data center, based on the information associated with the operational status, and transmits the information regarding the fault repair work to an HMD. The HMD synthesizes and presents computer graphics image data for providing guidance for a method of the fault repair work, with an image of real space, based on the information regarding the fault repair work. 1. An information processing system comprising a sense of augmented reality presentation apparatus that is able to synthesize and display an image of real space and computer graphics image data , and a failure analysis apparatus that is able to analyze a fault having occurred in a computer system , wherein the failure analysis apparatus comprises:obtaining means that obtains information associated with an operational status of the computer system;determination means that determines information regarding fault repair work for the computer system, based on the information associated with the operational status obtained by the obtaining means; andtransmission means that transmits the information regarding the fault repair work determined by the determination means, to the sense of augmented reality presentation apparatus,the sense of augmented reality presentation apparatus comprises:presentation means that synthesizes and presents computer graphics image data for providing guidance for a method of the fault repair work, with the image of the real space, based on the information regarding the fault repair work, andin the failure analysis apparatus:after the fault repair work according to the guidance presented by the sense of augmented reality presentation apparatus, if the information associated with the operational status of the computer system is newly obtained by the obtaining means, the information regarding the fault repair work for the computer 
...

Подробнее
16-05-2013 дата публикации

Optical pattern projection

Номер: US20130120841A1
Принадлежит: PRIMESENSE LTD

Optical apparatus includes first and second diffractive optical elements (DOEs) arranged in series to diffract an input beam of radiation. The first DOE is configured to apply to the input beam a pattern with a specified divergence angle, while the second DOE is configured to split the input beam into a matrix of output beams with a specified fan-out angle. The divergence and fan-out angles are chosen so as to project the radiation onto a region in space in multiple adjacent instances of the pattern.

Подробнее
16-05-2013 дата публикации

Apparatus and Method for Driving Touch Sensor

Номер: US20130124140A1
Принадлежит: LG DISPLAY CO., LTD.

Disclosed herein is an apparatus and method for driving a touch sensor, which is capable of improving touch sensitivity and accuracy. The touch sensor driving apparatus includes a touch sensor; a read-out circuit; and a signal processor configured to compare raw data from the read-out circuit with a predetermined primary reference value and secondary reference value so as to determine whether the touch node has been touched, wherein the signal processor collects the raw data of each touch node or each channel during a plurality of frames and resets and updates the secondary reference value of each touch node or each channel using the collected raw data. 1. A touch sensor driving apparatus, comprising: a touch sensor; a read-out circuit configured to drive the touch sensor, detect raw data of each touch node using each read-out signal received from the touch sensor, and output the raw data; and a signal processor configured to compare the raw data from the read-out circuit with a predetermined primary reference value and secondary reference value so as to determine whether the touch node has been touched and calculate and output touch coordinates corresponding to the touch node which has been touched, wherein the signal processor collects the raw data of each touch node or each channel during a plurality of frames and resets and updates the secondary reference value of each touch node or each channel using the collected raw data. 2. The touch sensor driving apparatus of claim 1, wherein the signal processor includes: a touch determination unit configured to determine whether the touch node has been touched and reset the secondary reference value; a touch coordinate calculator configured to calculate the touch coordinates; and an interface configured to enable the output of the touch coordinates. 3.
The touch sensor driving apparatus of claim 2 , wherein the touch determination unit resets the secondary reference value if a power supply is turned on and/or if the touch sensor has ...
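The two-reference decision scheme (a fixed primary threshold plus a secondary baseline reset from raw data collected over several frames) can be sketched per touch node as below; the window length and the mean-based update rule are illustrative assumptions.

```python
class TouchNode:
    """Per-node touch decision against a fixed primary reference and a
    drifting secondary reference (baseline). The update rule, a mean of
    recent untouched frames, is an illustrative assumption."""
    def __init__(self, primary_ref, secondary_ref, window=8):
        self.primary_ref = primary_ref
        self.secondary_ref = secondary_ref
        self.window = window
        self.history = []

    def update(self, raw):
        touched = (raw - self.secondary_ref) > self.primary_ref
        if not touched:
            self.history.append(raw)       # collect baseline frames
            if len(self.history) >= self.window:
                # reset the secondary reference from the collected frames
                self.secondary_ref = sum(self.history) / len(self.history)
                self.history.clear()
        return touched
```

Letting the secondary reference track slow drift (temperature, supply noise) while the primary threshold stays fixed is what makes such a scheme robust without losing sensitivity.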

Подробнее
30-05-2013 дата публикации

GESTURE RECOGNITION APPARATUS, METHOD THEREOF AND PROGRAM THEREFOR

Номер: US20130135192A1
Принадлежит: KABUSHIKI KAISHA TOSHIBA

A gesture recognition apparatus detects a locus of a fingertip position of a user from an acquired moving image; sets an effective range to detect a locus of a flap action of the fingertip position of the user from the moving image; determines whether or not the locus of the fingertip position is of the flap action when the locus of the fingertip position is included in the effective range; and recognizes a gesture of the user from the flap action when the locus of the fingertip position is of the flap action. 1. A gesture recognition apparatus comprising: an image acquiring unit configured to acquire a moving image; a detecting unit configured to detect a locus of a fingertip position of a user from the moving image; a setting unit configured to set an effective range to detect a locus of a flap action of the fingertip position of the user from the moving image; a determining unit configured to determine whether or not the locus of the fingertip position is of a flap action when the locus of the fingertip position is included in the effective range; and a recognizing unit configured to recognize a gesture of the user from the flap action when the locus of the fingertip position is of the flap action. 2. The apparatus according to claim 1, further comprising a display unit configured to display the moving image and the effective range in a superimposed manner. 3. The apparatus according to claim 1, wherein the setting unit also sets a first direction vector which indicates the flap action in addition to the effective range, and the determining unit determines that the locus of the fingertip position is the flap action when the locus is included in the effective range and an angle formed between a second vector indicated by the locus of the fingertip position and the first direction vector is smaller than a determination angle. 4. The apparatus according to claim 1, further comprising: a managing unit configured to send loci of fingertip ...

Подробнее
30-05-2013 дата публикации

Image reproducer, image reproduction method, and data structure

Номер: US20130135311A1
Принадлежит: Toshiba Corp

According to one embodiment, an image reproducer includes: a viewpoint position acquisition module configured to acquire a viewpoint of a viewer with respect to a display surface of a display; an image data acquisition module configured to acquire image data including an image to be displayed and disposition information indicating a three-dimensional position of the image with respect to the viewer when the image is displayed to the viewer; a pixel value calculator configured to calculate a pixel value on the display surface corresponding to a pixel of the acquired image such that the acquired image is displayed at the acquired position based on the acquired viewpoint of the viewer and the acquired position; and a display controller configured to control the display to display the acquired image based on the calculated pixel value.

Подробнее
06-06-2013 дата публикации

GESTURE INPUT METHOD AND SYSTEM

Номер: US20130141327A1
Принадлежит: WISTRON CORP.

A gesture input method is provided. The method is used in a gesture input system to control a content of a display. The method includes: capturing, by a first image capturing device, a hand of a user and generating a first grayscale image; capturing, by a second image capturing device, the hand of the user and generating a second grayscale image; detecting, by an object detection unit, the first and second grayscale images to obtain a first imaging position and a second imaging position corresponding to the first and second grayscale images, respectively; calculating, by a triangulation unit, a three-dimensional space coordinate of the hand according to the first imaging position and the second imaging position; recording, by a memory unit, a motion track of the hand formed by the three-dimensional space coordinate; and recognizing, by a gesture determining unit, the motion track and generating a gesture command. 1. A gesture input method , used in a gesture input system to control a content of a display , wherein the gesture input system comprises a first image capturing device , a second image capturing device , an object detection unit , a triangulation unit , a memory unit , a gesture determining unit , and a display , the gesture input method comprising:capturing, by the first image capturing device, a hand of a user and generating a first grayscale image;capturing, by the second image capturing device, the hand of the user and generating a second grayscale image;detecting, by the object detection unit, the first and second grayscale images to obtain a first imaging position and a second imaging position corresponding to the first and second grayscale images, respectively;calculating, by the triangulation unit, a three-dimensional space coordinate of the hand according to the first imaging position and the second imaging position;recording, by the memory unit, a motion track of the hand formed by the three-dimensional space coordinate; andrecognizing, by the 
...
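The triangulation step can be sketched for the common rectified, parallel-camera case (an assumption; the claim does not restrict the camera geometry):

```python
def triangulate(x_left, x_right, y, focal_px, baseline_m):
    """Recover a 3-D point of the hand from its two imaging positions,
    assuming rectified, parallel cameras, a simplification the
    abstract does not spell out."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point must be in front of both cameras")
    z = focal_px * baseline_m / disparity      # depth
    x = x_left * z / focal_px                  # lateral position
    y3d = y * z / focal_px                     # vertical position
    return (x, y3d, z)
```

Recording the resulting coordinates frame by frame yields the motion track that the gesture determining unit matches against known gestures.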

Подробнее
06-06-2013 дата публикации

DIGITAL IMAGE PROCESSING APPARATUS AND DIGITAL PHOTOGRAPHING APPARATUS INCLUDING THE SAME

Номер: US20130141613A1
Принадлежит: SAMSUNG ELECTRONICS CO., LTD.

A digital image processing apparatus includes a storage unit that stores an image, a display unit that displays a stored image, a sensor that senses a motion of a user and generates a sensing signal, and a control unit that controls display of relevant information about an image displayed according to the sensing signal. 1. A digital image processing apparatus , comprising:a storage unit that stores an image;a display unit that displays a stored image;a sensor that senses a motion of a user and generates a sensing signal; anda control unit that controls display of relevant information about an image displayed according to the sensing signal.2. The digital image processing apparatus of claim 1 , wherein the display unit comprises:a main display unit that displays a reproduction image; andan auxiliary display unit that displays the relevant information.3. The digital image processing apparatus of claim 2 , wherein the main display unit and the auxiliary display unit are arranged to face opposite directions.4. The digital image processing apparatus of claim 2 , wherein claim 2 , when the digital image processing apparatus is rotated such that directions that the main display unit and the auxiliary display unit face are switched claim 2 , the control unit controls the relevant information to be displayed on the auxiliary display unit.5. The digital image processing apparatus of claim 4 , wherein the sensor is a gyro sensor that senses a motion of the digital image processing apparatus.6. The digital image processing apparatus of claim 4 , wherein a type of the relevant information displayed on the auxiliary display unit varies according to a rotation direction of the digital image processing apparatus.7. The digital image processing apparatus of claim 4 , wherein a characteristic of the sensing signal varies according to a rotation direction of the digital image processing apparatus.8. The digital image processing apparatus of claim 1 , wherein the display unit ...

Подробнее
06-06-2013 дата публикации

Real Time Assessment During Interactive Activity

Номер: US20130144537A1
Принадлежит: Neuro Analytics and Tech LLC

A solution for adjusting an interactive activity is provided. While a person is engaged in an instance of the interactive activity, direct measurement data corresponding to the person is received. The direct measurement data is used to assess at least one aspect of a response of the person to the instance of the interactive activity. Assessment data corresponding to the at least one aspect of the response is provided for use in adjusting at least one aspect of the interactive activity.

Подробнее
13-06-2013 дата публикации

CAMERA-BASED MULTI-TOUCH INTERACTION APPARATUS, SYSTEM AND METHOD

Номер: US20130147711A1

An apparatus, system and method control and interact within an interaction volume within a height over the coordinate plane of a computer such as a computer screen, interactive whiteboard, horizontal interaction surface, video/web-conference system, document camera, rear-projection screen, digital signage surface, television screen or gaming device, to provide pointing, hovering, selecting, tapping, gesturing, scaling, drawing, writing and erasing, using one or more interacting objects, for example, fingers, hands, feet, and other objects, for example, pens, brushes, wipers and even more specialized tools. The apparatus and method may be used together with, or even be integrated into, data projectors of all types and their fixtures/stands, and used together with flat screens to render display systems interactive. The apparatus has a single camera covering the interaction volume from either a very short distance or from a larger distance to determine the lateral positions and to capture the pose of the interacting object(s). 1.
An apparatus for determining a position or posture or both of at least one object, wherein the object is in whole or partly located within an interaction volume delimited by an interaction surface and by a certain height range in a height dimension over said interaction surface, comprising: a camera; a mirror arrangement comprising one or more mirror sections; a computational unit for the computation of position and posture or both of at least one object based on information from the camera inter alia; wherein the camera is arranged to include both the volume and the mirror arrangement within the camera's field-of-view; the mirror arrangement, where the one or more mirror sections comprises at least one off-axis concave substantially parabolic optical mirror element at the plane of the interaction surface and where each off-axis substantially parabolic optical mirror element is arranged with its focal point at the camera's entrance pupil and its axis parallel ...

Подробнее
13-06-2013 дата публикации

MOBILE TERMINAL AND CONTROLLING METHOD THEREOF

Номер: US20130147793A1
Принадлежит:

A mobile terminal and controlling method thereof are disclosed, which are suitable for providing a visual effect in accordance with a shift of a pointer in a mobile terminal capable of displaying a stereoscopic 3D image and controlling functions of the mobile terminal. The present invention includes displaying at least one selection target object having a predetermined 3D depth given thereto on a display unit including a binocular disparity generating means, detecting a distance and position of a pointer from the mobile terminal via a detecting unit, controlling a prescribed visual effect to be displayed on the display unit in response to the detected distance and position of the pointer, and if the distance of the pointer is equal to or smaller than a first threshold and a specific selection target object is present at the position, activating a function corresponding to the specific selection target object. 1. A mobile terminal comprising: a display to display at least one selection object having a perceived three-dimensional (3D) depth; a detecting unit to detect a distance of a pointer from the mobile terminal and to detect a position of the pointer relative to the mobile terminal; and a controller to control a prescribed visual effect to be displayed on the display based on the detected distance and the detected position of the pointer, and when the distance of the pointer is determined to be equal to or less than a first threshold and the detected position of the pointer corresponds to a specific selection object, the controller to activate a function corresponding to the specific selection object. 2. The mobile terminal of claim 1, wherein the distance of the pointer is a distance between the pointer and the mobile terminal, and the position of the pointer includes coordinates in a plane that is parallel with a screen of the display. 3.
The mobile terminal of claim 1 , wherein the detecting unit includes at least one proximity sensor and a camera claim 1 , the ...

13-06-2013 publication date

Enhanced perception of multi-dimensional data

Number: US20130151992A1
Assignee: Raytheon Co

A system for analyzing multi-dimensional data maps the multi-dimensional data to visual attributes and aural attributes. It displays a subset of the multidimensional data set on a display unit. The system further displays an avatar on the display unit. The avatar can select a field of view of the displayed subset. The system receives input from a user, wherein the user input relates to an additional dimension subset of the multidimensional data set that is not currently displayed. Visual attributes and/or aural attributes relating to the additional dimension subset are generated as a function of the input from a user. The visual and/or aural attribute convey information relating to the additional dimension subset on the display unit.
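The core mapping described above — generating visual and aural attributes from a dimension of the data that is not currently displayed — can be sketched as follows. The brightness and pitch ranges, and all names, are illustrative assumptions, not values from the patent:

```python
def map_attributes(value, vmin, vmax):
    """Map one extra (undisplayed) data dimension to a visual and an
    aural attribute. The 0-255 grayscale range and the 220-880 Hz
    pitch range are illustrative assumptions."""
    t = (value - vmin) / (vmax - vmin)       # normalize to [0, 1]
    t = min(1.0, max(0.0, t))                # clamp out-of-range values
    intensity = round(255 * t)               # visual attribute: brightness
    pitch_hz = 220.0 + t * (880.0 - 220.0)   # aural attribute: pitch
    return intensity, pitch_hz

# A mid-range value maps to mid brightness and mid pitch.
intensity, pitch_hz = map_attributes(5.0, 0.0, 10.0)
```

In a real system the attribute would be refreshed as the avatar's field of view changes, so the sonification tracks the displayed subset.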

04-07-2013 publication date

CONTROL OF A WEARABLE DEVICE

Number: US20130169536A1
Assignee: OrCam Technologies Ltd.

A wearable device including a camera and a processor and a control interface between the wearable device and a user of the wearable device. An image frame is captured from the camera. Within the image frame, an image of a finger of the user is recognized. The recognition of the finger by the wearable device controls the wearable device. 1. A method for interfacing between a wearable device and a user of the wearable device, the device including a camera and a processor connectible thereto, the method comprising: capturing an image frame from the camera; and within the image frame, recognizing an image of a finger of the user thereby controlling the wearable device. 2. The method of claim 1, wherein said recognizing is performed by using an appearance-based classifier. 3. The method of claim 2, further comprising: previously training said appearance-based classifier on at least one training set of images selected from the group consisting of: images of a plurality of fingers and a plurality of images of the finger of the user. 4. The method of claim 1, wherein said recognizing is performed from information in a single image frame. 5. The method of claim 1, wherein said recognizing is performed while said camera is immobile. 6. The method of claim 1, further comprising: upon said recognizing, providing confirmation to the user that said finger is recognized. 7. The method of claim 1, wherein said recognizing a finger includes said recognizing two fingers selected from the group consisting of: an index finger and thumb, an index finger and middle finger, and a thumb and pinky finger. 8. The method of claim 1, upon said recognizing, searching in the vicinity of the image of the finger for text. 9. The method of claim 1, upon said recognizing, searching in the vicinity of the image of the finger for an image of an object selected from the group consisting of: a vehicle, a newspaper, a signpost, a notice, a book ...

04-07-2013 publication date

Customization based on physiological data

Number: US20130173413A1
Assignee: adidas AG

Product and/or item customization based on physiological data is described. One or more sensors worn by a person or a group of people and/or attached with one or more items may generate sensor data. Data characterizing one or more physiological attributes of the person or group may be determined based at least in part on the generated sensor data. Item customization may include a customized design. For example, the customized design may include one or more personalized graphics generated based on the physiological data, and/or one or more customized item components with parameters set based on the physiological data. The customized design may be provided to an item customization facility for presentation, assembly and/or manufacture. Item customization may further include selecting for consideration a set of matching items based at least in part on the physiological data.

04-07-2013 publication date

ELECTRONIC APPARATUS AND METHOD OF CONTROLLING THE SAME

Number: US20130174101A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

An electronic apparatus and a method of controlling the electronic apparatus are provided. The method includes: receiving a two hand start command which is to perform a motion task using two hands; if the two hand start command is input, changing a mode of the electronic apparatus to a two hand task mode which is to perform the motion task using the two hands; and if the mode of the electronic apparatus is changed to the two hand task mode, displaying a two hand input guide graphical user interface (GUI) which is to perform the motion task using the two hands. Therefore, a user can more intuitively and conveniently perform a function of the electronic apparatus, such as a zoom-in/zoom-out, by using two hands. 1. A method of controlling an electronic apparatus, the method comprising: receiving input indicating a motion of a user; if the received input indicates the motion of the user, changing the electronic apparatus to a two hand task mode in which a two-hand motion is input for performing a corresponding task and in which a graphical user interface (GUI) guide is provided to guide the two-hand motion input. 2. The method of claim 1, wherein the receiving the input comprises: if the received input is input via one hand, changing the mode of the electronic apparatus to a motion task mode which is to perform a motion task; and if the received input is further input using the other hand when the electronic apparatus is in the motion task mode, the received input indicates the two-hand mode and the GUI guide is provided. 3. The method of claim 1, wherein the receiving the input comprises: receiving a shake motion in which two hands of a user shake a plurality of times, and determining that the received input indicates the two-hand task mode based on the received shake motion. 4. The method of claim 1, wherein the corresponding task is a task of magnifying or reducing a display screen. 5.
The method of claim 4 , further comprising:if a motion of moving the two hands away ...

11-07-2013 publication date

IMPLANTED DEVICES AND RELATED USER INTERFACES

Number: US20130176207A1
Assignee: AUTODESK, INC.

Embodiments of the invention generally relate to electronic devices capable of being implanted beneath the skin of a human user. The electronic devices include input devices for receiving input from a user, and output devices for output signals or information to a user. The electronic devices may optionally include one or more sensors, batteries, memory units, and processors. The electronic devices are protected by a protective packaging to reduce contact with bodily fluids and to mitigate physiological responses to the implanted devices. 1. A device configured to be implanted beneath human skin , the device comprising:a processor;a first input device coupled to the processor and adapted to receive direct input from a user;a first output device coupled to the processor; anda protective packaging disposed around at least the first input device, the first output device, and the processor.2. The device of claim 1 , further comprising a second input device coupled to the processor.3. The device of claim 2 , further comprising a second output device coupled to the processor.4. The device of claim 1 , wherein the protective packaging comprises elastomeric silicone.5. The device of claim 4 , further comprising a port for coupling a charging cable to the device.6. The device of claim 5 , wherein the protective packaging includes an opening adjacent to the port.7. The device of claim 1 , further comprising a battery for supplying power to the device.8. The device of claim 7 , wherein the battery is rechargeable via capacitive charging.9. The device of claim 7 , wherein the battery comprises a lithium polymer battery.10. The device of claim 1 , wherein the first output device comprises a light emitting diode display for displaying information to a user.11. The device of claim 1 , wherein the first output device comprises an audio speaker.12. The device of claim 1 , wherein the first output device comprises a vibrating motor for indicating a direction to a user.13. 
The device ...

18-07-2013 publication date

Image Adjusting

Number: US20130181892A1
Assignee: Nokia Oyj

An apparatus including a display configured to display an image; and a system for adjusting the image on the display based upon location of a user of the apparatus relative to the apparatus. The system for adjusting includes a camera and an orientation sensor. The system for adjusting is configured to use signals from both the camera and the sensor to determine the location of the user relative to the display.

18-07-2013 publication date

Augmented reality with sound and geometric analysis

Number: US20130182858A1
Assignee: Qualcomm Inc

A method for responding in an augmented reality (AR) application of a mobile device to an external sound is disclosed. The mobile device detects a target. A virtual object is initiated in the AR application. Further, the external sound is received, by at least one sound sensor of the mobile device, from a sound source. Geometric information between the sound source and the target is determined, and at least one response for the virtual object to perform in the AR application is generated based on the geometric information.
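A minimal sketch of the geometric step described above — deriving distance and bearing between the sound source and the detected target — assuming simple 2D coordinates in the AR plane. The function name and coordinate convention are illustrative, not from the patent:

```python
import math

def geometric_info(target_xy, source_xy):
    """Return (distance, azimuth_degrees) from the detected target to the
    sound source. Positions are assumed 2D coordinates in the camera/AR
    plane; azimuth is measured counter-clockwise from the +x axis."""
    dx = source_xy[0] - target_xy[0]
    dy = source_xy[1] - target_xy[1]
    distance = math.hypot(dx, dy)
    azimuth = math.degrees(math.atan2(dy, dx))
    return distance, azimuth

# Example: a sound source 3 units right and 4 units above the target.
dist, az = geometric_info((0.0, 0.0), (3.0, 4.0))
```

The AR application could then pick a response for the virtual object (e.g. turn toward the azimuth) from this pair.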

25-07-2013 publication date

INTRA-OPERATIVE IMAGE PRESENTATION ADAPTED TO VIEWING DIRECTION

Number: US20130187955A1
Assignee: BRAINLAB AG

The invention relates to an intra-operative image presentation method, in which an image representation of a branched body structure which has been graphically segmented from a medical image data set is presented on a display, in particular a monitor, wherein the viewing situation of a person looking at the display and any changes in said viewing situation are determined, and the image representation is modified accordingly by adapting the image representation to the changes in the viewing situation. The invention also relates to an intra-operative image presentation system, comprising a display, in particular a monitor, on which an image representation of a branched body structure which has been graphically segmented from a medical image data set is presented, wherein a tracking system determines the viewing situation of a person looking at the display and any changes in the viewing situation, and a graphic processor modifies the image representation by adapting it to the determined changes in the viewing situation. 1. An intra-operative image presentation method, in which an image representation of a branched body structure which has been graphically segmented from a medical image data set is presented on a display, in particular a monitor, characterised in that the viewing situation of a person looking at the display and any changes in said viewing situation are determined, and the image representation is modified accordingly by adapting the image representation to the changes in the viewing situation. 2. The method according to claim 1, wherein the viewing situation includes the viewing direction, and the image representation is modified by rotating it in accordance with a change in the viewing angle. 3. The method according to claim 1, wherein the viewing situation includes the viewing distance, and the image representation is modified by being zoomed-in or zoomed-out in accordance with a change in the viewing ...

08-08-2013 publication date

Method and system for providing a modified display image augmented for various viewing angles

Number: US20130201099A1
Assignee: ORTO Inc

An image augmentation method for providing a modified display image to compensate for an oblique viewing angle by measuring a viewing position of a viewer relative to a display screen; calculating a three-dimensional position of the viewer relative to the display screen; calculating an angular position vector of the viewer relative to the display screen; generating a rotation matrix as a function of the angular position vector; calculating a set of perimeter points; generating a modified image as a function of a normal image and the previously calculated perimeter points; and rendering the modified image on the display screen.
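The rotation-matrix step above can be illustrated as below, modeling only the horizontal viewing angle and rotating the screen-corner perimeter points. This is a simplification for illustration; the patent's angular position vector covers the full 3D case:

```python
import math

def yaw_rotation_matrix(theta):
    """3x3 rotation about the vertical (y) axis by theta radians; a
    stand-in for the patent's rotation matrix generated from the
    angular position vector (horizontal angle only)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0.0, s],
            [0.0, 1.0, 0.0],
            [-s, 0.0, c]]

def rotate_points(matrix, points):
    """Apply the rotation to a list of 3D perimeter points."""
    return [tuple(sum(matrix[r][k] * p[k] for k in range(3)) for r in range(3))
            for p in points]

# Screen corners (z = 0 plane), rotated for a viewer 30 degrees off-axis.
corners = [(-1.0, -1.0, 0.0), (1.0, -1.0, 0.0), (1.0, 1.0, 0.0), (-1.0, 1.0, 0.0)]
rotated = rotate_points(yaw_rotation_matrix(math.radians(30)), corners)
```

The modified image is then the normal image warped so that it maps onto the rotated perimeter points when rendered.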

08-08-2013 publication date

BOOK OBJECT FOR AUGMENTED REALITY

Number: US20130201185A1
Author: Kochi Masami
Assignee: Sony Computer Entertainment Europe Ltd.

A method for interfacing with an interactive program is provided. The method includes: capturing images of first and second pages of a book object, the first and second pages being joined along a fold axis defined along a spine of the book; analyzing the captured images to identify a first tag on the first page and a second tag on the second page; tracking movement of the first and second pages by tracking the first and second tags, respectively; generating augmented images by replacing, in the captured images, the book object with a virtual book, the virtual book having a first virtual page corresponding to the first page of the book object, the virtual book having a second virtual page corresponding to the second page of the book object; rendering first and second scenes on the first and second virtual pages, respectively; and presenting the augmented images on a display. 1. A method for interfacing with an interactive program , comprising:capturing images of first and second pages of a book object, the first and second pages being joined substantially along a fold axis defined along a spine of the book;analyzing the captured images to identify a first tag on the first page and a second tag on the second page;tracking movement of the first and second pages by tracking the first and second tags, respectively;generating augmented images by replacing, in the captured images, the book object with a virtual book, the virtual book having a first virtual page corresponding to the first page of the book object, the virtual book having a second virtual page corresponding to the second page of the book object, wherein movement of the first and second pages of the book object controls movement of the first and second virtual pages of the virtual book, respectively;rendering first and second scenes on the first and second virtual pages, respectively, the first or second scene defining an animation; andpresenting the augmented images on a display.2. The method of claim 1 , ...

15-08-2013 publication date

METHOD AND APPARATUS FOR PRESENTING AN OPTION

Number: US20130207888A1
Author: Jin Sheng
Assignee: KONINKLIJKE PHILIPS ELECTRONICS N.V.

A method of presenting an option comprises: calculating a movement range of an object; calculating an area of a display device based on the movement range; and presenting at least one option in the area of the display device. 1. An apparatus for presenting an option , comprising:a calculation module adapted to calculate a movement range of an object and calculate an area of a display device, based on the movement range; anda presentation module adapted to present at least one option in the area of the display device.2. The apparatus of claim 1 , wherein the apparatus further comprises a reception module adapted to receive position information of the object claim 1 , the calculation module is further adapted to calculate the movement range claim 1 , based on the received position information of the object.3. The apparatus of claim 1 , wherein the apparatus further comprises a processing module adapted to perform image processing on images including position information of the object to obtain the position information of the object claim 1 , the calculation module is further adapted to calculate the movement range claim 1 , based on the obtained position information of the object.4. The apparatus of claim 3 , wherein the images are picked up by an image pickup means.5. The apparatus of claim 2 , whereinthe calculation module is further adapted to calculate a movement trajectory of the object, based on the position information of the object, and calculate the movement range, based on the calculated movement trajectory of the object.6. The apparatus of claim 1 , whereinthe object includes a marker worn on a hand of a person.7. The apparatus of claim 4 , whereinthe image pickup means includes an infrared camera.8. The apparatus of claim 1 , wherein the apparatus further comprises:a judgment module adapted to judge whether or not a pointer indicating the object in the display device remains on one option of the at least one option for a predefined period; anda ...

15-08-2013 publication date

System and Method of Biomechanical Posture Detection and Feedback Including Sensor Normalization

Number: US20130207889A1
Assignee: LUMO Bodytech, Inc.

A system and method are described herein for a sensor device which biomechanically detects in real-time a user's movement state and posture and then provides real-time feedback to the user based on the user's real-time posture. The feedback is provided through immediate sensory feedback through the sensor device (e.g., a sound or vibration) as well as through an avatar within an associated application with which the sensor device communicates. The sensor device detects the user's movement state and posture by capturing data from a tri-axial accelerometer in the sensor device. Streamed data from the accelerometer is normalized to correct for sensor errors as well as variations in sensor placement and orientation. Normalization is based on accelerometer data collected while the user is wearing the device and performing specific actions. 1. A method of normalizing accelerometer data comprising:capturing, by a tri-axial accelerometer attached to a user, first tri-axial accelerometer data over a first period of time while the user is moving as instructed;averaging, by a microprocessor, the captured first tri-axial accelerometer data;creating, by the microprocessor, a first normalization matrix based on the averaged first tri-axial accelerometer data;capturing, by the tri-axial accelerometer attached to the user, second tri-axial accelerometer data at a second point in time; andcreating, by the microprocessor, first normalized accelerometer data by applying the first normalization matrix to the captured second tri-axial accelerometer data.2. The method of further comprising claim 1 , before averaging by the microprocessor the captured first tri-axial accelerometer data:calculating, by the microprocessor, smoothed power from the captured first tri-axial accelerometer data;identifying, by the microprocessor, data in the captured first tri-axial accelerometer data which results in the calculated power being below a threshold; anddiscarding, by the microprocessor, the ...
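One plausible reading of the claimed normalization — averaging calibration samples, creating a matrix from the average, and applying it to later samples — is a rotation aligning the measured gravity direction with the device's +z axis. The exact matrix form is an assumption; the patent does not publish it:

```python
import math

def mean_vector(samples):
    """Average raw (x, y, z) accelerometer samples captured while the
    user holds the instructed pose."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(3)]

def normalization_matrix(g):
    """Rotation mapping the averaged gravity direction g onto +z
    (Rodrigues' formula); one plausible 'normalization matrix'."""
    norm = math.sqrt(sum(c * c for c in g))
    ux, uy, uz = (c / norm for c in g)
    kx, ky = uy, -ux                      # rotation axis = u x z
    s = math.hypot(kx, ky)                # sin of rotation angle
    c = uz                                # cos of rotation angle
    if s < 1e-12:                         # already aligned with +/-z
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0 if c > 0 else -1.0]]
    kx, ky = kx / s, ky / s               # unit axis (z component is 0)
    K = [[0.0, 0.0, ky], [0.0, 0.0, -kx], [-ky, kx, 0.0]]
    K2 = [[sum(K[i][m] * K[m][j] for m in range(3)) for j in range(3)] for i in range(3)]
    I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    return [[I[i][j] + s * K[i][j] + (1.0 - c) * K2[i][j] for j in range(3)] for i in range(3)]

def apply_matrix(R, sample):
    return [sum(R[r][k] * sample[k] for k in range(3)) for r in range(3)]

# Calibration: the sensor happens to read gravity along +y; later samples
# are rotated into a canonical frame where gravity lies along +z.
calibration = [(0.0, 9.7, 0.0), (0.0, 9.9, 0.0)]
R = normalization_matrix(mean_vector(calibration))
normalized = apply_matrix(R, (0.0, 9.8, 0.0))
```

This corrects for arbitrary sensor placement and orientation, which is the stated purpose of the normalization step.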

22-08-2013 publication date

APPARATUS SYSTEM AND METHOD FOR HUMAN-MACHINE-INTERFACE

Number: US20130215028A1
Author: Givon Dor
Assignee: EXTREME REALITY LTD.

There is provided a 3D human machine interface (“3D HMI”), which 3D HMI may include: (1) an image acquisition assembly, (2) an initializing module, (3) an image segmentation module, (4) a segmented data processing module, (5) a scoring module, (6) a projection module, (7) a fitting module, (8) a scoring and error detection module, (9) a recovery module, (10) a three dimensional correlation module, (11) a three dimensional skeleton prediction module, (12) an output module and (13) a depth extraction module. 1. A human machine interface comprising: an image acquisition assembly to acquire a series of substantially consecutive sets of two-dimensional image data of a user via multiple optical paths, said image acquisition assembly comprising: a. an image sensor array having an image sensing area; and b. two or more optical paths directing optical image information of the user onto an at least partially overlapping segment of said image sensing area; a first processing unit to: (1) derive, based on the optical image information of the user from two or more optical paths, estimated three dimensional coordinates of elements of the user body, during the acquisition of at least two of the substantially consecutive sets of two-dimensional image data; (2) determine a movement of one or more body parts of the user between the acquisition of the at least two of the substantially consecutive sets of two-dimensional image data, based on a difference between the estimated three dimensional coordinates of the elements of the user body, during the acquisition of each of the at least two of the substantially consecutive sets of two-dimensional image data; and a second processing unit to: (1) correlate the determined movement of one or more body parts of the user to a user input; and (2) transmit a signal representing the user input. 2. The human machine interface according to claim 1, wherein said image sensor array is a webcam.
The human machine interface according to claim 1 , ...

22-08-2013 publication date

INTERACTIVE INPUT SYSTEM HAVING A 3D INPUT SPACE

Number: US20130215148A1
Assignee: SMART TECHNOLOGIES ULC

An interactive input system comprises computing structure; and an input device detecting at least one physical object carrying a recognizable pattern within a three-dimensional (3D) input space and providing output to the computing structure, wherein the computing structure processes the output of the input device to: recognize the pattern carried by the at least one physical object in the 3D input space; and modify an image presented on a display surface by applying a transition to digital content associated with the at least one physical object based on a detected state of the at least one physical object. 1. An interactive input system comprising: computing structure; and an input device detecting at least one physical object carrying a recognizable pattern within a three-dimensional (3D) input space and providing output to said computing structure, wherein said computing structure processes the output of the input device to: recognize the pattern carried by the at least one physical object in the 3D input space; and modify an image presented on a display surface by applying a transition to digital content associated with the at least one physical object based on a detected state of the at least one physical object. 2. The interactive input system of claim 1, wherein the computing device processes the output of the input device to detect at least one of orientation, position and movement of the at least one physical object in the 3D input space. 3. The interactive input system of claim 1, wherein when the detected state of the at least one physical object signifies that a new recognizable physical object is positioned in the 3D input space, the image is modified to add associated digital content to the image, the added associated digital content appearing in the image using a visual effect. 4.
The interactive input system of claim 1 , wherein when the detected state of the at least one physical object signifies removal of a previously ...

22-08-2013 publication date

Stereo image processing device and stereo image processing method

Number: US20130215232A1
Assignee: Panasonic Corp

An image segmenting unit ( 401 ) in the stereo image processing device ( 100 ) extracts M (a natural number between 2 and N, inclusive) number of segmented target images wherein a first partial area within a target image has been segmented into N (a natural number of 2 or more), and also extracts M number of segmented reference images wherein a second partial area within a reference image has been segmented into N. An image concatenating unit ( 402 ) serially concatenates M data strings, each comprising an intensity value from each segmented target image, to form a first image data string and also serially concatenates M data strings, each comprising an intensity value from each segmented reference image, to form a second image data string. A filtering unit ( 403 ) and a peak position detection unit ( 104 ) calculate the disparity between the target images and the reference images.
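As a much-simplified stand-in for the concatenated-data-string matching above (not the patented algorithm), here is basic scanline block matching that picks the disparity minimizing the sum of absolute intensity differences:

```python
def disparity_sad(target_row, reference_row, x, window, max_disp):
    """Best disparity at column x by minimizing the sum of absolute
    intensity differences (SAD) over a small window. Function and
    parameter names are illustrative."""
    half = window // 2
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        if x - half - d < 0 or x + half >= len(reference_row):
            continue  # window would fall outside the reference row
        cost = sum(abs(target_row[x + k] - reference_row[x + k - d])
                   for k in range(-half, half + 1))
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# The reference scanline shows the same feature shifted 3 pixels left,
# so the recovered disparity at x = 10 is 3.
target_row = [0] * 20
reference_row = [0] * 20
target_row[9:12] = [5, 9, 5]
reference_row[6:9] = [5, 9, 5]
d = disparity_sad(target_row, reference_row, 10, 3, 5)
```

The patent's segmentation-and-concatenation scheme refines this basic idea to get sub-pixel peak positions via filtering.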

29-08-2013 publication date

GESTURE RECOGNITION DEVICE AND METHOD THEREOF

Number: US20130222232A1
Assignee: Pantech Co., Ltd.

A device and a method having a gesture recognition operation at a distance are provided. The device to recognize a gesture of a user includes an image capture unit to capture a gesture to acquire image information, a control unit to determine a distance between the device and the user based on the image information, and to determine a mode of the device according to the determined distance. The method for recognizing a gesture for a device includes capturing a gesture of a user as image information, determining a distance between the device and the user based on the image information, and determining a mode of operation according to the determined distance. 1. A device to recognize a gesture of a user , the device comprising:an image capture unit to capture a gesture to acquire image information;a control unit to determine a distance between the device and the user based on the image information, and to determine a mode of the device according to the determined distance.2. The device of claim 1 , wherein the image capture unit captures the gesture in a capture-based gesture recognition mode.3. The device of claim 2 , wherein the device enters the capture-based gesture recognition mode according to at least one of establishing a connection to another device claim 2 , mounting the device on a device rest claim 2 , and an execution of at least one of a gallery application claim 2 , a music player application claim 2 , a call reception application claim 2 , an internet browser application claim 2 , a roadview application claim 2 , and a digital multimedia broadcast application.4. The device of claim 1 , wherein the control unit comprises:a distance determination unit to determine whether to operate the device in a short distance mode or a long distance mode.5. 
The device of claim 4 , wherein the distance determination unit determines to operate in the short distance mode if the distance determination unit determines that the image information includes only a face of the ...
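A toy version of the distance-based mode switch, keyed (as in the claims) on whether only the user's face is visible in the image; the face-to-frame height ratio used as a secondary cue, and its 0.4 cutoff, are added assumptions:

```python
def select_mode(face_height_px, frame_height_px, body_visible):
    """Choose between short- and long-distance gesture modes from the
    captured image. The 0.4 face/frame ratio is illustrative."""
    if not body_visible and face_height_px / frame_height_px > 0.4:
        return "short"
    return "long"

near = select_mode(300, 480, body_visible=False)  # only the face fills the frame
far = select_mode(60, 480, body_visible=True)     # face and body both visible
```

Each mode would then enable a different gesture vocabulary (e.g. touchless swipes at long distance).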

29-08-2013 publication date

Expanded 3d space-based virtual sports simulation system

Number: US20130225305A1

An expanded 3D space-based virtual sports simulation system is provided. The expanded 3D space-based virtual sports simulation system includes: a plurality of user tracking devices configured to track a user's body motion; a first display device configured to display a first image including content; a second display device configured to display a second image including an image of the user's body motion tracked through the user tracking devices; and a control unit configured to set image display spaces of the respective display devices such that physical spaces for displaying an image including a 3D image are divided or shared among the respective display devices, and to provide images to the respective display devices according to a scenario.

05-09-2013 publication date

TEXTILE INTERFACE DEVICE AND METHOD FOR USE WITH HUMAN BODY-WORN BAND

Number: US20130229338A1

Disclosed herein is a textile interface device and method for use with a human body-worn band. The textile interface device includes a detection unit provided in wearing means that is worn on a human body and configured to detect bio-signals from the human body, an interface unit disposed inside accommodation means provided on one side of the wearing means and configured to communicate with an electronic device accommodated in the accommodation means, and a plurality of textile buttons configured to generate control signals adapted to control the electronic device. The interface unit includes, on a textile, an optical reception unit configured to receive optical signals, an optical transmission unit configured to send the optical signals, and a light diffusion unit configured to diffuse light when the optical signals are sent. The interface unit communicates with the electronic device by means of light. 1. A textile interface device for use with a human body-worn band , comprising:a detection unit provided in wearing means that is worn on a human body, and configured to detect bio-signals from the human body;an interface unit disposed inside accommodation means provided on one side of the wearing means, and configured to communicate with an electronic device accommodated in the accommodation means; anda plurality of textile buttons configured to generate control signals adapted to control the electronic device;wherein the interface unit comprises, on a textile, an optical reception unit configured to receive optical signals, an optical transmission unit configured to send the optical signals, and a light diffusion unit configured to diffuse light when the optical signals are sent, and the interface unit communicates with the electronic device by means of light; andwherein each of the textile buttons comprises a textile, an electrode provided on the textile in a button shape, and a ground configured to surround the electrode, a surplus portion of the ground being ...

12-09-2013 publication date

Portable Electronic Device and Method for Controlling Operation Thereof Based on User Motion

Number: US20130234924A1
Assignee: MOTOROLA MOBILITY, INC.

A portable electronic device includes a motion sensor and a controller. The motion sensor detects an alternating signature motion of a limb of the user about a virtual axis corresponding to the limb. The motion sensor may be an accelerometer capable of detecting three dimensional acceleration. The accelerometer detects acceleration along X, Y and/or Z axes, in which acceleration peaks of the X and Z axes alternate with each other and acceleration of the Y axis remains substantially steady relative to the X and Y axes. The portable electronic device controls at least one function based on the detected alternating signature motion of the limb and/or acceleration along the X, Y and/or Z axes. 1. A method for controlling operation of a portable electronic device positioned adjacent to a limb of a user , the method comprising:detecting, by a motion sensor of the portable electronic device, an alternating signature motion of a limb of the user about a virtual axis corresponding to the limb; andcontrolling, by the portable electronic device, at least one function based on the detected alternating signature motion of the limb.2. The method of claim 1 , further comprising positioning the portable electronic device peripherally about the virtual axis corresponding to the limb before detecting the alternating signature motion of the limb.3. The method of claim 1 , further comprising detecting claim 1 , by the motion sensor claim 1 , a display positioning motion subsequent to the alternating signature motion claim 1 , the display positioning motion being associated with directing a display of the portable electronic device towards a view angle of the user.4. 
The method of claim 1 , wherein:the alternating signature motion of the limb includes at least two sets of signature motions within a predetermined time period; andeach set of signature motions includes rotating in a first direction about the virtual axis and rotating in a second direction opposite the first direction about ...
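The alternating-peak criterion described above — X and Z acceleration peaks interleaving while Y stays substantially steady — can be prototyped roughly as follows. Thresholds and the steadiness test are illustrative assumptions, not the patent's criteria:

```python
def peaks(signal, thresh):
    """Indices of local maxima above thresh."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > thresh and signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]]

def is_signature_motion(ax, ay, az, thresh=0.5, y_tol=0.2):
    """Heuristic check: X- and Z-axis acceleration peaks alternate while
    the Y axis stays nearly steady."""
    px, pz = peaks(ax, thresh), peaks(az, thresh)
    if not px or not pz:
        return False
    merged = sorted([(i, "x") for i in px] + [(i, "z") for i in pz])
    alternating = all(a[1] != b[1] for a, b in zip(merged, merged[1:]))
    steady_y = max(ay) - min(ay) < y_tol
    return alternating and steady_y

# X and Z peaks interleave (limb rotating back and forth about its axis)
# while Y barely moves.
detected = is_signature_motion([0, 1, 0, 0, 0, 1, 0, 0],
                               [0.0] * 8,
                               [0, 0, 0, 1, 0, 0, 1, 0])
```

A production detector would also enforce the claimed time window (at least two motion sets within a predetermined period).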

12-09-2013 publication date

ELECTRONIC DEVICE FOR MEASURING ANGLE OF FACE AND ROTATING SCREEN THEREOF AND METHOD THEREOF

Номер: US20130234927A1
Автор: Roh Young-Gil
Принадлежит: Samsung Electronics Co., Ltd

An electronic device is configured to measure an angle of a user's face and rotate a screen thereof. In a method, the electronic device verifies that a face of a user is included in photographed image information, recognizes the face of the user included in the image information, and rotates a screen of the electronic device according to an angle of the recognized face. 1. An operation method of an electronic device , the operation method comprising:verifying that a face of a user is included in photographed image information;recognizing the face of the user included in the image information; androtating a screen of the electronic device according to an angle of the recognized face.2. The operation method of claim 1 , further comprising:displaying a message about whether a smart rotation function is set;allowing the user to select a certain region included in the message; andverifying, that the smart rotation function is set.3. The operation method of claim 2 , further comprising verifying that the smart rotation function is set and stopping operations of a gyro sensor and an acceleration sensor.4. The operation method of claim 1 , further comprising:verifying that a smart rotation function is set; andphotographing an image in a predetermined direction at intervals of a predetermined time.5. The operation method of claim 2 , wherein the smart rotation function is a function for rotating the screen of the electronic device claim 2 , such that the user watches the screen thereof straight.6. The operation method set forth claim 3 , wherein the smart rotation function is a function for rotating the screen of the electronic device claim 3 , such that the user watches the screen thereof straight.7. The operation method of claim 1 , wherein the recognition of the face of the user included in the image information comprises verifying positions of eyes claim 1 , the nose claim 1 , and the mouth of the user.8. The operation method of claim 1 , wherein the rotation of the ...
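One plausible realisation of "rotating a screen according to an angle of the recognized face" is to estimate the face's roll from the two pupil positions (claim 7 verifies eye positions) and snap it to the nearest screen orientation. The snapping rule and all names below are assumptions, not taken from the patent:

```python
# Hedged sketch: face roll from pupil positions, snapped to 0/90/180/270.
import math

def face_roll_degrees(left_eye, right_eye):
    """Angle of the inter-pupil line, in degrees, relative to horizontal."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def screen_rotation(left_eye, right_eye):
    """Snap the measured roll to one of the four screen orientations."""
    roll = face_roll_degrees(left_eye, right_eye) % 360
    return int(round(roll / 90.0)) % 4 * 90  # 0, 90, 180 or 270
```

With this rule a slight head tilt leaves the screen unrotated, while lying on one's side (pupils stacked vertically) rotates it a quarter turn, so the user "watches the screen straight".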

12-09-2013 publication date

COHERENT PRESENTATION OF MULTIPLE REALITY AND INTERACTION MODELS

Number: US20130234933A1
Author: Reitan Dan
Assignee: REINCLOUD CORPORATION

A method for navigating concurrently and from point-to-point through multiple reality models is described. The method includes: generating, at a processor, a first navigatable virtual view of a first location of interest, wherein the first location of interest is one of a first virtual location and a first non-virtual location; and concurrently with the generating the first navigatable virtual view of the first location of interest, generating, at the processor, a second navigatable virtual view corresponding to a current physical position of an object, such that real-time sight at the current physical position is enabled within the second navigatable virtual view. 1. A computer usable storage medium having instructions embodied therein that when executed cause a computer system to perform a method for interpreting meaning of a dialogue between a plurality of agents , wherein said plurality of agents comprises at least one of one or more automatons and one or more humans , said method comprising:accessing, by a processor, a dialogue between said plurality of agents;accessing, by said processor, input associated with a behavior of said plurality of agents and an interaction between said plurality of agents;comparing, by said processor, received input to a script type; andbased on said comparing, determining, by said processor, a meaning of said dialogue.2. The computer usable storage medium of claim 1 , wherein said method further comprises:based on said determining said meaning, generating, at said processor, a response instruction.3. The computer usable storage medium of claim 2 , wherein said generating a response instruction comprises:generating a response instruction that instructs a verbal response.4. The computer usable storage medium of claim 2 , wherein said generating a response comprises:generating a response instruction that instructs a non-verbal response.5. 
The computer usable storage medium of claim 1 , wherein said accessing a dialogue between said ...

19-09-2013 publication date

METHOD FOR PROVIDING HUMAN INPUT TO A COMPUTER

Number: US20130241823A1
Author: Pryor Timothy R.
Assignee: Apple Inc.

The invention provides a method for providing human input to a computer which allows a user to interact with a display connected to the computer. The method includes the steps of placing a first target on a first portion of the user's body, using an electro-optical sensing means, sensing data related to the location of the first target and data related to the location of a second portion of the user's body, the first and second portions of the user's body being movable relative to each other, providing an output of the electro-optical sensing means to the input of the computer, determining the location of the first target and the location of the second portion of the user's body, and varying the output of the computer to the display based upon the determined locations for contemporaneous viewing by the user. 1. A system for performing operations based on detected targets , comprising:a sensing device configured for collecting data on one or more targets; and receiving the data from the sensing device,', 'determining a location of each of the one or more targets, and', 'performing an operation based on the location of the targets and their relationship to each other., 'a computing device in communication with the sensing device, the computing device capable of'}2. The system of claim 1 , wherein each target is configured for attachment to a user.3. The system of claim 1 , wherein the computing device is further capable of determining a spatial relationship between the one or more targets and performing the operation based on the determined spatial relationship.4. The system of claim 2 , wherein the computing device is further capable of identifying one or more user body parts to which a target has been attached.5. The system of claim 4 , wherein the computing device is further capable of compiling corresponding position and motion data of the one or more identified user body parts.6. The system of claim 5 , wherein the computing device is further capable of ...

19-09-2013 publication date

SYSTEM FOR FAST, PROBABILISTIC SKELETAL TRACKING

Number: US20130243255A1
Assignee: MICROSOFT CORPORATION

A system and method are disclosed for recognizing and tracking a user's skeletal joints with a NUI system. The system includes one or more experts for proposing one or more skeletal hypotheses each representing a user pose within a given frame. Each expert is generally computationally inexpensive. The system further includes an arbiter for resolving the skeletal hypotheses from the experts into a best state estimate for a given frame. The arbiter may score the various skeletal hypotheses based on different methodologies. The one or more skeletal hypotheses resulting in the highest score may be returned as the state estimate for a given frame. It may happen that the experts and arbiter are unable to resolve a single state estimate with a high degree of confidence for a given frame. It is a further goal of the present system to capture any such uncertainty as a factor in how a state estimate is to be used. 1. In a system including a computing environment coupled to a capture device for capturing image data from a field of view of the capture device , the image data representing a position of a user , a method of estimating user body position comprising:(a) receiving image data from the field of view;(b) applying one or more computer models for generating body part proposals from the image data; and(c) analyzing the one or more computer models produced in said step (b) by one or more methodologies to choose at least one computer model of the one or more computer models estimated to provide the best body part proposal.2. The method of claim 1 , further comprising the step (d) of generating a confidence level in the one or more computer models estimated to be the best representation of the state information.3. The method of claim 1 , said step (b) of applying one or more computer models comprising the step of applying one or more computer models based on the image data from the field of view captured in a current frame.4. 
The method of claim 3, said step of applying one or ...
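The expert/arbiter structure in the abstract (several cheap experts each propose skeletal hypotheses; an arbiter scores them and returns the best one with a confidence) can be sketched generically. The experts, scoring function, and the crude normalized confidence are all assumptions for illustration:

```python
# Hedged sketch of the expert/arbiter pattern described in the abstract.

def arbitrate(frame, experts, score_fn):
    """Return (best_hypothesis, confidence) over all expert proposals."""
    hypotheses = []
    for expert in experts:
        hypotheses.extend(expert(frame))          # each expert may propose several
    scored = [(score_fn(frame, h), h) for h in hypotheses]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    best_score, best = scored[0]
    total = sum(s for s, _ in scored) or 1.0
    return best, best_score / total               # crude normalized confidence
```

A low returned confidence captures the case the abstract mentions, where no single state estimate can be resolved with high certainty for a frame.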

10-10-2013 publication date

SYSTEM AND METHOD FOR COMBINING THREE-DIMENSIONAL TRACKING WITH A THREE-DIMENSIONAL DISPLAY FOR A USER INTERFACE

Number: US20130265220A1
Assignee: Omek Interactive, Ltd.

Systems and methods for combining three-dimensional tracking of a user's movements with a three-dimensional user interface display is described. A tracking module processes depth data of a user performing movements, for example, movements of the user's hand and fingers. The tracked movements are used to animate a representation of the hand and fingers, and the animated representation is displayed to the user using a three-dimensional display. Also displayed are one or more virtual objects with which the user can interact. In some embodiments, the interaction of the user with the virtual objects controls an electronic device. 1. A method comprising:acquiring depth data of a subject with a depth sensor;tracking the subject's movements using the acquired depth data;causing to be displayed in a three-dimensional display the subject's movements.2. The method of claim 1 , wherein the acquired depth data of the subject includes at least one of the subject's hands.3. The method of claim 2 , wherein movements of the at least one of the subject's hands are mapped to a first virtual object claim 2 , and corresponding movements of the first virtual object are shown in the three-dimensional display.4. The method of claim 1 , further comprising causing to be displayed in the three-dimensional display a second virtual object claim 1 , wherein the subject interacts with the second virtual object through the subject's movements claim 1 , and further wherein interaction of the subject with the second virtual object permits the subject to interact with an electronic device.5. The method of claim 4 , wherein at least some of the subject's movements occlude the second virtual object in the three-dimensional display.6. The method of claim 1 , wherein tracking the subject's movements comprises:identifying a plurality of features of the subject in the acquired depth data;obtaining three-dimensional positions corresponding to the identified plurality of features from the depth data; ...

17-10-2013 publication date

METHOD AND APPARATUS FOR DETECTING TALKING SEGMENTS IN A VIDEO SEQUENCE USING VISUAL CUES

Number: US20130271361A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

A method and system for detecting temporal segments of talking faces in a video sequence using visual cues. The system detects talking segments by classifying talking and non-talking segments in a sequence of image frames using visual cues. The present disclosure detects temporal segments of talking faces in video sequences by first localizing face, eyes, and hence, a mouth region. Then, the localized mouth regions across the video frames are encoded in terms of integrated gradient histogram (IGH) of visual features and quantified using evaluated entropy of the IGH. The time series data of entropy values from each frame is further clustered using online temporal segmentation (K-Means clustering) algorithm to distinguish talking mouth patterns from other mouth movements. Such segmented time series data is then used to enhance the emotion recognition system. 1. A method for detecting and classifying talking segments of a face in a visual cue in order to infer emotions , the method comprising:normalizing and localizing a face region for each frame of the visual cue;obtaining a histogram of structure descriptive features of the face for the frame in the visual cue;deriving an integrated gradient histogram (IGH) from the descriptive features for the frame in the visual cue;computing entropy of the IGH for the frame in the visual cue;performing segmentation of the IGH to detect talking segments for the face in the visual cues; andanalyzing the segments for the frame in the visual cues to infer emotions.2. The method of claim 1 , wherein the normalizing comprises employing pupil location to normalize a face image of the face for the frame of the visual cue.3. The method of claim 1 , wherein the localizing comprises employing nose location to crop a mouth region in an accurate manner for the frame of the visual cue.4. 
The method of claim 1 , wherein the deriving of the IGH comprises obtaining an uncertainty involved in the IGH representation for talking segments as compared ...
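The entropy-and-clustering stage of the pipeline (per-frame histograms reduced to entropy values, then the 1-D entropy series split into talking vs. non-talking clusters with K-means, K=2) can be sketched as below. The tiny k-means and its initialization are assumptions; the patent's online temporal segmentation may differ:

```python
# Hedged sketch: Shannon entropy of a histogram, plus a minimal 1-D 2-means.
import math

def histogram_entropy(hist):
    """Shannon entropy (bits) of a possibly unnormalized histogram."""
    total = float(sum(hist))
    probs = [h / total for h in hist if h > 0]
    return -sum(p * math.log(p, 2) for p in probs)

def two_means(values, iters=20):
    """Cluster 1-D values into two groups; return a list of 0/1 labels."""
    c = [min(values), max(values)]                 # initial centroids (assumed)
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [0 if abs(v - c[0]) <= abs(v - c[1]) else 1 for v in values]
        for k in (0, 1):
            members = [v for v, l in zip(values, labels) if l == k]
            if members:
                c[k] = sum(members) / len(members)
    return labels
```

Frames whose entropy falls in the higher cluster would then be labelled as talking segments, since talking mouths produce more varied gradient histograms.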

31-10-2013 publication date

VIRTUAL DESKTOP COORDINATE TRANSFORMATION

Number: US20130285903A1
Assignee:

A computing system includes a depth image analysis module to track a world-space pose of a human in a fixed, world-space coordinate system. The computing system further includes an interaction module to establish a virtual interaction zone with an interface-space coordinate system that tracks the human. The computing system also includes a transformation module to transform a position defined in the fixed, world-space coordinate system to a position defined in the interface-space coordinate system. 1. A method of providing a virtual desktop , the method comprising:tracking a world-space pose of a human;establishing a virtual interaction zone with an interface-space coordinate system that tracks the human and moves relative to a fixed, world-space coordinate system; andtransforming a position defined in the fixed, world-space coordinate system to a position defined in the moveable, interface-space coordinate system.2. The method of claim 1 , further comprising outputting a display signal for displaying an interface element at a desktop-space coordinate corresponding to the position defined in the moveable claim 1 , interface-space coordinate system.3. The method of claim 1 , where transforming the position defined in the fixed claim 1 , world-space coordinate system to the position defined in the moveable claim 1 , interface-space coordinate system includes applying a transformation matrix to the position defined in the fixed claim 1 , world-space coordinate system.4. The method of claim 1 , where transforming the position defined in the fixed claim 1 , world-space coordinate system to the position defined in the moveable claim 1 , interface-space coordinate system includes modeling the position in a model-space coordinate system corresponding to the fixed claim 1 , world-space coordinate system and applying a transformation matrix to the position defined in the model-space coordinate system.5. 
The method of claim 1 , where the interface-space coordinate system is ...
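The core transformation step of claims 3 and 4 (applying a transformation matrix to a position defined in the fixed, world-space coordinate system) is standard homogeneous-coordinate algebra. The concrete matrix below, a pure translation of an assumed interaction-zone origin, is an example, not the patent's matrix:

```python
# Hedged sketch: apply a 4x4 row-major homogeneous transform to a 3-D point.

def apply_transform(matrix, point):
    """Map an (x, y, z) point through a 4x4 homogeneous transform."""
    x, y, z = point
    vec = (x, y, z, 1.0)
    out = [sum(matrix[r][c] * vec[c] for c in range(4)) for r in range(4)]
    return (out[0] / out[3], out[1] / out[3], out[2] / out[3])

# Example: interface space whose origin sits at (1.0, 1.5, 2.0) in world space.
world_to_interface = [
    [1, 0, 0, -1.0],
    [0, 1, 0, -1.5],
    [0, 0, 1, -2.0],
    [0, 0, 0,  1.0],
]
```

Because the interface-space frame tracks the human, this matrix would be recomputed each frame from the tracked world-space pose before being applied to the hand position.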

31-10-2013 publication date

COMPUTER VISION BASED TWO HAND CONTROL OF CONTENT

Number: US20130285908A1
Assignee:

A system and method for manipulating displayed content based on computer vision by using a specific hand posture. In one embodiment a mode is enabled in which content can be manipulated in a typically two handed manipulation (such as zoom and rotate). 121.-. (canceled)22. A method for computer vision based control of displayed content , the method comprisingobtaining images of a field of view;identifying within the images a user's hand;detecting a posture of the user's hand;detecting movement of the user's hand between a first image and a second image in the images of the field of view;according to the detected movement of the hand, moving a graphical element on a display; andif a change of posture of the hand is detected then not moving the graphical element on the display.23. The method of comprising detecting a change of posture of the user's hand by detecting a different posture of the user's hand24. The method of comprising detecting a change of posture of the user's hand by checking a transformation between the first and second image and detecting the change of posture based on the transformation.25. The method of wherein if the transformation is a non-rigid transformation then not moving the graphical element on the display and if the transformation is a rigid transformation then moving the graphical element on the display.26. The method according to wherein the posture of the user's hand comprises a palm with all fingers extended or a closed hand.27. The method according to wherein the different posture comprises a closed hand or a palm with all fingers extended.28. The method according to wherein the graphical element is a cursor.29. The method of comprising identifying the different hand posture and selecting an object on a display based on the detection of the different hand posture.30. The method of comprising dragging the object based on the detection of the different hand posture and based on movement of the hand in the different posture. 
The present ...

07-11-2013 publication date

TERMINAL AND METHOD FOR IRIS SCANNING AND PROXIMITY SENSING

Number: US20130293457A1
Author: YOON Sungjin
Assignee: SAMSUNG ELECTRONICS CO., LTD.

A method of iris scanning and proximity sensing includes: receiving selection information of an operation mode; sensing an iris including emitting light having an amount of light of a first level, and photographing an iris using the emitted light if a selected operation mode is an iris scanning mode; sensing a proximity including emitting light having an amount of light of a second level, and sensing information on whether an object has approached using the emitted light if the selected operation mode is a proximity sensing mode; and recognizing the iris using the photographed iris image, and performing a function according to the sensed information on whether the object has approached, and the first level has a value higher than the value of the second level.

07-11-2013 publication date

PUSH ACTUATION OF INTERFACE CONTROLS

Number: US20130293471A1
Author: Langridge Adam Jethro
Assignee:

A computing system translates a world space position of a hand of a human target to a screen space cursor position of a user interface. When the cursor overlaps a button in the user interface, the computing system actuates the button in response to a movement of the hand in world space that changes the cursor position along a z-axis regardless of an initial z-axis position of the cursor.
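The key property in this abstract is that actuation depends on a *change* of the cursor's z position, regardless of its initial z position. A relative-displacement rule captures that; the class name, threshold, and push direction below are assumptions:

```python
# Hedged sketch: a button that fires on relative z displacement while hovered.

class PushButton:
    def __init__(self, push_threshold=0.15):
        self.push_threshold = push_threshold
        self.start_z = None        # z where the hand first hovered the button

    def update(self, over_button, hand_z):
        """Feed one frame; return True on the frame the button actuates."""
        if not over_button:
            self.start_z = None    # leaving the button resets the gesture
            return False
        if self.start_z is None:
            self.start_z = hand_z  # initial z is irrelevant; only delta counts
            return False
        if self.start_z - hand_z >= self.push_threshold:  # push = z decreases
            self.start_z = None    # re-arm after actuation
            return True
        return False
```

The same push distance triggers the button whether the hand starts close to the sensor or far from it, matching the "regardless of an initial z-axis position" language.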

07-11-2013 publication date

System and Method For Determining High Resolution Positional Data From Limited Number of Analog Inputs

Number: US20130297251A1
Assignee:

A system and method for determining the position of an object in a space includes positioning the object within the overlapping detection fields of a plurality of analog proximity sensors, wherein the proximity sensors produce an output signal having a signal strength related to the proximity of the object to the sensors. The strength of the output signal produced by each analog proximity sensor can be detected and a position for the object established based on the relative signal strengths produced by the proximity sensors. The system and method have particular application with devices for gestural control, for example gestural controlled dimmer switches, where some data manipulation is required to generate high-resolution positional data to activate the device. 1. A method for determining the position of an object in a space from a plurality of proximity sensors having overlapping detection fields and known relative positions within the space comprising:positioning the object within the overlapping detection fields of the proximity sensors, wherein the proximity sensors are analog sensors that produce an output signal having a signal strength related to the proximity of the object to the sensors,detecting the strength of the output signal produced by each object proximity sensor in response to the presence of the object in the overlapping detection fields of the proximity sensors,determining a position, px, for the object based on the relative signal strengths of the proximity sensors, andgenerating a signal output that is representative of the object's position in the overlapping detection fields of the proximity sensors.2. The method of further comprising the step of using the generated signal output that is representative of the object's position to activate an adjustment control of a device.3. The method of wherein the generated signal output that is representative of the object's position is used to activate a dimmer switch.4. 
The method of wherein the step ...
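One common way to realise "determining a position, px, for the object based on the relative signal strengths" is a signal-strength-weighted centroid of the known sensor positions. This is an illustrative assumption, not necessarily the patented computation:

```python
# Hedged sketch: weighted-centroid position estimate from analog sensors.

def weighted_position(sensor_positions, signal_strengths):
    """Estimate object position px from per-sensor output strengths."""
    total = float(sum(signal_strengths))
    if total == 0:
        return None                    # object outside every detection field
    return sum(p * s for p, s in zip(sensor_positions, signal_strengths)) / total
```

Because the estimate interpolates between sensor locations, a small number of overlapping analog sensors yields much finer positional resolution than the sensor count alone, which is exactly what a gestural dimmer control needs.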

14-11-2013 publication date

CONTROL SYSTEM WITH INPUT METHOD USING RECOGNITION OF FACIAL EXPRESSIONS

Number: US20130300650A1
Author: LIU Hung-Ta
Assignee:

Disclosure is related to a control system with an input method using recognition of facial expressions. The system includes an image capturing unit, an image processing unit, a database, and a computing unit. The image capturing unit captures an input image having a facial expression when a user uses lip language. The image processing unit, connected with the image capturing unit, is used to receive and recognize the facial expression shown in the input image. The database stores a plurality of reference images and each of which indicates a corresponding control command. The computing unit, connected with the image processing unit and the database, performs comparison between the facial expression recognized by the image processing unit and the reference images retrieved from the database. The result of comparison finds out the control command which is used to operate an electronic device by this control system. 1. A control system with input method using recognition of facial expressions , comprising:an image capturing unit, retrieving an input image having a user's facial expression, which is the user's lip language or mouth motion when he is talking;an image processing unit, connected with the image capturing unit, receiving and recognizing the facial expression of the input image;a database, recording a plurality of reference images and at least one control command corresponding to every reference image;a computing unit, connected with the image processing unit and the database, receiving the facial expression recognized by the image processing unit and comparing the reference images in the database with the recognized facial expression, for acquiring the control command with respect to the reference image corresponding to the recognized facial expression;wherein, the control system controls an electronic device according to the control command with respect to the inputted facial expression.2. The control system according to claim 1 , further comprising:a ...

14-11-2013 publication date

TOUCH DISPLAY DEVICE AND DRIVING METHOD THEREOF

Number: US20130301195A1
Assignee: WINTEK CORPORATION

A touch display device includes a display panel, a plurality of first sensing-series and a plurality of second sensing-series. The display panel includes a first substrate, a second substrate, a plurality of pixel structures and the display medium located between the first substrate and the second substrate. The first sensing-series are on the first substrate. The second sensing-series are on the second substrate. 1. A touch display device , comprising: a first substrate, having a first inner-surface and a first outer-surface opposite to the first inner-surface;', 'a second substrate, having a second inner-surface and a second outer-surface opposite to the second inner-surface;', 'a plurality of pixel structures, located between the first inner-surface and the second inner-surface;', 'a display medium, located between the first substrate and the second substrate;, 'a display panel, comprisinga plurality of first sensing-series, wherein the first outer-surface is located between the first sensing-series and the first inner-surface, and the first sensing-series are insulated from each other; anda plurality of second sensing-series, wherein the second outer-surface is located between the second sensing-series and the second inner-surface, and the second sensing-series are electrically insulated from each other.2. The touch display device as claimed in claim 1 , further comprisinga plurality of first electrode-structures, located between the first inner-surface and the pixel structures, wherein the first electrode-structures are electrically insulated from each other;a plurality of second electrode-patterns, located between the pixel structures and the second inner-surface, wherein the second electrode-patterns are electrically insulated from each other; andthe first sensing-series crossed with the first electrode-structures, and the second sensing-series crossed with the second electrode-patterns.3. The touch display device as claimed in claim 2 , wherein the first ...

21-11-2013 publication date

System and Method for Linking Real-World Objects and Object Representations by Pointing

Number: US20130307846A1
Author: David Caduff
Assignee: Ipointer Inc

A system and method are described for selecting a unique object or feature in the system user's three-dimensional (“3-D”) environment and identifying it in a two-dimensional (“2-D”) virtual representation of the same object or feature in a virtual environment. The system and method may be incorporated in a mobile device that includes position and orientation sensors to determine the pointing device's position and pointing direction. The mobile device incorporating the present invention may be adapted for wireless communication with a computer-based system that represents static and dynamic objects and features that exist or are present in the system user's 3-D environment. The mobile device incorporating the present invention will also have the capability to process information regarding a system user's environment and to calculate specific measures for pointing accuracy and reliability.
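A simple pointing test consistent with this abstract is angular: an object is a candidate when the angle between the device's pointing direction and the direction from the device to the object falls under a tolerance. The tolerance and all names are assumptions for illustration:

```python
# Hedged sketch: is an object along the device's pointing ray?
import math

def angle_between(u, v):
    """Angle in degrees between two 3-D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

def is_pointed_at(device_pos, pointing_dir, object_pos, tolerance_deg=5.0):
    """True when object_pos lies within tolerance of the pointing direction."""
    to_object = tuple(o - d for o, d in zip(object_pos, device_pos))
    return angle_between(pointing_dir, to_object) <= tolerance_deg
```

The abstract's "measures for pointing accuracy and reliability" could then be derived from this angular deviation together with the sensors' position and orientation error bounds.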

19-12-2013 publication date

MOBILE DEVICE OPERATION USING GRIP INTENSITY

Number: US20130335319A1
Assignee:

Mobile device operation using grip intensity. An embodiment of a mobile device includes a touch sensor to detect contact or proximity by a user of the mobile device; a memory to store indicators of grip intensity in relation to the touch sensor; and a processor to evaluate contact to the touch sensor. The processor is to compare a contact with the touch sensor to the indicators of grip shape and firmness to determine grip intensity, and the mobile device is to receive an input for a function of the mobile device based at least in part on determined grip intensity for the mobile device. 1. A mobile device comprising:a touch sensor to detect contact or proximity by a user of the mobile device;a memory to store indicators of grip intensity in relation to the touch sensor; anda processor to evaluate contact to the touch sensor;wherein the processor is to compare a contact with the touch sensor to the indicators of grip shape and firmness to determine grip intensity; andwherein the mobile device is to receive an input for a function of the mobile device based at least in part on determined grip intensity for the mobile device.2. The mobile device of claim 1 , wherein the touch sensor is a display screen for the mobile device.3. The mobile device of claim 1 , wherein the indicators of grip intensity include data regarding shapes of contact for the touch sensor.4. The mobile device of claim 3 , wherein the data regarding contact shape includes data describing a shape of a hand supporting the mobile device in a resting position.5. The mobile device of claim 1 , wherein the function is a scrolling function of the mobile device for scrolling displayed data.6. The mobile device of claim 5 , wherein a speed of the scrolling of displayed data is to be modified based on the grip intensity.7. The mobile device of claim 1 , wherein the input for the function of the mobile device is a request for the activation and deactivation of the function based on grip intensity.8. 
The mobile ...
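Claim 6 above says the scrolling speed is modified based on grip intensity. A minimal sketch of such a mapping, with a linear gain and clamping that are pure assumptions, might look like:

```python
# Hedged sketch: scroll speed as a function of normalized grip intensity.

def scroll_speed(grip_intensity, base_speed=100.0, gain=2.0):
    """Lines per second for a grip intensity expected in [0..1]."""
    g = min(max(grip_intensity, 0.0), 1.0)   # clamp out-of-range sensor values
    return base_speed * (1.0 + gain * g)
```

A firmer grip scrolls faster; a relaxed grip falls back to the base speed.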

19-12-2013 publication date

COMPUTER VISION BASED TWO HAND CONTROL OF CONTENT

Number: US20130335324A1
Assignee: POINTGRAB LTD.

A system and method for manipulating displayed content based on computer vision by using a specific hand posture. In one embodiment a mode is enabled in which content can be manipulated in a typically two handed manipulation (such as zoom and rotate). 1. A method for computer vision based control of displayed content , the method comprising:obtaining an image of a field of view;identifying within the image two hands of a user by detecting hand shapes;detecting a first posture of at least one of the hands; andbased on the detection of the first posture and the identification of the two hands, generating a command to track movement of the two hand shapes to manipulate the displayed content based on a relative position of one hand compared to the other hand according to the movement of the user's hands.2. The method according to claim 1 , comprising:detecting a second posture of at least one of the hands, said second posture being different than the first posture; anddisabling the command to select and manipulate the displayed content based on detection of the second posture.3. The method according to claim 1 , wherein the first posture comprises a hand with the tips of all fingers brought together such that the tips touch or almost touch each other.4. The method according to claim 2 , wherein the second posture comprises a palm with all fingers extended.5. The method according to claim 1 , wherein the manipulation of displayed content comprises zooming in and out of the content or rotating the content or a combination thereof.6. The method according to claim 1 , comprising displaying at least one icon correlating to at least one of the user's two hands and enabling to move the icon according to the hand's movement.7. The method according to claim 2 , comprising displaying a first icon when the first posture is detected and displaying a second icon when the second posture is detected.8. 
The method according to claim 2 , comprising:detecting a change of posture of the ...

19-12-2013 publication date

Dynamic adaptation of imaging parameters

Number: US20130335576A1
Assignee: INFINEON TECHNOLOGIES AG

Representative implementations of devices and techniques provide adaptable settings for imaging devices and systems. Operating modes may be defined based on whether an object is detected within a preselected area. One or more parameters of emitted electromagnetic radiation may be dynamically adjusted based on the present operating mode.

26-12-2013 publication date

ELECTRONIC DEVICES

Number: US20130342450A1
Author: Tetsuhashi Hideaki
Assignee: NEC Corporation

The device's sensors detect motion of a target, the shape of a target, or both. A display section displays an icon that denotes that the sensors are detecting the target. 1. An electronic device comprising:a sensor that detects motion of a target or shape of a target or motion and shape of a target; anda display section that displays an icon that denotes that said sensor is detecting the target.2. The electronic device as set forth in claim 1 , further comprising:a plurality of said sensors,wherein said display section displays an icon corresponding to a sensor, that is detecting the target, from among the sensors.3. The electronic device as set forth in claim 1 ,wherein said sensor is a camera having an image capturing function and detects motion of the target or shape of the target or motion and shape of the target being captured.4. The electronic device as set forth in claim 3 ,wherein if said sensor detects a face, said display section displays an instruction that causes the position of the face being captured to be moved to a position where said sensor needs to be placed to detect the face.5. The electronic device as set forth in claim 1 ,wherein said display section displays said icon in a peripheral display area of the screen.6. The electronic device as set forth in claim 1 ,wherein while said sensor is detecting the motion of the target, said display section displays an icon that depicts the motion.7. The electronic device as set forth in claim 1 ,wherein said display section is replaced with a sound output section that outputs a sound that denotes that said sensor is detecting the target.8. A notification method that notifies a user who uses an electronic device of information claim 1 , comprising processes of:causing a sensor to detect motion of a target or shape of a target or motion and shape of a target; anddisplaying an icon that denotes that said sensor is detecting the target.9.
The notification method as set forth in claim ...

26-12-2013 publication date

Gesture based human interfaces

Number: US20130343601A1
Assignee: Hewlett Packard Development Co LP

A method for implementing gesture-based human interfaces includes segmenting data generated by an IR camera imaging an active area and detecting objects in that area. The objects are distinguished as either island objects or peninsula objects, and a human hand is identified from among the peninsula objects. The motion of the human hand is tracked as a function of time, and a gesture made by the human hand is recognized.
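The island/peninsula distinction described in the abstract can be sketched as a connected-component pass over a binary segmentation mask: a blob that touches the frame border (like an arm reaching into the active area) is a peninsula, while a fully enclosed blob is an island. A minimal pure-Python illustration; the function name and the 0/1 mask format are assumptions, not taken from the patent:

```python
from collections import deque

def classify_blobs(mask):
    """Label connected foreground regions in a binary mask (list of lists of 0/1)
    and classify each as an 'island' (no border contact) or a 'peninsula'
    (touches the frame edge, e.g. an arm entering the active area)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # BFS flood fill to collect one connected component
                q, pixels, touches_border = deque([(y, x)]), [], False
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    pixels.append((cy, cx))
                    if cy in (0, h - 1) or cx in (0, w - 1):
                        touches_border = True
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                blobs.append(("peninsula" if touches_border else "island", pixels))
    return blobs
```

Hand identification would then pick among the peninsula blobs (e.g. by shape features), which this sketch leaves out.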

02-01-2014 publication date

METHODS AND SYSTEMS FOR INTERACTION WITH AN EXPANDED INFORMATION SPACE

Number: US20140002351A1
Author: Nakayama Ryuji

A method for presenting content on a display screen is provided. The method initiates with presenting first content on the display screen, the first content being associated with a first detected viewing position of a user that is identified in a region in front of the display screen. At least part of second content is presented on the display screen along with the first content, the second content being progressively displayed along a side of the display screen in proportional response to a movement of the user from the first detected viewing position to a second detected viewing position of the user.

1. A method for presenting content on a display screen, comprising: presenting first content on the display screen, the first content being associated with a first detected viewing position of a user that is identified in a region in front of the display screen; presenting at least part of second content on the display screen along with the first content, the second content being progressively displayed along a side of the display screen in proportional response to a movement of the user from the first detected viewing position to a second detected viewing position of the user.
2. The method of claim 1, wherein the at least part of the second content is presented as a blended transition with the first content.
3. The method of claim 1, wherein the side of the display screen is substantially opposite a lateral direction relative to the display defined by the change from the first detected viewing position to the second detected viewing position.
4. The method of claim 1, wherein the proportional response is linear or non-linear.
5. The method of claim 1, wherein the second content is presented as a perspective projection to provide an appearance on the display screen of the second content oriented at an angle relative to the first content.
6. The method of claim 1, wherein in response to a radial movement of the user relative to the display screen, the radial ...
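The "proportional response" of claim 4 is essentially a mapping from the viewer's lateral displacement to the revealed width of the second content. A minimal sketch; the position units, clamping, and the smoothstep choice for the non-linear case are illustrative assumptions:

```python
def revealed_width(pos, pos_a, pos_b, screen_w, linear=True):
    """Width of screen (in pixels) over which second content is progressively
    revealed as the viewer moves from position pos_a toward pos_b.
    Clamped so the reveal never exceeds the full screen width."""
    t = (pos - pos_a) / (pos_b - pos_a)   # normalized progress of the movement
    t = max(0.0, min(1.0, t))
    if not linear:
        t = t * t * (3 - 2 * t)           # smoothstep: one non-linear response
    return t * screen_w
```

Halfway through the movement, half the reveal width is shown; outside the range, the reveal saturates at zero or the full width.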

02-01-2014 publication date

Tracking Poses of 3D Camera Using Points and Planes

Number: US20140002597A1

A method registers data using a set of primitives including points and planes. First, the method selects a first set of primitives from the data in a first coordinate system, wherein the first set of primitives includes at least three primitives and at least one plane. A transformation is predicted from the first coordinate system to a second coordinate system. The first set of primitives is transformed to the second coordinate system using the transformation. A second set of primitives is determined according to the first set of primitives transformed to the second coordinate system. Then, the second coordinate system is registered with the first coordinate system using the first set of primitives in the first coordinate system and the second set of primitives in the second coordinate system. The registration can be used to track a pose of a camera acquiring the data.
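For the point primitives, the final registration step reduces to the classic least-squares (Kabsch) alignment of corresponding points. A hedged numpy sketch of just that part; the plane primitives the patent also uses would contribute analogous normal-alignment constraints to the same least-squares system:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation R and translation t mapping point set P onto Q
    (rows are corresponding 3D points), so that Q ≈ P @ R.T + t."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Given a predicted transformation, one would first transform the first primitive set, find nearest correspondences in the second coordinate system, and then solve for the refining pose as above.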

09-01-2014 publication date

USER INTERFACE METHOD AND APPARATUS THEREFOR

Number: US20140009388A1

A User Interface method and an apparatus therefor are provided. The method includes dividing a photographing region into a plurality of portions of the photographing region, acquiring corresponding information from respective image signals output while being classified according to the divided portion of the photographing region, verifying a command corresponding to the acquired information, and performing an operation according to the verified command.

1. A method comprising: dividing a photographing region into a plurality of portions of the photographing region; acquiring corresponding information from respective image signals output while being classified according to the divided portion of the photographing region; verifying a command corresponding to the acquired information; and performing an operation according to the verified command.
2. The method of claim 1, wherein the acquiring of the corresponding information from the respective image signals output while being classified according to the divided portion of the photographing region comprises: acquiring information from an image signal output from a previous portion of the photographing region among the divided portion of the photographing region; and acquiring information from an image signal output from a next portion of the photographing region after a threshold time.
3. The method of claim 1, wherein the acquiring of the corresponding information from the respective image signals output while being classified according to the divided portion of the photographing region comprises: performing an operation according to a command corresponding to information acquired from an image signal output from a previous portion of the photographing region among the divided portion of the photographing region; and acquiring information from an image signal output from a next portion of the photographing region when the operation is completed.
4. The method of claim 1, wherein the acquiring of the corresponding information ...

09-01-2014 publication date

Electronic Information Terminal and Display Method of Electronic Information Terminal

Number: US20140009389A1

This electronic information terminal includes a display panel having a rectangular display area, an angle sensor detecting rotation angle information, and a display control portion controlling switching of the display orientation of the display panel on the basis of a detection result of the angle sensor. The display control portion is configured not to switch the display orientation when the turning angle velocity of the display panel based on the rotation angle information is smaller than a first threshold.

1. An electronic information terminal comprising: a display panel having a rectangular display area; an angle sensor detecting rotation angle information; and a display control portion controlling switching of a display orientation of the display panel on the basis of a detection result of the angle sensor, wherein the display control portion is configured not to switch the display orientation when a turning angle velocity of the display panel based on the rotation angle information is smaller than a first threshold.
2. The electronic information terminal according to claim 1, wherein the angle sensor is configured to be capable of detecting an inclination angle of the display panel with respect to a horizontal plane, and the display control portion is configured not to switch the display orientation when the turning angle velocity is smaller than the first threshold and the inclination angle is smaller than a second threshold.
3. The electronic information terminal according to claim 2, wherein the angle sensor includes: a first measurement portion measuring an inclination angle of a vertical axis along a short-side direction of the display panel with respect to the horizontal plane, and a second measurement portion measuring an inclination angle of a horizontal axis along a longitudinal direction of the display panel with respect to the horizontal plane, and the display control portion includes a determination portion determining the display orientation of the display ...
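The suppression rule in claims 1 and 2 amounts to a simple gate on the measured turning angle velocity (and, optionally, the panel's tilt). A minimal sketch; all threshold values, units, and names are illustrative assumptions, not values from the patent:

```python
def should_switch(orientation_now, orientation_candidate, angular_velocity,
                  vel_threshold=1.0, tilt=None, tilt_threshold=None):
    """Decide whether to rotate the display. Mirrors the abstract's rule:
    suppress the switch while the turning angular velocity is below a first
    threshold, and optionally while the panel's inclination to the horizontal
    plane is below a second threshold (e.g. the device lying flat)."""
    if orientation_candidate == orientation_now:
        return False                 # nothing to switch
    if angular_velocity < vel_threshold:
        return False                 # claim 1: too slow a turn, likely incidental
    if tilt is not None and tilt_threshold is not None and tilt < tilt_threshold:
        return False                 # claim 2: panel nearly horizontal
    return True
```

This gate avoids spurious orientation flips when the terminal is merely jostled or lying flat on a table.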

23-01-2014 publication date

Augmented reality apparatus

Number: US20140022283A1
Assignee: UNIVERSITY HEALTH NETWORK

Augmented reality apparatus (10) for use during intervention procedures on a trackable intervention site (32) is disclosed herein. The apparatus (10) comprises a data processor (28), a trackable projector (14) and a medium (30) including machine-readable instructions executable by the processor (28). The projector (14) is configured to project an image overlaying the intervention site (32) based on instructions from the data processor (28). The machine-readable instructions are configured to cause the processor (28) to determine a spatial relationship between the projector (14) and the intervention site (32) based on a tracked position and orientation of the projector (14) and on a tracked position and orientation of the intervention site (32). The machine-readable instructions are also configured to cause the processor to generate data representative of the image projected by the projector based on the determined spatial relationship between the projector and the intervention site.
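Determining the projector-to-site spatial relationship from two tracked poses is a composition of rigid transforms. A sketch assuming both poses are reported by the tracker as 4x4 homogeneous matrices in a common world frame (the helper and composition order are illustrative):

```python
import numpy as np

def pose(translation):
    """Build a 4x4 homogeneous transform with identity rotation (demo helper)."""
    T = np.eye(4)
    T[:3, 3] = translation
    return T

def projector_in_site_frame(T_world_proj, T_world_site):
    """Pose of the tracked projector expressed in the tracked intervention-site
    frame: invert the site pose and compose with the projector pose."""
    return np.linalg.inv(T_world_site) @ T_world_proj
```

The resulting transform is what the processor would use to warp the projected image so it lands registered on the site.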

23-01-2014 publication date

Depth mapping using time-coded illumination

Number: US20140022348A1
Author: Alexander Shpunt
Assignee: PRIMESENSE LTD

A method for depth mapping includes illuminating an object with a time-coded pattern and capturing images of the time-coded pattern on the object using a matrix of detector elements. The time-coded pattern in the captured images is decoded using processing circuitry embedded in each of the detector elements so as to generate respective digital shift values, which are converted into depth coordinates.
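A common realization of a time-coded pattern is a Gray-code stripe sequence: each frame contributes one bit per pixel, the decoded code gives the projected stripe index, and the resulting shift (disparity) triangulates to depth via Z = f·B/d. A minimal sketch; the Gray coding and the calibration constants are assumptions (the patent performs the decoding in per-pixel embedded circuitry):

```python
def gray_to_binary(bits):
    """Decode one pixel's captured bit sequence (MSB first) from a Gray-coded
    time-multiplexed stripe pattern into the projected stripe index."""
    b = bits[0]
    out = [b]
    for g in bits[1:]:
        b ^= g               # each binary bit is the XOR prefix of Gray bits
        out.append(b)
    return int("".join(map(str, out)), 2)

def depth_from_shift(pixel_col, stripe_index, focal_px, baseline_m, stripe_width_px=1):
    """Convert the shift between camera column and projector stripe position
    into a depth coordinate via the triangulation relation Z = f * B / disparity."""
    disparity = abs(pixel_col - stripe_index * stripe_width_px)
    return focal_px * baseline_m / disparity if disparity else float("inf")
```

Gray coding is preferred over plain binary because adjacent stripes differ in only one bit, limiting decoding errors at stripe boundaries.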

30-01-2014 publication date

APPARATUS, SYSTEM, AND METHOD FOR AUTOMATIC IDENTIFICATION OF SENSOR PLACEMENT

Number: US20140032165A1

A location of a sensor is determined by: (1) receiving time series data including components in a plurality of dimensions, wherein the time series data correspond to measurements of the sensor that is applied to a subject; (2) determining a plurality of subsequences associated with the time series data, wherein each of the plurality of subsequences represents a characteristic pattern projected along one of the plurality of dimensions; (3) identifying a correlated subset of the plurality of subsequences as at least one instance of an activity of the subject; and (4) based on features of the correlated subset, determining the location of the sensor as applied to the subject.

1. A non-transitory computer-readable storage medium, comprising executable instructions to: receive time series data including components in a plurality of dimensions, wherein the time series data correspond to measurements of a sensor that is applied to a subject; determine a plurality of subsequences associated with the time series data, wherein each of the plurality of subsequences represents a characteristic pattern projected along one of the plurality of dimensions; identify a correlated subset of the plurality of subsequences as at least one instance of an activity of the subject; and based on features of the correlated subset, determine a location of the sensor as applied to the subject.
2. The non-transitory computer-readable storage medium of claim 1, wherein the executable instructions to determine the plurality of subsequences include executable instructions to: determine at least one subsequence as disposed between two consecutive stable regions in the time series data.
3. The non-transitory computer-readable storage medium of claim 1, wherein the executable instructions to identify the correlated subset include executable instructions to: perform graph clustering on vertices corresponding to the plurality of subsequences.
4. The non-transitory computer-readable storage medium of claim 3, ...
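Claim 2's rule, taking a candidate subsequence as the run of samples between two consecutive stable regions, can be sketched for one dimension of the time series. Here "stable" is approximated as any window whose spread stays within a tolerance; the window size and tolerance are illustrative assumptions:

```python
def stable_mask(signal, win=3, tol=0.05):
    """Mark samples that lie inside a 'stable' region: any window of length
    `win` whose max-min spread is at most `tol` (assumed definition)."""
    n = len(signal)
    mask = [False] * n
    for i in range(n - win + 1):
        w = signal[i:i + win]
        if max(w) - min(w) <= tol:
            for j in range(i, i + win):
                mask[j] = True
    return mask

def subsequences(signal, win=3, tol=0.05):
    """Return (start, end) index pairs of runs disposed between two
    consecutive stable regions in the one-dimensional time series."""
    mask = stable_mask(signal, win, tol)
    runs, start = [], None
    for i, stable in enumerate(mask):
        if not stable and start is None:
            start = i
        elif stable and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(signal)))
    return runs
```

Each extracted run would then become a vertex for the graph clustering step of claim 3.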

06-02-2014 publication date

METHOD OF SIMULATING AN IMAGING EFFECT ON A DIGITAL IMAGE USING A COMPUTING DEVICE

Number: US20140035953A1
Author: DuBois Charles L.
Assignee: FUJIFILM NORTH AMERICA CORPORATION

A method of digitally simulating an imaging effect on a base digital image being displayed by a computing device, which is representative of a print or document including the image effect, is provided. In one aspect, the imaging effect is displayed in association with the base digital image as a function of the position of the display of the computing device relative to a first or normalized position of the display. In another aspect, the imaging effect is displayed in association with the base digital image as a function of the position of an object being captured by a camera of the computing device. In both instances, the imaging effect becomes more visible as the display of the device moves further from its first position, or the object captured by the camera moves from its original position.

1. A method for simulating an image effect on a first digital image, the first digital image being displayed on a display of a computing device, the method comprising: providing the first digital image; providing a second digital image representative of the image effect; associating the second digital image with the first digital image; displaying only the first digital image when the display of the computing device is positioned in a first orientation; and displaying the second digital image in association with the first digital image when the display of the computing device is moved to a second orientation.
2. A method in accordance with claim 1, wherein the step of associating the second digital image with the first digital image includes overlaying the second digital image and the first digital image.
3. A method in accordance with claim 2, wherein the step of displaying the second digital image in association with the first digital image includes displaying the second digital image on top of the first digital image.
4. A method in accordance with claim 1, wherein the first digital image is edited prior to being associated with the second digital image.
5. A method in ...
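The "becomes more visible as the display moves further from its first position" behavior can be sketched as an opacity ramp driven by the tilt angle, applied as an alpha blend of the effect image over the base image. The ramp constant and per-pixel blend are illustrative assumptions:

```python
def effect_opacity(angle_deg, max_angle_deg=45.0):
    """Opacity of the overlaid effect image as a function of how far the
    display has tilted from its first (normalized) orientation: invisible
    at rest, fully visible at `max_angle_deg` (assumed calibration)."""
    return max(0.0, min(1.0, abs(angle_deg) / max_angle_deg))

def blend(base_px, effect_px, alpha):
    """Per-channel alpha blend of one effect pixel over one base pixel."""
    return tuple(round((1 - alpha) * b + alpha * e) for b, e in zip(base_px, effect_px))
```

At the first orientation only the base image is shown (alpha 0), matching claim 1's "displaying only the first digital image".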

06-02-2014 publication date

Context-driven adjustment of camera parameters

Number: US20140037135A1
Assignee: Omek Interactive Ltd

A system and method for adjusting the parameters of a camera based upon the elements in an imaged scene are described. The frame rate at which the camera captures images can be adjusted based upon whether the object of interest appears in the camera's field of view, to reduce the camera's power consumption. The exposure time can be set based on the distance of an object from the camera to improve the quality of the acquired camera data.
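The two adjustments in the abstract can be sketched as a small policy function: poll slowly when nothing of interest is in view, and scale exposure with object distance when it is. All constants (frame rates, the exposure-per-meter factor, the cap) are illustrative assumptions, not values from the patent:

```python
def camera_params(object_in_view, distance_m=None):
    """Context-driven parameter choice: (frames per second, exposure in ms).
    Dropping to a low idle frame rate saves power; farther objects return
    less light, so exposure is lengthened with distance, up to a cap."""
    fps = 30 if object_in_view else 5            # idle mode polls slowly
    if object_in_view and distance_m is not None:
        exposure_ms = min(33.0, 2.0 * distance_m)  # capped at one 30 fps frame time
    else:
        exposure_ms = 10.0                         # default exposure
    return fps, exposure_ms
```

In a real pipeline the presence test would come from the previous frame's detection result, closing the loop between scene content and capture settings.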
