
Total found: 1444. Displayed: 100.
Publication date: 18-04-2013

SYSTEMS AND METHODS FOR EYE TRACKING USING RETROREFLECTOR-ENCODED INFORMATION

Number: US20130094712A1
Author: Said Amir
Assignee:

Embodiments of the present invention are directed to eye tracking systems and methods that can be used in uncontrolled environments and under a variety of lighting conditions. In one aspect, an eye tracking system (200) includes a light source (204) configured to emit infrared ("IR") light, and an optical sensor (206) disposed adjacent to the light source and configured to detect IR light. The system also includes one or more retroreflectors (210) disposed on headgear. The one or more retroreflectors are configured to reflect the IR light back toward the light source. The reflected IR light is captured as IR images by the optical sensor. The IR images provide information regarding the location and head orientation of a person wearing the headgear.

1. An eye tracking system (200) comprising:
a light source (204) configured to emit infrared ("IR") light;
an optical sensor (206) disposed adjacent to the light source and configured to detect IR light; and
one or more retroreflectors (210) disposed on headgear, wherein the one or more retroreflectors are configured to reflect the IR light back toward the light source, and wherein the reflected IR light is captured as IR images by the optical sensor, the IR images providing information regarding the location and head orientation of a person wearing the headgear.
2. The system of claim 1, wherein the one or more retroreflectors disposed on the headgear are arranged to produce an identifiable reflection pattern in the IR images.
3. The system of claim 1, wherein each of the one or more retroreflectors disposed on the headgear is configured to produce identifiable shapes in the IR images.
4. The system of claim 1, wherein each of the one or more retroreflectors disposed on the headgear is configured to produce identifiable shapes in the ...
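The claimed retroreflector pattern admits a simple geometric reading: once the IR blobs are detected, aligning them to the known headgear layout yields head orientation and position. A minimal 2-D sketch using Kabsch (Procrustes) alignment; the function name, the 2-D restriction, and the in-plane-rotation model are our assumptions, not the patent's method:

```python
import numpy as np

def head_pose_from_blobs(image_pts, pattern_pts):
    """Align detected retroreflector blobs (N x 2) to the known headgear
    pattern (N x 2); returns in-plane rotation angle and translation."""
    ic, pc = image_pts.mean(0), pattern_pts.mean(0)
    # Cross-covariance of the centered point sets (Kabsch algorithm).
    H = (pattern_pts - pc).T @ (image_pts - ic)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    angle = np.arctan2(R[1, 0], R[0, 0])
    return angle, ic - pc @ R.T
```

In the full system the blob correspondences would come from the identifiable shapes and reflection pattern of claims 2 and 3.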

Publication date: 18-04-2013

MOVING OBJECT DETECTION DEVICE

Number: US20130094759A1
Assignee: OSAKA UNIVERSITY

A moving object detection device includes a window setting unit configured to set a window having a predetermined volume in a video, an orientation of spatial intensity gradient calculation unit configured to calculate, for each pixel included in the window, an orientation of spatial intensity gradient, a spatial histogram calculation unit configured to calculate a spatial histogram that is a histogram of the orientation of spatial intensity gradient within the window, an orientation of temporal intensity gradient calculation unit configured to calculate, for each pixel included in the window, an orientation of temporal intensity gradient, a temporal histogram calculation unit configured to calculate a temporal histogram that is a histogram of an orientation of temporal intensity gradient within the window, and a determination unit configured to determine whether or not the moving object is included within the window based on the spatial histogram and the temporal histogram.

1.-13. (canceled)
14. A moving object detection device which detects a moving object from a video, the moving object detection device comprising:
a window setting unit configured to set a window having a predetermined volume in the video that is a three-dimensional image in which two-dimensional images are arranged in a temporal axis direction;
an orientation of spatial intensity gradient calculation unit configured to calculate, for each pixel included in the window, an orientation of spatial intensity gradient that is an orientation of spatial gradient of intensity;
a spatial histogram calculation unit configured to calculate a spatial histogram that is a histogram of the orientation of spatial intensity gradient within the window;
an orientation of temporal intensity gradient calculation unit configured to calculate, for each pixel included in the window, an orientation of temporal intensity gradient that is an orientation of temporal gradient of intensity;
a temporal histogram calculation unit ...
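The pipeline above (per-pixel gradient orientations inside a spatio-temporal window, then two histograms) can be sketched in a few lines. Note one loud assumption: the patent does not define how the temporal "orientation" is formed, so here we take the angle between the temporal derivative and the spatial gradient magnitude:

```python
import numpy as np

def orientation_histograms(window, n_bins=8):
    """Histogram the orientations of spatial and temporal intensity
    gradients inside a (T, H, W) intensity volume."""
    gy, gx = np.gradient(window, axis=(1, 2))    # spatial intensity gradients
    spatial_orient = np.arctan2(gy, gx)
    gt = np.gradient(window, axis=0)             # temporal intensity gradient
    # Our simplification: angle of temporal change vs. spatial magnitude.
    temporal_orient = np.arctan2(gt, np.hypot(gx, gy) + 1e-9)
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    spatial_hist, _ = np.histogram(spatial_orient, bins=bins)
    temporal_hist, _ = np.histogram(temporal_orient, bins=bins)
    return (spatial_hist / spatial_hist.sum(),
            temporal_hist / temporal_hist.sum())
```

The determination unit would then feed both normalized histograms to a classifier or compare them against reference histograms.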

Publication date: 09-05-2013

Method for Detecting a Target in Stereoscopic Images by Learning and Statistical Classification on the Basis of a Probability Law

Number: US20130114858A1

A method for the detection of a target present in at least two images of the same scene acquired simultaneously by different cameras comprises, under development conditions, a prior target-learning step, said learning step including a step of modeling of the data X corresponding to an area of interest in the images by a distribution law P such that P(X) = P(X_L, X_D, X_M) = P(X_L)·P(X_D)·P(X_M), where X_L are the luminance data in the area of interest, X_D are the depth data in the area of interest, and X_M are the movement data in the area of interest. The method also comprises, under operating conditions, a simultaneous step of classification of objects present in the images, the target being regarded as detected when an object is classified as being one of the targets learnt during the learning step. Application: monitoring, assistance and security on the basis of stereoscopic images.

2.-6. (canceled)
7. The method as claimed in claim 1, in which the modeling step comprises a step of calculation, in each pixel of the area of interest in one of the two images, of the values mx and my of the movement of the pixel position according to two orthogonal directions in relation to a preceding image, the movement data X_M of the area of interest being modeled by a plurality of α-periodic and/or β-periodic Von Mises-Fischer laws, the periods of which differ from one another, each law describing a distribution of the normal unit vectors on the planes of equation z = mx·x + my·y corresponding to all of the pixels of the area of interest, P(X_M) being obtained by the product of the different Von Mises-Fischer laws modeling the movement data X_M.
8. The method as claimed in claim 1, in which the modeling step comprises a step of calculation, in each pixel of the area of interest in the disparity image corresponding to the two images, of the values px and py of the derivatives of the pixel depth according to two orthogonal directions, the ...
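The factorization P(X) = P(X_L)·P(X_D)·P(X_M), with P(X_M) itself a product of circular laws, is convenient to evaluate in log space. A small numerical sketch; the single-parameter Von Mises density and the (mu, kappa) parameterization are an illustrative reading, not the patent's exact α/β-periodic construction:

```python
import numpy as np

def von_mises_pdf(theta, mu, kappa):
    # Circular density on [-pi, pi); np.i0 is the modified Bessel function I0.
    return np.exp(kappa * np.cos(theta - mu)) / (2 * np.pi * np.i0(kappa))

def log_p_motion(angles, laws):
    """log P(X_M) as the product of Von Mises laws over the
    motion-orientation data; `laws` is a list of (mu, kappa) pairs."""
    return sum(np.sum(np.log(von_mises_pdf(angles, mu, kappa)))
               for mu, kappa in laws)
```

The full log-likelihood used for classification would add the corresponding log P(X_L) and log P(X_D) terms, mirroring the independence assumption in the abstract.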

Publication date: 16-05-2013

SYSTEMS AND METHODS FOR ANALYSIS OF VIDEO CONTENT, EVENT NOTIFICATION, AND VIDEO CONTENT PROVISION

Number: US20130121527A1
Assignee: CERNIUM CORPORATION

A method for remote event notification over a data network is disclosed. The method includes receiving video data from any source, analyzing the video data with reference to a profile to select a segment of interest associated with an event of significance, encoding the segment of interest, and sending to a user a representation of the segment of interest for display at a user display device. A further method for sharing video data based on content according to a user-defined profile over a data network is disclosed. The method includes receiving the video data, analyzing the video data for relevant content according to the profile, consulting a profile to determine a treatment of the relevant content, and sending data representative of the relevant content according to the treatment.

1. A non-transitory processor-readable medium storing code representing instructions to be executed by a processor, the code comprising code to cause the processor to:
receive video data having an event of significance;
analyze the video data with reference to a profile to select a segment of interest associated with the event of significance;
identify a parameter associated with the event of significance;
encode the segment of interest to produce an encoded segment of interest; and
send to a user over a network a representation of the encoded segment of interest for display at a user display device based on a notice priority associated with the parameter of the event of significance.
2. The non-transitory processor-readable medium of claim 1, wherein the code to cause the processor to send includes code to cause the processor to send data associated with the encoded segment of interest to the user display device via the network to trigger the user display device to download the encoded segment of interest from a storage server.
3. The non-transitory processor-readable medium of claim 1, wherein the representation of the encoded segment of interest includes a compressed portion of the ...
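The profile-driven selection of a "segment of interest" can be sketched as picking contiguous runs of frames whose analytics score clears a user-defined threshold. The `Profile` fields, the scalar per-frame score, and the minimum segment length are all illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Profile:
    min_score: float          # user-defined significance threshold (assumed)
    notice_priority: str      # e.g. "high" / "low" (assumed)

def select_segments(frame_scores, profile, min_len=3):
    """Return (start, end) frame ranges whose scores stay at or above the
    profile threshold for at least min_len frames."""
    segments, start = [], None
    for i, s in enumerate(frame_scores):
        if s >= profile.min_score and start is None:
            start = i                         # segment opens
        elif s < profile.min_score and start is not None:
            if i - start >= min_len:
                segments.append((start, i))   # segment closes
            start = None
    if start is not None and len(frame_scores) - start >= min_len:
        segments.append((start, len(frame_scores)))
    return segments
```

Each selected range would then be encoded and sent with the profile's notice priority, as in claim 1.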

Publication date: 16-05-2013

MICROSCOPY METHOD FOR IDENTIFYING BIOLOGICAL TARGET OBJECTS

Number: US20130121530A1
Authors: BICKERT Stefan, HING Paul
Assignee: SENSOVATION AG

The invention relates to a microscopy method for identifying target objects (32) having a predetermined optical property in material (6) to be analyzed.

1. A microscopy method for identifying target objects (32) in material (6) to be analyzed, the target objects (32) having a predetermined optical property, wherein
in a first step, an overview field of view (36) of a microscope optical system (14) is directed to an overview region of a sample carrier (4) containing the material (6) to be analyzed;
the material (6) to be analyzed is illuminated by an illumination unit (16), which irradiates the sample carrier (4) from outside a field of view tube (48), and is recorded by a camera (8);
the material (6) to be analyzed is optically analyzed for the optical property such that even a single target object (32) having the predetermined optical property is identified as such in the material (6) to be analyzed;
in a subsequent second step, a target field of view (52) of the microscope optical system (14) is aligned with a target region around the target object (32) using the known position of the target object (32); and
the identified target object (32) is analyzed in a manner differentiated for various additional optical properties.
2. The microscopy method according to claim 1, characterized in that the target field of view (52) has a magnification by at least a factor of 3 over the overview field of view (36).
3. The microscopy method according to claim 1, characterized in that the identification of the target object (32) in the first step is carried out with a first optical method and the differentiation of the target object (32) in the second step is carried out with a second optical method which is different from the first.
4. The microscopy method according to claim 1, characterized in that, in the second step, the material (6) to be analyzed is ...

Publication date: 16-05-2013

POSITION AND ORIENTATION MEASUREMENT APPARATUS, POSITION AND ORIENTATION MEASUREMENT METHOD, AND STORAGE MEDIUM

Number: US20130121592A1
Assignee: CANON KABUSHIKI KAISHA

An apparatus comprises: extraction means for extracting an occluded region in which illumination irradiated onto the target object is occluded in an obtained two-dimensional image; projection means for projecting a line segment that constitutes a three-dimensional model onto the two-dimensional image based on approximate values of position/orientation of the target object; association means for associating a point that constitutes the projected line segment with a point that constitutes an edge in the two-dimensional image; determination means for determining whether the associated point that constitutes an edge in the two-dimensional image is present within the occluded region; and measurement means for measuring the position/orientation of the target object based on a distance on the two-dimensional image between the point that constitutes the projected line segment and the point that constitutes the edge, the points being associated as the pair, and a determination result.

1. A position and orientation measurement apparatus comprising:
storage means for storing a three-dimensional model of a target object;
two-dimensional image obtaining means for obtaining a two-dimensional image of the target object;
extraction means for extracting an occluded region in which illumination irradiated onto the target object is occluded in the two-dimensional image;
projection means for projecting a line segment that constitutes the three-dimensional model onto the two-dimensional image based on approximate values of position and orientation of the target object;
association means for associating a point that constitutes the projected line segment with a point that constitutes an edge in the two-dimensional image as a pair;
determination means for determining whether the associated point that constitutes an edge in the two-dimensional image is present within the occluded region; and
position and orientation measurement means for measuring the position and orientation of the target object ...

Publication date: 30-05-2013

METHOD FOR COUNTING OBJECTS AND APPARATUS USING A PLURALITY OF SENSORS

Number: US20130136307A1
Authors: Kim Sungjin, Yu Jaeshin
Assignee:

According to one embodiment of the present invention, a method for counting objects involves using an image sensor and a depth sensor, and comprises the steps of: acquiring an image from the image sensor and acquiring a depth map from the depth sensor, the depth map indicating depth information on the subject in the image; acquiring boundary information on objects in the image; applying the boundary information to the depth map to generate a corrected depth map; identifying the depth pattern of the objects from the corrected depth map; and counting the identified objects.

1. A method for counting objects using an image sensor and a depth sensor, the method comprising:
acquiring an image by the image sensor, and acquiring a depth map from the depth sensor, the depth map indicating depth information on the subject in the image;
acquiring edge information of objects of interest in the image;
generating a corrected depth map by applying the edge information to the depth map;
identifying depth patterns of the objects of interest from the corrected depth map; and
counting the identified objects of interest.
2. The method of claim 1, wherein in the step of identifying depth patterns of the objects of interest, depth patterns of a reference object stored in a depth pattern database are compared with the depth patterns of the corrected depth map.
3. The method of claim 1, further comprising tracking movement of the identified depth patterns of the objects of interest.
4. A camera apparatus comprising:
an image sensor configured to generate an image captured with respect to objects of interest;
a depth sensor configured to generate a depth map indicating depth information on the subject in the image; and
a controller configured to acquire edge information of the objects of interest in the image, configured to generate a corrected depth map by applying the edge information to the depth map, configured to identify depth patterns of the objects of interest ...
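The edge-corrected depth map can be illustrated with a toy counter: zero out depth at object boundaries, then count connected regions of similar depth. This region-growing stand-in replaces the patent's depth-pattern matching against a database, and the tolerance parameter is our assumption:

```python
import numpy as np

def count_objects(depth, edges, depth_tol=0.5):
    """Count connected same-depth regions after cutting the depth map
    along the supplied boolean edge mask."""
    corrected = depth.astype(float)
    corrected[edges] = np.nan                 # boundaries split touching objects
    visited = np.zeros(depth.shape, dtype=bool)
    count = 0
    for seed in zip(*np.nonzero(~np.isnan(corrected))):
        if visited[seed]:
            continue
        count += 1
        stack = [seed]                        # flood fill one region
        visited[seed] = True
        while stack:
            r, c = stack.pop()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < depth.shape[0] and 0 <= nc < depth.shape[1]
                        and not visited[nr, nc]
                        and not np.isnan(corrected[nr, nc])
                        and abs(corrected[nr, nc] - corrected[r, c]) <= depth_tol):
                    visited[nr, nc] = True
                    stack.append((nr, nc))
    return count
```

Without the edge cut, two touching objects at similar depth would merge into one region, which is exactly the failure the corrected depth map avoids.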

Publication date: 06-06-2013

Method of tracking targets in video data

Number: US20130142432A1
Author: Trevor Michael WOOD
Assignee: Oxford University Innovation Ltd

A method of tracking targets in video data. At each of a sequence of time steps, a set of weighted probability distribution components is derived. At each time step the following steps are performed. First, a new set of components is derived from the components of the previous time step in accordance with a predefined motion model for the targets. The video at the current time step is then analysed to obtain a set of measurements, and the new set of components is updated using the measurements in accordance with a predefined measurement model. Finally, the set of components derived at each time step is analysed to derive a set of tracks for the targets.
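The predict/update cycle over weighted probability distribution components can be sketched with 1-D Gaussian components. The random-walk motion model, the nearest-measurement association, and the scalar state are simplifying assumptions; the patented method's actual models are not specified here:

```python
import numpy as np

def predict(components, motion_var):
    """Propagate each (weight, mean, variance) component through a
    random-walk motion model: variance grows, mean unchanged."""
    return [(w, m, v + motion_var) for (w, m, v) in components]

def update(components, measurements, meas_var):
    """Reweight and correct each component against its nearest measurement
    (a simplified stand-in for the full measurement model)."""
    updated = []
    for w, m, v in components:
        z = min(measurements, key=lambda z: abs(z - m))
        k = v / (v + meas_var)                        # Kalman gain
        lik = np.exp(-0.5 * (z - m) ** 2 / (v + meas_var))
        updated.append((w * lik, m + k * (z - m), (1 - k) * v))
    total = sum(w for w, _, _ in updated) or 1.0
    return [(w / total, m, v) for (w, m, v) in updated]
```

Track extraction would then read off, at each time step, the components whose weights exceed a threshold.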

Publication date: 13-06-2013

KEY-FRAME SELECTION FOR PARALLEL TRACKING AND MAPPING

Number: US20130148851A1
Assignee: CANON KABUSHIKI KAISHA

A method of selecting a first image from a plurality of images for constructing a coordinate system of an augmented reality system. A first image feature in the first image corresponding to the feature of the marker is determined. A second image feature in a second image is determined based on a second pose of a camera, said second image feature having a visual match to the first image feature. A reconstructed position of the feature of the marker in a three-dimensional (3D) space is determined based on positions of the first and second image features and the first and second camera poses. A reconstruction error is determined based on the reconstructed position of the feature of the marker and a pre-determined position of the marker.

1. A method of selecting a key image from a plurality of images to construct a coordinate system of an augmented reality system, the method comprising:
determining a first image feature in a first image corresponding to a marker, wherein the first image feature is determined based on a first camera pose of a camera used to capture the first image;
determining a second image feature in a second image based on a second camera pose of a camera used to capture the second image, wherein the second image feature includes a visual match to the first image feature;
determining a reconstructed position of the marker in a three-dimensional (3D) space based on positions of the first image feature in the first image and the second image feature in the second image, and the first and second camera poses;
determining a reconstruction error based on the determined reconstructed position of the marker and a pre-determined position of the marker in the 3D space; and
selecting at least one of the first and second images as the key image for constructing the coordinate system of the augmented reality system in an event that the determined reconstruction error satisfies a pre-determined criterion for constructing the coordinate system.
2. A method according to ...
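The reconstruction error at the heart of this key-frame test can be sketched with standard linear (DLT) triangulation: recover the marker's 3-D position from its projections under the two camera poses, then measure the distance to the marker's known position. This is our reading of the abstract, not Canon's exact formulation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation of one point from two 3x4 projection matrices
    and its 2-D (normalized) image coordinates in each view."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                       # null vector = homogeneous 3-D point
    return X[:3] / X[3]

def reconstruction_error(P1, P2, x1, x2, marker_pos):
    return float(np.linalg.norm(triangulate(P1, P2, x1, x2) - marker_pos))
```

A candidate pair of frames would be accepted as key images when this error falls below the pre-determined criterion.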

Publication date: 13-06-2013

POSITIONING INFORMATION FORMING DEVICE, DETECTION DEVICE, AND POSITIONING INFORMATION FORMING METHOD

Number: US20130148855A1
Assignee: Panasonic Corporation

Provided is a positioning information forming device which improves object detection accuracy. This device comprises a synthesis unit which synthesizes camera distance map information and radar distance map information and generates "synthesized map information". This synthesized map information is used for object detection processing by a detection device. In this way it is possible to improve object detection accuracy by being able to detect objects based on information in which the camera distance map information and radar distance map information have been synthesized. In other words, by synthesizing the camera distance map information and radar distance map information, it is possible to remove unnecessary noise due to reflection from the ground and walls, etc., and therefore set object detection thresholds to low values. It is therefore possible to detect even objects the detection of which was judged to be impossible in the past.

1.-10. (canceled)
11. A positioning information forming apparatus that forms positioning information used for detection of an object on the basis of an image based on information detected by a radar, and on an image taken by a stereo camera, comprising:
a first coordinate transforming section that transforms a coordinate system of radar distance map information associating a coordinate group of the image based on the information with distance information, into a reference coordinate system defined by a parallax axis and one of the coordinate axes of an image plane coordinate system at a virtual position of installation of the positioning information forming apparatus;
a first smoothing section that smooths the transformed radar distance map information by applying a first smoothing filter only in the one of the coordinate axes direction;
a second coordinate transforming section that transforms a coordinate system of camera distance map information associating a coordinate group of the taken image with distance information, into a ...

Publication date: 20-06-2013

DIAGNOSIS ASSISTANCE SYSTEM AND COMPUTER READABLE STORAGE MEDIUM

Number: US20130156267A1
Assignee: Konica Minolta Medical & Graphic, Inc.

Provided is a diagnosis assistance system. The system includes an imaging unit, an analysis unit, an operation unit, and a display unit. The analysis unit extracts a subject region from each of the plurality of image frames generated by the imaging unit, divides the extracted subject region into a plurality of regions, and analyzes the divided regions correlated among the plurality of image frames, thereby calculating a predetermined feature quantity indicating motions of the divided regions. The operation unit allows a user to select a region serving as a display target of an analysis result by the analysis unit. The display unit displays the calculated feature quantity regarding the selected region.

1. A diagnosis assistance system comprising:
an imaging unit which performs dynamic imaging for a subject and which generates a plurality of successive image frames;
an analysis unit which extracts a subject region from each of the plurality of generated image frames, which divides the extracted subject region into a plurality of regions, and which analyzes the divided regions correlated among the plurality of image frames, thereby calculating a predetermined feature quantity indicating motions of the divided regions;
an operation unit which allows a user to select a region serving as a display target of an analysis result by the analysis unit from among the divided regions; and
a display unit which displays the feature quantity regarding the region selected by the operation unit, the feature quantity being calculated by the analysis unit.
2. The diagnosis assistance system of claim 1, wherein the analysis unit further calculates a predetermined feature quantity indicating a motion of a whole of the subject region, and the display unit simultaneously displays the feature quantity indicating a motion of the region selected by the operation unit, the motion being calculated by the analysis unit, and the feature quantity indicating the motion of the whole of the subject region ...

Publication date: 04-07-2013

IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD, AND PROGRAM

Number: US20130170703A1
Author: Tsurumi Shingo
Assignee: SONY CORPORATION

An image processing device for recognizing an object corresponding to a registered image registered beforehand from an imaged image, comprising: an obtaining unit configured to obtain the imaged image; a recognizing unit configured to recognize an object corresponding to the registered image from the imaged image; and a detecting unit configured to detect, based on a registered image corresponding to an object recognized from the imaged image, an area where another object overlaps with the object corresponding to the registered image.

1.-11. (canceled)
12. An image processing device, comprising:
an imaging unit configured to obtain an input image by imaging a subject;
a recognition unit configured to recognize a first object, corresponding to a registered image, from the input image, wherein the registered image is registered beforehand in the image processing device;
an image comparing unit configured to detect, based on the registered image, a first area of the input image where a second object overlaps with the first object; and
a display control unit configured to display an icon in a second area of the input image other than the first area.
13. The image processing device of claim 12, wherein the image comparing unit detects the first area based on a difference between a luminance of the first object and a luminance of the registered image.
14. The image processing device of claim 13, further comprising:
an image correcting unit for correcting at least one of the luminance of the first object and the luminance of the registered image, such that the luminance of the first object is about equal to the luminance of the registered image.
15. The image processing device of claim 14, further comprising:
an icon generation unit configured to:
determine a position of the icon based on a position of the first area; and
supply the icon position to the display control unit.
16. The image processing device of claim 14, wherein:
the recognition unit is ...

Publication date: 11-07-2013

TRAVEL PATH ESTIMATION APPARATUS AND PROGRAM

Number: US20130177211A1
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA

A characteristic point extraction section acquires an image captured by an image capture device and extracts characteristic points from the captured image, a vehicle lane boundary point selection section selects vehicle lane boundary points that indicate vehicle lanes from the extracted characteristic points, a distribution determination section determines the distribution of the vehicle lane boundary points, a system noise setting section sets each system noise based on the distribution of vehicle lane boundary points, and a travel path parameter estimation section stably predicts travel path parameters based on the vehicle lane boundary points, past estimation results, and the system noise that has been set.

1. A travel path estimation apparatus comprising:
an acquisition section for acquiring a captured image of a periphery of a vehicle;
an extraction section for extracting, from the captured image acquired by the acquisition section, characteristic points indicating vehicle lanes;
a setting section for, based on a distribution of the characteristic points extracted by the extraction section, setting system noise expressing variation of travel path parameters when estimating travel path parameters related to a position or an angle of the vehicle itself with respect to a travel path for travel by the vehicle itself and related to a shape or a size of the travel path; and
an estimation section for estimating the travel path parameters by probability signal processing using a discrete time signal based on the characteristic points extracted by the extraction section, a previous estimation result of the travel path parameters, and the system noise set by the setting section.
2. The travel path estimation apparatus of claim 1, wherein the travel path parameters related to the position and the angle of the vehicle itself with respect to the travel path include a lateral position of the vehicle itself with respect to the travel path, a yaw angle with respect to a ...
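The idea of setting system noise from the distribution of lane-boundary points, then feeding it to a probabilistic estimator, can be sketched with a scalar Kalman step. The inverse mapping from point count and spread to noise is purely our heuristic; the patent does not publish its exact rule:

```python
import numpy as np

def system_noise(points_x, base=1e-4, scale=1e-2):
    """Heuristic: few or tightly clustered boundary points constrain the
    fit less, so inflate the system noise."""
    n = len(points_x)
    if n < 2:
        return base + scale
    spread = float(np.ptp(points_x))            # range of the distribution
    return base + scale / (n * (spread + 1e-6))

def estimate(prev_est, prev_var, measurement, meas_var, q):
    # One predict/update step for a travel path parameter
    # (e.g. lateral position), with system noise q.
    var = prev_var + q                          # predict: add system noise
    k = var / (var + meas_var)                  # Kalman gain
    return prev_est + k * (measurement - prev_est), (1 - k) * var
```

With many well-spread boundary points the noise shrinks, so the estimator tracks the previous estimate more tightly, matching the "stably predicts" behavior described in the abstract.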

Publication date: 15-08-2013

VIDEO ANALYTICS CONFIGURATION

Number: US20130208124A1
Assignee: IPSOTEK LTD

Apparatus is disclosed which is operative to analyse a sequence of video frames of a camera view field to track an object in said view field and determine start and end points of said track in said view field. The apparatus also determines a start and end time for the said track corresponding to said start and end points respectively, and stores said start and end points and said start and end times as attributes of said track.

1. Apparatus operative to:
analyze a sequence of video frames of a camera view field to track an object in said view field;
determine start and end points of said track in said view field;
determine a start and end time for the said track corresponding to said start and end points respectively; and
store said start and end points and said start and end times as attributes of said track.
2. Apparatus according to claim 1, further operative on a sequence of video frames of a plurality of surveillance network camera view fields to track an object in respective view fields and store respective start and end points and start and end times as attributes of each said track for respective view fields.
3. Apparatus according to claim 2, further operative to:
determine a temporal relationship between an end time of a track in a first view field and a start time of a track in a second view field;
based on said temporal relationship determine a likelihood value of a transition of said track in said first view field to said track in said second view field; and
store said likelihood value.
4. Apparatus according to claim 3, wherein said temporal relationship is based upon a spatial relationship in physical space between a start point corresponding to said start time and an end point corresponding to said end time.
5. Apparatus according to claim 3, further operative to:
track plural objects in said first and second view fields and determine corresponding plural start and end points;
determine start and end zones for said first and second view fields based on said ...
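The transition likelihood of claim 3 can be illustrated by scoring how well the time gap between a track ending in one view field and a track starting in another matches the expected inter-camera travel time. The triangular weighting and the parameter names are our assumptions for the sketch:

```python
def transition_likelihood(tracks_a, tracks_b, expected_gap, tol):
    """tracks_a / tracks_b are lists of (start_time, end_time) per view
    field; return the best likelihood that a track in A continues in B."""
    scores = []
    for _, end_t in tracks_a:
        for start_t, _ in tracks_b:
            gap = start_t - end_t
            # Only forward-in-time, plausible gaps contribute.
            if 0 <= gap <= expected_gap + tol:
                scores.append(max(0.0, 1.0 - abs(gap - expected_gap) / tol))
    return max(scores, default=0.0)
```

Per claim 4, the expected gap would itself be derived from the physical distance between the end point in the first view field and the start point in the second.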

Publication date: 29-08-2013

RECOGNITION SYSTEM, RECOGNITION METHOD AND COMPUTER READABLE MEDIUM

Number: US20130223680A1
Assignee: TOSHIBA TEC KABUSHIKI KAISHA

A recognition system includes an acquisition module configured to acquire image data generated by an image sensor, a first generation module configured to generate a graphical user interface which contains the image data, and an input module configured to detect an input on the graphical user interface, the input indicating a position designation on the image data. The recognition system further includes a second generation module configured to overlap a frame-line on the image data of the graphical user interface based on the position designation detected by the input module, and a calculation module configured to calculate one or more feature values of an object image within the frame-line.

1. A recognition system, comprising:
an acquisition module configured to acquire image data generated by an image sensor;
a first generation module configured to generate a graphical user interface which contains the image data;
an input module configured to detect an input on the graphical user interface, the input indicating a position designation on the image data;
a second generation module configured to overlap a frame-line on the image data of the graphical user interface based on the position designation detected by the input module; and
a calculation module configured to calculate one or more feature values of an object image within the frame-line.
2. The recognition system according to claim 1, wherein the input module is configured to detect a plurality of inputs, each corresponding to a different position designation, and the second generation module is configured to overlap a plurality of frame-lines on the image data of the graphical user interface, each frame-line corresponding to one of the different position designations.
3. The recognition system according to claim 1, wherein the graphical user interface includes a button, and when the input module detects an input corresponding to the button, the calculation module is configured to start to calculate the one or ...

Publication date: 29-08-2013

ARTICLE RECOGNITION SYSTEM AND ARTICLE RECOGNITION METHOD

Number: US20130223682A1
Assignee: TOSHIBA TEC KABUSHIKI KAISHA

According to embodiments, an article recognition system is disclosed. The article recognition system comprises an image sensor configured to capture an image of an article, and a determining module configured to determine a value indicative of darkness of the captured image and compare the determined value with a reference value. The article recognition system further comprises a changing module configured to change the reference value when the determined value is less than the reference value, and an extracting module configured to identify the article on the basis of the captured image when the determined value is greater than the reference value. 1. An article recognition system comprising:an image sensor configured to capture an image of an article;a determining module configured to determine a value indicative of darkness of the captured image and compare the determined value with a reference value;a changing module configured to change the reference value when the determined value is less than the reference value; andan extracting module configured to identify the article on the basis of the captured image when the determined value is greater than the reference value.2. The article recognition system according to claim 1 , wherein the changing module changes the reference value by changing an output of a light source that reflects light from the article to the image sensor.3. The article recognition system according to claim 1 , further comprising:a similarity calculating module configured to compare a darkness level of the captured image to a darkness level in each of a plurality of stored image data and calculate similarity values, wherein the extracting module identifies the article on the basis of the calculated similarity values of the captured image.4. 
The article recognition system according to claim 3 , further comprising:a database that holds a plurality of records, each record having a commodity code and a plurality of similarity levels corresponding ...
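The determine/compare/change loop in this abstract can be sketched as follows. The function name, the brightness-boost model of "changing an output of a light source", and all parameter values are illustrative assumptions, not taken from the patent:

```python
def recognize_article(pixels, reference=0.4, max_attempts=3, boost=0.1):
    """Toy capture loop: compare the image's darkness value against a
    reference; if the image is too dark, change the capture conditions
    (modeled here as raising every pixel by `boost`, standing in for a
    brighter light source) and retry; otherwise pass the image on for
    identification."""
    for _ in range(max_attempts):
        value = sum(pixels) / len(pixels)   # determined darkness value
        if value >= reference:
            return "extract", pixels        # bright enough: identify article
        pixels = [min(1.0, p + boost) for p in pixels]  # re-capture, brighter
    return "too_dark", pixels
```

A real system would re-trigger the image sensor instead of brightening pixels in software; the loop structure is what matters here.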

Publication date: 05-09-2013

Camera to Track an Object

Number: US20130229529A1
Inventor: Lablans Peter
Assignee:

Methods and apparatus to create and display screen stereoscopic and panoramic images are disclosed. Methods and apparatus are provided to generate multiple images that are combined into a stereoscopic or a panoramic image. A controller provides correct camera settings for different conditions. A controller rotationally aligns images of lens/sensor units that are rotationally misaligned. A compact controllable platform holds and rotates a camera. A remote computing device with a camera and a digital compass tracks an object causing the camera in the platform to track the object. 1. A method for tracking an object with a camera , comprising:the camera removably positioned in a holder attached to an actuator attached to a housing, the actuator enabled to rotate the holder relative to the housing, the camera recording an image of the object;a second camera located at a remote location including a display displaying the image of the object recorded by the camera and an image of the object recorded by the second camera being part of a mobile computing device including a digital compass and a processor; anda user tracking the object with the second camera causing the camera to track the object based on data from the digital compass.2. The method of claim 1 , wherein the image of the object recorded by the camera and the image of the object recorded by the second camera are displayed within a window displayed on the display.3. The method of claim 1 , wherein the mobile computing device is a smart phone.4. The method of claim 1 , wherein the mobile computing device provides data that controls the actuator.5. The method of claim 1 , further comprising: determining an initial azimuth of the housing.6. The method of claim 1 , further comprising: determining a plurality of positions of the actuator relative to a neutral position.7. The method of claim 1 , further comprising estimating a speed of the second camera by the processor.8. 
The method of claim 7 , wherein an error angle ...

Publication date: 12-09-2013

METHOD AND APPARATUS FOR AUTOMATED PLANT NECROSIS

Number: US20130235183A1
Inventor: Redden Lee Kamp
Assignee: BLUE RIVER TECHNOLOGY, INC.

A method of distinguishing individual plants within a row of plants, including directing radiation at the row of plants at an angle selected to illuminate a portion of the plant and cast a shadow at the plant center, collecting an image from the radiation reflected off of two or more contiguous plants with a detector, identifying a continuous foreground region indicative of a plant within the image, identifying points of interest within the region, classifying the points of interest as plant centers and non-plant centers, and segmenting the region into sub-regions, each sub-region encompassing a single point of interest classified as a plant center. 1. A method of distinguishing individual plants within a row of plants extending from a ground , comprising:illuminating two contiguous plants within the row at an angle between the ground and a normal vector to the ground with a light;collecting an image of the two plants with a camera oriented with a view plane substantially parallel to the ground;identifying a continuous region indicative of a plant;identifying points of interest within the region based on gradients identified within the region, wherein points of interest comprise portions of the region having gradients greater than a gradient threshold;selecting a first and a second point of interest; andsegmenting the region into a first and a second sub-region encompassing the first and the second selected points of interest, respectively.2. The method of claim 1 , further comprising segmenting the image into a background and a foreground claim 1 , wherein identifying a continuous region indicative of a plant comprises identifying a continuous region indicative of a plant within the foreground.3. The method of claim 1 , further comprising classifying the points of interest as plant centers and non-plant centers with a machine learning algorithm claim 1 , wherein selecting a first and a second point of interest comprises selecting a first and a second point of ...
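The points-of-interest criterion above (portions of the region with gradient magnitude above a threshold) can be sketched with central differences. The function name and the row-major grid representation are assumptions for illustration only:

```python
import math

def points_of_interest(gray, threshold):
    """Return (x, y) pixels of a row-major intensity grid whose
    central-difference gradient magnitude exceeds `threshold` -- the
    abstract's test for candidate plant centers (toy version, ignoring
    the later classification and segmentation steps)."""
    points = []
    for y in range(1, len(gray) - 1):
        for x in range(1, len(gray[0]) - 1):
            gx = (gray[y][x + 1] - gray[y][x - 1]) / 2.0
            gy = (gray[y + 1][x] - gray[y - 1][x]) / 2.0
            if math.hypot(gx, gy) > threshold:
                points.append((x, y))
    return points
```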

Publication date: 12-09-2013

Multifunctional Bispectral Imaging Method and Device

Number: US20130235211A1
Assignee: THALES

A multifunctional device and method for bispectral imaging are provided. The device and method include acquiring a plurality of bispectral images (IBM), each bispectral image being the combination of two acquired images (IM1, IM2) in two different spectral bands, and generating a plurality of images, each of which gives an impression of depth by combining the two acquired images (IM1, IM2) and forming imaging information. The method includes simultaneously processing the plurality of bispectral images in order to generate, in addition to the imaging information, watch information and/or early threat information, comprising the following steps: searching for specific spectrum and time signatures, associated with a particular threat, in the plurality of bispectral images; and detecting a specific object in each bispectral image, and generating a time-tracking of the position of the object in the plurality of images in each spectral band, and the detecting and the tracking of the object forming the watch information. 1.-9. (canceled) 10. A multifunctional bispectral imaging method comprising the steps of: acquiring a plurality of bispectral images (IBM), each bispectral image being the combination of two images (IM1, IM2) acquired in two different spectral bands; generating a plurality of images, each image giving an impression of depth by combining the two images acquired in the two different spectral bands, the plurality of images being imaging information; searching for specific spectrum and time signatures in the plurality of bispectral images, a particular spectral and time signature being associated with a particular threat; and detecting a specific object in each bispectral image, and generating a time-tracking of the position of the object in the plurality of images in each spectral band, the detecting and the tracking of the object forming the watch information; processing, simultaneously, the plurality of bispectral images to generate, in ...

Publication date: 12-09-2013

OBJECT IDENTIFICATION SYSTEM AND METHOD

Number: US20130236053A1
Assignee: TOSHIBA TEC KABUSHIKI KAISHA

According to embodiments, an object identification system is disclosed. The object identification system comprises a dictionary file comprising multiple records, each record including: an object identification code, and one or more standard images, wherein each standard image is related to one of the object identification codes. The object identification system further comprises a computation module configured to calculate a similarity by comparing an image data produced by an image sensor with the standard images in each record, and an identification module configured to identify one or more of the object identification codes based on the calculated similarity. The object identification system further comprises a production module configured to produce a graphical user interface that displays each of one or more standard images that are related to one of the object identification codes specified by a user.

Publication date: 12-09-2013

DETECTING APPARATUS OF HUMAN COMPONENT AND METHOD THEREOF

Number: US20130236057A1
Inventor: CHEN Maolin, JEON Moon-Sik
Assignee: SAMSUNG ELECTRONICS CO., LTD.

Disclosed are an apparatus and a method of detecting a human component from an input image. The apparatus includes a training database (DB) to store positive and negative samples of a human component, an image processor to calculate a difference image for the input image, a sub-window processor to extract a feature population from a difference image that is calculated by the image processor for the positive and negative samples of a predetermined human component stored in the training DB, and a human classifier to detect a human component corresponding to a human component model using the human component model that is learned from the feature population. 1. An image forming system comprising:a photographing apparatus to photograph an image of an object;a detecting apparatus to detect a region of the object corresponding to an object model from a difference image of the object using the object model, wherein the object model is learned from a feature population extracted from a difference image of positive and negative samples of the object;a status parameter computing apparatus to compute status parameters based on the detected object region where the object exists in the image of the object, so as to enable the object to be in the central region of the image;a controller to receive the status parameters and adjust the status of the image;a storing apparatus to store the image of the object; anda display apparatus to display the image of the object.2. The system of further comprising:a marking apparatus to provide the detecting apparatus with a region of the object which is manually marked on the image by a user.3. The system of claim 1 , wherein the controller controls at least one of operations of rotation claim 1 , tilting claim 1 , zooming claim 1 , and selecting a focus region according to the status parameters.4. The system of claim 1 , wherein the controller selects a new region where the object exists as a focus region.5. 
The system of claim 1 , wherein the ...
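The "difference image" the detector operates on can be as simple as an absolute per-pixel difference between two frames; the patent text does not spell out its operator, so the following is a hedged, minimal stand-in:

```python
def difference_image(frame_a, frame_b):
    """Absolute per-pixel difference of two equally sized row-major
    intensity grids -- one common way to build the 'difference image'
    that a human-component classifier is then run on (illustrative)."""
    return [[abs(a - b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]
```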

Publication date: 12-09-2013

System And Process For Detecting, Tracking And Counting Human Objects Of Interest

Number: US20130236058A1
Assignee: ShopperTrak RCT Corporation

A method of identifying, tracking, and counting human objects of interest based upon at least one pair of stereo image frames taken by at least one image capturing device, comprising the steps of: obtaining said stereo image frames and converting each said stereo image frame to a rectified image frame using calibration data obtained for said at least one image capturing device; generating a disparity map based upon a pair of said rectified image frames; generating a depth map based upon said disparity map and said calibration data; identifying the presence or absence of said objects of interest from said depth map and comparing each of said objects of interest to existing tracks comprising previously identified objects of interest; for each said presence of an object of interest, adding said object of interest to one of said existing tracks if said object of interest matches said one existing track, or creating a new track comprising said object of interest if said object of interest does not match any of said existing tracks; updating each said existing track; and maintaining a count of said objects of interest in a given time period based upon said existing tracks created or modified during said given time period.
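The "depth map based upon said disparity map and said calibration data" step follows the standard rectified-stereo relation depth = focal_length x baseline / disparity. A sketch of that single step, with illustrative parameter names:

```python
def disparity_to_depth(disparity_map, focal_px, baseline_m):
    """Convert a per-pixel disparity map (pixels) into depth (meters)
    using the rectified-stereo relation depth = focal * baseline / d.
    Zero or negative disparity (no stereo match) maps to None."""
    return [[focal_px * baseline_m / d if d > 0 else None for d in row]
            for row in disparity_map]
```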

Publication date: 19-09-2013

PERIPHERAL INFORMATION GENERATING APPARATUS, CONVEYANCE, PERIPHERAL INFORMATION GENERATING METHOD, AND COMPUTER-READABLE STORAGE MEDIUM

Number: US20130243247A1
Assignee: SHARP KABUSHIKI KAISHA

The peripheral information generating apparatus includes (i) a projection section for forming a projection pattern L, which at least partially has a continuous profile, on a road by irradiating the road with light, (ii) an image capturing section, and (iii) an image analyzing section for generating peripheral information, which indicates a peripheral situation of the peripheral information generating apparatus and of the road, by analyzing the projection pattern. 1. A peripheral information generating apparatus comprising:a projection section for forming a projection pattern, which at least partially has a continuous profile, on a surface of a projection target by irradiating the projection target with light;an image capturing section for capturing an image of the projection pattern formed on the surface; andan image analyzing section for generating peripheral information, which indicates a peripheral situation of said peripheral information generating apparatus and of the projection target, by analyzing the projection pattern in the image captured by the image capturing section.2. The peripheral information generating apparatus as set forth in claim 1 , wherein:in a case where at least one of (i) an event in the projection target and (ii) the projection pattern moves, the image analyzing section generates the peripheral information for each of all events that pass through a contour of the projection pattern.3. The peripheral information generating apparatus as set forth in claim 1 , wherein:the projection pattern intersects, in at least one location, with an arbitrary straight line passing through a point which exists within the contour of the projection pattern.4. The peripheral information generating apparatus as set forth in claim 1 , wherein:the projection pattern has at least one of a lattice shape and a closed continuous profile.5. 
The peripheral information generating apparatus as set forth in claim 1 , wherein:the projection section emits the light so that ...

Publication date: 19-09-2013

LOITERING DETECTION IN A VIDEO SURVEILLANCE SYSTEM

Number: US20130243252A1
Assignee: BEHAVIORAL RECOGNITION SYSTEMS, INC.

A behavioral recognition system may include both a computer vision engine and a machine learning engine configured to observe and learn patterns of behavior in video data. Certain embodiments may be configured to learn patterns of behavior consistent with a person loitering and generate alerts for same. Upon receiving information of a foreground object remaining in a scene over a threshold period of time, a loitering detection module evaluates whether the object trajectory corresponds to a random walk. Upon determining that the trajectory does correspond, the loitering detection module generates a loitering alert. 1. A method for detecting loitering behavior of objects depicted in a scene captured by a video camera, the method comprising: receiving a trajectory for an object in the scene, wherein the object has been in the scene for a time period, and wherein the trajectory tracks a two-dimensional (2D) path of the object relative to a series of video frames in which the object is depicted; and upon determining that the time period that the object has been in the scene is greater than a threshold time period and determining that the trajectory corresponds to a random walk, generating a loitering alert. 2. The method of claim 1, wherein the object trajectory corresponds to a random walk if D < √(2N), where D is a distance from a starting point in the trajectory to a final point in the trajectory and N is a total number of steps in the trajectory. 3. The method of claim 2, wherein distance D and steps are determined relative to distances between a set of pixels corresponding to the object in the series of video frames, and wherein the steps are determined by a centroid position of the object in each video frame. 4.
The method of claim 1, wherein the trajectory corresponds to a random walk if D < √(2N) and D′ < √(2N′), where D is a distance from a starting point in the trajectory ...
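The claimed D < √(2N) random-walk test translates directly into code. Here the trajectory is a list of per-frame centroid positions in pixels; `min_frames` stands in for the threshold time period (both names are illustrative):

```python
import math

def is_loitering(centroids, min_frames=10):
    """Random-walk loitering test from the claims: with N steps between
    the first and last centroid, the trajectory counts as a random walk
    when the net displacement D satisfies D < sqrt(2N)."""
    if len(centroids) < min_frames:
        return False                         # not in scene long enough
    n = len(centroids) - 1                   # total number of steps
    (x0, y0), (xn, yn) = centroids[0], centroids[-1]
    d = math.hypot(xn - x0, yn - y0)         # start-to-end distance
    return d < math.sqrt(2 * n)
```

Intuitively, a true random walk drifts only on the order of √N, so an object that covers much more net distance than that is passing through, not loitering.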

Publication date: 26-09-2013

Method and a device for objects counting

Number: US20130251197A1
Assignee: NEC China Co Ltd

A method and a device for objects counting in image processing includes acquiring the depth image of any one frame; detecting objects according to the depth image; associating the identical object in different frames to form a trajectory; and determining the number of objects according to the number of trajectories. The devices include an acquisition module for acquiring the depth image of any one frame; a detection module for detecting objects according to the depth image; an association module for associating the identical object in different frames to form a trajectory; a determining module for determining the number of objects according to the number of trajectories. The objects are detected according to the depth image. The identical object in different frames is associated to form a trajectory and the number of objects is determined according to the number of trajectories.
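The associate-then-count idea above can be sketched with a greedy nearest-neighbour matcher. The function name, the matching rule, and `max_dist` are illustrative assumptions; the patent's association module is not specified at this level of detail:

```python
import math

def count_objects(frames, max_dist=2.0):
    """Greedy sketch of the pipeline: each detection in a frame is
    associated with the nearest open trajectory whose last position is
    within `max_dist`, otherwise it starts a new trajectory; the object
    count is the number of trajectories. Each frame is a list of (x, y)
    detections."""
    trajectories = []
    for detections in frames:
        claimed = set()                      # trajectories used this frame
        for px, py in detections:
            best, best_d = None, max_dist
            for i, traj in enumerate(trajectories):
                if i in claimed:
                    continue
                lx, ly = traj[-1]
                d = math.hypot(px - lx, py - ly)
                if d <= best_d:
                    best, best_d = i, d
            if best is None:
                trajectories.append([(px, py)])
                claimed.add(len(trajectories) - 1)
            else:
                trajectories[best].append((px, py))
                claimed.add(best)
    return len(trajectories)
```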

Publication date: 03-10-2013

COMMODITY MANAGEMENT APPARATUS

Number: US20130259305A1
Inventor: Harada Yukio
Assignee: TOSHIBA TEC KABUSHIKI KAISHA

A commodity management apparatus comprises an extraction section configured to extract a plurality of objects shown in a display state image which is obtained by color-photographing a display space in which a plurality of commodities are displayed, wherein the plurality of commodities respectively have labels which indicate objects of different colors according to the timing at which labels are respectively adhered to the commodities, a discrimination section configured to discriminate color of each object extracted by the extraction section, and a counting section configured to respectively count the plurality of objects extracted by the extraction section according to each color discriminated by the discrimination section. 1. A commodity management apparatus , comprising:an extraction section configured to extract a plurality of objects shown in a display state image which is obtained by color-photographing a display space in which a plurality of commodities are displayed, the plurality of commodities respectively having labels which indicate objects of different colors according to each timing period at which labels are respectively adhered to the commodities;a discrimination section configured to discriminate color of each object extracted by the extraction section; anda counting section configured to respectively count the plurality of objects extracted by the extraction section according to each color discriminated by the discrimination section.2. The commodity management apparatus according to claim 1 , further including a generation section configured to generate a list image in which the number of counts counted by the counting section for each color of the objects is shown in association with the color of the objects or the timing periods claim 1 , at which labels are respectively adhered to the commodities claim 1 , relevant to the color of the objects.3. 
The commodity management apparatus according to further including a first memory configured to store ...
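Once labels have been extracted and their colors discriminated, the counting step is a per-color tally. A minimal sketch; `detected_labels` is any iterable of color names produced upstream (an illustrative interface, not the patent's):

```python
from collections import Counter

def count_labels_by_color(detected_labels):
    """Tally the extracted label objects per discriminated color,
    as in the counting section of the abstract."""
    return Counter(detected_labels)
```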

Publication date: 03-10-2013

OBJECT DETECTION METHOD, OBJECT DETECTION APPARATUS, AND PROGRAM

Number: US20130259310A1
Assignee:

An object detection method includes an image acquisition step of acquiring an image including a target object, a layer image generation step of generating a plurality of layer images by one or both of enlarging and reducing the image at a plurality of different scales, a first detection step of detecting a region of at least a part of the target object as a first detected region from each of the layer images, a selection step of selecting at least one of the layer images based on the detected first detected region and learning data learned in advance, a second detection step of detecting a region of at least a part of the target object in the selected layer image as a second detected region, and an integration step of integrating a detection result detected in the first detection step and a detection result detected in the second detection step. 1. An object detection method comprising: an image acquisition step of acquiring an image including a target object; a layer image generation step of generating a plurality of layer images by one or both of enlarging and reducing the image at a plurality of different scales; a first detection step of detecting a region of at least a part of the target object as a first detected region from each of the layer images; a selection step of selecting at least one of the layer images based on the detected first detected region and learning data learned in advance; a second detection step of detecting a region of at least a part of the target object in the layer image selected in the selection step as a second detected region; and an integration step of integrating a detection result detected in the first detection step and a detection result detected in the second detection step. 2.
The object detection method according to claim 1 , wherein the integration step includes determining a result of the integration step as the region of the target object based on an overlap degree between the first detected region and the second detected region. ...
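The layer-image generation step is an image pyramid: the input resampled at several scales, both reducing and enlarging. A toy nearest-neighbour version on a row-major intensity grid (the function name and `scales` default are illustrative):

```python
def layer_images(image, scales=(0.5, 1.0, 2.0)):
    """Build the 'plurality of layer images' by nearest-neighbour
    resampling of a row-major grid at each scale. A toy stand-in for a
    real, properly filtered image pyramid."""
    layers = []
    h, w = len(image), len(image[0])
    for s in scales:
        nh, nw = max(1, int(h * s)), max(1, int(w * s))
        layers.append([[image[min(h - 1, int(y / s))][min(w - 1, int(x / s))]
                        for x in range(nw)] for y in range(nh)])
    return layers
```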

Publication date: 10-10-2013

Keyframe Selection for Robust Video-Based Structure from Motion

Number: US20130266180A1
Inventor: Hailin Jin
Assignee: Adobe Systems Inc

An adaptive technique is described for iteratively selecting and reconstructing keyframes to fully cover an image sequence that may, for example, be used in an adaptive reconstruction algorithm implemented by a structure from motion (SFM) technique. A next keyframe to process may be determined according to an adaptive keyframe selection technique. The determined keyframe may be reconstructed and added to the current reconstruction. A global optimization may be performed on the current reconstruction. One or more outlier points may be determined and removed from the reconstruction. One or more inlier points may be determined and recovered. If the number of inlier points that were added exceeds a threshold, then global optimization may again be performed. If the current reconstruction is a projective construction, self-calibration may be performed to upgrade the projective reconstruction to a Euclidean reconstruction.

Publication date: 17-10-2013

TARGET IDENTIFICATION SYSTEM, TARGET IDENTIFICATION SERVER, AND TARGET IDENTIFICATION TERMINAL

Number: US20130272569A1
Assignee: Hitachi, Ltd.

A computer and a terminal apparatus retain position information about targets. The terminal apparatus includes: a capturing portion that captures an image of the target; a position information acquisition portion that acquires information about a position to capture the target; an orientation information acquisition portion that acquires information about an orientation to capture the target; and a communication portion that transmits the image, the position information, and the orientation information to the computer. The computer identifies at least one first target candidate as a candidate for the captured target from the targets based on the position information about the targets, the acquired position information, and the acquired orientation information. The computer identifies at least one second target candidate from at least the one first target candidate based on a distance from the terminal apparatus to the captured target. 1. A target identification system comprising:a computer connected to a network; anda terminal apparatus connected to the network,wherein the computer includes an interface connected to the network, a processor connected to the interface, and a storage apparatus connected to the processor;wherein the storage apparatus retains position information about a plurality of targets;wherein the terminal apparatus includes:a capturing portion that captures an image of the target;a position information acquisition portion that acquires information about a position to capture an image of the target;an orientation information acquisition portion that acquires information about an orientation to capture an image of the target; anda communication portion that transmits an image captured by the capturing portion, position information acquired by the position information acquisition portion, and orientation information acquired by the orientation information acquisition portion to the computer via the network;wherein the computer identifies at least 
...
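The first-candidate step (keeping stored targets that lie roughly along the capture orientation from the capture position) can be sketched as a bearing test. All names and the tolerance value are illustrative assumptions; the patent does not fix this geometry:

```python
import math

def candidate_targets(targets, cam_pos, cam_azimuth_deg, tolerance_deg=15.0):
    """Keep the stored targets whose bearing from the capture position
    differs from the capture azimuth by at most `tolerance_deg` -- a toy
    version of identifying 'first target candidates' from position and
    orientation information. `targets` maps name -> (x, y) map position."""
    chosen = []
    for name, (tx, ty) in targets.items():
        bearing = math.degrees(math.atan2(ty - cam_pos[1], tx - cam_pos[0]))
        diff = (bearing - cam_azimuth_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
        if abs(diff) <= tolerance_deg:
            chosen.append(name)
    return chosen
```

Narrowing to "second target candidates" would then filter this list by the measured distance to the captured target.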

Publication date: 24-10-2013

IMAGE RECOGNITION DEVICE, IMAGE RECOGNITION METHOD, AND IMAGE RECOGNITION PROGRAM

Number: US20130279746A1
Assignee:

An image recognition device includes an image acquiring unit configured to acquire an image, and an object recognition unit configured to extract feature points from the image acquired by the image acquiring unit, to detect coordinates of the extracted feature points in a three-dimensional spatial coordinate system, and to determine a raster scan region which is used to recognize a target object based on the detection result. 1. An image recognition device comprising:an image acquiring unit configured to acquire an image; andan object recognition unit configured to extract feature points from the image acquired by the image acquiring unit, to detect coordinates of the extracted feature points in a three-dimensional spatial coordinate system, and to determine a raster scan region which is used to recognize a target object based on the detection result.2. The image recognition device according to claim 1 , wherein the object recognition unit is configured to create virtual windows based on information on distances at the coordinates in the three-dimensional spatial coordinate system detected for the extracted feature points and information on positions other than the distances of the extracted feature points claim 1 , to consolidate the created virtual windows claim 1 , and to prepare a raster scan region.3. The image recognition device according to claim 2 , wherein the object recognition unit is configured to determine a region of a virtual window claim 2 , which is obtained as the final consolidation result of the virtual windows claim 2 , as the raster scan region.4. The image recognition device according to claim 2 , wherein the object recognition unit is configured to set sizes of the virtual windows based on the information on the distances in the three-dimensional spatial coordinate system detected for the extracted feature points claim 2 , to set the positions of the virtual windows based on the information on the positions other than the distances of the ...

Publication date: 24-10-2013

Distance-Varying Illumination and Imaging Techniques for Depth Mapping

Number: US20130279753A1
Assignee: PRIMESENSE LTD

A method for mapping includes projecting a pattern onto an object (28) via an astigmatic optical element (38) having different, respective focal lengths in different meridional planes (54, 56) of the element. An image of the pattern on the object is captured and processed so as to derive a three-dimensional (3D) map of the object responsively to the different focal lengths.

Publication date: 24-10-2013

Image Capture and Identification System and Process

Number: US20130279754A1
Assignee: NANT HOLDINGS IP LLC

A digital image of the object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object.

Publication date: 24-10-2013

Information processing system, information processing method, and information processing program

Number: US20130279755A1
Inventor: Shuji Senda
Assignee: NEC Corp

The present invention includes: a database which stores a position on a map and feature information in an image which can be taken by an imaging device at the position, to be associated with each other; an extraction means for extracting the feature information from the image; an estimation means for estimating the position at which the imaging device exists on a map on the basis of the extracted feature information referring to the database; a display means for displaying an estimated current position of the imaging device; a determination means for determining whether or not an imaging direction of the imaging device is varied by a predetermined amount from a direction in which the image from which the feature information is extracted is taken during the imaging; and a control means for controlling the extraction means so that new feature information is extracted, upon determining that the direction is varied by the predetermined amount; wherein the estimation means combines the new feature information and the extracted feature information and re-estimates the position on the map at which the imaging device exists when the new feature information is extracted.

Publication date: 31-10-2013

Foreground subject detection

Number: US20130287257A1
Assignee: Microsoft Corp

Classifying pixels in a digital image includes receiving a primary image from a primary image sensor. The primary image includes a plurality of primary pixels. Depth information from a depth sensor is also received. The depth information and the primary image are cooperatively used to identify whether a primary pixel images a foreground subject or a background subject.
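The cooperative use of depth and image data can be sketched, in its simplest form, as thresholding the depth reading per pixel; a real system would fuse both signals. The function name, labels, and None-handling are illustrative assumptions:

```python
def classify_pixels(depth_map, max_foreground_depth):
    """Label each pixel 'fg' when its depth reading is closer than
    `max_foreground_depth`, else 'bg'; a missing depth reading (None)
    falls back to 'bg'. A toy stand-in for cooperative depth/color
    foreground classification."""
    return [["fg" if d is not None and d < max_foreground_depth else "bg"
             for d in row] for row in depth_map]
```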

Publication date: 31-10-2013

IMAGE PROCESSING DEVICE, IMAGE CAPTURING DEVICE, AND IMAGE PROCESSING METHOD

Number: US20130287259A1
Inventor: Ishii Yasunori
Assignee:

An image processing device for tracking a subject included in a first image, in a second image captured after the first image includes: a segmentation unit that divides the first image into a plurality of segments based on similarity in pixel values; an indication unit that indicates a position of the subject in the first image; a region setting unit that sets, as a target region, a region including at least an indicated segment which is a segment at the indicated position; an extraction unit that extracts a feature amount from the target region; and a tracking unit that tracks the subject by searching the second image for a region similar to the target region using the extracted feature amount. 1.-12. (canceled) 13. An image processing device for tracking a subject included in a first image, in a second image captured after the first image, the image processing device comprising: a segmentation unit configured to divide the first image into a plurality of segments based on similarity in pixel values; an indication unit configured to indicate a position of the subject in the first image; a region setting unit configured to set, as a target region, a segment group including an indicated segment which is a segment at the indicated position; an extraction unit configured to extract a feature amount from the target region; and a tracking unit configured to track the subject by searching the second image for a region similar to the target region using the extracted feature amount, and wherein the region setting unit is configured to set a segment group forming one continuous region, as the target region, the segment group including the indicated segment and a similar segment having a value which indicates image similarity to the indicated segment and is greater than a threshold. 14. The processing device according to claim 13, wherein the segmentation unit is configured to divide the first image into a plurality of segments based on similarity in colors. 15. The processing ...
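The region-setting step (grow one continuous segment group outward from the indicated segment, admitting segments whose similarity clears a threshold) is a region-growing traversal. All names and the similarity interface are illustrative assumptions:

```python
def target_region(segments, adjacency, indicated, threshold, similarity):
    """Grow one continuous region from the indicated segment, adding
    neighbouring segments whose similarity to the indicated segment
    exceeds `threshold`. `segments` maps segment id -> descriptor,
    `adjacency` maps segment id -> neighbour ids, and `similarity(a, b)`
    is any pixel-value similarity measure. Toy sketch."""
    region, frontier = {indicated}, [indicated]
    while frontier:
        seg = frontier.pop()
        for nb in adjacency.get(seg, ()):
            if nb not in region and similarity(segments[indicated], segments[nb]) > threshold:
                region.add(nb)
                frontier.append(nb)
    return region
```

Because growth only proceeds through the adjacency graph, the returned segment group is guaranteed to form one continuous region, matching the claim language.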

Publication date: 07-11-2013

Optimal gradient pursuit for image alignment

Number: US20130294686A1
Assignee: General Electric Co

A method for image alignment is disclosed. In one embodiment, the method includes acquiring a facial image of a person and using a discriminative face alignment model to fit a generic facial mesh to the facial image to facilitate locating of facial features. The discriminative face alignment model may include a generative shape model component and a discriminative appearance model component. Further, the discriminative appearance model component may have been trained to estimate a score function that minimizes the angle between a gradient direction and a vector pointing toward a ground-truth shape parameter. Additional methods, systems, and articles of manufacture are also disclosed.
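The training objective above, minimizing the angle between a gradient direction and the vector pointing toward the ground-truth shape parameter, can be illustrated with a toy angle computation. The vector representation of shape parameters is an assumption for illustration only.

```python
import math

def angle_to_ground_truth(gradient, current_params, true_params):
    """Angle (radians) between a gradient direction and the vector that
    points from the current shape parameters toward the ground truth.
    A perfectly trained score function would drive this angle to zero."""
    target = [t - c for t, c in zip(true_params, current_params)]
    dot = sum(g * t for g, t in zip(gradient, target))
    norm = math.hypot(*gradient) * math.hypot(*target)
    return math.acos(max(-1.0, min(1.0, dot / norm)))
```

A gradient aligned with the ground-truth direction yields angle 0; an opposing gradient yields pi.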

Publication date: 14-11-2013

VIDEO ANALYSIS

Number: US20130301876A1
Author: HUGOSSON Fredrik
Assignee: AXIS AB

A method and an object analyzer for analyzing objects in images captured by a monitoring camera uses a first and a second sequence of image frames, wherein the first sequence of image frames covers a first image area and has a first image resolution, and the second sequence of image frames covers a second image area located within the first image area and has a second image resolution higher than the first image resolution. A common set of object masks is provided wherein object masks of objects that are identified as being present in both image areas are merged. 1. A method of analyzing objects in images captured by a monitoring camera, comprising the steps of: receiving a first sequence of image frames having a first image resolution and covering a first image area, receiving a second sequence of image frames having a second image resolution higher than the first image resolution and covering a second image area being a portion of the first image area, detecting objects present in the first sequence of image frames, detecting objects present in the second sequence of image frames, providing a first set of object masks for objects detected in the first sequence of image frames, providing a second set of object masks for objects detected in the second sequence of image frames, identifying an object present in the first and the second sequence of image frames by detecting a first object mask in the first set of object masks at least partly overlapping a second object mask in the second set of object masks, merging the first and the second object mask into a third object mask by including data from the first object mask for parts present only in the first image area, and data from the second object mask for parts present in the second image area, and providing a third set of object masks comprising the first set of object masks excluding the first object mask, the second set of object masks excluding the second object mask, and the third object mask. ...
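The merging rule above (keep low-resolution mask data for parts outside the high-resolution view and high-resolution mask data inside it) can be sketched with masks represented as sets of grid cells. The integer scale factor and the cell-set representation are simplifying assumptions.

```python
def merge_masks(first_mask, scale, second_area, second_mask):
    """Merge a low-resolution object mask with a high-resolution one.

    first_mask  : set of (row, col) cells in the low-resolution grid
    scale       : integer resolution ratio between the two grids
    second_area : (top, left, bottom, right) span of the high-resolution
                  view, in high-resolution coordinates (exclusive ends)
    second_mask : set of (row, col) cells in high-resolution coordinates

    Returns one merged mask in high-resolution coordinates: inside the
    high-resolution area the detailed mask wins; outside it the upscaled
    low-resolution mask is kept.
    """
    top, left, bottom, right = second_area

    def inside(cell):
        r, c = cell
        return top <= r < bottom and left <= c < right

    # Upscale each low-resolution cell to a scale x scale block of cells.
    upscaled = {(r * scale + dr, c * scale + dc)
                for r, c in first_mask
                for dr in range(scale) for dc in range(scale)}
    return {cell for cell in upscaled if not inside(cell)} | second_mask
```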

Publication date: 14-11-2013

OPERATING A COMPUTING DEVICE BY DETECTING ROUNDED OBJECTS IN AN IMAGE

Number: US20130301879A1
Author: Polo Fabrizio
Assignee: Orbotix, Inc.

A method is disclosed for operating a computing device. One or more images of a scene captured by an image capturing device of the computing device is processed. The scene includes an object of interest that is in motion and that has a rounded shape. The one or more images are processed by detecting a rounded object that corresponds to the object of interest. Position information is determined based on a relative position of the rounded object in the one or more images. One or more processes are implemented that utilize the position information determined from the relative position of the rounded object. 1. A method for operating a computing device , the method being performed by one or more processors and comprising:processing one or more images of a scene captured by an image capturing device of the computing device, the scene including an object of interest that is in motion and having a characteristic rounded shape, wherein processing the one or more images includes detecting a rounded object that corresponds to the object of interest;determining position information based on a relative position of the rounded object in the one or more images; andimplementing one or more processes that utilize the position information determined from the relative position of the rounded object.2. The method of claim 1 , wherein implementing the one or more processes includes displaying claim 1 , on a display of the computing device claim 1 , a representation of the object of interest based on the determined position information.3. The method of claim 1 , wherein the object of interest corresponds to a self-propelled device having a rounded structural feature.4. The method of claim 3 , wherein the self-propelled device is spherical.5. The method of claim 1 , wherein detecting the rounded object includes applying a filter to each of the one or more images.6. The method of claim 5 , wherein the filter is a grayscale filter claim 5 , and wherein applying the filter includes ...
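One very crude cue for detecting a rounded object in a binary detection mask is the fill ratio of its bounding box: a rasterised disc lands near pi/4 (about 0.785), while a filled square lands near 1.0. This heuristic and its thresholds are illustrative assumptions; the patent itself describes filtering the images (e.g. with a grayscale filter), and practical detectors use far more robust tests such as Hough circles.

```python
def fill_ratio(mask):
    """Area of a blob (set of (row, col) cells) divided by the area of
    its axis-aligned bounding box."""
    rows = [r for r, _ in mask]
    cols = [c for _, c in mask]
    box = (max(rows) - min(rows) + 1) * (max(cols) - min(cols) + 1)
    return len(mask) / box

def looks_round(mask, lo=0.70, hi=0.87):
    """Accept blobs whose fill ratio sits in a band around pi/4.
    The band limits are illustrative, not tuned values."""
    return lo <= fill_ratio(mask) <= hi
```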

Publication date: 21-11-2013

Method and system for calculating the geo-location of a personal device

Number: US20130308822A1
Assignee: Telefonica SA

The method comprises performing said calculation by using data provided by an image recognition process which identifies at least one geo-referenced image of an object located in the surroundings of said personal device. The system is arranged for implementing the method of the present invention.

Publication date: 28-11-2013

Method of Improving Orientation and Color Balance of Digital Images Using Face Detection Information

Number: US20130314525A1

A method of generating one or more new spatial and chromatic variation digital images uses an original digitally-acquired image which including a face or portions of a face. A group of pixels that correspond to a face within the original digitally-acquired image is identified. A portion of the original image is selected to include the group of pixels. Values of pixels of one or more new images based on the selected portion are automatically generated, or an option to generate them is provided, in a manner which always includes the face within the one or more new images. Such method may be implemented to automatically establish the correct orientation and color balance of an image. Such method can be implemented as an automated method or a semi automatic method to guide users in viewing, capturing or printing of images. 1. A method of digital image processing using face detection for achieving desired luminance parameters for a face , comprising using a digital image acquisition device or external image processing device , or a combination thereof , that includes a processor that is programmed to perform the method , wherein the method comprises:identifying a group of pixels that correspond to a face within a digital image;identifying one or more sub-groups of pixels that correspond to one or more facial features of the face including at least one sub-group of pixels that substantially comprise one or more skin tones;determining initial luminance values of one or more luminance parameters of said pixels of the one or more sub-groups of pixels that substantially comprise one or more skin tones;determining at least one initial luminance parameter based on the initial luminance values; anddetermining adjusted values of the one or more luminance parameters of the pixels of the one or more sub-groups of pixels, including said at least one sub-group of pixels that substantially comprise one or more skin tones, based on a comparison of the initial luminance parameter with 
...
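The luminance bookkeeping described above (measure an initial luminance parameter over the skin-tone sub-group of pixels, then derive adjusted values) can be sketched roughly as follows. The Rec. 601 luma weights, the single multiplicative gain, and the target value of 140 are illustrative assumptions, not details from the patent.

```python
def luma(pixel):
    """Rec. 601 luma of an (r, g, b) pixel with channels in [0, 255]."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def adjust_skin_luminance(skin_pixels, target=140.0):
    """Scale skin-tone pixels so their mean luma reaches the target.

    Returns (initial_mean_luma, adjusted_pixels). One global gain with
    clipping at 255 is a simplification of the patent's comparison of
    initial and adjusted luminance parameters.
    """
    initial = sum(luma(p) for p in skin_pixels) / len(skin_pixels)
    gain = target / initial
    adjusted = [tuple(min(255.0, ch * gain) for ch in p) for p in skin_pixels]
    return initial, adjusted
```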

Publication date: 05-12-2013

METHOD AND APPARATUS FOR RECOGNIZING OBJECT MATERIAL

Number: US20130321620A1

Provided is an apparatus for recognizing object material. The apparatus includes: an imaging camera unit for capturing spatial image including various objects in a space; an exploring radar unit sending an incident wave to the objects and receiving spatial radar information including a surface reflected wave from a surface of each of the objects and an internal reflected wave from the inside of each of the objects; an information storage unit for storing reference physical property information corresponding to a material of each object; and a material recognition processor recognizing material information of each object by using the reference physical property information stored in the information storage unit, the spatial image provided by the imaging camera unit, and the spatial radar information provided by the exploring radar unit. 1. An apparatus for recognizing object material , the apparatus comprising:an imaging camera unit for capturing spatial image including various objects in a space;an exploring radar unit sending an incident wave to the objects, and receiving spatial radar information including a surface reflected wave from a surface of each of the objects and an internal reflected wave from the inside of each of the objects;an information storage unit for storing reference physical property information corresponding to a material of each object; anda material recognition processor recognizing material information of each object by using the reference physical property information stored in the information storage unit, the spatial image provided by the imaging camera unit, and the spatial radar information provided by the exploring radar unit.2. The apparatus of claim 1 , wherein the incident wave is one of an electronic wave and a sound wave.3. The apparatus of claim 1 , further comprising a matching processor for matching spatial region corresponding to each other between the spatial image and the spatial radar information.4. 
The apparatus of claim ...
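The comparison of measured reflections against stored reference physical-property information can be sketched as a nearest-neighbour lookup. Reducing the surface and internal reflections to one scalar "reflectance" figure, and the table values themselves, are illustrative assumptions.

```python
# Illustrative reference physical-property table (values are made up).
REFERENCE_REFLECTANCE = {"wood": 0.2, "concrete": 0.5, "steel": 0.9}

def recognize_material(measured, table=REFERENCE_REFLECTANCE):
    """Return the material whose stored reference value is closest to the
    measured surface-reflection figure (nearest-neighbour matching)."""
    return min(table, key=lambda name: abs(table[name] - measured))
```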

Publication date: 05-12-2013

SITUATION RECOGNITION APPARATUS AND METHOD USING OBJECT ENERGY INFORMATION

Number: US20130322690A1

A situation recognition apparatus and method analyzes an image to convert a position and motion change rate of an object in a space and an object number change rate into energy information, and then changes the energy information into entropy in connection with an entropy theory of a measurement theory of a disorder within a space. Accordingly, the situation recognition apparatus and method recognizes an abnormal situation in the space and issues a warning for the recognized abnormal situation. Therefore, the situation recognition apparatus and method recognizes an abnormal situation within a space, thereby effectively preventing or perceiving a real-time incident at an early stage. 1. A situation recognition apparatus using object energy information , comprising:an image receiving unit configured to receive a taken image;an object detection unit configured to detect an object by analyzing the received image;an object position information extraction unit configured to extract position information with the image for the detected object;an object motion information extraction unit configured to extract motion information within the image for the detected object;an object number change rate measurement unit configured to measure an object number change rate within the image for the detected object;an entropy calculation unit configured to convert a position change rate of the object, measured based on the position information, a motion change rate of the object, measured based on the motion information, and the object number change rate into energy information, and measure entropy of the converted energy information; anda situation recognition unit configured to recognize a situation within a space where the image was taken by associating a change rate of the entropy measured within the image with a risk policy.2. The situation recognition apparatus of claim 1 , wherein the object detection unit detects multiple objects claim 1 , andthe entropy calculation unit ...
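The core bookkeeping above (convert the position, motion and object-count change rates into energy information, then measure its entropy and compare the entropy change against a risk policy) can be sketched as follows. The squared-rate energy mapping and the risk threshold are assumptions; the patent does not fix these formulas here.

```python
import math

def energy_entropy(position_rate, motion_rate, count_rate):
    """Convert the three change rates into 'energy' terms and return the
    Shannon entropy of their normalised distribution."""
    energies = [position_rate ** 2, motion_rate ** 2, count_rate ** 2]
    total = sum(energies)
    probs = [e / total for e in energies if e > 0]
    return -sum(p * math.log(p) for p in probs)

def is_abnormal(entropy_change, risk_threshold=0.5):
    """Flag the situation when the entropy change rate exceeds the
    (illustrative) risk-policy threshold."""
    return abs(entropy_change) > risk_threshold
```

With equal change rates the energy distribution is uniform over three terms, so the entropy is log 3; sharp swings in the entropy from frame to frame would trip the warning.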

Publication date: 12-12-2013

SYSTEM AND METHOD FOR DETECTING AND TRACKING A CURVILINEAR OBJECT IN A THREE-DIMENSIONAL SPACE

Number: US20130329038A1
Assignee: THE JOHNS HOPKINS UNIVERSITY

A system for detecting and tracking a curvilinear object in a three-dimensional space includes an image acquisition system including a video camera arranged to acquire a video image of the curvilinear object and output a corresponding video signal, the video image comprising a plurality n of image frames each at a respective time ti, where i=1, 2, . . . , n; and a data processing system adapted to communicate with the image acquisition system to receive the video signal. The data processing system is configured to determine a position, orientation and shape of the curvilinear object in the three-dimensional space at each time ti by forming a computational model of the curvilinear object at each time ti such that a projection of the computation model of the curvilinear object at each time ti onto a corresponding frame of the plurality of image frames of the video image matches a curvilinear image in the frame to a predetermined accuracy to thereby detect and track the curvilinear object from time t1 to time tn. 1. A system for detecting and tracking a curvilinear object in a three-dimensional space, comprising: an image acquisition system comprising a video camera arranged to acquire a video image of said curvilinear object and output a corresponding video signal, said video image comprising a plurality n of image frames each at a respective time ti, where i=1, 2, . . . , n; and a data processing system adapted to communicate with said image acquisition system to receive said video signal, wherein said data processing system is configured to determine a position, orientation and shape of said curvilinear object in said three-dimensional space at each time ti by forming a computational model of said curvilinear object at each time ti such that a projection of said computation model of said curvilinear object at each time ti onto a corresponding frame of said plurality of image frames of said video image matches a curvilinear image in ...

Publication date: 12-12-2013

INFORMATION PROCESSING APPARATUS AND METHOD FOR CONTROLLING THE SAME

Number: US20130330006A1
Author: Kuboyama Hideo
Assignee: CANON KABUSHIKI KAISHA

An information processing apparatus detects a moving member that moves in a background area and that includes an object other than a recognition target. The apparatus sets a partial area as a background undetermined area if the moving member is present in the background area and sets a partial area as a background determined area if it is regarded that the recognition target is not present in the background area in each of the partial areas set as the background undetermined area. The apparatus recognizes an operation caused by the recognition target that moves in the background determined area. 1. An information processing apparatus that can recognize an operation caused by a recognition target , comprising:a detection unit configured to detect a moving member moving in a background area and including an object other than the recognition target;a setting unit configured to set a partial area of partial areas constituting the background area as a background undetermined area if the moving member detected by the detection unit is present in the background area, and further configured to set a partial area as a background determined area if it is regarded that the recognition target is not present in the background area in each of the partial areas set as the background undetermined area; anda recognition unit configured to recognize an operation caused by the recognition target that moves in the background determined area.2. The information processing apparatus according to claim 1 , wherein the setting unit is configured to regard the background undetermined area as an area in which the recognition target is not present in the background if the moving member including an object other than the recognition target is stopped or if the moving member including an object other than the recognition target has disappeared.3. The information processing apparatus according to claim 1 , further comprising:a display control unit configured to display a display object in the ...

Publication date: 26-12-2013

COMPRESSIVE SENSING BASED BIO-INSPIRED SHAPE FEATURE DETECTION CMOS IMAGER

Number: US20130342681A1
Author: DUONG Tuan A.

A CMOS imager integrated circuit using compressive sensing and bio-inspired detection is presented which integrates novel functions and algorithms within a novel hardware architecture enabling efficient on-chip implementation. 1. A compressive sensing-based bio-inspired shape feature detection imager circuit comprising a plurality of circuits operatively coupled to one another , the plurality of circuits comprising:an active pixel sensor array configured to collect an image in an original space and generate an analog representation of the collected image;a bio-inspired shape feature compressive sensing projection matrix circuit configured to project the analog representation of the collected image simultaneously onto each target bio-inspired feature of a set of target bio-inspired features and map the projected image from an original space to a compressive sensing space, and generate i) correlation data of the projected image in the compressive sensing space to the set of target bio-inspired features, and ii) reference position data in the original space for the collected image;a target detection and location circuit configured to process the correlation data and the reference position data to identify a potential target in the collected image from amongst the set of target bio-inspired features;a compressive sensing sampling data array circuit configured to process the projected image in the compressive sensing space to recover a digital representation of the collected image in the original space, and generate position and identity information of an identified potential target within the recovered collected image in the original space; andan adaptive target extraction circuit configured to track the identified potential target in a next collected image and extract a corresponding new feature from the next collected image to add to the set of target bio-inspired features.2. The imager circuit of claim 1 , wherein the analog representation of the collected image ...
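The mapping from the original pixel space into the compressive-sensing space, and the correlation of the projected image against a set of target features, can be sketched as follows. A random plus/minus-one sensing matrix stands in for the patent's learned bio-inspired shape-feature projection, and the matrix size is an assumption.

```python
import random

def projection_matrix(m, n, seed=0):
    """Random +/-1 sensing matrix taking n pixels to m measurements.
    (The patent projects onto bio-inspired shape features instead.)"""
    rng = random.Random(seed)
    return [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(m)]

def project(phi, x):
    """Map a flattened image x from the original space into the
    compressive-sensing space: y = Phi x."""
    return [sum(a * b for a, b in zip(row, x)) for row in phi]

def best_match(phi, image, templates):
    """Correlate the projected image against each projected template and
    return the index of the strongest response."""
    y = project(phi, image)
    scores = [sum(a * b for a, b in zip(y, project(phi, t))) for t in templates]
    return max(range(len(templates)), key=scores.__getitem__)
```

Random projections approximately preserve inner products (the Johnson-Lindenstrauss effect), so correlating in the compressed space ranks candidate targets much like correlating in the original space would.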

Publication date: 02-01-2014

SYSTEM OF A DATA TRANSMISSION AND ELECTRICAL APPARATUS

Number: US20140003656A1
Author: LIN Kang-Wen
Assignee: Quanta Computer Inc.

A system for data transmission includes a first electrical device displaying video data, a second electrical device having an image capture unit for taking pictures for the first electrical device to generate a first output image, and a server storing electrical device data and pre-stored multimedia data, recognizing the video data displayed on the first electrical device according to the first output image, and transmitting the video data to the second electrical device. 1. A data transmission system , comprising:a first electrical device, displaying a video data;a second electrical device, having an image capture unit for taking pictures for the first electrical device to generate a first output image; anda server, storing a plurality of electrical device data and pre-stored multimedia data bases, recognizing the first electrical device and the video data displayed on the first electrical device according to the first output image, and transmitting the video data to the second electrical device.2. The data transmission system of claim 1 , wherein the server searches the electrical device data according to the first output image for determining whether the first electrical device is registered in the server claim 1 , if the first electrical device is registered in the server and the video data is stored in the pre-stored multimedia data bases claim 1 , the server transmits the video data from the pre-stored multimedia data base to the second electrical device.3. 
The data transmission system of claim 2 , wherein if the first electrical device is not registered in the server claim 2 , the server searches the pre-stored multimedia data bases according to the first output image for determining whether the pre-stored multimedia data bases comprise a multimedia data as the same as the video data displayed on the first electrical device claim 2 , and if the pre-stored multimedia data bases comprise the multimedia data as the same as the video data displayed on the first ...

Publication date: 02-01-2014

SETTING APPARATUS AND SETTING METHOD

Number: US20140003657A1
Author: Funagi Tetsuhiro

A setting apparatus for making a setting to detect that a moving object in an image has passed a line set in the image accepts designation of one of a position and a partial region in an object region in the image, and makes a setting to detect that the designated position or partial region in the object region of the moving object in the image has passed the line set in the image. 1. A setting apparatus for making a setting to detect that a moving object in an image has passed a line set in the image , comprising:an acceptance unit configured to accept designation of one of a position and a partial region in an object region in the image; anda setting unit configured to make a setting to detect that one of the position and the partial region designated by the designation in the object region of the moving object in the image has passed the line set in the image.2. The apparatus according to claim 1 , whereinsaid acceptance unit accepts designation of one of a plurality of positions in a region having a predetermined shape.3. The apparatus according to claim 1 , whereinsaid acceptance unit accepts designation of one of vertices of an object region having a polygonal shape.4. The apparatus according to claim 1 , further comprisinga display unit configured to display, on a display screen for displaying an image, a display element indicating the object region of the moving object in the image and information indicating the position designated by the designation in the object region of the moving image in the image.5. 
The apparatus according to claim 1 , whereinsaid acceptance unit accepts designation of one of a position and a partial region in the object region in the image, and a direction in which the moving object in the image passes the line set in the image, andsaid setting unit makes a setting to detect that one of the position and the partial region designated by the designation in the object region of the moving object in the image has passed the line set in ...
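Detecting that the designated position in the object region has passed the line can be sketched with a side-of-line test: the sign of a cross product tells which side of the line a point lies on, and a sign change between two frames indicates a pass. Reducing the test to the infinite line (ignoring the segment endpoints) is a simplification.

```python
def side_of_line(point, line_start, line_end):
    """Sign of the 2D cross product: which side of the line the point is on
    (+1, -1, or 0 when exactly on the line)."""
    (px, py), (ax, ay), (bx, by) = point, line_start, line_end
    cross = (bx - ax) * (py - ay) - (by - ay) * (px - ax)
    return (cross > 0) - (cross < 0)

def passed_line(prev_pos, cur_pos, line_start, line_end):
    """True when the designated point of the object region moved from one
    side of the detection line to the other between two frames. A full
    implementation would also confirm the crossing point lies within the
    line segment, and could report the crossing direction via the signs."""
    s0 = side_of_line(prev_pos, line_start, line_end)
    s1 = side_of_line(cur_pos, line_start, line_end)
    return s0 != 0 and s1 != 0 and s0 != s1
```

The ordered pair (s0, s1) also distinguishes the two crossing directions, which matches the claim that accepts a designated passing direction.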

Publication date: 02-01-2014

Nail Region Detection Method, Program, Storage Medium, and Nail Region Detection Device

Number: US20140003665A1
Author: Hoshino Kiyoshi

Disclosed is a nail region detection device including: colour camera; image data storage part; colour specification conversion plotting part for converting captured image data from the RGB colour specification system to the HLS colour specification system; a threshold value setting part for setting and varying a threshold value along the X axis with respect to a first plotting region; second plotting part for replotting in a two-dimensional planar second graph, plotting data items which are equal to or greater than the threshold value and detecting the physical quantity or its ratio in a plurality of second plotting regions in the second graph region; repeat control part for repeating the processing for replotting the data items; nail determination part for determining, as a nail region, a second plotting region in which the gradient of the amount of variation in the physical quantity or its ratio is equal to or less than a predetermined value. 1. A nail region detection method comprising at least: repeating, several times, a step of mapping a first plotting region, which is obtained by plotting and converting the image data of a hand image captured by a colour camera in a three-dimensional colour spatial first graph, in a two-dimensional planar second plotting region, while varying a threshold value in line with a value along one axis of the three-dimensional colour space; detecting at least one physical quantity or its ratio in the two-dimensional planar second plotting region at each of the mapping steps; and determining, as a nail region, the second plotting region, in which the gradient of the amount of variation, when the physical quantity or its ratio is varied at each of the mapping steps, is less than a predetermined value. 2.
A nail region detection method comprising:a first step of converting data on an image containing the user's hand captured by a color camera from a colour specification system used by the color camera to a predetermined colour specification ...
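The first steps above (convert RGB image data to the HLS colour system, then sweep a threshold along one axis and keep only the plotted points above it) can be sketched with the standard-library `colorsys` module. Choosing lightness as the swept axis is an assumption for illustration.

```python
import colorsys

def to_hls(pixels):
    """Convert (r, g, b) pixels with channels in [0, 255] to HLS tuples,
    each component in [0, 1]."""
    return [colorsys.rgb_to_hls(r / 255, g / 255, b / 255) for r, g, b in pixels]

def above_lightness(pixels, threshold):
    """Keep only pixels whose lightness meets the threshold; repeatedly
    calling this with a varying threshold mimics the sweep the method
    performs along one colour-space axis."""
    return [(h, l, s) for h, l, s in to_hls(pixels) if l >= threshold]
```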

Publication date: 02-01-2014

SENSING DEVICE AND METHOD USED FOR VIRTUAL GOLF SIMULATION APPARATUS

Number: US20140003666A1
Assignee: GOLFZON CO., LTD.

Disclosed are a sensing device and method used for a virtual golf simulation apparatus in which an image acquired by an inexpensive camera having a relatively low resolution and velocity is analyzed to relatively accurately extract information on physical properties, such as velocity, direction and altitude angle, of a moving ball, and, particularly, in which the moving trajectory of a golf club is relatively accurately calculated from the acquired image to relatively accurately estimate spin of the ball and to reflect the estimated spin of the ball in golf simulation, thereby constituting a virtual golf simulation apparatus having high accuracy and reliability at low costs and further improving reality of virtual golf. 1. A sensing device used in a virtual golf simulation apparatus , comprising:a camera unit for acquiring a plurality of frame images of a ball hit by a user who swings at the ball; anda sensing processing unit comprising a ball image processing means for extracting the ball from each of the frame images to obtain three-dimensional coordinates of the ball and a club image processing means for extracting a moving object of interest from each of the acquired frame images to calculate a moving trajectory of a golf club head from the object of interest, thereby calculating information on physical properties of the moving ball.2. The sensing device according to claim 1 , wherein the club image processing means is configured to receive an image from the ball image processing means and process the received image from which the ball claim 1 , extracted from each image processed by the ball image processing means claim 1 , has been removed.3. The sensing device according to claim 1 , further comprising a hitting sensing means for processing each of the images received from the camera unit to sense whether hitting has been performed by the user claim 1 , thereby confirming impact time.4. 
The sensing device according to claim 3 , whereinthe ball image processing ...
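Once the three-dimensional ball coordinates have been extracted from two frames, the physical properties named above (velocity, direction and altitude angle) follow from simple kinematics. The axis convention (x forward, y lateral, z up) and the use of just two samples are simplifying assumptions; the patent derives the 3D points from multiple camera frames.

```python
import math

def ball_kinematics(p1, p2, dt):
    """Speed, horizontal direction and altitude (launch) angle from two
    3D ball positions (x, y, z) taken dt seconds apart."""
    vx, vy, vz = ((b - a) / dt for a, b in zip(p1, p2))
    speed = math.sqrt(vx * vx + vy * vy + vz * vz)
    direction = math.degrees(math.atan2(vy, vx))   # lateral deviation, degrees
    altitude = math.degrees(math.asin(vz / speed)) # launch angle, degrees
    return speed, direction, altitude
```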

Publication date: 02-01-2014

IMAGE PROCESSING DEVICE, OBJECT SELECTION METHOD AND PROGRAM

Number: US20140003667A1
Assignee: SONY CORPORATION

There is provided an image processing device including: a data storage unit that stores object identification data for identifying an object operable by a user and feature data indicating a feature of appearance of each object; an environment map storage unit that stores an environment map representing a position of one or more objects existing in a real space and generated based on an input image obtained by imaging the real space using an imaging device and the feature data stored in the data storage unit; and a selecting unit that selects at least one object recognized as being operable based on the object identification data, out of the objects included in the environment map stored in the environment map storage unit, as a candidate object being a possible operation target by a user. 1. An image processing device comprising: an imaging unit configured to output an image generated by imaging a real space; a generating unit configured to generate an environment map representing a position of one or more objects existing in the real space based on the image and a feature data indicating a feature of appearance of each object; and a selecting unit configured to select at least one operable object recognized as being operable based on the object identification data as a candidate object being a possible operation target by a user. 2. The image processing device according to claim 1, further comprising: an image output unit configured to generate an output image including an indication of the position of the candidate object from the input image. 3. The image processing device according to claim 1, further comprising: a device recognizing unit configured to recognize at least one operable object based on the image and the object identification data, wherein the at least one operable object is recognized by the device recognizing unit. 4. The image processing device according to claim 1, further comprising: a user interface for allowing the user to specify an object ...

Publication date: 02-01-2014

VIDEO PROCESSING APPARATUS, VIDEO PROCESSING METHOD, AND RECORDING MEDIUM

Number: US20140003725A1
Author: KAWANO Atsushi

A video processing apparatus includes: a first detection unit configured to detect a moving object from a movie; a second detection unit configured to detect an object having a predetermined shape from the movie; an extraction unit configured to extract a partial region of a region in which the second detection unit has detected the object having the predetermined shape in the movie; and a discrimination unit configured to discriminate whether the object detected by the second detection unit is a certain object depending on a ratio of a size of an overlapping region to a size of an extracted region extracted by the extraction unit, the overlapping region being a region where a region in which the first detection unit has detected the moving object in the movie and the extracted region overlap with each other. 1. A video processing apparatus comprising: a first detection unit configured to detect a moving object from a movie; a second detection unit configured to detect an object having a predetermined shape from the movie; an extraction unit configured to extract a partial region of a region in which the second detection unit has detected the object having the predetermined shape in the movie; and a discrimination unit configured to discriminate whether the object detected by the second detection unit is a certain object depending on a ratio of a size of an overlapping region to a size of an extracted region extracted by the extraction unit, the overlapping region being a region where a region in which the first detection unit has detected the moving object in the movie and the extracted region overlap with each other. 2. The video processing apparatus according to wherein the discrimination unit discriminates that the object, which has the predetermined shape and which has been detected by the second detection unit, as the certain object if the ratio of the size of the overlapping region to the size of the extracted region is a predetermined ratio or more. 3.
The video ...
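The discrimination rule above (accept the shape detection as the certain object when the overlap with the moving-object region covers at least a predetermined fraction of the extracted region) can be sketched with regions as cell sets. The 0.5 default ratio is an illustrative assumption.

```python
def overlap_ratio(moving_mask, extracted_mask):
    """Size of the overlap between the moving-object region and the
    extracted partial region, relative to the extracted region's size.
    Both regions are sets of (row, col) cells."""
    if not extracted_mask:
        return 0.0
    return len(moving_mask & extracted_mask) / len(extracted_mask)

def is_certain_object(moving_mask, extracted_mask, min_ratio=0.5):
    """Discriminate the shape detection as the certain object when the
    overlap covers at least min_ratio of the extracted region."""
    return overlap_ratio(moving_mask, extracted_mask) >= min_ratio
```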

Publication date: 27-02-2014

OBJECT DETECTION APPARATUS AND CONTROL METHOD THEREOF, AND STORAGE MEDIUM

Number: US20140056473A1
Author: Tojo Hiroshi
Assignee: CANON KABUSHIKI KAISHA

The object detection apparatus prevents or eliminates detection errors caused by changes of an object which frequently appears in a background. To this end, an object detection apparatus includes a detection unit which detects an object region by comparing an input video from a video input device and a background model, a selection unit which selects a region of a background object originally included in a video, a generation unit which generates background object feature information based on features included in the background object region, and a determination unit which determines whether or not the object region detected from the input video is a background object using the background object feature information. 1. An object detection apparatus comprising:a video input unit configured to input a video;an object region detection unit configured to detect an object region by comparing the input video and a background model;a selection unit configured to select a region of a background object originally included in a video;a generation unit configured to generate background object feature information based on features included in the background object region; anda determination unit configured to determine whether or not the object region detected from the input video is a background object using the background object feature information.2. The apparatus according to claim 1 , wherein the background object feature information is a statistical amount based on feature amounts extracted from the background object region.3. The apparatus according to claim 2 , wherein the feature amounts are feature amounts according to a scene to be applied.4. The apparatus according to claim 1 , wherein said selection unit selects the background object region based on a background object region selection rule corresponding to a scene to be applied.5. 
The apparatus according to claim 4 , wherein the background object region selection rule is based on duration since the object region ...
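The two-stage decision this abstract describes — detect an object region against a background model, then judge whether it is a known background object from stored feature statistics — can be sketched as follows. All function names, the intensity-mean feature, and the thresholds are illustrative assumptions, not details from the patent:

```python
import numpy as np

def detect_object_regions(frame, background, thresh=30):
    """Detect foreground pixels by comparing the input frame to a background model."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > thresh  # boolean foreground mask

def is_background_object(region_pixels, bg_feature_mean, bg_feature_std, k=2.0):
    """Treat a detected region as a background object when its feature
    (here, mean intensity) lies within k standard deviations of the
    statistics generated for that background object."""
    feat = region_pixels.mean()
    return bool(abs(feat - bg_feature_mean) <= k * bg_feature_std)

# Toy example: a bright patch appears against a dark background.
background = np.zeros((8, 8), dtype=np.uint8)
frame = background.copy()
frame[2:5, 2:5] = 200
mask = detect_object_regions(frame, background)
region = frame[mask]
print(mask.sum())                                # 9 foreground pixels detected
print(is_background_object(region, 198.0, 5.0))  # True -> suppressed as a background object
```

In a real system the background model would be adaptive (e.g. a mixture-of-Gaussians model) and the feature statistics would come from the generation unit's training over selected background-object regions.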

Publication date: 27-02-2014

VIDEO OBJECT FRAGMENTATION DETECTION AND MANAGEMENT

Number: US20140056477A1
Assignee: CANON KABUSHIKI KAISHA

Disclosed herein are a computer-implemented method and a camera system for determining a current spatial representation for a detection in a current frame of an image sequence. The method derives an expected spatial representation () for the detection based on at least one previous frame, generates a spatial representation () of the detection, and extends the spatial representation () to obtain an extended spatial representation (), based on the expected spatial representation (). The method determines a similarity measure between the extended spatial representation () and the expected spatial representation (), and then determines the current spatial representation for the detection based on the similarity measure. 1deriving an expected spatial representation for said detection based on at least one previous frame;generating a spatial representation of said detection;extending said spatial representation to obtain an extended spatial representation, based on the expected spatial representation;determining a similarity measure between said extended spatial representation and said expected spatial representation; anddetermining the current spatial representation for the detection based on said similarity measure.. A computer-implemented method of determining a current spatial representation for a detection in a current frame of an image sequence, said method comprising the steps of: This application is a Continuation of U.S. application Ser. No. 12/645,611, filed on Dec. 23, 2009, and allowed on Aug. 1, 2013, which claims the right of priority under 35 U.S.C. §119 based on Australian Patent Application No. 2008261195 entitled “Video Object fragmentation detection and management”, filed on 23 Dec. 2008 in the name of Canon Kabushiki Kaisha, and Australian Patent Application No. 2008261196 entitled “Backdating object splitting”, filed on 23 Dec. 2008 in the name of Canon Kabushiki Kaisha, the entire contents of which are incorporated herein by reference.The present ...

Publication date: 06-03-2014

OBJECT DETECTION SYSTEM AND COMPUTER PROGRAM PRODUCT

Number: US20140064556A1
Assignee: KABUSHIKI KAISHA TOSHIBA

According to an embodiment, an object detection system includes an obtaining unit, an estimating unit, a setting unit, a calculating unit, and a detecting unit. The obtaining unit is configured to obtain an image in which an object is captured. The estimating unit is configured to estimate a condition of the object. The setting unit is configured to set, in the image, a plurality of areas that have at least one of a relative positional relationship altered according to the condition and a shape altered according to the condition. The calculating unit is configured to calculate a feature value of an image covering the areas. The detecting unit is configured to compare the calculated feature value with a feature value of a predetermined registered object, and detect the registered object corresponding to the object. 1. An object detection system comprising:an obtaining unit configured to obtain an image in which an object is captured;an estimating unit configured to estimate a condition of the object;a setting unit configured to set, in the image, a plurality of areas that have at least one of a relative positional relationship altered according to the condition and a shape altered according to the condition;a calculating unit configured to calculate a feature value of an image covering each area; anda detecting unit configured to compare the calculated feature value with a feature value of a predetermined registered object, and detect the registered object corresponding to the object.2. The system according to claim 1 , wherein the calculating unit calculates brightness gradients between a plurality of areas claim 1 , and calculates the feature value that is the statistics of the calculated brightness gradients.3. The system according to claim 2 , wherein the calculating unit alters a method of calculating the statistics according to the condition.4. 
The system according to claim 2 , wherein the calculating unit further calculates color feature values of the areas ...
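As claim 2 suggests, the feature value can be the statistics of brightness gradients calculated between the set areas. A minimal sketch — the area layout, the choice of mean/standard deviation as the statistics, and the matching threshold are all hypothetical:

```python
import numpy as np

def area_feature(image, areas):
    """Mean brightness per area, then the brightness gradients between
    consecutive areas; their statistics form the feature value."""
    means = np.array([image[r0:r1, c0:c1].mean() for (r0, r1, c0, c1) in areas])
    grads = np.diff(means)                      # brightness gradients between areas
    return np.array([grads.mean(), grads.std()])

def detect(feature, registered, max_dist=1.0):
    """Compare the calculated feature value with a registered object's feature value."""
    return np.linalg.norm(feature - registered) <= max_dist

img = np.zeros((6, 9))
img[:, 3:6] = 50
img[:, 6:9] = 100
areas = [(0, 6, 0, 3), (0, 6, 3, 6), (0, 6, 6, 9)]  # (row0, row1, col0, col1)
feat = area_feature(img, areas)
print(feat)                                  # gradients [50, 50] -> mean 50, std 0
print(detect(feat, np.array([50.0, 0.0])))   # matches the registered object
```

In the claimed system the area positions and shapes would be altered according to the estimated condition of the object before this computation runs.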

Publication date: 27-03-2014

INTERACTION SYSTEM AND MOTION DETECTION METHOD

Number: US20140086449A1
Assignee: WISTRON CORP.

A motion detection method applied in an interaction system is provided. The method has the following steps of: retrieving a plurality of images; recognizing a target object from the retrieved images; calculating a first integral value of a position offset value of the target object along a first direction from the retrieved images; determining whether the calculated first integral value is larger than a first predetermined threshold value; and determining the target object as moving when the calculated first integral value is larger than the first predetermined threshold value. 1. A motion detection method applied in an interaction system , comprising:retrieving a plurality of images;recognizing a target object from the retrieved images;calculating a first integral value of a position offset value of the target object along a first direction from the retrieved images;determining whether the calculated first integral value is larger than a first predetermined threshold value; anddetermining the target object as moving when the calculated first integral value is larger than the first predetermined threshold value.2. The motion detection method as claimed in claim 1 , further comprising:generating a plurality of corresponding depth images from the retrieved images;determining a corresponding depth value of the target object according to the corresponding depth images; andtransforming the corresponding depth value to a position of the target object in a depth direction.3. The motion detection method as claimed in claim 2 , wherein the step of calculating the integral value further comprises:calculating a respective candidate position offset value of the target object in a horizontal direction, a vertical direction and the depth direction from the retrieved images;calculating a largest position offset value from the respective candidate position offset value corresponding to the horizontal direction, the vertical direction and the depth direction; anddetermining the ...
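The core test — integrate the target object's position offsets along one direction over the retrieved images and compare the integral with a predetermined threshold — can be sketched as follows (the function name and threshold value are illustrative):

```python
def is_moving(positions, threshold):
    """Accumulate per-frame position offsets along one direction and
    compare the integral value with the predetermined threshold."""
    integral = sum(abs(b - a) for a, b in zip(positions, positions[1:]))
    return integral > threshold, integral

# x-coordinates of the recognized target object across the retrieved images
xs = [100, 102, 105, 109, 114]
moving, total = is_moving(xs, threshold=10)
print(total)   # 14
print(moving)  # True -> the target object is determined as moving
```

The full method would run this accumulation independently for the horizontal, vertical, and depth directions and pick the largest offset, as the dependent claims describe.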

Publication date: 10-04-2014

MOTION-CONTROLLED ELECTRONIC DEVICE AND METHOD THEREFOR

Number: US20140098995A1
Author: Hung Kuo-Shu
Assignee: HON HAI PRECISION INDUSTRY CO., LTD.

An electronic device obtains a motion of a displaced object in two captured video frames utilizing phase correlation of the two frames. The electronic device identifies a magnitude of the motion and an area in a phase correlation surface corresponding to an area of the object, and accordingly determines if the motion is a qualified motion operable to trigger a gesture command of the electronic device. The phase correlation surface is obtained from the phase correlation of the two frames. 1. A motion-controlled electronic device comprising:a memory operable to store a sequence of video frames captured by an image capture device connected to the electronic device; and selecting two distinct frames from the sequence of video frames;', 'executing phase correlation on the two distinct frames;', 'storing normalized phase correlation values of the two distinct frames as a result of the phase correlation in a phase correlation image in the memory, wherein the phase correlation image records the normalized phase correlation values in a two-dimensional coordination space and is representative of a three-dimensional phase correlation surface in a three-dimensional coordination space spanned by axes of the two-dimensional coordination space and an axis representing scales of the normalized phase correlation values;', 'locating a peak of the three-dimensional phase correlation surface;', 'obtaining a motion of an object in the two distinct frames based on a location of the peak, wherein the peak reflects the motion;', 'identifying the motion as a qualified motion based on a magnitude of the motion and a cross-sectional area of the peak; and', 'activating a function of the electronic device in response to the qualified motion., 'a processor electrically connected to the memory, and operable to execute an operating method for the electronic device, wherein the method comprises2. The electronic device as claimed in claim 1 , wherein the motion starts from a center of the phase ...
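The phase-correlation step at the heart of this device can be sketched with NumPy. The synthetic frames and the shift below are illustrative, and the patent's additional check on the peak's cross-sectional area (used to qualify the motion) is omitted:

```python
import numpy as np

def phase_correlation(f1, f2):
    """Normalized cross-power spectrum of two frames; the peak of its
    inverse FFT gives the translation between them."""
    F1, F2 = np.fft.fft2(f1), np.fft.fft2(f2)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12          # normalize -> pure phase correlation
    surface = np.real(np.fft.ifft2(cross))  # the phase correlation surface
    peak = np.unravel_index(np.argmax(surface), surface.shape)
    # wrap peak coordinates into signed shifts
    shift = [int(p) if p <= s // 2 else int(p - s)
             for p, s in zip(peak, surface.shape)]
    return shift, surface.max()

rng = np.random.default_rng(0)
frame1 = rng.random((32, 32))
frame2 = np.roll(frame1, shift=(3, -2), axis=(0, 1))  # content moved by (3, -2)
shift, peak_value = phase_correlation(frame2, frame1)
print(shift)  # [3, -2]
```

For a pure translation the normalized surface is an impulse, so the peak value is close to 1; a displaced object covering only part of the frame yields a lower, broader peak, which is what the claimed magnitude and cross-sectional-area tests examine.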

Publication date: 10-04-2014

IMAGE DISPLAY APPARATUS AND IMAGE DISPLAY METHOD

Number: US20140098996A1
Assignee: Panasonic Corporation

An image display apparatus is provided that can obtain a stable and easy to view detection frame and cut-out image in a captured image in which there is a possibility that a congested region and a non-congested region are mixed, such as an omnidirectional image. Congested region detecting section detects a congested region in a captured image by detecting a movement region of the captured image. Object detecting section detects images of targets in the captured image by performing pattern matching. Detection frame forming section forms a congested region frame that surrounds a congested region detected by congested region detecting section, and object detection frame that surround image of target detected by object detecting section. 110-. (canceled)11. An image display apparatus , comprising:a congested region detecting section that detects a congested region in a captured image by detecting a movement region of the captured image;an object detecting section that detects an image of a target in the captured image by performing pattern matching; anda detection frame forming section that forms a congested region frame that surrounds the congested region detected by the congested region detecting section, or an object detection frame that surround the image of the target detected by the object detecting section,wherein the captured image is an omnidirectional image, wherein, when a distance between the targets in a circumferential direction on the omnidirectional image is less than or equal to a predetermined value, the detection frame forming section forms the congested region frame, or when a distance between the targets in the circumferential direction on the omnidirectional image is more than the predetermined value, the detection frame forming section forms the object detection frame.12. 
The image display apparatus according to claim 11 , further comprising an image cutting-out section that cuts out an image of a region that is surrounded by the congested region ...

Publication date: 02-01-2020

OPHTHALMIC DIAGNOSIS SUPPORT APPARATUS AND OPHTHALMIC DIAGNOSIS SUPPORT METHOD

Number: US20200000333A1
Assignee:

Provided is an ophthalmic diagnosis support apparatus which enables an operator to easily acquire a tomographic image suitable for detailed observation of a candidate lesion without spending time and effort to search for the candidate lesion. The ophthalmic diagnosis support apparatus includes an acquiring unit for acquiring a wide-range image of a fundus, a candidate lesion detection unit for detecting the candidate lesion on the fundus by analyzing the wide-range image, a calculating unit for determining a degree of abnormality of the candidate lesion based on a result of the detection of the candidate lesion, and an acquiring position setting unit for setting an acquiring position of the tomographic image of the fundus based on the degree of abnormality of the candidate lesion. 116-. (canceled)17. An ophthalmic apparatus comprising:an acquiring unit configured to acquire a volume image of a fundus;a candidate detecting unit configured to detect a candidate lesion in a retina by analyzing the volume image;a calculating unit configured to calculate a degree of abnormality of the candidate lesion based on a detection result for the candidate lesion;a first determining unit configured to determine coordinates of a first point corresponding to the detected candidate lesion in accordance with the calculated degree of abnormality;a characteristic region detecting unit configured to detect a characteristic region in the fundus by analyzing the volume image, the characteristic region including a macula lutea;a second determining unit configured to determine coordinates of a second point corresponding to the characteristic region detected by the characteristic region detecting unit; and a setting unit configured to set a straight line corresponding to an imaging position to be used for acquiring a tomographic image based on the coordinates of the first point corresponding to the candidate lesion and the coordinates of the second point corresponding to the characteristic ...

Publication date: 01-01-2015

DEFORMABLE EXPRESSION DETECTOR

Number: US20150003672A1
Assignee: QUALCOMM INCORPORATED

A method for deformable expression detection is disclosed. For each pixel in a preprocessed image, a sign of a first directional gradient component and a sign of a second directional gradient component are combined to produce a combined sign. Each combined sign is coded into a coded value. An expression in an input image is detected based on the coded values. 1. A method for deformable expression detection , comprising:combining, for each pixel in a preprocessed image, a sign of a first directional gradient component and a sign of a second directional gradient component to produce a combined sign;coding each combined sign into a coded value; anddetecting an expression in an input image based on the coded values.2. The method of claim 1 , further comprising preprocessing the input image to produce the preprocessed image claim 1 , comprising:aligning an input image based on a region of interest (ROI);cropping the ROI in the input image;scaling the ROI; andequalizing a histogram of the ROI.3. The method of claim 1 , wherein the directional gradient components are orthonormal.4. The method of claim 3 , wherein the directional gradient components are vertical and horizontal directional gradient components or 45-degree and 135-degree directional gradient components.5. The method of claim 1 , wherein the coding comprises coding each combined sign into a coded value based on the signs of the directional gradient components without determining the value of the magnitude of the directional gradient components.6. The method of claim 1 , wherein the expression comprises smiling claim 1 , blinking or anger.7. The method of claim 1 , wherein the detecting an expression comprises classifying a feature vector using a machine learning algorithm.8. The method of claim 7 , wherein the machine learning algorithm is a Support Vector Machines (SVM) algorithm claim 7 , a boosting algorithm or a K-Nearest Neighbors (KNN) algorithm.9. 
The method of claim 1 , further comprising updating a ...
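The coding step — combining the sign of two directional gradient components into a coded value per pixel, without evaluating their magnitudes — can be sketched as follows for the horizontal/vertical pair (the 2-bit encoding layout is an illustrative choice, not the patent's):

```python
import numpy as np

def sign_code_image(img):
    """Combine, per pixel, the sign of the vertical and horizontal gradient
    components into a 2-bit coded value; magnitudes are never evaluated."""
    dy, dx = np.gradient(img.astype(float))  # vertical, horizontal components
    sx = (dx >= 0).astype(np.uint8)          # sign bit, horizontal
    sy = (dy >= 0).astype(np.uint8)          # sign bit, vertical
    return (sy << 1) | sx                    # coded values in {0, 1, 2, 3}

img = np.array([[0, 1, 2],
                [3, 4, 5],
                [6, 7, 8]], dtype=float)
codes = sign_code_image(img)
print(codes)  # brightness rises rightward and downward, so every code is 3
```

A feature vector built from such codes (e.g. code histograms over the preprocessed ROI) would then be fed to the SVM, boosting, or KNN classifier named in the claims.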

Publication date: 02-01-2020

TOP-DOWN REFINEMENT IN LANE MARKING NAVIGATION

Number: US20200003573A1
Assignee:

Systems and methods use cameras to provide autonomous navigation features. In one implementation, top-down refinement in lane marking navigation is provided. The system may include one or more memories storing instructions and one or more processors configured to execute the instructions to cause the system to receive from one or more cameras one or more images of a roadway in a vicinity of a vehicle, the roadway comprising a lane marking comprising a dashed line, update a model of the lane marking based on odometry of the one or more cameras relative to the roadway, refine the updated model of the lane marking based on an appearance of dashes derived from the received one or more images and a spacing between dashes derived from the received one or more images, and cause one or more navigational responses in the vehicle based on the refinement of the updated model. 1one or more memories storing instructions, and receive from one or more cameras one or more images of a roadway in a vicinity of a vehicle, the roadway comprising a lane marking comprising a dashed line,', 'update a model of the lane marking based on odometry of the one or more cameras relative to the roadway,', 'refine the updated model of the lane marking based on an appearance of dashes derived from the received one or more images and a spacing between dashes derived from the received one or more images, and', 'cause one or more navigational responses in the vehicle based on the refinement of the updated model., 'one or more processors configured to execute the instructions to cause the system to. A computer system comprising: This application claims the benefit of U.S. Provisional Patent Application No. 62/010,003, filed Jun. 10, 2014, and U.S. Provisional Patent Application No. 62/173,216, filed Jun. 
9, 2015, the entireties of which are incorporated herein by reference.This relates generally to autonomous driving and/or driver assist technology and, more specifically, to systems and methods that use ...

Publication date: 13-01-2022

METHOD FOR TRACKING COMPONENTS

Number: US20220012676A1
Author: HOFFMANN Juergen
Assignee:

A method for tracking components includes running individual metal blanks with a respective individual part number through a press device to form respective individual components. The components are output on an outflow conveyor belt, removed from the outflow conveyor belt, and arranged in one of a plurality of containers that have a respective container identifier. The movement of the individual components from being output on the outflow conveyor belt until being arranged in the one of the plurality of containers is captured by a camera system. A link is created between respective individual part numbers and respective individual container identifiers and stored in a database. 111.-. (canceled)12. A method for tracking components , comprising the steps of:separating a metal strip to form individual metal blanks;providing each of the individual metal blanks with a respective individual part number;recording the individual part numbers in an information technology system;capturing the individual metal blanks by a capturing device by reading the individual part numbers before the individual metal blanks run through a press device;running the individual metal blanks through the press device, wherein the individual metal blanks are shaped by a shaping press of the press device to form respective individual components and outputting the individual components on an outflow conveyor belt;removing the individual components from the outflow conveyor belt and arranging each of the individual components in one of a plurality of containers wherein each of the plurality of containers has a respective container identifier;capturing a movement of the individual components from being output on the outflow conveyor belt until being arranged in the one of the plurality of containers by a camera system;creating a link between respective individual part numbers and respective individual container identifiers; andstoring the link in a database.13. The method according to claim 12 , ...

Publication date: 07-01-2021

Object Detection Model Training Method, Apparatus, and Device

Number: US20210004625A1
Assignee: Huawei Technologies Co Ltd

In this object detection model training method, a classifier trained in a first phase is duplicated into at least two copies; in a second training phase, each duplicated classifier is configured to detect objects of a different size range, and the object detection model is trained based on their detection results.

Publication date: 13-01-2022

QUANTITATIVE IMAGING FOR DETECTING HISTOPATHOLOGICALLY DEFINED PLAQUE EROSION NON-INVASIVELY

Number: US20220012865A1
Assignee: Elucid Bioimaging Inc.

Systems and methods for analyzing pathologies utilizing quantitative imaging are presented herein. Advantageously, the systems and methods of the present disclosure utilize a hierarchical analytics framework that identifies and quantifies biological properties/analytes from imaging data and then identifies and characterizes one or more pathologies based on the quantified biological properties/analytes. This hierarchical approach of using imaging to examine underlying biology as an intermediary to assessing pathology provides many analytic and processing advantages over systems and methods that are configured to directly determine and characterize pathology from underlying imaging data. 1. A method for computer aided detection of erosion for a pathology using an enriched radiological dataset , the method comprising:receiving a radiological dataset for a patient, wherein the radiological dataset is obtained non-invasively;enriching the dataset by performing analyte measurement and/or classification of one or more of: (i) anatomic structure, (ii) shape or geometry or (iii) tissue characteristic, type or character, with objective validation for a set of analytes relevant to a pathology, wherein the analyte measurement and/or classification of anatomic structure, shape, or geometry and/or tissue characteristic, type, or character includes semantic segmentation to identify and classify regions of interest in the radiological dataset, wherein the regions of interest are identified with respect to cross-sections of a tubular structure in the radiological dataset;using a machine learned classification approach based on known ground truths to process the enriched dataset and determine an erosion for the pathology.2. The method of claim 1 , wherein enriching the dataset further includes spatial transformations of the dataset to accentuate biologically-significant spatial context.3.
The method of claim 1 , wherein enriching the dataset includes both (i) semantic segmentation to ...

Publication date: 04-01-2018

METHOD AND APPARATUS FOR ON-BOARD SOFTWARE SELECTION OF BANDS OF A HYPERSPECTRAL SENSOR BASED ON SIMULATED SPECTRAL REPRESENTATION OF THE REGION OF INTEREST

Number: US20180005011A1
Assignee: The Boeing Company

A system and method for surveying a region of interest with a mobile platform having a sensor having a plurality of narrow spectral bands spanning a contiguous frequency space is disclosed. In one embodiment, the method comprises generating a simulated spectral representation of a region of interest, identifying at least one of the plurality of materials as a material of interest within the region of interest, identifying other of the plurality of materials not identified as a material of interest as background materials within the region of interest, sensing, with the sensor, spectral data in the region of interest in the plurality of narrow spectral bands, selecting and transmitting only the spectral data of the one of more of the plurality of spectral bands of the sensor according to the simulated spectral representation of the material of interest. 1. A method of surveying a region of interest with a mobile platform having a sensor having a plurality of narrow spectral bands spanning a contiguous frequency space , comprising: 'a plurality of geospatial portions at least partially disposed in the region of interest, each geospatial portion having fused spectral characteristics of a plurality of materials disposed in the respective geospatial portion;', '(a) generating a simulated spectral representation of a region of interest, the simulated representation comprising(b) identifying at least one of the plurality of materials as a material of interest within the region of interest;(c) identifying other of the plurality of materials not identified as a material of interest as background materials within the region of interest;(d) sensing, with the sensor, spectral data in the region of interest in the plurality of narrow spectral bands;(e) selecting the spectral data of the one of more of the plurality of spectral bands of the sensor according to the simulated spectral representation of the material of interest and the simulated representation of the background ...

Publication date: 04-01-2018

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM FOR DETECTING OBJECT FROM IMAGE

Number: US20180005016A1
Author: NAKASHIMA Daisuke
Assignee:

An image processing apparatus includes: an input unit configured to input image data; a detection unit configured to execute a detection process that detects a plurality of objects from the input image data; an integration unit configured to, after the detection process ends, integrate the plurality of detected objects on the basis of respective positions of the plurality of detected objects in the image data; an estimation unit configured to, before the detection process ends, estimate an integration time required for the integration unit to integrate the plurality of detected objects; and a termination unit configured to terminate the detection process by the detection unit on the basis of the estimated integration time and an elapsed time of the detection process by the detection unit. 1. An image processing apparatus , comprising:an input unit configured to input image data;a detection unit configured to execute a detection process that detects a plurality of objects from the input image data;an integration unit configured to, after the detection process ends, integrate the plurality of detected objects on the basis of respective positions of the plurality of objects in the image data;an estimation unit configured to, before the detection process ends, estimate an integration time required for the integration unit to integrate the plurality of detected objects; anda termination unit configured to terminate the detection process by the detection unit on the basis of the estimated integration time and an elapsed time of the detection process by the detection unit.2. The image processing apparatus according to claim 1 , wherein the termination unit terminates the detection process by the detection unit in a case where a total time obtained by adding the integration time and the elapsed time is equal to or longer than a predetermined time.3. The image processing apparatus according to claim 1 , wherein the estimation unit estimates claim 1 , at a point of time ...

Publication date: 02-01-2020

IDENTIFICATION CODE RECOGNITION SYSTEM AND METHOD

Number: US20200005067A1
Author: SEONG Ha Seung
Assignee:

The present disclosure provides an identification code recognition system, including: a camera configured to capture an image of an entire area of an identification code (ID code) engraved on a workpiece; a scanner configured to scan a partial area including at least one misrecognized character in the entire area of the ID code; and an image analyzer including a memory and a processor, wherein the memory is configured to store the ID code, data related to the ID code, an image captured by the camera, and an image scanned by the scanner, and the processor is configured to analyze the image captured by the camera and the image scanned by the scanner based on an image analysis logic. 1. An identification code recognition system , comprising:a camera configured to capture an image of an entire area of an identification code (ID code) that is engraved on a workpiece;a scanner configured to scan a partial area including at least one misrecognized character in the entire area of the ID code; andan image analyzer comprising a memory and a processor,wherein the memory is configured to store the ID code, data related to the ID code, an image captured by the camera, and an image scanned by the scanner, andthe processor is configured to analyze the image captured by the camera and the image scanned by the scanner based on an image analysis logic.2. The identification code recognition system of claim 1 , wherein the processor is configured to perform a first image analysis to obtain a recognition result value of the image captured by the camera claim 1 , wherein the first image analysis compares the image captured by the camera with a reference pattern.3. The identification code recognition system of claim 2 , wherein the processor is configured to perform a secondary image analysis to obtain a partial correction value of the image scanned by the scanner claim 2 , wherein the secondary image analysis compares the image scanned by the scanner with the reference pattern.4. The ...

Publication date: 02-01-2020

Methods and Systems for Assessing Histological Stains

Number: US20200005459A1
Assignee:

The present disclosure includes methods of assessing a histologically stained specimen based on a determined color signature of a region of interest of the specimen. Such assessments may be performed for a variety of purposes including but not limited to assessing the quality of the histological stain, as part of identifying one or more biologically relevant features of the image, as part of differentiating one feature of the image from other features of the image, identifying an anomalous area of the stained specimen, classifying cells of the specimen, etc. Also provided are systems configured for performing the disclosed methods and computer readable medium storing instructions for performing steps of the disclosed methods. 167-. (canceled)68. A system for assessing a histologically stained specimen , the system comprising:a) a microscope;b) a digital color camera attached to the microscope and configured to obtain a digital color image of the specimen;c) a library comprising a plurality of reference color signatures specific to biological features of histologically stained reference specimens;d) image processing circuitry configured to:i) define on the digital color image a region of interest (ROI) based on a biological feature of the specimen;ii) separate the digital color image into individual color channels; andiii) determine a color signature for the ROI, wherein the color signature comprises quantification of one or more color parameters over the ROI for one or more of the individual color channels; andiv) compare the determined color signature to one or more reference color signatures of the plurality of reference color signatures of the library to assess the histologically stained specimen.69. The system of claim 68 , wherein the system comprises a single memory connected to the image processing circuitry that stores the library and is configured to receive the digital color image.70. The system of claim 68 , wherein the system comprises a first memory ...

Publication date: 02-01-2020

CAMERA SYSTEMS USING FILTERS AND EXPOSURE TIMES TO DETECT FLICKERING ILLUMINATED OBJECTS

Number: US20200005477A1
Assignee:

The technology relates to camera systems for vehicles having an autonomous driving mode. An example system includes a first camera mounted on a vehicle in order to capture images of the vehicle's environment. The first camera has a first exposure time and being without an ND filter. The system also includes a second camera mounted on the vehicle in order to capture images of the vehicle's environment and having an ND filter. The system also includes one or more processors configured to capture images using the first camera and the first exposure time, capture images using the second camera and the second exposure time, use the images captured using the second camera to identify illuminated objects, use the images captured using the first camera to identify the locations of objects, and use the identified illuminated objects and identified locations of objects to control the vehicle in an autonomous driving mode. 1. A method comprising:capturing images using a first camera and a first exposure time, the first camera being mounted in order to capture images of an environment, the first camera being without an ND filter;capturing images using a second camera and a second exposure time, the second camera being mounted in order to capture images of the environment, the second exposure time being greater than or equal to the first exposure time and having an ND filter;using the images captured using the second camera to identify illuminated objects; andusing the images captured using the first camera to identify the locations of objects.2. The method of claim 1 , wherein the first camera and the second camera each include a near infrared filter.3. The method of claim 1 , wherein the second exposure time is on the order of milliseconds.4. The method of claim 3 , wherein the second exposure time is at least 5 milliseconds and the first exposure time is no greater than 5 milliseconds.5. The method of claim 1 , wherein the ND filter is selected according to the second ...

More
03-01-2019 publication date

PART RECOGNITION METHOD, INFORMATION PROCESSING APPARATUS, AND IMAGING CONTROL SYSTEM

Number: US20190005344A1
Author: Tanabe Satoshi
Assignee: FUJITSU LIMITED

A part recognition method includes: cutting, by a computer, out a plurality of partial images having different sizes using each of positions of an input image as a reference; calculating a probability that each of the partial images is an image indicating a part; calculating, for each of the positions, a score by integrating the probability for each of the partial images; and recognizing, based on the score for each of the positions, the part from the input image. 1. A part recognition method comprising:cutting, by a computer, out a plurality of partial images having different sizes using each of positions of an input image as a reference;calculating a probability that each of the partial images is an image indicating a part;calculating, for each of the positions, a score by integrating the probability for each of the partial images; andrecognizing, based on the score for each of the positions, the part from the input image.2. The part recognition method according to claim 1 , further comprising:creating, for each of the positions, a heat map in which the score is stored in a pixel corresponding to the respective positions; andidentifying, in the recognizing, a coordinate of a pixel having a maximum score in the heat map as a position coordinate of the part.3. The part recognition method according to claim 2 , further comprising:correcting the score for each of the positions in the heat map based on a relative positional relationship between the adjacent positions.4. The part recognition method according to claim 3 , wherein the correcting is performed by using a probability distribution model indicating a presence probability of one of the adjacent positions relative to the other of the adjacent positions in such a manner that a score corresponding to a position in which the presence probability is higher and in the heat map corresponding to the one of the adjacent positions is higher, a score in the heat map corresponding to the other of the adjacent ...
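The score-integration step above (sum the per-scale part probabilities at each position, then take the heat-map maximum) can be sketched as follows. Positions and probabilities are made-up toy values, not data from the patent.

```python
def position_scores(probs_per_scale):
    """Integrate per-scale part probabilities into one score per position."""
    scores = {}
    for pos_probs in probs_per_scale:          # one dict per patch size
        for pos, p in pos_probs.items():
            scores[pos] = scores.get(pos, 0.0) + p
    return scores

def recognize_part(scores):
    """Claim 2: the heat-map pixel with the maximum score is the part position."""
    return max(scores, key=scores.get)

small_patch = {(0, 0): 0.1, (2, 3): 0.7}
large_patch = {(0, 0): 0.2, (2, 3): 0.8}
scores = position_scores([small_patch, large_patch])
assert recognize_part(scores) == (2, 3)
```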

More
02-01-2020 publication date

Smart Door Lock System and Lock Control Method Thereof

Number: US20200005573A1
Assignee:

A smart door lock system provides an unlock authority of an electronically-controlled door lock mounted on a door to a remote computing device, thereby allowing the owner to remotely unlock the electronically-controlled door lock via the computing device rather than being physically present to perform the security check of the electronically-controlled door lock to open the door. Moreover, automatic transmission of the image data of the moving object in the field of view of a camera system, in response to determining that one or more criteria are satisfied, facilitates door surveillance and helps safeguard persons and property on the premises. 1. A smart door lock control method , comprising the steps of:detecting an object motion in the field of view of a camera system which comprises a first camera device positioned at a door and facing towards an outer side thereof, wherein the first camera is configured to capture image data of the moving object in the area outside the door in the field of view thereof;capturing, by the first camera device of the camera system in response to detecting an object motion in the field of view thereof, image data of the moving object;determining, by a door lock controller processing the image data of the moving object, that one or more criteria are satisfied, wherein the one or more criteria comprise determining that the objects contained in the image data include a human, or determining that the image data contains human face regions;outputting, in response to determining that one or more criteria are satisfied, at least a portion of image data of the moving object for transmission to a remote computing device;receiving, by the door lock controller from the remote computing device, an unlock control command configured to cause the door lock controller to unlock an electronically-controlled door lock, wherein the electronically-controlled door lock is installed to control the opening and closing thereof between an opened position and locked position ...

More
03-01-2019 publication date

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, PROGRAM, AND IMAGE PROCESSING SYSTEM

Number: US20190005613A1
Assignee:

A coordinate transformation matrix generation unit generates a coordinate transformation matrix corresponding to an image range obtained from a position and a direction of a viewpoint in a virtual space and the viewpoint. A scale transformation adjustment unit performs a scale transformation corresponding to a change of an image range with respect to an actual image by using a scale transformation by an optical zoom of an image pickup unit that generates the actual image, and generates a coordinate transformation matrix including the scale transformation. An image synthesis unit uses the coordinate transformation matrix generated by the coordinate transformation matrix generation unit to perform coordinate transformation of a virtual image, uses the coordinate transformation matrix generated by the scale transformation adjustment unit to perform coordinate transformation of the actual image, and synthesizes the virtual image and the actual image after the coordinate transformations. An image pickup control unit outputs a control signal corresponding to the scale transformation by the optical zoom to the image pickup unit, thereby causing the actual image that has been subjected to the scale transformation by the optical zoom to be generated. This makes it possible to maintain good resolution of the actual image in the synthesized image of the actual image and the virtual image. 1. An image processing apparatus , comprising:a scale transformation adjustment unit that performs a scale transformation in coordinate transformation to draw an actual image to be synthesized with a virtual image in a virtual space by using a scale transformation by an optical zoom of an image pickup unit that generates the actual image.2.
The image processing apparatus according to claim 1 , whereinthe scale transformation adjustment unit performs the scale transformation in the coordinate transformation by using the scale transformation by the optical zoom and a scale transformation not by ...
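A scale transformation in homogeneous coordinates, as used for drawing the actual image, can be written as a 3×3 matrix. The sketch below is a generic illustration of combining an optical zoom factor with a residual (non-optical) scale factor into one coordinate transformation; the specific factors are invented, not taken from the patent.

```python
def scale_matrix(s):
    """2-D homogeneous scale transformation matrix."""
    return [[s, 0, 0], [0, s, 0], [0, 0, 1]]

def apply(m, point):
    """Apply a 3x3 homogeneous matrix to a 2-D point."""
    x, y = point
    v = (x, y, 1)
    out = [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]
    return (out[0] / out[2], out[1] / out[2])

# An optical zoom factor of 2 and a residual digital factor of 1.25
# combine multiplicatively into the total scale transformation.
optical, digital = 2.0, 1.25
total = scale_matrix(optical * digital)
assert apply(total, (4, 2)) == (10.0, 5.0)
```

The point of the patent's arrangement is that as much of the total factor as possible is realized optically, so the sensor, not interpolation, supplies the extra resolution.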

More
03-01-2019 publication date

METHOD FOR SETTING A CAMERA

Number: US20190005681A1
Author: Blott Gregor, Rexilius Jan
Assignee:

A method for setting a camera, in particular of a monitoring device, comprising a recording of an image of a region to be monitored, an analysis of the recorded image to associate at least one image region of the image with an object, identification of the object on the basis of the acquired image region and ascertainment of a size of the object by access to stored size information of the identified object and a scaling of the image region associated with the object in the recorded image to scale the remaining acquired image and/or to determine a distance of the camera to the acquired object. 1. A method for setting a camera , the method comprising:recording of an image of a region to be monitored,analyzing the recorded image to associate at least one image region of the image with an object,identifying the object on the basis of the acquired image region,ascertaining a size of the object by accessing an item of stored size information of the identified object,scaling the image region associated with the object in the recorded image, andscaling the remaining acquired image and/or determining a distance of the camera to the acquired object.2. The method according to claim 1 , further comprising ascertainment of coordinates of a boundary of the image region, determination of a shape of the image region from the coordinates, comparison of the ascertained shape to an expected shape of a view of the object in the case of a frontal view, and determination of a surface normal of a surface of the acquired object with respect to a viewing direction of the camera from the comparison of the ascertained shape to the expected shape.3.
The method according to claim 1 , further comprising acquisition of at least one symbol in the image region associated with the object, association of at least one item of supplementary information from the at least one acquired symbol with the object, and use of the item of associated information for ...
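The distance determination above rests on the classic pinhole relation between an object's known physical size and its apparent size in pixels. A minimal sketch, with a focal length in pixels and a person's height assumed purely for illustration:

```python
def object_distance(focal_px, real_height_m, pixel_height):
    """Pinhole model: distance = f * H_real / h_pixels.

    The farther the object, the smaller its image; knowing H_real
    (the stored size information) lets the camera recover distance."""
    return focal_px * real_height_m / pixel_height

# A person of known height 1.75 m occupying 350 px, focal length 1000 px:
d = object_distance(1000, 1.75, 350)
assert d == 5.0   # metres
```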

More
03-01-2019 publication date

Alert volume normalization in a video surveillance system

Number: US20190005806A1
Assignee: Omni AI Inc

Techniques are disclosed for normalizing and publishing alerts using a behavioral recognition-based video surveillance system configured with an alert normalization module. Certain embodiments allow a user of the behavioral recognition system to provide the normalization module with a set of relative weights for alert types and a maximum publication value. Using these values, the normalization module evaluates an alert and determines whether its rareness value exceeds a threshold. Upon determining that the alert exceeds the threshold, the module normalizes and publishes the alert.
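One possible reading of that flow — weight each alert's rareness value by its type's relative weight, keep those above the threshold, and cap the count at the maximum publication value — can be sketched as below. The weights, rareness values, and OR-ing of steps are assumptions for illustration, not the patent's exact algorithm.

```python
def publish_alerts(alerts, weights, threshold, max_publish):
    """Scale rareness by per-type relative weight; publish alerts whose
    weighted rareness meets the threshold, capped at max_publish."""
    scored = [(a["type"], a["rareness"] * weights.get(a["type"], 1.0))
              for a in alerts]
    passing = [x for x in scored if x[1] >= threshold]
    passing.sort(key=lambda x: -x[1])      # rarest first
    return passing[:max_publish]

alerts = [{"type": "loiter", "rareness": 0.9},
          {"type": "speed", "rareness": 0.4},
          {"type": "loiter", "rareness": 0.2}]
out = publish_alerts(alerts, {"loiter": 1.0, "speed": 2.0}, 0.5, 2)
assert out == [("loiter", 0.9), ("speed", 0.8)]
```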

More
20-01-2022 publication date

Dark circle detection and evaluation method and apparatus

Number: US20220019765A1
Assignee: Honor Device Co Ltd

A dark circle detection and evaluation method includes: obtaining a to-be-processed image; extracting a dark circle region of interest from the to-be-processed image; performing color clustering on the dark circle region of interest to obtain n types of colors in the dark circle region of interest, where n is a positive integer; recognizing a dark circle region in the dark circle region of interest based on the n types of colors; and obtaining a dark circle evaluation result based on the dark circle region.
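The third step of the method is color clustering of the region of interest into n colors. A tiny 1-D k-means over pixel intensities illustrates the idea, with the darker of n = 2 clusters standing in for the dark-circle color; the ROI values and n are fabricated for the example.

```python
def kmeans_1d(values, k=2, iters=10):
    """Tiny 1-D k-means over pixel intensities (a stand-in for color clustering)."""
    # Seed centers by spreading over the sorted values.
    centers = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[i].append(v)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers

# Intensities from an under-eye ROI: the darker cluster marks the dark circle.
roi = [40, 45, 42, 180, 175, 185]
dark, light = sorted(kmeans_1d(roi, k=2))
assert dark < 60 < light
```

In the patent's pipeline the dark cluster's pixels would then delimit the dark circle region for the evaluation step.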

More
20-01-2022 publication date

SYSTEMS AND METHODS FOR GENERATING TYPOGRAPHICAL IMAGES OR VIDEOS

Number: US20220019830A1
Author: Mironica Ionut
Assignee:

This disclosure involves automatically generating a typographical image using an image and a text document. Aspects of the present disclosure include detecting a region of interest from the image and generating an object template from the detected region of interest. The object template defines the areas of the image, in which words of the text document are inserted. A text rendering protocol is executed to iteratively insert the words of the text document into the available locations of the object template. The typographical image is generated by rendering each word of the text document onto the available location assigned to the word. 1. A computer-implemented method comprising:receiving an image including an object;receiving text data;detecting a region of interest within the image using a trained neural network, the region of interest including the object;generating an object template from the region of interest, the object template defining an area of the object, in which the text data is to be inserted; andinserting the text data into the object template to visually form a contour of the object of the image.2. The computer-implemented method of claim 1 , wherein generating the object template further comprises:performing a contrast enhancement on the region of interest;detecting one or more edges within the region of interest; andapplying a morphological transformation on the one or more edges detected within the region of interest, the morphological transformation creating one or more defined areas aligned with the one or more edges detected within the region of interest, such that inserting the text data into the one or more defined areas visually forms the contour of the object.3. 
The computer-implemented method of claim 1 , wherein the text data includes one or more words, and wherein inserting the text data further comprises:generating an integral image of the object template;sorting the one or more words of the text data according to frequency; ...
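The morphological transformation in claim 2 thickens detected edges into defined areas wide enough to hold rendered words. A one-step 4-neighbour binary dilation on a toy 3×3 mask shows the operation (the mask is illustrative, not from the patent):

```python
def dilate(mask):
    """One step of 4-neighbour morphological dilation: a pixel becomes 1
    if it or any orthogonal neighbour is 1, thickening edge lines into areas."""
    rows, cols = len(mask), len(mask[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if any(0 <= r + dr < rows and 0 <= c + dc < cols
                   and mask[r + dr][c + dc]
                   for dr, dc in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))):
                out[r][c] = 1
    return out

edge = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
area = dilate(edge)
assert area == [[0, 1, 0], [1, 1, 1], [0, 1, 0]]
```

Words inserted into the dilated areas then trace the object's contour.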

More
20-01-2022 publication date

Characterization of a ball game racket string pattern

Number: US20220019831A1
Assignee: Head Technology GmbH

The present invention relates to a method for characterizing a string pattern of a ball game racket frame as well as to the representation of a string pattern image of a strung ball game racket frame.

More
20-01-2022 publication date

SYSTEMS, METHODS, AND STORAGE MEDIA FOR TRAINING A MACHINE LEARNING MODEL

Number: US20220019853A1
Assignee: Vizit Labs, Inc.

Systems, methods, and storage media for training a machine learning model are disclosed. Exemplary implementations may select a set of training images for a machine learning model, extract object features from each training image to generate an object tensor for each training image, extract stylistic features from each training image to generate a stylistic feature tensor for each training image, determine an engagement metric for each training image, and train a neural network comprising a plurality of nodes arranged in a plurality of sequential layers. 1. A method , comprising:accessing, by one or more processors, a web-based property over a network, the web-based property containing a plurality of images;extracting, by the one or more processors, an image and image metadata associated with the image from the web-based property;determining, by the one or more processors, a target audience for the web-based property;identifying, by the one or more processors, a training data set that corresponds to the determined target audience;aggregating, by the one or more processors, the image and the image metadata into the training data set based on the target audience for the web-based property; andtraining, by the one or more processors, a machine learning model with the training data set with the aggregated image and image metadata.2. The method of claim 1 , wherein determining the target audience for the web-based property comprises:identifying, by the one or more processors, demographic, psychographic, or behavioral characteristics of users that visit the set of web-based properties; anddetermining, by the one or more processors, the target audience based on the identified demographic, psychographic, or behavioral characteristics.3. The method of claim 2 , further comprising:identifying, by the one or more processors, a set of web-based properties that correspond to the target audience for the web-based properties of the set.4. The method of claim 3 , wherein ...

More
10-01-2019 publication date

Image Capture and Identification System and Process

Number: US20190008684A1
Assignee:

A digital image depicting a digital representation of a scene is captured by an image sensor of a vehicle. An identification system recognizes a real-world object from the digital image as a target object based on derived image characteristics and identifies object information about the target object based on the recognition. The identification system provides the object information to the vehicle data system of the vehicle so that the vehicle data system can execute a control function of the vehicle based on the received object information. 1. A method for vehicle-based object recognition , comprising:obtaining, by an identification system, image data captured by an image sensor of a vehicle, the image data containing a digital representation of a real-world object within a scene;deriving, by the identification system, image characteristics of the real-world object from the digital representation of the real-world object in the image data;recognizing, by the identification system, the real-world object as a target object based on the derived image characteristics;identifying, by the identification system, object information about the target object based on the recognition;providing, by the identification system and to a vehicle data system of the vehicle, the object information; andexecuting, by the vehicle data system, a control function of the vehicle based on the object information.2. The method of claim 1 , wherein the control function comprises at least one of guidance, navigation or maneuvering of the vehicle relative to the real-world object.3. The method of claim 1 , wherein the control function comprises planning, by the vehicle data system, a trajectory relative to the real-world object.4. The method of claim 1 , wherein the object information comprises at least one of location or orientation of the real-world object relative to the vehicle.5. The method of claim 1 , wherein the real-world object is a street sign.6.
The method of claim ...

More
09-01-2020 publication date

IMAGE CAPTURE AND IDENTIFICATION SYSTEM AND PROCESS

Number: US20200008978A1
Assignee:

A computing platform that analyzes a captured video stream to identify a document depicted in the video stream, validates identification information corresponding to the document to display an information address associated with the document, and initiates a transaction based on the validation of the identification information associated with the document. 1. A method of conducting a financial transaction , the method comprising:analyzing, via at least one computing device processor, a video stream;identifying, via the at least one computing device processor, a document in the video stream;validating, via the at least one computing device processor, identification information pertinent to the document based on the video stream;displaying, via the at least one computing device processor, an information address where the information address is related to the document; andinitiating, via the at least one computing device processor and based on validation of the identification information, a financial transaction related to the document.2. The method of claim 1 , further comprising capturing, by a mobile device, the video stream.3. The method of claim 1 , wherein identifying the document comprises automatically capturing an image of the document from the video stream.4. The method of claim 3 , wherein identifying the document includes recognizing and decoding symbols according to symbol type based on location in the image.5. The method of claim 1 , further comprising displaying a visual indicator with the video stream.6. The method of claim 1 , wherein the information address is associated with the financial transaction.7. The method of claim 1 , wherein the information address is associated with a bank account.8. The method of claim 1 , wherein the document is related to an individual who is a user of a mobile device.9.
The method of claim 1 , further comprising allowing a user to perform ongoing interactions related to the financial transaction.10. ...

More
11-01-2018 publication date

Object Information Derived from Object Images

Number: US20180011877A1
Assignee: NANT HOLDINGS IP LLC

An object is recognized from image data as a target object and linked to a user based on an interaction by the user, information about the target object is obtained and a purchase of the target object is initiated.

More
14-01-2016 publication date

SYSTEMS, METHODS, AND DEVICES FOR IMAGE MATCHING AND OBJECT RECOGNITION IN IMAGES USING TEMPLATE IMAGE CLASSIFIERS

Number: US20160012317A1
Assignee:

An image matching technique locates feature points in a template image such as a logo and then does the same in a test image. Classifiers are trained for multiple template images and the classifiers are used to evaluate a match between a template image and a test image. 1. A computer-implemented method , implemented by hardware in combination with software , the method comprising:for each particular template image of a plurality of template images:(A) determining a set of feature points associated with a template image;(B) for each particular test image of a plurality of test images:(B)(1) attempting to match a particular set of feature points associated with said particular test image with the set of feature points associated with the template image;(C) for at least some test images of said plurality of test images: evaluating said determining in (B) to determine (i) a first set of true positive matches in said test images in which at least a portion of said template image is considered to be in said particular test image, and (ii) a second set of non-matched images in said test images in which a portion of said template image is not considered to be in said particular test image;(D) training a classifier using (i) said first set of true positive matches, and (ii) said second set of non-matched images; and(E) associating said classifier with said template image.2. The method of wherein the set of feature points associated with the template image are based on multiple versions of the template image.3. The method of wherein said classifier trained in (D) is a binary classifier.4. The method of wherein said classifier trained in (D) comprises a support vector machine (SVM).5. The method of wherein the classifier trained in (D) comprises a linear support vector machine.6. The method of wherein the classifier trained in (D) comprises a non-linear support vector machine.7.
The method of wherein act (B) further comprises:(B)(2) based on said attempting to match in (B)(1), determining ...
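One way to picture steps (B)–(D): compute a match ratio between template and test feature points, then train a per-template decision boundary separating true-positive ratios from non-matched ratios. In this sketch a simple midpoint threshold stands in for the SVM of claims 4–6, and the 1-D "feature points" and ratios are purely illustrative.

```python
def match_ratio(template_feats, test_feats, tol=1.0):
    """Fraction of template feature points with a close match in the test image."""
    hits = sum(any(abs(t - s) <= tol for s in test_feats)
               for t in template_feats)
    return hits / len(template_feats)

def train_threshold(pos_ratios, neg_ratios):
    """Stand-in for the per-template classifier: the midpoint between the
    weakest true-positive ratio and the strongest non-match ratio."""
    return (min(pos_ratios) + max(neg_ratios)) / 2

template = [1.0, 5.0, 9.0]        # 1-D descriptors for illustration
positives = [0.9, 1.0]            # ratios observed on true matches
negatives = [0.1, 0.3]            # ratios observed on non-matches
thr = train_threshold(positives, negatives)
assert match_ratio(template, [1.1, 5.2, 8.8]) > thr   # classified as a match
assert match_ratio(template, [20.0, 30.0]) < thr      # classified as no match
```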

More
11-01-2018 publication date

Methods and Systems for Detecting Persons in a Smart Home Environment

Number: US20180012077A1
Assignee:

The various implementations described herein include methods, devices, and systems for detecting motion and persons. In one aspect, a method is performed at a smart home system that includes a video camera, a server system, and a client device. The video camera captures video and audio, and wirelessly communicates, via the server system, the captured data to the client device. The server system: (1) receives and stores the captured data from the video camera; (2) determines whether an event has occurred, including detected motion; (3) in accordance with a determination that the event has occurred, identifies video and audio corresponding to the event; and (4) classifies the event. The client device receives information indicative of the identified events, displays a user interface for reviewing the video and audio stored by the remote server system, and displays the at least one classification for the event. 1. A smart home system , comprising: an image sensor having a field of view and being configured to capture video within the field of view;', 'a microphone configured to capture audio within proximity of the video camera; and', 'a wireless transceiver configured to wirelessly communicate, via a remote server system, the captured video and audio to a remote client device;, 'a video camera for use in a smart home environment, the video camera including receive and store the captured video and audio from the video camera;', 'determine whether an event has occurred, including determining whether motion is detected in the received video;', 'in accordance with a determination that the event has occurred, identify video and audio corresponding to the event; and', 'classify the event into at least one of a plurality of classifications, the classifications including motion detection and person detection; and, 'the remote server system including processors and memory storing first programs executable by the processors, the first programs including a first application ...

More
14-01-2016 publication date

MULTI-CUE OBJECT DETECTION AND ANALYSIS

Number: US20160012606A1
Assignee:

Foreground objects of interest are distinguished from a background model by dividing a region of interest of a video data image into a grid array of individual cells. Each of the cells is labeled as foreground if accumulated edge energy within the cell meets an edge energy threshold, or if color intensities for different colors within each cell differ by a color intensity differential threshold, or as a function of combinations of said determinations. 1. A computer-implemented method for distinguishing foreground objects of interest from a background model , the method comprising executing on a processing unit the steps of:dividing a region of interest of a video data image into a grid array of a plurality of individual cells;acquiring frame image data for each of the cells;determining a first background indication for each of the cells that have determined color intensities that do not exceed others of the determined color intensities for the cell by a color intensity differential threshold;determining a first foreground indication for each of the cells that have one of the determined color intensities greater than another of the determined color intensities for that cell by the color intensity differential threshold;determining a second background indication for each of the cells that have an accumulated energy of edges detected within the cells that is less than an edge energy threshold;determining a second foreground indication for each of the cells that have an accumulated energy of edges detected within the cells that meets or exceeds the edge energy threshold;labelling as foreground or background each of the cells in response to applying a combination rule to the foreground indications and the background indications for the cells; andusing the frame image data from the cells labeled as foreground to define a foreground object.2.
The method of claim 1 , further comprising:integrating computer-readable program code into a computer system comprising the processing ...
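The per-cell labeling combines the two cues described above. A sketch with an OR combination rule (the claims allow other combination rules) and assumed threshold values:

```python
def label_cell(edge_energy, color_intensities,
               edge_thr=100.0, color_thr=30.0):
    """A cell is foreground if its accumulated edge energy meets the edge
    energy threshold OR one colour channel exceeds another by the
    colour intensity differential threshold (OR is one possible rule)."""
    edge_fg = edge_energy >= edge_thr
    lo, hi = min(color_intensities), max(color_intensities)
    color_fg = (hi - lo) >= color_thr
    return "foreground" if (edge_fg or color_fg) else "background"

assert label_cell(150.0, (80, 82, 79)) == "foreground"   # strong edges
assert label_cell(20.0, (200, 120, 90)) == "foreground"  # colour differential
assert label_cell(20.0, (80, 82, 79)) == "background"    # neither cue fires
```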

More
10-01-2019 publication date

DISPLAY APPARATUS AND DISPLAY METHOD

Number: US20190012529A1
Author: WANG Zifeng
Assignee: BOE Technology Group Co., Ltd.

A display apparatus and a display device are provided. The display apparatus includes: a first image acquisition device, configured to acquire a target image of a target region in the case that a human body is in the target region; an image processing device, configured to identify a body physical feature of the human body according to a human body image in the target image; an image generating device, configured to generate, according to the body physical feature, a virtual human body image corresponding to the human body and conforming to a target age; and a display device, configured to display the virtual human body image. A region displaying the virtual human body image is a virtual human body display region. 1. A display apparatus , comprising:a first image acquisition device, configured to acquire a target image of a target region in the case that a human body is in the target region;an image processing device, configured to identify a body physical feature of the human body according to a human body image in the target image;an image generating device, configured to generate, according to the body physical feature, a virtual human body image corresponding to the human body and conforming to a target age; anda display device, configured to display the virtual human body image,wherein, a region displaying the virtual human body image is a virtual human body display region.2. The display apparatus according to claim 1 , wherein the image processing device is further configured to identify, according to the target image, a region occupied by the human body image in the target image; the region occupied by the human body image corresponds to a human body corresponding region of the display device; and the virtual human body display region is located in the human body corresponding region.3.
The display apparatus according to claim 2 , wherein the virtual human body display region changes in real time according to change of ...

More
10-01-2019 publication date

Image Processing Apparatus, Image Processing Method And Image Processing Program

Number: US20190012556A1
Author: Eyama Tamaki
Assignee:

An image processing apparatus includes: an image acquisition part that obtains an image including a captured target object; a first recognition part that extracts a feature related to the target object in the image and discriminates a category related to the target object based on a result of the feature extraction; a reliability acquisition part that obtains reliability of a discrimination result of the first recognition part with reference to data indicating reliability of the discrimination result stored in association with a candidate category classified by the first recognition part; a second recognition part that executes discrimination processing in accordance with the discrimination result of the first recognition part, extracts a feature related to the target object in the image, and discriminates the category related to the target object based on the result of the feature extraction and the reliability of the discrimination result of the first recognition part. 1. An image processing apparatus comprising:an image acquisition part that obtains an image including a captured target object;a first recognition part that extracts a feature related to the target object in the image and discriminates a category related to the target object on the basis of a result of the feature extraction;a reliability acquisition part that obtains reliability of a discrimination result of the first recognition part with reference to data indicating reliability of the discrimination result stored in association with a candidate category classified by the first recognition part;a second recognition part that executes discrimination processing in accordance with the discrimination result of the first recognition part, extracts a feature related to the target object in the image, and discriminates the category related to the target object on the basis of the result of the feature extraction and the reliability of the discrimination result of the first recognition part.2.
The image ...

More
10-01-2019 publication date

Object ingestion through canonical shapes, systems and methods

Number: US20190012557A1
Assignee: NANT HOLDINGS IP LLC

An object recognition ingestion system is presented. The object ingestion system captures image data of objects, possibly in an uncontrolled setting. The image data is analyzed to determine if one or more a priori known canonical shape objects match the object represented in the image data. The canonical shape object also includes one or more reference PoVs indicating perspectives from which to analyze objects having the corresponding shape. An object ingestion engine combines the canonical shape object along with the image data to create a model of the object. The engine generates a desirable set of model PoVs from the reference PoVs, and then generates recognition descriptors from each of the model PoVs. The descriptors, image data, model PoVs, or other contextually relevant information are combined into key frame bundles having sufficient information to allow other computing devices to recognize the object at a later time.

More
14-01-2021 publication date

Object Detection Model Training Method and Apparatus, and Device

Number: US20210012136A1
Assignee:

An object detection model training method performed by a computing device, includes obtaining a system parameter including at least one of a receptive field of a backbone network, a size of a training image, a size of a to-be-detected object in the training image, a training computing capability, or a complexity of the to-be-detected object, determining a configuration parameter based on the system parameter, establishing a variable convolution network based on the configuration parameter and a feature map of the backbone network, recognizing the to-be-detected object based on a feature of the variable convolution network, and training the backbone network and the variable convolution network, where a convolution core used by any variable convolution layer may be offset in any direction in a process of performing convolution. 1. An object detection model training method implemented by a computing device , wherein the method comprises:obtaining a system parameter comprising at least one of a receptive field of a backbone network of an object detection model, a first size of a training image, a second size of a to-be-detected object in the training image, a training computing capability, or a complexity of the to-be-detected object;determining a configuration parameter of i variable convolution networks based on the system parameter, wherein the configuration parameter comprises at least one of a quantity of the i variable convolution networks, a quantity L_i of variable convolution layers comprised in an i-th variable convolution network, a sliding span of a first convolution core of the i-th variable convolution network, a maximum offset distance, or a third size of the first convolution core, and wherein both i and L_i are integers greater than zero;obtaining the training image;establishing the backbone network based on the training image;establishing the i variable convolution networks based on a feature map of the backbone network and the
...
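The "convolution core that may be offset in any direction" describes a deformable-convolution-style layer. A minimal sketch in NumPy, assuming integer per-tap offsets (in the patented method the offsets would be learned during training and bounded by the "maximum offset distance" configuration parameter; real deformable convolutions also use fractional offsets with bilinear interpolation):

```python
import numpy as np

def deformable_conv_point(img, kernel, offsets, y, x):
    """Apply a k x k kernel at (y, x), shifting each tap by its own
    (dy, dx) offset before sampling -- the 'offset in any direction'
    behaviour of a variable convolution layer, in toy integer form."""
    k = kernel.shape[0]
    r = k // 2
    acc = 0.0
    for i in range(k):
        for j in range(k):
            dy, dx = offsets[i, j]
            yy = int(np.clip(y + i - r + dy, 0, img.shape[0] - 1))
            xx = int(np.clip(x + j - r + dx, 0, img.shape[1] - 1))
            acc += kernel[i, j] * img[yy, xx]
    return acc

img = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0          # simple averaging kernel
zero_off = np.zeros((3, 3, 2), dtype=int)
# With all-zero offsets this degenerates to an ordinary convolution:
assert np.isclose(deformable_conv_point(img, kernel, zero_off, 2, 2),
                  img[1:4, 1:4].mean())
```

Offsetting every tap lets the effective receptive field deform around irregularly shaped objects, which is why the configuration (kernel size, layer count, offset bound) is derived from the receptive field and object size.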

Publication date: 14-01-2021

INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM

Number: US20210012300A1
Assignee: NEC Corporation

Provided is an information processing system including: a detection means for, based on a shape of an object carried in by a customer, detecting a carrying-in form of a product to be purchased by the customer; and a notification information generation means for generating notification information used for providing a notification in accordance with the carrying-in form to the customer. 1. An information processing system comprising: a detection unit that, based on a shape of an object carried in by a customer, detects a carrying-in form of a product to be purchased by the customer; and a notification information generation unit that generates notification information used for providing a notification in accordance with the carrying-in form to the customer. 2. The information processing system according to claim 1, wherein the notification is to urge the customer to move the product to a place in accordance with the carrying-in form. 3. The information processing system according to claim 1, wherein, when the carrying-in form is a form of a cart loaded with the product, the notification information generation unit generates notification information used for urging the customer to move the cart to a reading region of an identification information acquisition apparatus used for product registration. 4. The information processing system according to claim 1, wherein, when the carrying-in form is a form of a basket or a bag containing the product, the notification information generation unit generates notification information used for urging the customer to move the basket or the bag onto a reading stage having an identification information acquisition apparatus used for product registration. 5.
The information processing system according to claim 1, wherein, when the carrying-in form is a form of the product alone, the notification information generation unit generates notification information used for urging the customer to place the product on a ...

Publication date: 09-01-2020

Object Information Derived From Object Images

Number: US20200012679A1
Assignee:

An image-based transaction system includes a mobile device with an image sensor that is programmed to capture, via the image sensor, digital images of a scene. The mobile device identifies a document using image characteristics from the captured images and acquires an image of at least a part of the document, and then identifies symbols in the image based on locations within the image of the document. The symbols can include alphanumeric symbols. The mobile device processes the symbols according to their type to obtain an address related to the document and the symbols and initiates a transaction associated with the identified document. 1. An image-based transaction system, comprising: a mobile device having an image sensor, wherein the mobile device, when software in the mobile device is executed, is caused to execute operations comprising: digitally capturing images of a scene via the image sensor; identifying an object using image characteristics from the digitally captured images; automatically acquiring an image of at least part of the identified object in the scene; identifying symbols, including alphanumeric symbols, in the image based on locations within the image; processing the symbols according to their symbol type; obtaining an address related to the identified object and the processed symbols; and initiating a transaction associated with the identified object via a server, wherein the transaction is related to the address. 2. The system of claim 1, wherein the system further comprises the server. 3. The system of claim 2, wherein initiating the transaction occurs over a network. 4. The system of claim 1, wherein the mobile device comprises a portable telephone. 5. The system of claim 1, wherein the address comprises an information address. 6. The system of claim 1, wherein the address comprises a network address. 7. The system of claim 1, wherein the address references pertinent information related to the identified object. 8.
The ...
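The "processing the symbols according to their symbol type" step can be pictured as ordering OCR output by position and scanning the result for something address-like. A toy sketch; the symbol layout, field names, and regex are invented for illustration, since the patent does not disclose the actual rules:

```python
import re

def extract_address(symbols):
    """Join OCR'd fragments in reading order (top-to-bottom, then
    left-to-right) and search for an address-like token -- here a URL
    or an e-mail address stands in for the patent's 'address'."""
    text = " ".join(s["text"] for s in sorted(symbols, key=lambda s: (s["y"], s["x"])))
    m = re.search(r"(https?://\S+|[\w.]+@[\w.]+\.\w+)", text)
    return m.group(1) if m else None

symbols = [
    {"text": "Invoice", "x": 0, "y": 0},
    {"text": "pay", "x": 0, "y": 10},
    {"text": "https://pay.example.com/inv/123", "x": 40, "y": 10},
]
assert extract_address(symbols) == "https://pay.example.com/inv/123"
```

The extracted address is what the transaction step would then be initiated against.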

Publication date: 14-01-2021

CAMERA SYSTEMS USING FILTERS AND EXPOSURE TIMES TO DETECT FLICKERING ILLUMINATED OBJECTS

Number: US20210012521A1
Assignee:

The technology relates to camera systems for vehicles having an autonomous driving mode. An example system includes a first camera mounted on a vehicle in order to capture images of the vehicle's environment. The first camera has a first exposure time and is without an ND filter. The system also includes a second camera mounted on the vehicle in order to capture images of the vehicle's environment and having an ND filter. The system also includes one or more processors configured to capture images using the first camera and the first exposure time, capture images using the second camera and the second exposure time, use the images captured using the second camera to identify illuminated objects, use the images captured using the first camera to identify the locations of objects, and use the identified illuminated objects and identified locations of objects to control the vehicle in an autonomous driving mode. 1. A vehicle comprising: a first camera mounted in order to capture images of an environment of the vehicle, the first camera having a first exposure time and being without an ND filter; a second camera mounted in order to capture images of the environment, the second camera having a second exposure time that is greater than or equal to the first exposure time and having an ND filter; and one or more processors configured to: capture images using the first camera and the first exposure time; capture images using the second camera and the second exposure time; use the images captured using the second camera to identify light from at least one light emitting diode (LED); use the images captured using the first camera to identify a location of an object including the at least one LED; and use the identified LED and the identified location of the object to control the vehicle in an autonomous driving mode. 2. The vehicle of claim 1, wherein the first camera and the second camera each include a near infrared filter. 3.
The vehicle of claim 1, wherein the second exposure ...
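The role of the ND filter can be made concrete: a PWM-driven LED is guaranteed to be "on" at some point within any exposure window that spans a full flicker period, and the ND filter is what makes such a long exposure usable without saturating the image. A small sketch; the flicker figures are common assumptions, not values from the patent:

```python
def min_exposure_for_flicker(flicker_hz):
    """An exposure of at least one full PWM period captures the LED's
    on-phase regardless of where the exposure starts."""
    return 1.0 / flicker_hz

def led_visible(exposure, period, duty, phase):
    """True if the window [phase, phase + exposure) overlaps the LED's
    on-interval [0, duty * period) of some PWM cycle."""
    if exposure >= period:
        return True
    start = phase % period
    end = start + exposure
    on_end = duty * period
    # Either we start inside the on-phase, or we wrap into the next cycle.
    return start < on_end or end > period

# LED traffic signals commonly flicker around 100 Hz (assumed figure):
assert min_exposure_for_flicker(100.0) == 0.01  # 10 ms window needed
```

A short exposure that happens to start in the off-phase misses the LED entirely, which is why the short-exposure camera alone cannot reliably see flickering lights.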

Publication date: 09-01-2020

ADAPTING TO APPEARANCE VARIATIONS WHEN TRACKING A TARGET OBJECT IN VIDEO SEQUENCE

Number: US20200012865A1
Assignee:

A method of tracking a position of a target object in a video sequence includes identifying the target object in a reference frame. A generic mapping is applied to the target object being tracked. The generic mapping is generated by learning possible appearance variations of a generic object. The method also includes tracking the position of the target object in subsequent frames of the video sequence by determining whether an output of the generic mapping of the target object matches an output of the generic mapping of a candidate object. 1. A device for tracking a target object in a sequence of images captured by a vehicle-mounted camera, comprising: a memory; and a processor, coupled to the memory, configured to: obtain a generic mapping of an object, wherein the generic mapping is based on appearance variations of the object; obtain an image of the target object from the sequence of images captured by the vehicle-mounted camera; and identify the target object in a subsequent image by determining that the features of the generic mapping of the object match features of the target object in the subsequent image. 2. The device of claim 1, wherein the processor is further configured to: obtain an image of the object in a sequence of reference frames; and generate the generic mapping of the object by learning appearance variations of the object. 3. The device of claim 2, wherein learning appearance variations of the object comprises offline learning based on videos or images in a repository. 4. The device of claim 1, wherein the object corresponds to one or more of a vehicle wheel, a vehicle windshield, a traffic light, or a traffic sign. 5.
The device of claim 1, wherein determining whether features of the generic mapping of the object match features of the target object in the subsequent image comprises determining whether features of the generic mapping of the object match features of a candidate box of a plurality of candidate boxes ...
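The matching step, "determining that the features of the generic mapping of the object match features of the target object", can be sketched as a similarity comparison over mapped feature vectors; cosine similarity, the 0.8 threshold, and the toy vectors are stand-ins, since the learned mapping itself is not specified:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_target(target_feat, candidate_feats, threshold=0.8):
    """Return the index of the candidate box whose mapped features best
    match the target's, or None if no candidate is similar enough."""
    scores = [cosine(target_feat, c) for c in candidate_feats]
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None

target = np.array([1.0, 0.0, 1.0])
cands = [np.array([0.0, 1.0, 0.0]), np.array([1.0, 0.1, 0.9])]
assert match_target(target, cands) == 1
```

Because the mapping was trained on appearance variations of a generic object, the same comparison tolerates lighting and pose changes of the specific tracked instance.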

Publication date: 09-01-2020

SYSTEM AND METHOD OF VIDEO CONTENT FILTERING

Number: US20200012866A1
Assignee:

An input video sequence from a camera is filtered by a process that comprises detecting temporal tracks of moving image parts from the input video sequence and assigning activity scores to temporal segments of the tracks, using respective predefined track dependent activity score functions for a plurality of different activity types. Based on this, event scores are computed as a function of time. This computation is controlled by a definition of a temporal sequence of activity types or compound activity types for an event type. Successive intermediate scores are computed, each as a function of time for a respective activity type or compound activity type in the temporal sequence. The successive intermediate scores for each respective activity type or compound activity type are computed from a combination of the intermediate score for a preceding activity type or compound activity type in the temporal sequence at a preceding time and activity scores that were assigned to segments of the tracks after the preceding time, for the activity type or activity types defined by the respective activity type or compound activity type in the temporal sequence. One of the computed event scores is selected for a selected time. The computation of the selected event score is traced back to identify intermediate scores that were used to compute the selected one of the event scores and to identify segments of the tracks for which the assigned activity scores were used to compute the identified intermediate scores. An output video sequence and/or video image is generated that selectively includes the image parts associated with the selected segments. 1. A method of filtering an input video sequence captured by a camera, the method comprising: detecting temporal tracks of moving image parts from the input video sequence; assigning activity scores to temporal segments of the tracks, using respective predefined track dependent activity score functions for a ...
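The chained intermediate-score computation is naturally a dynamic program over the temporal sequence of activity types: each stage may only extend event prefixes that finished before its segment starts. A simplified sketch; max-sum scoring and the toy segment data are assumptions, since the patent leaves the combination rule abstract:

```python
def event_scores(activity_seq, segment_scores):
    """segment_scores[a] maps activity type a to (start, end, score)
    track segments; an event is the activity types of activity_seq in
    temporal order. Returns the best achievable total event score."""
    INF = float("-inf")
    prev = {0: 0.0}  # end time -> best intermediate score so far
    for act in activity_seq:
        cur = {}
        for (s, e, sc) in segment_scores[act]:
            # Only extend prefixes that ended at or before this segment starts.
            base = max((v for t, v in prev.items() if t <= s), default=INF)
            if base > INF:
                cur[e] = max(cur.get(e, INF), base + sc)
        prev = cur
    return max(prev.values(), default=INF)

segs = {
    "walk": [(0, 5, 1.0), (2, 6, 2.0)],
    "enter": [(5, 8, 3.0), (1, 3, 9.0)],  # the 9.0 segment ends too early to follow 'walk'
}
assert event_scores(["walk", "enter"], segs) == 4.0
```

Tracing back which `base` and which segment produced the winning score is what lets the method assemble the output video from exactly the contributing image parts.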

Publication date: 09-01-2020

METHOD FOR 2D FEATURE TRACKING BY CASCADED MACHINE LEARNING AND VISUAL TRACKING

Number: US20200012882A1
Assignee: SONY CORPORATION

A method for 2D feature tracking by cascaded machine learning and visual tracking comprises: applying a machine learning technique (MLT) that accepts as a first MLT input first and second 2D images, the MLT operating on the images to provide initial estimates of a start point for a feature in the first image and a displacement of the feature in the second image relative to the first image; applying a visual tracking technique (VT) that accepts as a first VT input the initial estimates of the start point and the displacement, and that accepts as a second VT input the two 2D images, processing the first and second inputs to provide refined estimates of the start point and the displacement; and displaying the refined estimates in an output image. 1. A method for 2D feature tracking by cascaded machine learning and visual tracking, the method comprising: applying a machine learning technique (MLT) that accepts as a first MLT input first and second 2D images, the MLT operating on the images to provide initial estimates of a start point for a feature in the first 2D image and a displacement of the feature in the second 2D image relative to the first image; applying a visual tracking technique (VT) that accepts as a first VT input the initial estimates of the start point and the displacement, and that accepts as a second VT input the first and second 2D images, processing the first and second inputs to provide refined estimates of the start point and the displacement; and displaying the refined estimates in an output image. 2. The method of claim 1, further comprising, before applying the MLT: extracting the first and second images as frames from a camera or video stream; and temporarily storing the extracted first and second images in first and second image buffers. 3.
The method of claim 2, further comprising, before applying the MLT: applying a 2D feature extraction technique to the first 2D image to identify the feature; and providing information on the identified ...
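The MLT-then-VT cascade can be sketched as a coarse displacement estimate refined by a local patch search. Here SSD template matching stands in for the unspecified visual tracker, and the coarse estimate is simply passed in (in the patent it would come from the learned model):

```python
import numpy as np

def refine_displacement(img1, img2, start, coarse_disp, patch=3, search=2):
    """Refine the MLT's coarse (dy, dx) estimate by searching a small
    window around it for the best sum-of-squared-differences match of
    the patch centred at 'start' in img1."""
    y, x = start
    r = patch // 2
    tmpl = img1[y - r:y + r + 1, x - r:x + r + 1]
    best, best_err = coarse_disp, float("inf")
    for dy in range(coarse_disp[0] - search, coarse_disp[0] + search + 1):
        for dx in range(coarse_disp[1] - search, coarse_disp[1] + search + 1):
            yy, xx = y + dy, x + dx
            cand = img2[yy - r:yy + r + 1, xx - r:xx + r + 1]
            if cand.shape != tmpl.shape:   # skip out-of-bounds candidates
                continue
            err = float(((cand - tmpl) ** 2).sum())
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

img1 = np.zeros((10, 10))
img1[4:7, 4:7] = np.arange(9, dtype=float).reshape(3, 3)
img2 = np.zeros((10, 10))
img2[5:8, 6:9] = np.arange(9, dtype=float).reshape(3, 3)  # feature moved by (+1, +2)
assert refine_displacement(img1, img2, (5, 5), (0, 1)) == (1, 2)
```

The cascade keeps the VT search window small because the MLT has already landed near the true displacement, which is the efficiency argument behind chaining the two stages.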

Publication date: 09-01-2020

CLASSIFICATION BASED ON ANNOTATION INFORMATION

Number: US20200012904A1
Assignee:

Systems and techniques for classification based on annotation information are presented. In one example, a system trains a convolutional neural network based on training data and a plurality of images. The plurality of images is associated with a plurality of masks, a plurality of image level labels, and/or a bounding box. The system also generates a first loss function based on the plurality of masks, a second loss function based on the plurality of image level labels, and a third loss function based on the bounding box. Furthermore, the system generates a fourth loss function based on the first loss function, the second loss function and the third loss function, where the fourth loss function is iteratively back propagated to tune parameters of the convolutional neural network. The system also predicts a classification label for an input image based on the convolutional neural network. 1. A machine learning system, comprising: a memory that stores computer executable components; a training component that trains a convolutional neural network based on training data and a plurality of images, wherein the training data is associated with a plurality of patients from at least one imaging device, and wherein the plurality of images is associated with a plurality of masks from a plurality of objects, or a plurality of image level labels for the plurality of images, or a bounding box that links a region of interest to a class label; a first loss function component that generates a first loss function based on the plurality of masks; a second loss function component that generates a second loss function based on the plurality of image level labels for the plurality of images; a third loss function component that generates a third loss function based on the bounding box that links a region of interest to the class label; a fourth loss function component that generates a fourth loss function based on the first loss function, the second loss function and the ...
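The "fourth loss function" is built from the other three; the abstract does not state how they are combined, so a weighted sum (the usual choice for multi-task training) is assumed here:

```python
def combined_loss(mask_loss, label_loss, box_loss, w=(1.0, 1.0, 1.0)):
    """Combine the mask, image-level-label and bounding-box losses into
    the single scalar that gets back-propagated each iteration. The
    weights w are an assumption, not values from the patent."""
    return w[0] * mask_loss + w[1] * label_loss + w[2] * box_loss

assert combined_loss(0.5, 0.25, 0.25) == 1.0
```

Per-annotation weights let images that carry only some annotation types (mask only, label only, box only) still contribute: the missing terms are simply weighted or zeroed out.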

Publication date: 09-01-2020

APPARATUS AND METHOD FOR RECOGNIZING OBJECT IN IMAGE

Number: US20200013190A1
Assignee: LG ELECTRONICS INC.

An apparatus and a method for recognizing an object in an image are disclosed. The method for recognizing an object in an image may include: executing a deep neural network algorithm which has been trained in advance to recognize an object in an image, on a first image inputted from a camera module; finding an amount of change in image between the first image and a second image inputted from the camera module after the first image according to a predetermined cycle; and in response that an object has been detected from the first image as a result of executing the deep neural network algorithm, tracking the position of the detected object from the second image, based on the found amount of change in image. 1. A method for recognizing an object in an image, the method comprising: executing a deep neural network (DNN) algorithm which has been trained in advance to recognize an object in an image, on a first image inputted from a camera module; finding an amount of change in image between the first image and a second image inputted from the camera module after the first image according to a predetermined cycle; and in response that an object has been detected from the first image as a result of executing the deep neural network algorithm, tracking the position of the detected object from the second image, based on the found amount of change in image. 2. The method of claim 1, further comprising: after the finding an amount of change in image, determining the reliability of the result of finding the amount of change in image, wherein the tracking the position of the detected object comprises: in response that the result of determining the reliability indicates that the reliability of the result of finding the amount of change in image is lower than a predetermined threshold, estimating the position of the object based on the result of finding the amount of change in image, and setting a first region of interest in the second image to include the estimated position of the ...
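The claimed alternation, run the DNN on a predetermined cycle and propagate the detected position by the inter-frame change in between, can be sketched as follows (a global per-frame displacement stands in for the "amount of change in image"; the cycle length and box format are invented):

```python
def track_positions(frames_disp, detections, cycle=5):
    """frames_disp[k] is the (dy, dx) image change into frame k;
    detections[k] is the DNN's position on frame k, available only on
    frames where the (expensive) detector is actually run."""
    boxes = []
    box = None
    for k, (dy, dx) in enumerate(frames_disp):
        if k % cycle == 0 and k in detections:
            box = detections[k]               # fresh DNN detection
        elif box is not None:
            box = (box[0] + dy, box[1] + dx)  # propagate by image change
        boxes.append(box)
    return boxes

disp = [(0, 0), (1, 1), (0, 1), (1, 0), (0, 0), (0, 0)]
out = track_positions(disp, {0: (10, 10), 5: (13, 13)})
assert out[4] == (12, 12) and out[5] == (13, 13)
```

Running the DNN only every few frames is what makes this practical on device-class hardware; the reliability check in claim 2 decides when the cheap propagation can no longer be trusted.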

Publication date: 15-01-2015

METHOD OF AUGMENTED REALITY COMMUNICATION AND INFORMATION

Number: US20150015609A1
Assignee: ALCATEL-LUCENT

A communication method comprising the following operations: 1. A communication method comprising the following operations: taking a shot, by a mobile terminal, in the environment of the terminal; analyzing the shot to detect the presence therein of an object; when an object has been identified by its image, identifying at least one place associated with said object; taking into account the location of the terminal; selecting from among a plurality of places associated with the object, a place associated with the object based on said location, the selected place being the place closest to the terminal's location; displaying on the terminal at least one place and piece of information associated with the object. 2. A method according to claim 1, wherein the identified object is chosen from the group comprising bar codes, tags, outdoor advertisements, advertisements printed in magazines, or advertisements displayed on screens. 3. A method according to claim 1, wherein the display on the terminal of at least one place associated with the object is performed using augmented reality. 4. A method according to claim 1, wherein it comprises a step of activating a guidance procedure from said location to at least one place associated with said object. 5. A communication method according to claim 1, further comprising the following operations: establishment of a media session between the mobile terminal and a remote communication system; transmission of the shot by the mobile terminal to the communication system during the media session; analysis of the shot, within the communication system, to detect therein the presence of said object. 6.
A communication system comprising: a database containing a plurality of images, each of which is associated with a predetermined place; an application server, connected to the database, and configured to perform an image analysis of a shot received from a mobile terminal in order to identify within said shot an object ...
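The place-selection step ("the place closest to the terminal's location") is a nearest-neighbour query over the places associated with the recognized object. A sketch using great-circle distance over (lat, lon) pairs; the place data is invented:

```python
import math

def nearest_place(terminal, places):
    """Return the place whose location is closest to the terminal,
    by haversine great-circle distance (coordinates in degrees)."""
    def haversine(a, b):
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371.0 * math.asin(math.sqrt(h))  # kilometres
    return min(places, key=lambda p: haversine(terminal, p["loc"]))

places = [{"name": "store A", "loc": (48.85, 2.35)},   # Paris
          {"name": "store B", "loc": (51.51, -0.13)}]  # London
assert nearest_place((48.80, 2.30), places)["name"] == "store A"
```

The selected place is then what the augmented-reality overlay and the guidance procedure of claim 4 would be anchored to.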

Publication date: 10-01-2019

Phishing Detection Method And System

Number: US20190014149A1
Assignee:

A method of detecting a phishing event comprises acquiring an image of visual content rendered in association with a source, and determining that the visual content includes a password prompt. The method comprises performing an object detection, using an object detection convolutional network, on a brand logo in the visual content, to detect one or more targeted brands. Spatial analysis of the visual content may be performed to identify one or more solicitations of personally identifiable information. The method further comprises determining, based on the object detection and the spatial analysis, that at least a portion of the visual content resembles content of a candidate brand, and comparing the domain of the source with one or more authorized domains of the candidate brand. A phishing event is declared when the comparing indicates that the domain of the source is not one of the authorized domains of the candidate brand. 1. A method of detecting a phishing event, comprising, by a processor and a memory with computer code instructions stored thereon, the memory operatively coupled to the processor such that, when executed by the processor, the computer code instructions cause the system to implement: acquiring an image of visual content rendered in association with a source, and identifying a domain of the source; performing an object detection, using an object detection convolutional neural network (CNN), on one or more brand logos located within the visual content, to detect an instantiation of one or more targeted brands; determining, based on the object detection, that at least a portion of the visual content resembles content of a candidate brand; comparing the domain of the source with one or more authorized domains of the candidate brand; and declaring a phishing event when the comparing indicates that the domain of the source is not one of the authorized domains of the candidate brand. 2. The method of claim 1, wherein the source is a ...
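Once object detection has named a candidate brand, the final step reduces to a per-brand domain whitelist check. A sketch; the authorized-domain table and brand name are invented for illustration:

```python
from urllib.parse import urlparse

def is_phishing(source_url, detected_brand, authorized):
    """Declare a phishing event when the source's domain is neither an
    authorized domain of the detected brand nor a subdomain of one."""
    domain = urlparse(source_url).netloc.lower()
    ok = any(domain == d or domain.endswith("." + d)
             for d in authorized.get(detected_brand, []))
    return not ok

auth = {"ExampleBank": ["examplebank.com"]}
assert is_phishing("https://examp1ebank.evil.io/login", "ExampleBank", auth)
assert not is_phishing("https://www.examplebank.com/login", "ExampleBank", auth)
```

The logo detector supplies the brand claim; the whitelist supplies the ground truth, so a lookalike page on the wrong domain is flagged even when its visuals are pixel-perfect.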

Publication date: 10-01-2019

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD THEREOF AND STORAGE MEDIUM

Number: US20190014301A1
Author: Ota Yuya
Assignee:

An image processing apparatus, which executes image processing for generating a virtual viewpoint image using a plurality of images captured by a plurality of imaging units that image an imaging space from different viewpoints, identifies a specific object among a plurality of objects inside the imaging space, and carries out image processing for generating the virtual viewpoint image on the plurality of objects inside the imaging space. The image processing apparatus executes the image processing on the identified specific object using images captured by more imaging units than in the image processing executed on other objects. 1. An image processing apparatus that executes image processing for generating a virtual viewpoint image using a plurality of images captured by a plurality of imaging units that image an imaging space from different viewpoints, the apparatus comprising: an identification unit configured to identify a specific object among a plurality of objects inside the imaging space; and a processing unit configured to carry out image processing for generating the virtual viewpoint image on the plurality of objects inside the imaging space, wherein the processing unit executes the image processing on the specific object identified by the identification unit using images captured by more imaging units than in the image processing executed on other objects. 2. The image processing apparatus according to claim 1, wherein the image processing executed by the processing unit is processing for generating a three-dimensional model of an object. 3. The image processing apparatus according to claim 1, wherein the image processing executed by the processing unit is processing for rendering an image of an object from an arbitrary viewpoint. 4. The image processing apparatus according to claim 1, further comprising: a selecting unit configured to select the imaging units providing the images used in the image processing on the specific object and the imaging units ...
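The selecting unit's policy, giving the identified specific object more imaging units than the others, might look like the following; the 2x factor and the simple contiguous split are assumptions, since the patent only requires "more":

```python
def select_cameras(cameras, objects, specific_id, boost=2):
    """Assign imaging units per object, giving the specific object
    'boost' times the base allocation of every other object."""
    base = max(1, len(cameras) // (len(objects) + boost - 1))
    plan = {}
    taken = 0
    # Serve the specific object first so it gets its enlarged share.
    for obj in sorted(objects, key=lambda o: o != specific_id):
        n = base * boost if obj == specific_id else base
        plan[obj] = cameras[taken:taken + n]
        taken += n
    return plan

cams = list(range(8))
plan = select_cameras(cams, ["A", "B", "C"], "A")
assert len(plan["A"]) == 4 and len(plan["B"]) == 2
```

More source views for the highlighted object means a denser 3D model or sharper rendering for it, at the cost of coarser reconstruction of the background objects.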

Publication date: 16-01-2020

IMAGE CAPTURE AND IDENTIFICATION SYSTEM AND PROCESS

Number: US20200016003A1
Assignee:

An image-based transaction system includes a mobile device with an image sensor that is programmed to capture, via the image sensor, a video stream of a scene. The mobile device identifies a document using image characteristics from the video stream and acquires an image of at least a part of the document, and then identifies symbols in the image based on locations within the image of the document. The symbols can include alphanumeric symbols. The mobile device processes the symbols according to their type to obtain an address related to the document and the symbols and initiates a transaction associated with the identified document. 1. An image-based transaction system, comprising: a mobile device having an image sensor, wherein the mobile device, when software in the mobile device is executed, is caused to execute operations comprising: digitally capturing a video stream of a scene via the image sensor; identifying a document using image characteristics from the digitally captured video stream; automatically acquiring an image of at least part of the document in the scene; identifying symbols, including alphanumeric symbols, in the image based on locations within the image of the document; processing the symbols according to their symbol type; obtaining an address related to the identified document and the processed symbols; and initiating a transaction associated with the identified document via a server. 2. The system of claim 1, further comprising the server. 3. The system of claim 1, wherein the transaction comprises an on-line transaction. 4. The system of claim 1, wherein the transaction is with an account. 5. The system of claim 4, wherein the transaction is with a bank account. 6. The system of claim 4, wherein the transaction is with at least one of the following types of accounts: an account linked to a user, an account linked to the mobile device, or a credit card account. 7.
The system of claim 1, wherein the document identifies ...

Publication date: 03-02-2022

In-Call Experience Enhancement for Assistant Systems

Number: US20220036013A1
Assignee:

In one embodiment, a method includes establishing a video call between a plurality of client systems, wherein access to an assistant system is persistently maintained during the video call, receiving, from a first client system of the plurality of client systems, a request by a first user to be performed by the assistant system during the video call, wherein the request references one or more activities associated with one or more users associated with the plurality of client systems, analyzing, by a context engine of the assistant system, images of a scene of the video call to identify the one or more activities within the scene, instructing the assistant system to execute the request based on the identified one or more activities, and sending, to one or more of the plurality of client systems, a response to the request while maintaining the video call between the plurality of client systems. 1. A method comprising, by one or more computing systems: establishing a video call between a plurality of client systems, wherein access to an assistant system is persistently maintained during the video call; receiving, from a first client system of the plurality of client systems, a request by a first user to be performed by the assistant system during the video call, wherein the request references one or more activities associated with one or more users associated with the plurality of client systems; analyzing, by a context engine of the assistant system, images of a scene of the video call to identify the one or more activities within the scene; instructing the assistant system to execute the request based on the identified one or more activities; and sending, to one or more of the plurality of client systems, a response to the request while maintaining the video call between the plurality of client systems. 2. The method of claim 1, wherein the request by the first user further references an instruction to perform a virtual activity with respect to one or more of the ...
