Total found: 18179. Displayed: 100.
28-03-2013 publication date

APPARATUS AND METHOD FOR IDENTIFYING A STILL IMAGE CONTAINED IN MOVING IMAGE CONTENTS

Number: US20130077876A1
Assignee:

Apparatus for identifying one or more still images in one or more moving image contents. An identifying unit is configured to identify one or more still images included in the moving image contents having one or more features that closely resemble particular features. A display controller is configured to cause the display, on a timeline associated with the moving image contents, of the location of an identified still image in at least one of the moving image contents.

1. Apparatus for identifying one or more still images in one or more moving image contents, said apparatus comprising: an identifying unit configured to identify one or more still images included in said one or more moving image contents having one or more features that closely resemble one or more particular features; and a display information generator configured to generate information to cause the display, on a timeline associated with the one or more moving image contents, of the location of an identified still image in at least one of said moving image contents.
2. The apparatus of claim 1, wherein said identifying unit compares said one or more particular features to corresponding features of still images in said one or more moving image contents.
3. The apparatus of claim 1, wherein said identifying unit identifies scenes in each of said moving image contents that contain a still image whose features closely resemble said one or more particular features; and wherein said display information generator generates information to cause the display of said identified scenes.
4. The apparatus of claim 1, wherein said display information generator is operable to generate information to cause different moving image contents to be displayed concurrently.
5. The apparatus of claim 1, wherein said display information generator generates information to cause the display, in alignment on a common timeline axis, of the locations in said one or more moving image contents at which the identified ...
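The matching step described above can be sketched in a few lines: each frame is represented by a feature vector, and frames whose features closely resemble the query features are reported with their timeline positions. All names here (`find_matching_frames`, the Euclidean threshold, the fixed frame rate) are illustrative assumptions, not details from the patent.

```python
def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def find_matching_frames(frame_features, query, threshold=0.5, fps=25.0):
    """Return (frame_index, timeline_seconds) for frames resembling the query."""
    matches = []
    for i, feat in enumerate(frame_features):
        if euclidean(feat, query) <= threshold:
            matches.append((i, i / fps))  # location on the video timeline
    return matches
```

The timeline positions returned could then drive the timeline markers the claims describe.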

18-04-2013 publication date

METHOD FOR VISUALIZING ZONES OF HIGHER ACTIVITY IN SURVEILLANCE SCENES

Number: US20130094703A1
Author: Gottschlag Daniel
Assignee: ROBERT BOSCH GMBH

The invention relates to a method for visualizing zones of higher activity in a monitoring scene monitored by at least one monitoring device, wherein moving objects are identified and/or tracked by the at least one monitoring device. A spatial localization of the moving objects is determined, the zones of higher activity are detected, and a visualization of the zones of higher activity of the moving objects is performed.

1. A method for visualizing zones of higher activity in a surveillance scene monitored by at least one surveillance apparatus, the method comprising: a) at least one of identifying and tracing moving objects by the at least one surveillance apparatus, b) establishing a spatial localization of the moving objects in the surveillance scene, and c) determining and visualizing zones of higher activity of the moving objects in the surveillance scene.
2. The method as claimed in claim 1, wherein the zones of higher activity are determined on the basis of at least one of the number, speed, acceleration and jerkiness of identified moving objects.
3. The method as claimed in claim 1, wherein the zones of higher activity are visualized by at least one of color-coding and size-coding, with a measure of activity being assigned a color or a size.
4. The method as claimed in claim 1, wherein the zones of higher activity are determined on the basis of at least one of bundling movements and bundling movement trajectories. ...
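A minimal sketch of step (c) above: object positions observed over time are accumulated into a coarse grid, and cells whose count exceeds a threshold are reported as zones of higher activity (which could then be color-coded as in claim 3). The grid size and count threshold are invented placeholders.

```python
def activity_zones(positions, grid_w=4, grid_h=4, threshold=3):
    """positions: (x, y) pairs normalised to [0, 1). Returns busy grid cells."""
    counts = {}
    for x, y in positions:
        cell = (int(x * grid_w), int(y * grid_h))   # which grid cell?
        counts[cell] = counts.get(cell, 0) + 1
    # keep only cells observed at least `threshold` times
    return sorted(c for c, n in counts.items() if n >= threshold)
```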

18-04-2013 publication date

VEHICULAR IMAGE SENSING SYSTEM

Number: US20130096777A1
Assignee: DONNELLY CORPORATION

An image sensing system for a vehicle includes an imager disposed at or proximate to an in-cabin portion of a vehicle windshield and having a forward field of view to the exterior of the vehicle through the vehicle windshield. The photosensor array of the imager is operable to capture image data. The image sensing system identifies objects in the forward field of view of the imager via processing of captured image data by an image processor. The photosensor array may be operable to capture frames of image data, and the image sensing system may include an exposure control which determines an accumulation period of time that the photosensor array senses light when capturing a frame of image data. Identification of objects may be based at least in part on at least one of (i) shape, (ii) luminance, (iii) geometry, (iv) spatial location, (v) motion and (vi) spectral characteristic.

1. An image sensing system for a vehicle, said image sensing system comprising: an imager comprising a two-dimensional CMOS photosensor array of light sensing photosensor elements; wherein said imager is disposed at or proximate to an in-cabin portion of a windshield of a vehicle equipped with said image sensing system, and wherein said imager has a forward field of view to the exterior of the equipped vehicle through the windshield of the equipped vehicle; wherein said photosensor array is operable to capture image data; a control comprising an image processor; wherein said image sensing system identifies objects in said forward field of view of said imager via processing of captured image data by said image processor; wherein said image sensing system is operable to identify at least one of (i) approaching headlights, (ii) leading taillights, (iii) lane markers, (iv) traffic signs, (v) traffic lights, (vi) stop signs and (vii) caution signs; wherein said photosensor array is operable to capture frames of image data; and wherein said image sensing system includes an exposure control which determines
...

25-04-2013 publication date

Method for Combining a Road Sign Recognition System and a Lane Detection System of a Motor Vehicle

Number: US20130101174A1
Assignee: Conti Temic Microelectronic GmbH

The invention relates to a method for combining a road sign recognition system and a lane detection system of a motor vehicle, wherein the road sign recognition system generates road sign information from sensor data of a camera-based or video-based sensor system and the lane detection system generates lane course information from said sensor data. According to the invention, meaning-indicating data for road signs are generated from the lane course information, the meaning-indicating data for road signs are used to check the plausibility of and/or to interpret the road sign information, data indicating the course of the lane are generated from the road sign information, and the data indicating the course of the lane are used to check the plausibility of and/or to interpret the lane course information.

1. A method for combining a road sign recognition system and a lane detection system of a motor vehicle, wherein the road sign recognition system generates road sign information from sensor data of a camera-based or video-based sensor system and the lane detection system generates lane course information from said sensor data, characterized in that: meaning-indicating data for road signs are generated from the lane course information; the meaning-indicating data for road signs are used to check the plausibility of and/or to interpret the road sign information; data indicating the course of the lane are generated from the road sign information; and the data indicating the course of the lane are used to check the plausibility of and/or to interpret the lane course information.
2. The method according to claim 1, characterized in that a plausibility check of a road sign recognized by means of the road sign recognition system is performed using lane-course-indicating data of the lane detection system.
3. The method according to claim 1, wherein information about a line structure of a fast lane of a ...
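The mutual plausibility check described above can be illustrated with two toy rules: a recognized sign is accepted only if it is consistent with the detected lane course, and vice versa. The specific rules and field names below are invented placeholders; the patent does not fix them.

```python
def plausible_sign(sign, lane_course):
    """Cross-check a recognized road sign against the detected lane course."""
    # Example rule: a "no overtaking" sign is implausible on a one-lane road.
    if sign == "no_overtaking" and lane_course["lanes"] < 2:
        return False
    return True

def plausible_lane(lane_course, signs):
    """Cross-check the detected lane course against recognized signs."""
    # Example rule: a motorway sign implies at least two lanes per direction.
    if "motorway" in signs and lane_course["lanes"] < 2:
        return False
    return True
```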

02-05-2013 publication date

SYSTEM AND METHOD FOR TRACKING MOVING OBJECTS

Number: US20130108235A1
Author: KARAZI Uri
Assignee: ISRAEL AEROSPACE INDUSTRIES LTD.

A method for tracking an object that is embedded within images of a scene, including: in a sensor unit, generating, storing and transmitting over a communication link a succession of images of a scene. In a remote control unit, receiving the images, receiving a command for selecting an object of interest in a given image, determining object data associated with the object, and transmitting the object data to the sensor unit. In the sensor unit, identifying the given image and the object of interest using the object data, and tracking the object in other images. If the object cannot be located in the latest image of the stored succession of images, using information of images in which the object was located to predict its estimated real-time location, and generating direction commands to the movable sensor for generating real-time images of the scene and locking on the object.

1. A method of selecting a moving object within images of a scene, comprising: a. receiving a succession of images; b. freezing or slowing down a rate of a given image of said succession of images and selecting an object of interest in the given image, as if said object is stationary; and c. determining object data associated with said object.
2. The method according to claim 1, wherein said receiving is through a communication link.
3. The method according to claim 2, further comprising: transmitting through said link at least said object data.
4. The method according to claim 1, further comprising zooming the given image after it has been frozen.
5. The method according to claim 1, further comprising enhancing the given image.
6. The method according to claim 1, further comprising: pointing in the vicinity of said object, thereby zooming the given image according to image boundaries substantially defined by said pointing, and selecting said object by pointing thereto on the zoomed given image.
7. The method according to claim 1, further comprising: using the object data for tracking ...
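The fallback in the abstract, predicting the object's real-time location from images in which it was last found, can be sketched with constant-velocity extrapolation. The prediction model is an assumption for illustration; the patent does not specify one.

```python
def predict_location(history, steps_ahead=1):
    """Extrapolate the next (x, y) location from the last two known positions.

    history: list of (x, y) locations in consecutive frames, oldest first.
    """
    (x0, y0), (x1, y1) = history[-2], history[-1]
    vx, vy = x1 - x0, y1 - y0                 # per-frame velocity estimate
    return (x1 + vx * steps_ahead, y1 + vy * steps_ahead)
```

The predicted location would then be turned into direction commands for re-pointing the movable sensor.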

09-05-2013 publication date

METHOD FOR POSE INVARIANT FINGERPRINTING

Number: US20130114856A1
Assignee: SRI INTERNATIONAL

A computer-implemented method for matching objects is disclosed. At least two images are received, where a first of the at least two images has a first target object and a second of the at least two images has a second target object. At least one first patch from the first target object and at least one second patch from the second target object are extracted. A distance-based part encoding between each of the at least one first patch and the at least one second patch is constructed, based upon a corresponding codebook of image parts including at least one of part type and pose. A viewpoint of one of the at least one first patch is warped to a viewpoint of the at least one second patch. A parts-level similarity measure, based on the view-invariant distance measure for each of the at least one first patch and the at least one second patch, is applied to determine whether the first target object and the second target object are the same or different objects.

1. A method for detecting similarity between objects comprising: detecting a first target object from a first image with a first pose and a second target object from a second image with a second pose; extracting a first set of patches from the first image and a second set of patches from the second image; mapping between the first set of patches and the second set of patches using distance measures; warping the first set of patches and the second set of patches locally based on the mapping; calculating a similarity measure between the locally warped first set of patches and the second set of patches; and determining that a match exists between the first target object and the second target object when the similarity measure exceeds a predetermined threshold.
2. The method of claim 1, wherein the mapping comprises a set of neighborhood local patches mapped to a distinctive shape, wherein a distance to each mapping entry forms an embedded distance vector utilized as a part encoding.
3. The method of claim 2, wherein the mapping ...

09-05-2013 publication date

METHODS, APPARATUS, AND SYSTEMS FOR ACQUIRING AND ANALYZING VEHICLE DATA AND GENERATING AN ELECTRONIC REPRESENTATION OF VEHICLE OPERATIONS

Number: US20130116855A1
Assignee:

Geo-referenced and/or time-referenced electronic drawings may be generated based on electronic vehicle information to facilitate documentation of a vehicle-related event. A symbols library, a collection of geo-referenced images, and any data acquired from one or more vehicles may be stored in memory for use in connection with generation of such drawings, and a drawing tool graphical user interface (GUI) may be provided for electronically processing vehicle data and geo-referenced images. Processed geo-referenced images may be saved as event-specific images, which may be integrated into, for example, an electronic vehicle accident report for accurately depicting a vehicle accident.

1. An apparatus for documenting an incident involving a first vehicle at an incident site, the apparatus comprising: a communication interface; a memory to store processor-executable instructions; and a processing unit communicatively coupled to the communication interface and the memory, wherein upon execution of the processor-executable instructions by the processing unit, the processing unit: controls the communication interface to electronically receive at least one input image of a geographic area including the incident area; acquires vehicle-based information relating to the first vehicle at a time during or proximate the incident; renders, based at least in part on the vehicle-based information, a marked-up image including a first representation of at least a portion of the incident overlaid on the at least one input image; and further controls the communication interface and/or the memory to electronically transmit and/or electronically store information relating to the marked-up digital image so as to document the incident with respect to the geographic area.
2. The apparatus of claim 1, wherein the first representation comprises a representation of the first vehicle.
3. The apparatus of claim 2, wherein the processing unit scales the representation of the first vehicle to a scale of ...

16-05-2013 publication date

INTER-TRAJECTORY ANOMALY DETECTION USING ADAPTIVE VOTING EXPERTS IN A VIDEO SURVEILLANCE SYSTEM

Number: US20130121533A1
Assignee: BEHAVIORAL RECOGNITION SYSTEMS, INC.

A sequence layer in a machine-learning engine configured to learn from the observations of a computer vision engine. In one embodiment, the machine-learning engine uses voting experts to segment adaptive resonance theory (ART) network label sequences for different objects observed in a scene. The sequence layer may be configured to observe the ART label sequences and incrementally build, update, trim, and reorganize an ngram trie for those label sequences. The sequence layer computes the entropies for the nodes in the ngram trie and determines a sliding window length and vote count parameters. Once determined, the sequence layer may segment newly observed sequences to estimate the primitive events observed in the scene as well as issue alerts for inter-sequence and intra-sequence anomalies.

1. retrieving a first sequence and a second sequence, each providing an ordered string of labels, wherein each label corresponds to a cluster in an adaptive resonance theory (ART) network, wherein the strings of labels are generated by mapping kinematic data vectors generated for a first foreground object and a second foreground object detected in the input stream of video frames, respectively, to nodes of a self-organizing map (SOM) and clustering the nodes of the SOM using the ART network, and wherein the first sequence and the second sequence correspond to an observed interaction between the first foreground object and the second foreground object; identifying one or more segments in each of the first and second sequences, wherein each segment includes a subsequence of the ordered string of labels in the first and second sequences; determining a probability of observing the interaction between the first foreground object and the second foreground object, relative to a probability distribution generated from an ngram trie, wherein the ngram trie is generated from a plurality of previously observed sequences, each storing an ordered string of labels assigned to clusters in
...
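The ngram statistics at the heart of the sequence layer can be sketched in miniature: label sequences are broken into n-grams whose counts form a trie-like table, and the entropy of the next-label distribution after a given context is computed. Real voting-experts segmentation adds sliding windows and vote counts, which are omitted here; the function names are illustrative.

```python
import math
from collections import defaultdict

def ngram_counts(sequences, n=2):
    """Count, for each (n-1)-label context, how often each next label follows."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for i in range(len(seq) - n + 1):
            ctx, nxt = tuple(seq[i:i + n - 1]), seq[i + n - 1]
            counts[ctx][nxt] += 1
    return counts

def next_label_entropy(counts, context):
    """Shannon entropy (bits) of the next-label distribution after `context`."""
    dist = counts[tuple(context)]
    total = sum(dist.values())
    return -sum((c / total) * math.log2(c / total) for c in dist.values())
```

High entropy after a context suggests a likely segment boundary, which is the intuition behind entropy-based segmentation.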

16-05-2013 publication date

DETERMINING REPRESENTATIVE IMAGES FOR A VIDEO

Number: US20130121586A1
Author: Peters Marc Andre
Assignee: KONINKLIJKE PHILIPS ELECTRONICS N.V.

A video comprises at least one shot (SH), which is a continuous sequence of images representing a scene viewed from a particular location. Images are selected from a shot (SH) so as to obtain a continuous sequence of selected images (SI) that are evenly distributed throughout the shot. At least one continuous subsequence (SB) of selected images that meet a predefined similarity test is identified. An image is selected from a continuous portion (SP) of the shot that coincides in time with the longest continuous subsequence (SB) of selected images that meet the predefined similarity test. The image that is selected constitutes a representative image (RI) for the shot.

1. A method of determining a representative image (RI) for at least one shot (SH) in a video (VD), a shot being a continuous sequence of images representing a scene viewed from a particular location, the method comprising: a shot sampling step (SHS) in which images are selected from a shot so as to obtain a continuous sequence of selected images (SI) that are evenly distributed throughout the shot; a stable shot portion identification step (SPI) of identifying at least one continuous subsequence (SB) of selected images that meet a predefined similarity test; and a representative image designation step (RID) in which an image is selected from a continuous portion (SP) of the shot that coincides in time with the longest continuous subsequence of selected images that meet the predefined similarity test, whereby the image that is selected constitutes a representative image (RI) for the shot.
2. A method according to claim 1, wherein, in the stable shot portion identification step (SPI), the following series of steps are carried out for respective selected images: a measure-of-difference determining step (IPD, IPC) in which at least one image property (IP) of a selected image is compared with that of another selected image so as to determine a measure of difference (DM) for the selected ...
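The three steps above can be sketched compactly: sample frames evenly, find the longest run of consecutive samples that pass a similarity test, and pick a frame from the middle of that stable portion. Here each frame is reduced to a single scalar "image property" and the similarity test is an absolute-difference threshold, both stand-in assumptions.

```python
def representative_index(frames, step=2, tol=0.1):
    """frames: scalar image properties per frame. Returns a representative index."""
    samples = list(range(0, len(frames), step))        # evenly spaced samples (SHS)
    best_start, best_len, start, run = 0, 1, 0, 1
    for k in range(1, len(samples)):                   # stable-portion search (SPI)
        if abs(frames[samples[k]] - frames[samples[k - 1]]) <= tol:
            run += 1
        else:
            start, run = k, 1
        if run > best_len:
            best_start, best_len = start, run
    stable = samples[best_start:best_start + best_len]
    return stable[len(stable) // 2]                    # designate middle frame (RID)
```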

30-05-2013 publication date

Automotive Camera System and Its Calibration Method and Calibration Program

Number: US20130135474A1
Assignee: Clarion Co., Ltd.

Provided is a calibration method for an automotive camera system that can easily calibrate intrinsic parameters and extrinsic parameters of an image taking unit. A calibration method for an automotive camera system includes: recognizing a recognition target object including a preset straight line portion, from an image taken by each of the image taking units, and extracting feature points of the recognition target object from the image; projecting the feature points onto a virtual spherical surface, estimating intrinsic parameters of the image taking units on the basis of a shape of a feature point sequence formed on the virtual spherical surface, and calibrating the estimated intrinsic parameters; and calculating an overhead point of view of the image on the basis of the feature points, estimating extrinsic parameters of the image taking units on the basis of the calculated overhead point of view, and calibrating the estimated extrinsic parameters.

1. A calibration method for an automotive camera system that recognizes an image of an environment around a vehicle, the method comprising: taking the image around the vehicle; recognizing a recognition target object including a preset straight line portion, from the taken image, and extracting feature points of the recognition target object from the image; projecting the extracted feature points onto a virtual spherical surface, estimating an intrinsic parameter of an image taking unit on a basis of a shape of a feature point sequence formed on the virtual spherical surface, and calibrating the estimated intrinsic parameter; and calculating an overhead point of view of the image on a basis of the extracted feature points, estimating an extrinsic parameter of the image taking unit on a basis of the calculated overhead point of view, and calibrating the estimated extrinsic parameter.
2. The calibration method according to claim 1, comprising: extracting, as the feature points, an edge point sequence along the straight ...

13-06-2013 publication date

METHOD AND APPARATUS FOR DETECTING ROAD PARTITION

Number: US20130148856A1
Assignee:

A method and an apparatus are used for detecting a road partition, the method comprising: a step of obtaining a disparity top view having a road area; and a step of detecting parallel lines as the road partitions from the disparity top view.

1. A method for detecting a road partition, comprising: a step of obtaining a disparity top view having a road area; and a step of detecting parallel lines as the road partitions from the disparity top view.
2. The method for detecting a road partition according to claim 1, wherein the step of detecting the parallel lines from the disparity top view comprises: a step of regarding points with a zero disparity value as basic points, and calculating included angles between straight lines that are determined by points having non-zero disparity values in the disparity top view and the basic points, and an equal disparity line, so as to obtain angle distributions of the included angles relating to the basic points; and a step of detecting the parallel lines according to the angle distributions of the included angles of the basic points.
3. The method for detecting a road partition according to claim 2, wherein the step of detecting the parallel lines according to the angle distributions of the included angles of the basic points comprises: a step of regarding, as road vanishing points, the top N basic points in descending order of degree of collectivity of the angle distributions of the included angles and/or the basic points having a value of degree of collectivity more than a predetermined threshold of degree of collectivity, where N is an integer greater than or equal to one, the road vanishing point being a point at which the parallel road partitions intersect in the disparity top view, and the points at which the parallel road partitions in different directions intersect in the disparity top view being different; and a step of regarding straight lines determined by the included angles that have occurrence degrees of angles in the angle ...

27-06-2013 publication date

SYSTEM AND METHOD FOR IDENTIFYING IMAGE LOCATIONS SHOWING THE SAME PERSON IN DIFFERENT IMAGES

Number: US20130163819A1

The same person is automatically recognized in different images from his or her clothing. Color pixel values of a first and second image are captured, and areas are selected for a determination whether they show the same person. First histograms of pixels in the areas are computed, representing sums of contributions from pixels with color values in histogram bins. Each histogram bin corresponds to a combination of a range of color values and a range of heights in the areas. The ranges of color values are normalized relative to a distribution of color pixel values in the areas. Furthermore, second histograms of pixels in the areas are computed, the second histograms representing sums of contributions from pixels with color values in further histogram bins. The further histogram bins are at least partly unnormalized. First and second histogram intersection scores of the first and second histograms are computed. A combined detection score is computed from the first and second histogram scores.

1. A method of identifying image areas that show a same person in different images, the method comprising: capturing color pixel values of a first and second image; selecting a first and second image area in the first and second image respectively; computing a first and second primary histogram of pixels in the first and second area respectively, the primary histograms representing sums of contributions from pixels with color values in histogram bins, each histogram bin corresponding to a combination of a range of color values and a range of heights of the pixels in the first or second image area, the ranges of color values being normalized relative to a distribution of color pixel values in the first and second area respectively; computing first and second secondary histograms of pixels in the first and second area respectively, the secondary histograms representing sums of contributions from pixels with color values in further histogram bins for ranges of color values that are at least partly
...
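The primary histogram and the intersection score described above can be sketched as follows: each pixel contributes to a bin indexed by (quantised colour, quantised height), and two normalised histograms are compared by summing the minimum of matching bins. The bin counts and the scalar colour representation are illustrative simplifications of the patent's colour-value ranges.

```python
def height_color_histogram(pixels, color_bins=4, height_bins=4):
    """pixels: (color, height) pairs, both normalised to [0, 1)."""
    hist = {}
    for color, height in pixels:
        key = (int(color * color_bins), int(height * height_bins))
        hist[key] = hist.get(key, 0) + 1
    total = sum(hist.values())
    return {k: v / total for k, v in hist.items()}   # normalised histogram

def intersection_score(h1, h2):
    """Histogram intersection: 1.0 for identical histograms, 0.0 for disjoint."""
    return sum(min(h1.get(k, 0.0), h2.get(k, 0.0)) for k in set(h1) | set(h2))
```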

27-06-2013 publication date

SURVEY APPARATUS, COMPUTER-READABLE STORAGE MEDIUM AND SURVEY METHOD

Number: US20130163820A1
Assignee:

A determination unit determines, on images of video data where a road set as a survey target was shot at different times, whether or not the shooting position of each image is within a tolerance value, with the shooting position of any of the images as a reference, for each image having a corresponding shooting position. When the shooting position is determined to be within the tolerance value, a creation unit creates screen information of a screen where the images that have been determined to be within the tolerance value are displayed in synchronization. Moreover, when the shooting position is determined to be beyond the tolerance value, the creation unit creates screen information of a screen where an image that has been determined to be beyond the tolerance value is undisplayed.

1. A survey apparatus comprising: a storage unit that stores a plurality of pieces of image information including a plurality of images of a road, the images having been continuously shot by a shooting unit mounted on a vehicle, and a plurality of pieces of position information indicating shooting positions of the images of the corresponding pieces of image information; a determination unit that determines, on images of image information where a road set as a survey target was shot, based on the plurality of pieces of image information and the plurality of pieces of position information stored in the storage unit, whether or not the shooting position of the image is within a tolerance value with a shooting position of any of the images as a reference, for each image having a corresponding shooting position; a creation unit that, when the determination unit determines that the shooting position is within the tolerance value, creates screen information of a screen where the images that have been determined to be within the tolerance value are displayed in synchronization, and when the shooting position is determined to be beyond the tolerance value, creates screen information of a screen where an
...
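The determination step reduces to a distance test: a shot from one survey run is shown synchronised with a reference shot only when the distance between their shooting positions is within a tolerance; otherwise it is undisplayed. Planar coordinates and the tolerance value below are assumptions for illustration.

```python
def within_tolerance(ref_pos, pos, tol=5.0):
    """True if `pos` lies within `tol` distance units of `ref_pos`."""
    dx, dy = pos[0] - ref_pos[0], pos[1] - ref_pos[1]
    return (dx * dx + dy * dy) ** 0.5 <= tol

def displayable(ref_pos, positions, tol=5.0):
    """Keep only shooting positions close enough to be displayed in sync."""
    return [p for p in positions if within_tolerance(ref_pos, p, tol)]
```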

27-06-2013 publication date

METHOD AND DEVICE FOR DETECTING ROAD REGION AS WELL AS METHOD AND DEVICE FOR DETECTING ROAD LINE

Number: US20130163821A1
Authors: YOU Ganmei, ZHENG Jichuan
Assignee:

Disclosed are a road line detection method and a road line detection device. The road line detection method comprises a step of obtaining a first disparity map including one or more road regions and a corresponding V-disparity image; a step of sequentially detecting plural sloped line segments in the corresponding V-disparity image according to a big-to-small order of disparities and a big-to-small order of V-values, to serve as plural sequentially adjacent road surfaces; a step of obtaining a second disparity map of plural road line regions of interest corresponding to the plural sloped line segments; and a step of detecting one or more road lines in the second disparity map of the plural road line regions of interest.

1. A road line detection method comprising: a step of obtaining a first disparity map including one or more road regions and a corresponding V-disparity image; a step of sequentially detecting plural sloped line segments in the corresponding V-disparity image according to a big-to-small order of disparities and a big-to-small order of V-values, to serve as plural sequentially adjacent road surfaces; a step of obtaining a second disparity map of plural road line regions of interest corresponding to the plural sloped line segments; and a step of detecting one or more road lines in the second disparity map of the plural road line regions of interest.
2. The method according to claim 1, further comprising: a step of, for each of the road lines detected in the second disparity map, obtaining points in a U-disparity image corresponding to this road line; determining whether the obtained points are located on a non-vertical and non-horizontal sloped line; and discarding this road line if it is determined that the obtained points are not located on the sloped line.
3. The method according to claim 1, wherein the step of sequentially detecting the plural sloped line segments in the corresponding V-disparity image includes: a first step of detecting a ...

27-06-2013 publication date

Airborne Image Capture and Recognition System

Number: US20130163822A1
Assignee: Cyclops Technologies, Inc.

Provided is a system and method of electronically identifying a license plate and comparing the results to a predetermined database. The software aspect of the system runs on standard PC hardware and can be linked to other applications or databases. It first uses a series of image manipulation techniques to detect, normalize and enhance the image of the number plate. Optical character recognition (OCR) is used to extract the alpha-numeric characters of the license plate. The recognized characters are then compared to databases containing information about the vehicle and/or owner.

1. A non-transitory computer readable medium having computer executable instructions for performing a method comprising: a. maintaining a database of predetermined identification values; b. capturing an image from an imaging device on an airborne vehicle; c. projecting a plurality of polygons onto the captured image; d. capturing at least one polygon projected on the captured image responsive to the detection of the presence of alpha-numeric characters within the at least one of the plurality of polygons projected onto the captured image; e. establishing a recognition value derived from the alpha-numeric characters within the at least one detected polygon; f. storing the recognition value and comparing the recognition value to the predetermined identification values; g. creating an alert responsive to a match between the recognition value and a value in the database of predetermined identification values; and h. communicating the alert to at least one of the airborne vehicle and a land based vehicle.
2. The method of claim 1, further comprising establishing a character substitution table comprising a plurality of commonly mistaken character reads; and creating a plurality of altered recognition values derived from the recognition value and the character substitution table.
3. The method of claim 2, further comprising displaying the image containing alphanumeric characters with the plurality of altered ...
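The character-substitution idea from claim 2 can be sketched directly: commonly confused OCR reads ('0'/'O', '1'/'I', ...) are expanded into alternative plate readings so that a mis-read character can still match the database. The substitution pairs here are illustrative; the patent leaves the table's contents open.

```python
# Illustrative table of commonly mistaken character reads (assumption).
SUBSTITUTIONS = {"0": "O", "O": "0", "1": "I", "I": "1", "8": "B", "B": "8"}

def altered_readings(plate):
    """Return the plate plus single-character substitution variants."""
    variants = {plate}
    for i, ch in enumerate(plate):
        if ch in SUBSTITUTIONS:
            variants.add(plate[:i] + SUBSTITUTIONS[ch] + plate[i + 1:])
    return variants
```

Each variant would then be checked against the database of predetermined identification values before raising an alert.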

27-06-2013 publication date

Image Capture and Recognition System Having Real-Time Secure Communication

Number: US20130163823A1
Assignee: Cyclops Technologies, Inc.

Provided is a system and method of electronically identifying a license plate and comparing the results to a predetermined database. The software aspect of the system runs on standard PC hardware and can be linked to other applications or databases. It first uses a series of image manipulation techniques to detect, normalize and enhance the image of the number plate. Optical character recognition (OCR) is used to extract the alpha-numeric characters of the license plate. The recognized characters are then compared to databases containing information about the vehicle and/or owner. 1. A non-transitory computer readable medium having computer executable instructions for performing a method comprising:a. maintaining a database of predetermined identification values;b. capturing an image from an imaging device;c. projecting a plurality of polygons onto the captured image;d. capturing at least one polygon projected on the captured image responsive to the detection of the presence of alpha-numeric characters within the at least one of the plurality of polygons projected onto the captured image;e. establishing a recognition value derived from the alpha-numeric characters within the at least one detected polygon;f. storing the recognition value; comparing the recognition value to the predetermined identification values;g. creating an alert responsive to a match between the recognition value and a value in the database of predetermined identification values; andh. communicating the alert to at least one remote recipient over a communication protocol selected from the group consisting of SMS (Short Message Service), MIM (Mobile Instant Messaging) and VOIP (Voice Over Internet Protocol).2. The method of further comprising establishing a character substitution table comprising a plurality of commonly mistaken character reads; and creating a plurality of altered recognition values derived from the recognition value and the character substitution table.3. 
The method of claim 2 , ...
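The character-substitution idea in claim 2 (a table of commonly mistaken character reads used to generate altered recognition values, so a near-miss OCR read can still match the database) can be sketched directly. The particular confusion pairs below (O/0, I/1, B/8) are typical OCR confusions assumed for illustration; the patent does not specify them.

```python
from itertools import product

# Substitution table of commonly confused characters (assumed pairs).
SUBSTITUTIONS = {"0": "0O", "O": "O0", "1": "1I", "I": "I1", "8": "8B", "B": "B8"}

def altered_values(read):
    """Return every recognition value reachable via the substitution table."""
    choices = [SUBSTITUTIONS.get(ch, ch) for ch in read]
    return {"".join(combo) for combo in product(*choices)}

# Usage: one read expands to all its confusable variants.
print(sorted(altered_values("B0B")))
```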

04-07-2013 publication date

GUIDANCE DEVICE, GUIDANCE METHOD, AND GUIDANCE PROGRAM

Number: US20130170706A1
Author: Mori Toshihiro, SATO Yuji
Assignee: AISIN AW CO., LTD.

Image recognition is performed based on a surrounding image and a recognition template used for the image recognition of a marker object, and a recognition confidence level used for determining if the marker object can be recognized in the surrounding image is calculated. A determination is made if the recognition confidence level has increased as compared with the recognition confidence level calculated based on the surrounding image acquired at the guidance output point. If it is determined that the recognition confidence level has increased, the image of the marker object, generated based on the surrounding image acquired at the guidance output point, is stored as a new template to be used for the image recognition of the marker object. This increases the possibility to recognize the marker object based on the new template, thus increasing the recognition accuracy of the marker object. 12-. (canceled)3. A guidance device comprising:a marker object identification unit that identifies a marker object, the marker object being a mark of a guidance target point on a route that is set; an image acquisition unit that acquires a surrounding image of a vehicle; a recognition confidence level calculation unit that performs image recognition based on the surrounding image and a template used for image recognition of the marker object, and calculates a recognition confidence level used to determine if the marker object can be recognized in the surrounding image;an increase determination unit that determines if the recognition confidence level calculated based on the surrounding image acquired between a guidance output point and the guidance target point has increased as compared with the recognition confidence level calculated based on the surrounding image acquired at the guidance output point, the guidance output point being a position located before the guidance target point and provided for outputting guidance on the guidance target point; a storage unit that stores an 
...
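The template-refresh logic described above (store the marker-object image as the new recognition template only when the recognition confidence has increased relative to the guidance output point) reduces to a small update rule. The sketch below is an assumption-laden illustration: `match_confidence` is a toy pixel-agreement score standing in for real template matching (e.g., normalized cross-correlation).

```python
# Toy confidence: fraction of matching elements between region and template.
def match_confidence(image_region, template):
    hits = sum(1 for a, b in zip(image_region, template) if a == b)
    return hits / max(len(template), 1)

def maybe_update_template(new_region, template, baseline_confidence):
    """Adopt new_region as the template only if confidence increased."""
    conf = match_confidence(new_region, template)
    if conf > baseline_confidence:
        return new_region, conf          # store the new template
    return template, baseline_confidence  # keep the old one
```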

04-07-2013 publication date

Method and System for Video Composition

Number: US20130170760A1
Assignee: Pelco, Inc.

A method of presenting video comprising receiving a plurality of video data from a video source, analyzing the plurality of video data; identifying the presence of foreground-objects that are distinct from background portions in the plurality of video data, classifying the foreground-objects into foreground-object classifications, receiving user input selecting a foreground-object classification, and generating video frames from the plurality of video data containing background portions and only foreground-objects in the selected foreground-object classification. 1. A method of presenting video comprising: receiving a plurality of video data from a video source; analyzing the plurality of video data; identifying the presence of foreground-objects that are distinct from background portions in the plurality of video data; classifying the foreground-objects into foreground-object classifications; receiving user input selecting a foreground-object classification; and generating video frames from the plurality of video data containing background portions and only foreground-objects in the selected foreground-object classification.2. A method as recited in further comprising: processing data associated with a foreground-object in a selected foreground-object classification based on a first update rate;processing data associated with the background portions based on a second update rate;transmitting data associated with a foreground-object in a selected foreground-object classification dynamically; and transmitting data associated with the background portions based on the second update rate, wherein the first update rate is greater than the second update rate,3. A method as recited in further comprising: receiving a user request for a storyboard image for a first foreground-object classified in a selected foreground-object classification; analyzing the generated video frames to obtain a plurality of frames containing the first foreground-object; and generating an image ...
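The composition step above, keeping background portions plus only foreground-objects in the user-selected classification, can be sketched over per-frame detection records. The data shapes (`background`, `objects`, `class` keys) are illustrative assumptions, not the patent's data model.

```python
# Sketch: each frame carries a background plus classified foreground objects;
# the output keeps only the objects in the selected classification.
def compose_frames(frames, selected_class):
    out = []
    for frame in frames:
        kept = [o for o in frame["objects"] if o["class"] == selected_class]
        out.append({"background": frame["background"], "objects": kept})
    return out

# Usage: a person and a vehicle detected; the user selects "person".
frames = [{"background": "bg0",
           "objects": [{"class": "person", "id": 1},
                       {"class": "vehicle", "id": 2}]}]
print(compose_frames(frames, "person"))
```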

18-07-2013 publication date

VEHICLE PERIPHERY MONITORING APPARATUS

Number: US20130182109A1
Assignee: Denso Corporation

A vehicle periphery monitoring apparatus detects the presence of a moving object along a vehicle periphery. The apparatus sets multiple detection lines along a horizontal axis of an image captured by a camera, and detects a brightness change of a pixel along the detection lines. With reference to the brightness change detected along the detection line and a parameter for determining whether such brightness change is caused by the moving object, the apparatus determines the presence of the moving object. In addition, the apparatus changes a determination condition for determining the moving object such that as the number of detection lines along which the brightness change is detected decreases, the harder it is to satisfy the determination condition for determining that the moving object is present. 1. A vehicle periphery monitoring apparatus comprising:a detection unit detecting a brightness change of a pixel along a plurality of detection lines, the plurality of detection lines extending along a right-left axis in an image captured by an in-vehicle camera as an image of a vehicle periphery; anda moving object determination unit determining a presence of the moving object based on the brightness change of the pixel detected by the detection unit and a parameter for evaluating the brightness change to determine whether the brightness change of the pixel along the detection lines is caused by the moving object, whereinthe moving object determination unit changes a determination condition for determining whether the moving object is present based upon a number of the detection lines along which the brightness change is detected by the detection unit, such that as the number of detection lines that have the brightness change decreases, the harder it is to satisfy the determination condition for determining that the moving object is present.2. 
The vehicle periphery monitoring apparatus of claim 1 , whereinthe moving object determination unit determines that the moving ...
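The adaptive determination condition described above (the fewer detection lines that report a brightness change, the harder it is to declare a moving object) can be sketched as a threshold that tightens with the changed-line count. The particular threshold schedule is an assumption for illustration only.

```python
# Sketch: fewer changed detection lines -> a stricter (higher) strength
# requirement before declaring a moving object. Schedule is illustrative.
def moving_object_present(changed_lines, total_lines, change_strength):
    if changed_lines == 0:
        return False
    required = 0.2 + 0.6 * (1 - changed_lines / total_lines)
    return change_strength >= required
```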

18-07-2013 publication date

GRADIENT ESTIMATION APPARATUS, GRADIENT ESTIMATION METHOD, AND GRADIENT ESTIMATION PROGRAM

Number: US20130182896A1
Author: AZUMA Takahiro
Assignee: Honda elesys Co., Ltd.

A gradient estimation apparatus includes a feature point extracting unit configured to extract feature points on an image captured by an imaging unit, an object detecting unit configured to detect image regions indicating objects from the image captured by the imaging unit, and a gradient calculating unit configured to calculate a gradient of the road surface on which the objects are located, based on the coordinates of the feature points extracted by the feature point extracting unit in the image regions indicating the objects detected by the object detecting unit and the amounts of movements of the coordinates of the feature points over a predetermined time. 1. A gradient estimation apparatus comprising:a feature point extracting unit configured to extract feature points on an image captured by an imaging unit;an object detecting unit configured to detect image regions indicating objects from the image captured by the imaging unit; anda gradient calculating unit configured to calculate the gradient of the road surface on which the objects are located, based on the coordinates of the feature points extracted by the feature point extracting unit in the image indicating the object detected by the object detecting unit and the amount of movement of the coordinates of the feature points over a predetermined time.2. 
The gradient estimation apparatus according to claim 1 ,wherein the gradient calculating unit divides the sum of the differences in coordinate change ratios over the combinations of discrete two feature point pairs, where a coordinate change ratio indicating the ratio of the coordinates of a feature points extracted by the feature point extracting unit in the image region indicating the object detected by the object detecting unit with respect to the amounts of movement of the coordinates of the feature points over the predetermined time, by the sum of the differences of the inverse numbers of the amounts of movement over combinations of discrete two feature ...
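Under one reading of the claim 2 language, the gradient is the pairwise-difference sum of coordinate change ratios divided by the pairwise-difference sum of inverse movement amounts. The sketch below follows that reading with a ratio r_i = y_i / d_i of each feature point's coordinate to its movement; this is an illustrative interpretation, not the patented implementation.

```python
from itertools import combinations

# Sketch of the claim 2 ratio (assumed reading): gradient =
#   sum over point pairs of (r_i - r_j) / sum over point pairs of (1/d_i - 1/d_j),
# where r_i = y_i / d_i for coordinate y_i and movement d_i.
def estimate_gradient(coords, movements):
    ratios = [y / d for y, d in zip(coords, movements)]
    inv = [1.0 / d for d in movements]
    pairs = list(combinations(range(len(coords)), 2))
    num = sum(ratios[i] - ratios[j] for i, j in pairs)
    den = sum(inv[i] - inv[j] for i, j in pairs)
    return num / den
```

A useful sanity check on this reading: if y = a + b*d exactly, the pairwise differences cancel the b term and the estimate recovers a.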

18-07-2013 publication date

SYSTEMS AND METHODS FOR CAPTURING MOTION IN THREE-DIMENSIONAL SPACE

Number: US20130182897A1
Author: Holz David
Assignee:

Methods and systems for capturing motion and/or determining the shapes and positions of one or more objects in 3D space utilize cross-sections thereof. In various embodiments, images of the cross-sections are captured using a camera based on reflections therefrom or shadows cast thereby. 1. A method of identifying a position and shape of an object in three-dimensional (3D) space , the method comprising:capturing an image generated by casting an output from at least one source onto the object;analyzing the image to computationally slice the object into a plurality of two-dimensional (2D) slices, each slice corresponding to a cross-section of the object;identifying shapes and positions of a plurality of cross-sections of the object based at least in part on the image and a location of the at least one source; andreconstructing the position and shape of the object in 3D space based at least in part on a plurality of the identified cross-sectional shapes and positions of the object.2. The method of claim 1 , wherein the at least one source is at least one light source.3. The method of claim 1 , wherein the cross-sectional shape and position of the object is identified by selecting a collection of intersection points generated by analyzing a location of the at least one source and positions of points in the image associated with the 2D slice.4. The method of claim 3 , wherein the intersection points are selected based on a total number of the at least one source.5. The method of claim 4 , wherein the image is a shadow of the object.6. The method of claim 5 , wherein the intersection points are selected based on locations of projection points associated with the intersection points claim 5 , the projection points being projections from the intersection points onto the 2D slice.7. 
The method of claim 6 , further comprising:splitting the cross-section of the object into multiple regions and using each region to generate at least a portion of the shadow image of the 2D slice ...

18-07-2013 publication date

System and method for video content analysis using depth sensing

Number: US20130182904A1
Assignee: Objectvideo Inc

A method and system for performing video content analysis based on two-dimensional image data and depth data are disclosed. Video content analysis may be performed on the two-dimensional image data, and then the depth data may be used along with the results of the video content analysis of the two-dimensional data for tracking and event detection.

18-07-2013 publication date

Automatic detection of the number of lanes into which a road is divided

Number: US20130182957A1
Author: Artur Wujcicki

This invention concerns a computer-implemented method for determining a number of lanes on a road. The method comprises receiving an image, the image being a photographic image of the road or an image derived from the photographic image of the road, carrying out an analysis of the image to identify lane dividers of the road and determining a number of lanes into which the road is divided from the identification of the lane dividers. The method may comprise determining a confidence level for the determined value for the number of lanes. Map data of a plurality of roads may be generated using the value for the number of lanes determined from the computer-implemented method.
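The counting step above, deriving the number of lanes from identified lane dividers and attaching a confidence level, can be sketched as follows. Whether edge lines count as dividers is an assumption here: this sketch treats detections as interior dividers, so lanes = dividers + 1, and uses the weakest divider detection score as the confidence level.

```python
# Sketch: threshold divider detection scores, count lanes, report confidence.
# The lanes = dividers + 1 convention and the confidence rule are assumptions.
def lanes_from_dividers(divider_scores, detect_threshold=0.5):
    detected = [s for s in divider_scores if s >= detect_threshold]
    lanes = len(detected) + 1
    confidence = min(detected) if detected else 0.0
    return lanes, confidence

print(lanes_from_dividers([0.9, 0.8, 0.4]))  # two confident dividers -> 3 lanes
```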

18-07-2013 publication date

GPS-BASED MACHINE VISION ROADWAY MARK LOCATOR, INSPECTION APPARATUS, AND MARKER

Number: US20130184938A1
Assignee: Limn Tech LLC

An apparatus for locating, inspecting, or placing marks on a roadway. The apparatus includes a GPS-based machine vision locator for sampling discrete geographical location data of a pre-existing roadway mark evident on the roadway. A computer may determine a continuous smooth geographical location function fitted to the sampled geographical location data. And a marker is responsive to the GPS-based locator and geographical location function for replicating automatically the pre-existing roadway mark onto the roadway. The apparatus is typically part of a moving vehicle. A related method is disclosed for locating, inspecting, and placing marks on a resurfaced roadway. A similar apparatus can be used to guide a vehicle having a snow plow along a snow-covered roadway, or a paving machine along an unpaved roadway surface. 1. An apparatus for determining a geographical location of a roadway mark from a moving vehicle , comprising:at least one vehicle mounted imager responsive to a trigger signal for imaging at least one roadway mark located substantially parallel to a direction of travel of the vehicle to provide a triggered roadway mark image;a GPS antenna;a GPS receiver responsive to the GPS antenna for determining a geographical location of the GPS antenna;an apparatus for providing a GPS receiver synchronized image trigger signal to the vehicle mounted imager; andan apparatus for determining a GPS geographical location of the roadway mark from the triggered roadway mark image and the geographical location of the GPS antenna.2. The apparatus according to wherein the GPS antenna claim 1 , adapted to receive GPS radio wave signals originating from a GPS satellite system or a GPS-pseudolite array claim 1 , is connected to the GPS receiver which decodes the GPS signals for determining the geographical location of the GPS antenna.3. 
A method for determining a geographical location of a roadway mark from a moving vehicle claim 1 , comprising:imaging at least one roadway mark ...

25-07-2013 publication date

IMAGING APPARATUS, VEHICLE SYSTEM HAVING THE SAME, AND IMAGE-PROCESSING METHOD

Number: US20130188051A1
Assignee:

A dynamic expansion operation of an index value image is performed by specifying an index value range before correction (Gmin to Gmax) for one index value image, calculating magnification K for which to be expanded to an ideal index value range (0 to 1023), and correcting an index value before correction G by the magnification K. An effective magnification Kthre to expand a maximum effective index value range (215 to 747) that can be taken by the index value before correction G calculated from transmittance of a filter to the ideal index value range (0 to 1023) is stored, and in a case where the calculated magnification K is smaller than the effective magnification Kthre, the expansion operation is performed by use of the effective magnification Kthre. 2. The imaging apparatus according to claim 1 , wherein the selective filter region is constituted by a polarization filter that selectively transmits a predetermined polarization component.3. The imaging apparatus according to claim 1 , wherein the magnification calculator determines the range of the index value before correction based on frequency distribution information of the index value before correction calculated based on the image signal corresponding to the one index value image outputted from the imaging device.4. The imaging apparatus according to claim 3 , wherein the magnification calculator extracts a range of the index value in which frequency exceeds a predetermined frequency threshold value based on the frequency distribution information claim 3 , and determines the extracted range as the range of the index value before correction.5. The imaging apparatus according to claim 1 , wherein the expansion operation device has a memory that stores input data in a specified write address claim 1 , and outputs data stored in a specified readout address claim 1 , and stores data of the index value after correction corrected by use of the magnification calculated by the magnification calculator in a write ...
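The expansion rule above is fully numeric and can be sketched using the figures from the abstract: the ideal index range is 0 to 1023, the maximum effective pre-correction range is 215 to 747, so the effective magnification is Kthre = 1023 / (747 - 215) ≈ 1.923, and a computed magnification K below Kthre is replaced by Kthre. The offset handling (subtracting Gmin before scaling) is an assumption; the abstract only says the index value is corrected by the magnification.

```python
# Figures taken from the abstract; offset convention is an assumption.
IDEAL_SPAN = 1023.0
K_THRE = IDEAL_SPAN / (747 - 215)   # effective magnification, about 1.923

def expansion_magnification(g_min, g_max):
    """Magnification for the observed range, clamped up to K_THRE."""
    k = IDEAL_SPAN / (g_max - g_min)
    return max(k, K_THRE)

def correct(g, g_min, g_max):
    """Expand one pre-correction index value toward the 0..1023 range."""
    return (g - g_min) * expansion_magnification(g_min, g_max)
```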

25-07-2013 publication date

ANALYSIS APPARATUS, ANALYSIS METHOD, AND STORAGE MEDIUM

Number: US20130188829A1
Author: Kamei Yoichi
Assignee: CANON KABUSHIKI KAISHA

An analysis apparatus analyzes an image and performs counting of the number of object passages. The analysis apparatus executes the counting and outputs the execution status of the counting. 1. An analysis apparatus for analyzing an image and performing counting of the number of object passages , comprising:an execution unit configured to execute the counting; andan output unit configured to output an execution status of the counting by said execution unit.2. The apparatus according to claim 1 , wherein said output unit outputs the execution status representing whether the counting is progressing or has stopped.3. The apparatus according to claim 1 , further comprising a determination unit configured to determine claim 1 , based on a capturing range of a capturing unit configured to capture and acquire the image claim 1 , whether the counting is executable by said execution unit claim 1 ,wherein said output unit outputs the execution status based on a determination result of said determination unit.4. The apparatus according to claim 1 , further comprising a setting unit configured to set claim 1 , in accordance with stop of the counting by said execution unit claim 1 , a counting value to a value representing that the counting has stopped.5. The apparatus according to claim 1 , further comprising:a storage unit configured to store a start time of the counting by said execution unit; anda reset unit configured to reset the start time when the counting by said execution unit has been stopped,wherein said output unit outputs the start time as the execution status of the counting.6. The apparatus according to claim 1 , wherein said output unit outputs a stop time of the counting by said execution unit as the execution status of the counting.7. 
An analysis apparatus for analyzing an image and performing counting of the number of object passages claim 1 , comprising:an execution unit configured to execute the counting; anda holding unit configured to hold an execution ...

25-07-2013 publication date

METHOD AND DEVICE FOR DETERMINING WHEEL AND BODY MOTIONS OF A VEHICLE

Number: US20130188839A1
Assignee:

A method for determining wheel and body motions of a vehicle having a body and at least one wheel includes inducing a motion of the vehicle, recording an image sequence of the moving vehicle, determining the optical flow from the recorded image sequence, and determining the position of at least one wheel center, the motion of the body and/or a damping ratio of the vehicle from the optical flow. 110-. (canceled)11. A method for determining wheel and body motions of a vehicle having a body and at least one wheel , the method comprising:inducing a motion of the vehicle;recording a sequence of images of the moving vehicle;determining an optical flow from the recorded images of the image sequence; anddetermining from the optical flow, at least one of: i) a position of at least one wheel center, ii) a motion of the body, and iii) a damping ratio of the vehicle.12. The method as recited in claim 11 , wherein the position of at least one wheel center claim 11 , the motion of the body and the damping ratio are determined simultaneously.13. The method as recited in claim 11 , further comprising:eliminating geometric distortions in the recorded images.14. The method as recited in claim 11 , wherein the determining of the optical flow includes segmenting a flow field.15. The method as recited in claim 14 , wherein the segmenting includes segmenting the flow field into flow vectors on the wheel claim 14 , flow vectors on the body claim 14 , and flow vectors that are situated neither on the wheel claim 14 , nor on the body.16. The method as recited in claim 14 , wherein the determining includes using a Gauss-Markov model in accordance with least squares.17. 
A measuring device for determining wheel and body motions of a vehicle claim 14 , which has a body and at least one wheel claim 14 , the measuring device comprising:at least one camera configured to record a sequence of images of the vehicle;a computation device configured to calculate an optical flow from the recorded image ...

01-08-2013 publication date

SITUATION DETERMINING APPARATUS, SITUATION DETERMINING METHOD, SITUATION DETERMINING PROGRAM, ABNORMALITY DETERMINING APPARATUS, ABNORMALITY DETERMINING METHOD, ABNORMALITY DETERMINING PROGRAM, AND CONGESTION ESTIMATING APPARATUS

Number: US20130195364A1
Assignee: Panasonic Corporation

A congestion estimating apparatus includes an area dividing unit that divides a moving image into partial areas. A movement information determining unit determines whether there is movement, and a person information determining unit determines whether there is a person, in each partial area. A staying determining unit determines a state for each partial area. The staying determining unit determines the state as a movement area in which there is a movement of person when there is movement and there is a person; and determines the state as a noise area when there is movement and there is no person; and determines the state as a staying area in which there is a person who is staying when there is no movement and there is a person; and determines the state as a background area in which there is no person when there is no movement and there no person. 1. A congestion estimating apparatus comprising:an area dividing unit that divides a moving image into partial areas;a movement information determining unit that determines whether or not there is a movement in each of the partial areas;a person information determining unit that determines whether or not there is a person in each of the partial areas; anda staying determining unit that receives determination results from the movement information determining unit and the person information determining unit to determine a state of area for each of the partial areas, whereinthe staying determining unit determines the state of area as a movement area in which there is a movement of person when the movement information determining unit determines that there is a movement and the person information determining unit determines that there is a person,the staying determining unit determines the state of area as a noise area when the movement information determining unit determines that there is a movement and the person information determining unit determines that there is no person,the staying determining unit determines the state 
...
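The four-way determination in the abstract reduces to a truth table on the two per-area questions (is there movement? is there a person?), which can be stated directly:

```python
# Direct sketch of the staying-determining rule: (movement?, person?) -> state.
def classify_area(has_movement, has_person):
    if has_movement and has_person:
        return "movement"    # a person is moving here
    if has_movement:
        return "noise"       # movement without a person
    if has_person:
        return "staying"     # a person who is staying
    return "background"      # no movement and no person
```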

08-08-2013 publication date

ASSISTED VIDEO SURVEILLANCE OF PERSONS-OF-INTEREST

Number: US20130201329A1
Assignee: Massachusetts Institute of Technology

Methods, systems and media are described for computer-assisted video surveillance. Methods may support detection of moving persons in video frames, extraction of features of the detected moving persons and identification of which detected moving persons are likely matches to a person of interest. Identification of the likely matches may be determined using an attribute-based search, and/or using a specific person-based search. The method may include using likely matches confirmed as images of the person of interest to reconstruct a path of the person of interest. 1. A computer-implemented method for analyzing surveillance video data , the method comprising:detecting one or more moving persons in a frame for each frame in a plurality of frames of the video data;creating a record including image data from a subsection of the frame associated with the detected moving person for each detected moving person and for each frame;calculating values for a plurality of attributes characterizing each detected moving person by employing a probabilistic model that matches attributes to image data in the record associated with the detected moving person;receiving an attribute profile including a value for at least one attribute of a person, wherein the plurality of attributes comprises the at least one attribute;calculating a score for each record based on a comparison of the received attribute profile with the calculated values for the plurality of attributes for the record; andidentifying one or more records as candidate matches to the person of interest based on the calculated scores.2. The computer-implemented method of claim 1 , wherein the plurality of attributes includes attributes regarding non-head portions of a person.3. The computer-implemented method of claim 1 , wherein the plurality of attributes includes a-full body attribute of a person.4. 
The computer-implemented method of claim 1 , further comprising:displaying the one or more candidate match records to a user; ...
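The attribute-based search above (score every detection record against a received attribute profile, then surface the best-scoring records as candidate matches) can be sketched over simple attribute dictionaries. The similarity measure here (fraction of matching attributes) is an assumption; the patent's probabilistic model is not reproduced.

```python
# Sketch: score records against an attribute profile, keep the top matches.
def score(record_attrs, profile):
    matches = sum(1 for k, v in profile.items() if record_attrs.get(k) == v)
    return matches / len(profile)

def candidate_matches(records, profile, top_k=2):
    ranked = sorted(records, key=lambda r: score(r, profile), reverse=True)
    return ranked[:top_k]

# Usage: search for a person in a red shirt carrying a bag.
records = [{"shirt": "red", "bag": True, "hat": False},
           {"shirt": "red", "bag": False, "hat": False},
           {"shirt": "blue", "bag": True, "hat": True}]
profile = {"shirt": "red", "bag": True}
print(candidate_matches(records, profile, top_k=1))
```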

08-08-2013 publication date

ASSISTED VIDEO SURVEILLANCE OF PERSONS-OF-INTEREST

Number: US20130201330A1
Assignee: Massachusetts Institute of Technology

Methods, systems and media are described for computer-assisted video surveillance. Methods may support detection of moving persons in video frames, extraction of features of the detected moving persons and identification of which detected moving persons are likely matches to a person of interest. Identification of the likely matches may be determined using an attribute-based search, and/or using a specific person-based search. The method may include using likely matches confirmed as images of the person of interest to reconstruct a path of the person of interest. 1. A computer implemented method for analyzing video data including a plurality of people , the method comprising:receiving an identification of a person of interest in an image;detecting one or more moving persons in a frame for each frame in a plurality of frames of comparison video data, wherein the plurality of frames of comparison video data does not include an image of the person of interest;creating a record including image data from a subsection of the frame associated with the detected moving person for each of the detected moving persons and for each of the plurality of frames of comparison video data;creating a person-specific detector for the person of interest by training a support vector machine classifier using at least one image of the person of interest as positive training data and the records of the detected moving persons in plurality of frames of comparison video data as negative training data; andapplying the person-specific detector to a plurality of records from video data of interest to identify one or more records as candidate matches to the person of interest.2. 
The computer-implemented method of claim 1 , further comprising:detecting one or more moving persons in a frame for each in a plurality of frames of the video data of interest; andcreating a record including image data from a subsection of the frame associated with the detected moving person for each detected moving person ...
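The one-vs-rest training scheme in claim 1 can be sketched with any linear classifier: images of the person of interest are positives, and detections from comparison video known not to contain that person are negatives. The patent specifies a support vector machine; to keep this snippet dependency-free, a simple perceptron stands in for the SVM, but the data flow is the same.

```python
# Sketch: train a person-specific linear detector (perceptron stand-in for
# the claimed SVM) from positive and negative feature vectors.
def train_detector(positives, negatives, epochs=50, lr=0.1):
    dim = len(positives[0])
    w, b = [0.0] * dim, 0.0
    data = [(x, 1) for x in positives] + [(x, -1) for x in negatives]
    for _ in range(epochs):
        for x, y in data:
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def is_person_of_interest(x, model):
    w, b = model
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0
```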

22-08-2013 publication date

END-TO-END VISUAL RECOGNITION SYSTEM AND METHODS

Number: US20130215264A1
Author: LEE Taehee, Soatto Stefano

We describe an end-to-end visual recognition system, where “end-to-end” refers to the ability of the system of performing all aspects of the system, from the construction of “maps” of scenes, or “models” of objects from training data, to the determination of the class, identity, location and other inferred parameters from test data. Our visual recognition system is capable of operating on a mobile hand-held device, such as a mobile phone, tablet or other portable device equipped with sensing and computing power. Our system employs a video based feature descriptor, and we characterize its invariance and discriminative properties. Feature selection and tracking are performed in real-time, and used to train a template-based classifier during a capture phase prompted by the user. During normal operation, the system scores objects in the field of view based on their ranking. 1. A visual recognition apparatus for identifying objects captured in a video stream having a captured time period , the apparatus comprising:an image sensor configured for capturing a video stream;a computer processor; and capturing the video stream from said image sensor;', 'associating each frame in an image with a corresponding frame in temporally adjacent images, or in images taken from nearby vantage points; and', 'temporally aggregating statistics computed at one or more collections of temporally corresponding frames, into a descriptor., 'programming for processing said video stream to perform visual recognition by performing steps comprising2. The apparatus recited in claim 1 , wherein said temporal aggregating of statistics is performed by computing a mean claim 1 , or median claim 1 , or mode claim 1 , or sample histogram of a contrast-invariant function of the image in said frames.3. The apparatus recited in claim 1 , wherein said programming performs steps comprising:spatially aggregating such statistics into a representation that is insensitive to nuisance factor and distinctive; ...
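Claim 2 above aggregates temporally corresponding frame statistics by mean, median, mode, or sample histogram. A minimal sketch of that temporal aggregation, over a tracked feature's per-frame statistic vectors (mean and median only, for brevity):

```python
from statistics import mean, median

# Sketch: aggregate each statistic dimension across the frames of a track.
# track_stats is a list over time of equal-length statistic vectors.
def temporal_descriptor(track_stats, how="median"):
    agg = {"mean": mean, "median": median}[how]
    return [agg(vals) for vals in zip(*track_stats)]

print(temporal_descriptor([[1, 4], [3, 6], [2, 5]]))  # per-dimension medians
```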

22-08-2013 publication date

IMAGING SYSTEM AND IMAGING METHOD

Number: US20130216099A1
Author: Sugai Takashi
Assignee: CANON KABUSHIKI KAISHA

An imaging system comprises a whole image read out unit for reading out a whole image in a first resolution from an imaging device, a partial image region selecting unit for selecting a region of a partial image in a part of the whole image which is read out, a partial image read out unit for reading out the partial image in the selected region in a second resolution from the imaging device, a characteristic region setting unit for setting a characteristic region, in which a characteristic object exists, within the partial image, a characteristic region image read out unit for reading out an image of the characteristic region, which is set, in a third resolution from the imaging device, and a resolution setting unit for setting such that the first resolution ...

29-08-2013 publication date

EXTERIOR ENVIRONMENT RECOGNITION DEVICE

Number: US20130223689A1
Author: Saito Toru
Assignee: FUJI JUKOGYO KABUSHIKI KAISHA

Provided is an exterior environment recognition device including: a parallax deriving unit for obtaining parallax by means of the pattern matching; a position information deriving unit for deriving a relative distance from the parallax; a grouping unit for grouping a block of which difference of the relative distance is included within a predetermined range, and specifying specified objects; a specified object selection unit for selecting a specified object; an offset amount deriving unit for, when the relative distance of the selected specified object becomes less than a threshold value determined in advance, deriving an amount of offset in accordance with the relative distance; and an offset execution unit for offsetting the image by the amount of offset. When the amount of offset is not zero, the position information deriving unit limits deriving of the relative distance in an image other than an image range corresponding to the selected specified object. 1. An exterior environment recognition device comprising:an image data obtaining unit for obtaining two pieces of image data generated by two image-capturing units of which stereo axes are on a same plane but provided at different positions;a parallax deriving unit for comparing the two pieces of image data, extracting a block highly correlated with any given block including one or more pixels in an image based on one of the image data, from a search range, of which size is determined in advance, in an image based on the other of the image data, and obtaining parallax of both of the blocks;a position information deriving unit for deriving a relative distance of the block from the parallax;a grouping unit for grouping a block of which difference of the relative distance is included within a predetermined range, and specifying one or more specified objects;a specified object selection unit for selecting a specified object which is to be a tracking target, from the one or more specified objects;an offset amount ...
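The pattern matching and relative-distance derivation in this abstract follow the standard stereo pipeline. The sketch below is a pure-Python illustration on a single scanline; the SAD cost, the search range, and the Z = f·B/d triangulation are textbook choices, and all names are hypothetical rather than the patent's.

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length pixel blocks."""
    return sum(abs(p - q) for p, q in zip(a, b))

def best_disparity(left_row, right_row, x, half=1, max_disp=4):
    """Return the disparity d minimising SAD between the block around
    left_row[x] and the block around right_row[x - d]."""
    ref = left_row[x - half:x + half + 1]
    best, best_cost = 0, float("inf")
    for d in range(0, max_disp + 1):
        if x - d - half < 0:
            break  # search window left the image
        cand = right_row[x - d - half:x - d + half + 1]
        cost = sad(ref, cand)
        if cost < best_cost:
            best, best_cost = d, cost
    return best

def relative_distance(disparity, focal_px, baseline_m):
    """Triangulate distance from disparity: Z = f * B / d."""
    return focal_px * baseline_m / disparity
```

With the camera pair's stereo axes on the same plane, a larger disparity maps to a smaller relative distance, which is what the grouping step then bins into objects.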

05-09-2013 publication date

AUTOMATED TRACK SURVEYING AND DITCHING

Number: US20130230212A1
Assignee: HERZOG RAILROAD SERVICES, INC.

A method of surveying a section of a railway to determine amounts of soil to be excavated or added at selected position coordinates of track locations includes moving a survey vehicle along the railway, optically scanning the track structure at selected intervals to obtain optical data points with position coordinates, recording images at the intervals with position coordinates, recording position coordinates of drainage points, processing the optical data points to derive ditch overlays formed by ditch profiles associated with locations along the track and ditch templates, detecting anomalous soil unit weights associated with track locations, reviewing images associated with the locations of the anomalous units, adjusting the ditch overlays as necessary, and loading the adjusted data into a computer of an excavator device for display to guide an excavator operator in reshaping the ditches along the track according to the detected position of the excavator device. 1. A method for automated track surveying and maintaining ditching along a railway and comprising the steps of:(a) moving a survey vehicle along a section of said railway;(b) obtaining survey vehicle position coordinates of said survey vehicle at intervals spaced along said section of railway using a survey vehicle position coordinate system;(c) optically scanning said railway at each of said intervals to obtain optical data points, each of said optical data points having position coordinates associated therewith;(d) recording an image of said railway at selected ones of said intervals along said railway section, each image having position coordinates associated therewith;(e) entering locations of drainage points along said railway section;(f) processing said optical data points to obtain ditch profiles corresponding to said intervals;(g) providing ditch templates associated with said intervals corresponding to ditch contours required to promote positive drainage toward respective ones of said drainage ...
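The "amounts of soil to be excavated" at each station can be illustrated with a simple cut-volume sum of a measured ditch profile against its template; this per-station box approximation is an illustrative assumption, not the patent's actual computation.

```python
def cut_volume(profile_m, template_m, station_spacing_m, width_m):
    """Sum soil above the target template into an excavation volume:
    (height excess) x (ditch width) x (station spacing), per station."""
    cut = 0.0
    for actual, target in zip(profile_m, template_m):
        excess = actual - target
        if excess > 0:  # soil above the template must be excavated
            cut += excess * width_m * station_spacing_m
    return cut
```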

12-09-2013 publication date

Image processing device, image processing method, and image processing program

Number: US20130235195A1
Assignee: Omron Corp

An image processing device has an image input part to which a frame image of an imaging area taken with an infrared camera is input, a background model storage part in which a background model is stored with respect to each pixel of the frame image input to the image input part, a frequency of a pixel value of the pixel being modeled in the background model, a background difference image generator that determines whether each pixel of the frame image input to the image input part is a foreground pixel or a background pixel using the background model of the pixel, which is stored in the background model storage part, and generates a background difference image, and an object detector that sets a foreground region and detects an imaged object based on the foreground pixel in the background difference image generated by the background difference image generator.
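The per-pixel background model above stores the frequency of each pixel value. A toy Python sketch of that idea (the quantisation bin size, threshold, and class name are assumptions for illustration):

```python
from collections import Counter

class PixelBackgroundModel:
    """Toy per-pixel model: the observed frequency of each quantised
    pixel value serves as the background likelihood for that pixel."""

    def __init__(self, bin_size=16):
        self.bin_size = bin_size
        self.counts = Counter()
        self.total = 0

    def update(self, value):
        """Accumulate one observation of this pixel's value."""
        self.counts[value // self.bin_size] += 1
        self.total += 1

    def is_foreground(self, value, threshold=0.1):
        """A rarely observed value is classified as a foreground pixel."""
        freq = self.counts[value // self.bin_size] / self.total
        return freq < threshold
```

Running `is_foreground` over every pixel of a frame yields the background difference image; connected foreground pixels then form the regions the object detector works on.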

19-09-2013 publication date

IMAGE MONITORING SYSTEM AND IMAGE MONITORING PROGRAM

Number: US20130243253A1
Assignee: SONY CORPORATION

An image monitoring system includes a recorder that records an image captured by a camera via a network. The system is controlled to display the present image captured by the camera or a past image recorded on the recorder. A moving object is detected from the image captured by the camera, the detector including a resolution converter for generating an image with a resolution lower than the resolution of the image captured by the camera. A moving object is detected from the image generated by the resolution converter and positional information on the detected moving object is output. The positional information of the detected moving object is merged with the image captured by the camera on the basis of the positional information. 1.-5. (canceled) 6. An image monitoring system comprising: recording means for recording an image captured by a camera via a network; control means for controlling the system so as to display the present image captured by the camera or a past image recorded on the recording means on display means; and moving-object detecting means for detecting a moving object from the image captured by the camera; wherein the moving-object detecting means includes: resolution conversion means for generating an image with a resolution lower than the resolution of the image captured by the camera, positional-information output means for detecting a moving object from the image generated by the resolution conversion means and outputting positional information on the detected moving object, and information merging means for merging the positional information of the moving object with the image captured by the camera on the basis of the positional information of the moving object output by the positional-information output means. 7. The image monitoring system according to claim 6, wherein the information merging means merges the positional information in agreement with the resolution of the image displayed on the display means. 8.
An image monitoring program ...
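The detect-at-low-resolution, merge-at-full-resolution scheme can be sketched in a few lines: average-pool the frame, difference it against the previous pooled frame, then map detections back to full-resolution coordinates. Factor, threshold, and names are illustrative assumptions.

```python
def downscale(frame, factor):
    """Average-pool a 2-D grid of grey values by an integer factor."""
    h, w = len(frame), len(frame[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [frame[yy][xx] for yy in range(y, y + factor)
                                   for xx in range(x, x + factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def detect_motion(prev_small, cur_small, thresh=10):
    """Return low-resolution (y, x) cells whose change exceeds thresh."""
    return [(y, x)
            for y, row in enumerate(cur_small)
            for x, v in enumerate(row)
            if abs(v - prev_small[y][x]) > thresh]

def to_full_res(positions, factor):
    """Merge step: map low-resolution detections back to full resolution."""
    return [(y * factor, x * factor) for y, x in positions]
```

Detecting on the pooled image keeps the moving-object detector cheap; only the positional information, not the low-resolution pixels, is merged back into the displayed full-resolution image.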

19-09-2013 publication date

Foreground Analysis Based on Tracking Information

Number: US20130243254A1

Techniques for performing foreground analysis are provided. The techniques include identifying a region of interest in a video scene, detecting a static foreground object in the region of interest, and determining whether the static foreground object is abandoned or removed, wherein said determining comprises performing a foreground analysis based on tracking information and pruning one or more false alarms using one or more track statistics. 1. A method for performing foreground analysis , wherein the method comprises:identifying a region of interest in a video scene;detecting a static foreground object in the region of interest; anddetermining whether the static foreground object is abandoned or removed, wherein said determining comprises performing a foreground analysis based on tracking information and pruning one or more false alarms using one or more track statistics.2. The method of claim 1 , wherein identifying a region of interest in a video scene comprises enabling a user to manually draw a region of interest in a video scene.3. The method of claim 1 , wherein pruning one or more false alarms using one or more track statistics comprises using information from an object tracking algorithm.4. The method of claim 1 , further comprising determining whether a static foreground object determined to be abandoned meets user-defined criteria.5. The method of claim 4 , further comprising triggering an alarm if the static foreground object determined to be abandoned meets the user-defined criteria.6. The method of claim 1 , further comprising verifying the detection of a static foreground object determined to be abandoned using a tracker.7. The method of claim 6 , wherein using a tracker to verify the detection of a static foreground object determined to be abandoned comprises:using the tracker to track each of one or more moving objects in the region of interest to produce one or more corresponding trajectories; andquerying the tracker with the static foreground ...
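One common heuristic for the abandoned-versus-removed decision (a stand-in for illustration, not necessarily the analysis claimed here) compares the texture of the static region in the current frame against the stored background: a newly abandoned object adds structure, a removed one takes it away.

```python
def variance(vals):
    """Population variance of a list of grey values."""
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def abandoned_or_removed(region_now, region_background):
    """Heuristic: more texture than the stored background suggests an
    abandoned object; less texture suggests a removed one."""
    return ("abandoned"
            if variance(region_now) > variance(region_background)
            else "removed")
```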

19-09-2013 publication date

IMAGE PROCESSING METHOD

Number: US20130243322A1

An image processing method of separating an input image into a foreground image and a background image, the method including determining a pixel of the input image as a pixel of the foreground image if a foreground probability value of the pixel of the foreground image determined by using the Gaussian mixture model or the pixel determined to be included in a motion region is greater than a setting threshold. 1. An image processing method comprising:(a) determining whether a pixel of an input image is a pixel of a foreground image or a pixel of the background image by using a Gaussian mixture model;(b) determining whether the pixel of the input image is included in a motion region;(c) obtaining a foreground probability value of the pixel of the input image according to a foreground probability histogram with respect to a correlation between the input image and a reference image of the input image; and(d) determining the pixel of the input image as the pixel of the foreground image if the foreground probability value of the pixel of the foreground image determined by using the Gaussian mixture model or the pixel determined to be included in the motion region is greater than a setting threshold.2. The method of claim 1 , wherein the input image comprises a series of frame images of video claim 1 , andwherein operations (a) through (d) are repeatedly performed.3. The method of claim 2 , wherein claim 2 , in operation (c) claim 2 , the reference image is separated into a foreground and a background claim 2 , and is renewed as a previous image during a process of repeatedly performing operations (a) through (d).4. 
The method of claim 1, wherein, in operation (c), the correlation between the input image and the reference image is calculated with respect to the pixel of the input image by using a texture value of the pixel that is a difference between a mean gradation value of the pixel and neighboring pixels and gradation values of the neighboring pixels ...
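Under one reading of operation (d), the per-pixel decision combines the three cues as follows; the names and the exact boolean structure are assumptions, since the abstract leaves the grouping ambiguous.

```python
def is_foreground(gmm_says_fg, in_motion_region, fg_probability, threshold=0.5):
    """A pixel flagged by the Gaussian mixture model or lying in a motion
    region is kept as foreground only when its histogram-based foreground
    probability exceeds the setting threshold."""
    return (gmm_says_fg or in_motion_region) and fg_probability > threshold
```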

19-09-2013 publication date

ELECTRONIC APPARATUS, REPRODUCTION CONTROL SYSTEM, REPRODUCTION CONTROL METHOD, AND PROGRAM THEREFOR

Number: US20130243407A1
Assignee: SONY CORPORATION

Provided is an electronic apparatus including: a storage to store first and second contents, each of which includes scenes, and meta-information items each indicating a feature of each scene of the first and second contents; a reproducer to reproduce the first and second contents; an operation receiver to receive an input of an operation by a user; and a controller to control the storage to store an operation-history information item indicating an operation history of the user for each scene during reproduction of the first content while it is associated with the meta-information item of each scene, to calculate a similarity between scenes of the first and second contents based on the meta-information items, and to control the reproducer to change a reproduction mode for each scene based on the operation-history information item and the similarity during reproduction of the second content. 1. An electronic apparatus , comprising:a storage configured to store content including a plurality of scenes, and meta-information items each indicating a feature of at least one of the scenes;an operation receiver configured to receive an input of an operation; anda controller configured to control the storage to store an operation-history information item indicating an operation history for a first scene of the scenes while the operation-history information item is associated with the meta-information item of the first scene and to calculate a similarity between the first scene and a second scene based on the meta-information items.2. The electronic apparatus according to claim 1 , wherein the controller is configured to control a reproducer to change a reproduction mode for the second scene based on the operation-history information item and the calculated similarity during reproduction of the second scene.3. The electronic apparatus according to claim 1 , wherein the meta-information items each indicate a feature of each of the plurality of scenes.4. 
The electronic apparatus ...
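The scene-to-scene similarity over meta-information items can be sketched as a cosine similarity between meta-feature vectors; encoding each scene's meta-information as a numeric vector is an illustrative assumption, not the patent's stated representation.

```python
import math

def scene_similarity(meta_a, meta_b):
    """Cosine similarity between two scenes' meta-information vectors."""
    dot = sum(a * b for a, b in zip(meta_a, meta_b))
    na = math.sqrt(sum(a * a for a in meta_a))
    nb = math.sqrt(sum(b * b for b in meta_b))
    return dot / (na * nb) if na and nb else 0.0
```

A second content's scene whose similarity to an operated-on first-content scene is high would then inherit the changed reproduction mode.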

26-09-2013 publication date

METHOD AND SYSTEM FOR EVALUATING BRIGHTNESS VALUES IN SENSOR IMAGES OF IMAGE-EVALUATING ADAPTIVE CRUISE CONTROL SYSTEMS, ESPECIALLY WITH RESPECT TO DAY/NIGHT DISTINCTION

Number: US20130251208A1
Assignee: BENDIX COMMERCIAL VEHICLE SYSTEMS LLC

The invention proposes a method and an arrangement for evaluating sensor images of an image-evaluating environment recognition system on a carrier, in which, in order to distinguish the light conditions in the area of the image-evaluating environment recognition system with regard to day or night, at least the gain and/or the exposure time of the at least one image sensor detecting the environment is/are monitored, a profile of the gain and/or the exposure time against time with relatively high gain or relatively long exposure times characterizing night-time light conditions, and a profile of the gain and/or the exposure time with relatively low gain and/or relatively short exposure times characterizing daytime light conditions. The environment recognition system according to the invention can also be used to search the detected environment for bright objects, the headlights of another carrier being used as additional information, for example. 1.-11. (canceled) 12. A method for setting a parameter of an image-based driving assistance system on a vehicle, the driver assistance system including a lane-keeping assistance system, the method comprising: creating a state recognition profile including a plurality of respective previous instantaneous gains and previous instantaneous exposure times associated with one of an instantaneous daytime state and an instantaneous night-time state; recording a current frame; reading at least one of a current instantaneous gain and a current instantaneous exposure time associated with the frame; identifying, based on the state recognition profile, if the at least one of the current instantaneous gain and the current instantaneous exposure time correspond to one of the instantaneous daytime state, the instantaneous night-time state, and a neither daytime nor night-time state; if the at least one of the current instantaneous gain and the current instantaneous exposure time correspond to one of the instantaneous daytime state and the ...
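The day/night decision from a gain profile reduces to thresholding sustained gain levels, with an explicit "neither" band as in claim 12. The thresholds and function name below are invented for illustration.

```python
def classify_light_state(gains, day_max_gain=4.0, night_min_gain=12.0):
    """Classify light conditions from a profile of recent sensor gains:
    sustained high gain indicates night, sustained low gain indicates day,
    and anything in between is 'neither'."""
    avg = sum(gains) / len(gains)
    if avg >= night_min_gain:
        return "night"
    if avg <= day_max_gain:
        return "day"
    return "neither"
```

The same shape applies to exposure-time profiles, or to a combined gain-and-exposure score.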

26-09-2013 publication date

IMAGE PROCESSING APPARATUS AND METHOD FOR VEHICLE

Number: US20130251209A1
Author: Kim Byungho
Assignee: CORE LOGIC INC.

Embodiments of the invention relate to an image processing apparatus and method of a black box system for vehicles, which can simplify an analysis stage without causing any Doppler effect by directly analyzing an image of a camera basically mounted to the black box for vehicles, and which includes a unit for detecting an accident risk before a sudden braking operation and occurrence of an accident. The image processing apparatus includes: a subject distance change detector which analyzes a size change of a subject present in an image captured by a camera to detect a distance change between the camera and the subject; a light source analyzer which analyzes a light source present in the image; an image divider which divides the image into plural sections to apply a differently weighted accident-risk level value to each of the divided sections; and an alarm unit for generating an alarm corresponding to an accident-risk situation in the divided sections. 1. An image processing apparatus for vehicles , comprising:an image divider which stores predetermined divided sections to which differently weighted accident-risk level values are applied;a subject distance change detector which analyzes a size change of a subject present in an image captured by a camera to detect a distance change between the camera and the subject;a light source analyzer which analyzes a light source present in the image; anda controller which determines an accident-risk situation in the predetermined divided sections based on signals from the subject distance change detector, the light source analyzer, and the image divider.2. The image processing apparatus according to claim 1 , further comprising an alarm unit for generating an alarm according to an accident-risk determination result.3. The image processing apparatus according to claim 1 , wherein the light source analyzer analyzes a contrast of a background image.4. 
The image processing apparatus according to claim 1 , wherein the light source ...
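Detecting a distance change from a subject's size change follows from the pinhole model: apparent size scales inversely with distance. A minimal sketch (threshold and names are illustrative assumptions):

```python
def distance_change_ratio(size_prev_px, size_cur_px):
    """Pinhole model: distance_now / distance_before ~= size_before / size_now."""
    return size_prev_px / size_cur_px

def approaching(size_prev_px, size_cur_px, ratio_thresh=0.9):
    """Flag a closing subject when the distance ratio drops below thresh."""
    return distance_change_ratio(size_prev_px, size_cur_px) < ratio_thresh
```

Because only relative size is used, no absolute calibration (and no Doppler-style sensing) is needed, matching the abstract's camera-only approach.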

03-10-2013 publication date

Eye Gaze Based Location Selection for Audio Visual Playback

Number: US20130259312A1
Assignee: Intel Corp

In response to the detection of what the user is looking at on a display screen, the playback of audio or visual media associated with that region may be modified. For example, video in the region the user is looking at may be sped up or slowed down. A still image in the region of interest may be transformed into a moving picture. Audio associated with an object depicted in the region of interest on the display screen may be activated in response to user gaze detection.
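The first step of the scheme above is mapping the detected gaze point to a screen region whose playback is then modified; a minimal hit-test sketch, with the region representation assumed for illustration:

```python
def gaze_region(gaze_xy, regions):
    """Return the id of the first screen region containing the gaze point.

    regions maps region id -> (x0, y0, x1, y1) bounding box.
    """
    x, y = gaze_xy
    for rid, (x0, y0, x1, y1) in regions.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return rid
    return None
```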

03-10-2013 publication date

IDENTIFYING SPATIAL LOCATIONS OF EVENTS WITHIN VIDEO IMAGE DATA

Number: US20130259316A1

An invention for identifying a spatial location of an event within video image data is provided. Disclosed are embodiments for generating trajectory data of a trajectory of an object for a plurality of pixel regions of an area of interest within video image data, the generating comprising: identifying one or more pixel regions from the plurality of pixel regions containing trajectory data; performing a multi-point neighborhood scan within the one or more pixel regions from the plurality of pixel regions containing trajectory data; and generating a transition chain code based on the multi-point neighborhood scan. Embodiments further generate a set of compressed spatial representations of the trajectory data of the trajectory of the object for an event based on the transition chain code, and generate a lossless contour code of the trajectory data of the trajectory of the object for the event based on the transition chain code. 1. A method for identifying a spatial location of an event within video image data comprising: generating trajectory data of a trajectory of an object for a plurality of pixel regions of an area of interest within video image data, the generating comprising: identifying one or more pixel regions from the plurality of pixel regions containing trajectory data; performing a multi-point neighborhood scan within the one or more pixel regions from the plurality of pixel regions containing trajectory data; and generating a transition chain code based on the multi-point neighborhood scan, the transition chain code defining a direction of trajectory of the object; generating a set of compressed spatial representations of the trajectory data of the trajectory of the object for an event based on the transition chain code; and generating a lossless contour code of the trajectory data of the trajectory of the object for the event based on the transition chain code. 2. The method according to claim 1, further comprising inputting each of the set of ...
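A transition chain code encodes a trajectory as one direction symbol per step between adjacent cells, and run-length coding gives a compressed representation. The 8-neighbourhood numbering and the run-length scheme below are the classic Freeman-style conventions, used here as a stand-in for the patent's exact codes.

```python
# 8-neighbourhood step directions, indexed 0..7 counter-clockwise from east.
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Encode a trajectory of adjacent (x, y) points as a transition chain
    code: one direction symbol per step."""
    return [DIRS[(x1 - x0, y1 - y0)]
            for (x0, y0), (x1, y1) in zip(points, points[1:])]

def compress(code):
    """Run-length compress a chain code into [symbol, count] pairs."""
    out = []
    for c in code:
        if out and out[-1][0] == c:
            out[-1][1] += 1
        else:
            out.append([c, 1])
    return out
```

The run-length pairs play the role of the compressed spatial representation, while the uncompressed symbol sequence is itself lossless.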

03-10-2013 publication date

A SYSTEM AND METHOD FOR TRACKING MOVING OBJECTS

Number: US20130259440A2
Author: KARAZI Uri

A method for tracking an object that is embedded within images of a scene, including: in a sensor unit, generating, storing and transmitting over a communication link a succession of images of a scene. In a remote control unit, receiving the images, receiving a command for selecting an object of interest in a given image and determining object data associated with the object and transmitting the object data to the sensor unit. In the sensor unit, identifying the given image and the object of interest using the object data, and tracking the object in other images. If the object cannot be located in the latest image of the stored succession of images, using information of images in which the object was located to predict estimated real-time location thereof and generating direction commands to the movable sensor for generating real-time images of the scene and locking on the object. 1. A method of selecting a moving object within images of a scene, comprising: a. receiving a succession of images; b. freezing or slowing down a rate of a given image of said succession of images and selecting an object of interest in the given image, as if said object is stationary; and c. determining object data associated with said object. 2. The method according to claim 1, wherein said receiving is through a communication link. 3. The method according to claim 2, further comprising: transmitting through said link at least said object data. 4. The method according to claim 1, further comprising zooming the given image after it has been frozen. 5. The method according to claim 1, further comprising enhancing the given image. 6. The method according to claim 1, further comprising: pointing in the vicinity of said object thereby zooming the given image according to image boundaries substantially defined by said pointing, and selecting said object by pointing thereto on the zoomed given image. 7.
The method according to claim 1 , further comprising: using the object data for tracking ...
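Predicting an estimated real-time location from images in which the object was located can be sketched as constant-velocity extrapolation of the last known positions; this simple motion model is an assumption standing in for whatever predictor the patent uses.

```python
def predict_position(track, steps=1):
    """Constant-velocity extrapolation from the last two (x, y) positions."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = x1 - x0, y1 - y0
    return (x1 + vx * steps, y1 + vy * steps)
```

The predicted point would then drive the direction commands that steer the movable sensor back onto the object.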

10-10-2013 publication date

VIDEO PROCESSING APPARATUS, VIDEO PROCESSING METHOD, AND RECORDING MEDIUM

Number: US20130265420A1
Author: Adachi Keiji
Assignee: CANON KABUSHIKI KAISHA

A video processing apparatus comprising: a setting unit that sets one of a line and a graphic pattern on a display screen of a video; and a detection unit that detects, in accordance with an angle of one of the line and the graphic pattern set by the setting unit, a specific object from video data to display the video on the display screen. 1. A video processing apparatus comprising:a setting unit that sets one of a line and a graphic pattern on a display screen of a video; anda detection unit that detects, in accordance with an angle of one of the line and the graphic pattern set by said setting unit, a specific object from video data to display the video on the display screen.2. The apparatus according to claim 1 , further comprising a storage unit that stores a plurality of types of patterns to be used by said detection unit to detect the specific object from the video data claim 1 ,wherein said detection unit detects the specific object using a pattern according to the angle of one of the line and the graphic pattern set by said setting unit out of the plurality of types of patterns stored in said storage unit.3. The apparatus according to claim 1 , further comprising a storage unit that stores a plurality of types of patterns to be used by said detection unit to detect the specific object from the display screen claim 1 ,wherein said detection unit detects the specific object using the plurality of types of patterns stored in said storage unit in an order decided in accordance with the angle of one of the line and the graphic pattern set by said setting unit.4. 
The apparatus according to claim 1 , further comprising a prediction unit that predicts a position on the display screen to which the object on the display screen moves claim 1 ,wherein said setting unit sets a plurality of lines on the display screen, andsaid detection unit selects one line out of the plurality of lines set by said setting unit based on the position on the display screen predicted by ...
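Detecting that a tracked object passes a line set on the display screen is typically done with a cross-product side test between consecutive positions; the sketch below treats the configured segment as an infinite line for simplicity, which is an assumption, not the patent's stated test.

```python
def side(p, a, b):
    """Sign of the cross product: which side of line a->b point p lies on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def crossed_line(prev_pos, cur_pos, a, b):
    """True when the object moved from one side of line a-b to the other.
    Note: ignores the segment's extent (infinite-line approximation)."""
    return side(prev_pos, a, b) * side(cur_pos, a, b) < 0
```

The line's angle, available from a and b, is what the detection unit uses to pick an appropriately oriented matching pattern.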

10-10-2013 publication date

OBJECT TRACKING AND BEST SHOT DETECTION SYSTEM

Number: US20130266181A1
Assignee: OBJECTVIDEO, INC.

A method and system using face tracking and object tracking is disclosed. The method and system use face tracking, location, and/or recognition to enhance object tracking, and use object tracking and/or location to enhance face tracking. 1. A method of automatically tracking a target , the method comprising:using an object tracking process to track a first object corresponding to a first target during a first period of time;capturing a first face image of the first object at a first time during the first period of time;storing the first face image of the first object at a computer system, and associating the first face image with the first target;capturing a second face image at a second time during the first period of time, the second face image corresponding in space with the tracked first object;comparing the second face image to the first face image to determine whether the second face image and the first face image correspond to the same target; andwhen the second face image and first face image are determined to correspond to the same target, confirming that the first target still corresponds to the first object.2. The method of claim 1 , wherein the confirming includes storing a target confirmation entry in a tracking log.3. The method of claim 1 , wherein the comparing includes performing face recognition on the second face image to determine whether the second face image corresponds to the same target as the first face image.4. The method of claim 3 , wherein the first face image is a best face image captured during the first period of time claim 3 , which best face image is automatically determined to be a best face image from a group of images.5. The method of claim 1 , further comprising:using one or more cameras to track the first object during the first period of time at the facility.6. The method of claim 1 , further comprising:tracking the first target at a first facility.7. 
The method of claim 1 , further comprising:during tracking the first target, ...
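The claim's "corresponds in space with the tracked first object" check is commonly implemented as a bounding-box overlap test between a face detection and a track; intersection-over-union with a threshold is a standard choice assumed here for illustration.

```python
def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def face_matches_track(face_box, track_box, min_iou=0.3):
    """Attribute a face image to a tracked object when boxes overlap enough."""
    return iou(face_box, track_box) >= min_iou
```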

17-10-2013 publication date

MULTI-VIEW OBJECT DETECTION USING APPEARANCE MODEL TRANSFER FROM SIMILAR SCENES

Number: US20130272573A1

View-specific object detectors are learned as a function of scene geometry and object motion patterns. Motion directions are determined for object images extracted from a training dataset and collected from different camera scene viewpoints. The object images are categorized into clusters as a function of similarities of their determined motion directions, the object images in each cluster are acquired from the same camera scene viewpoint. Zenith angles are estimated for object image poses in the clusters relative to a position of a horizon in the cluster camera scene viewpoint, and azimuth angles of the poses as a function of a relation of the determined motion directions of the clustered images to the cluster camera scene viewpoint. Detectors are thus built for recognizing objects in input video, one for each of the clusters, and associated with the estimated zenith angles and azimuth angles of the poses of the respective clusters. 1. A method for learning a plurality of view-specific object detectors as a function of scene geometry and object motion patterns , the method comprising:determining via a processing unit motion directions for each of a plurality of object images that are extracted from a source training video dataset input and that each have size and motion dimension values that meet an expected criterion of an object of interest, wherein the object images are collected from each of a plurality of different camera scene viewpoints;categorizing via the processing unit the plurality of object images into a plurality of clusters as a function of similarities of their determined motion directions, wherein the object images in each of the clusters are also acquired from one of the different camera scene viewpoints;estimating via the processing unit zenith angles for poses of the object images in each of the clusters relative to a position of a horizon in the camera scene viewpoint from which the clustered object images are acquired, and azimuth angles of 
...
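The categorisation of object images "as a function of similarities of their determined motion directions" can be sketched as quantising each motion direction into an angular bin, one cluster per bin; the 45° bin width and the dictionary representation are illustrative assumptions.

```python
def cluster_by_motion_direction(motions_deg, bin_width=45):
    """Group object-image indices by quantised motion direction:
    one cluster per angular bin of bin_width degrees."""
    clusters = {}
    for idx, angle in enumerate(motions_deg):
        key = int(angle % 360) // bin_width
        clusters.setdefault(key, []).append(idx)
    return clusters
```

One view-specific detector would then be trained per cluster, with the cluster's dominant direction fixing the azimuth of its associated pose.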

17-10-2013 publication date

LANE RECOGNITION DEVICE

Number: US20130272577A1
Author: Sakamoto Yosuke
Assignee: HONDA MOTOR CO., LTD.

Provided is a lane recognition device capable of extracting linear elements derived from lane marks from a linear element extraction image obtained by processing a captured image and recognizing lane boundary lines. Local areas having a predetermined size are set for a lane extraction area set in a linear element extraction image, such that each of the linear elements is included in one or a plurality of the local areas, and a local straight line of each local area is determined. (vx) and (Ψ), associated with the direction and the intersection x with a predetermined reference horizontal line, are calculated for each local straight line. Each local straight line is defined as one vote, being cast to (vx, Ψ) of a voting space. Lane boundary lines are recognized from detection straight lines, whose direction and intersection x are determined based on vote results. 1. A lane recognition device which recognizes a lane on the basis of a captured image of a view ahead of a vehicle obtained by an imaging device, comprising: a linear element extraction image generating unit which generates a linear element extraction image into which linear elements included in the captured image have been extracted; an area setting unit which sets a lane extraction area in a predetermined range in a vertical direction in the linear element extraction image; a local straight line determining unit which determines a local straight line of each local area on the basis of a linear element part in each local area with respect to each local area in the lane extraction area; an intersection calculating unit which calculates an intersection of each local straight line and a reference horizontal line at a predetermined position in a vertical direction; a voting unit which votes on a direction and an intersection of each local straight line in a voting space having a direction and an intersection as coordinate components; a detecting unit which detects a detection straight line on the basis of a voting result
in ...
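The voting step described in this abstract is a Hough-style transform over (intersection, direction) pairs. A minimal sketch, assuming local straight lines given as endpoint pairs; the names `vote_lines` and `detect_line`, and the bin sizes, are hypothetical, not from the patent:

```python
import math
from collections import Counter

def vote_lines(segments, x_step=5, angle_step=5):
    """Each local straight line casts one vote at (vx, psi):
    vx = intersection with the reference horizontal line y = 0,
    psi = direction angle in degrees, both quantized into bins."""
    votes = Counter()
    for (x0, y0), (x1, y1) in segments:
        if y1 == y0:          # parallel to the reference line: no intersection
            continue
        # x where the extended segment crosses y = 0
        vx = x0 + (0 - y0) * (x1 - x0) / (y1 - y0)
        psi = math.degrees(math.atan2(y1 - y0, x1 - x0)) % 180
        votes[(round(vx / x_step), round(psi / angle_step))] += 1
    return votes

def detect_line(votes):
    """The bin with the most votes gives the detection straight line."""
    (vx_bin, psi_bin), _ = votes.most_common(1)[0]
    return vx_bin, psi_bin
```

Quantizing vx and Ψ into bins makes collinear local lines accumulate in one cell, so the strongest bin survives a few outlier segments.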

Publication date: 17-10-2013

Analytics Assisted Encoding

Number: US20130272620A1
Assignee:

Video analytics may be used to assist video encoding by selectively encoding only portions of a frame and using, instead, previously encoded portions. Previously encoded portions may be used when succeeding frames have a level of motion less than a threshold. In such case, all or part of succeeding frames may not be encoded, increasing bandwidth and speed in some embodiments. 1. A method comprising:analyzing a frame using video analytics to identify a portion of the frame having motion below a threshold;encoding the rest of the frame without encoding said portion; andfor said unencoded portion, reusing encoding for a corresponding portion from a previous frame.2. The method of wherein using video analytics includes using at least one of erosion claim 1 , dilation claim 1 , or convolution.3. The method of including receiving a plurality of simultaneous input video channels.4. The method of including copying each of said channels.5. The method of including storing one copy on an external memory.6. The method of including storing another copy on an internal memory.7. The method of including storing on said internal memory using two dimensional addressing.8. The method of including specifying a point in said internal memory and an extent in two dimensions.9. The method of including accessing said external memory for encoding.10. The method of including using one copy for encoding and the other copy for video analytics.11. A non-transitory computer readable medium storing instructions to enable a computer processor to:analyze a frame using video analytics to identify a portion of the frame having motion below a threshold;encode the rest of the frame without encoding said portion; andfor said unencoded portion, reusing coding for a corresponding portion from a previous frame.12. The medium of further storing instructions to use video analytics by using at least one of erosion claim 11 , dilation claim 11 , or convolution.13. 
The medium of further storing instructions to ...
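The reuse rule in this abstract (skip re-encoding portions whose motion is below a threshold) can be sketched as follows; the block layout, SAD as the motion measure, and all names are assumptions for illustration, not the patent's actual encoder:

```python
def encode_frame(prev_frame, cur_frame, prev_encoded, threshold, encode_block):
    """Per-block: measure motion as the sum of absolute differences (SAD)
    against the previous frame; re-encode only blocks whose SAD reaches
    the threshold, otherwise reuse the previously encoded block."""
    out = []
    for prev_blk, cur_blk, enc_blk in zip(prev_frame, cur_frame, prev_encoded):
        sad = sum(abs(a - b) for a, b in zip(prev_blk, cur_blk))
        if sad < threshold:
            out.append(enc_blk)          # motion below threshold: reuse
        else:
            out.append(encode_block(cur_blk))
    return out
```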

Publication date: 24-10-2013

VIDEO WATERMARKING

Number: US20130279741A1
Assignee:

A method of watermarking a video signal includes encoding the video signal using at least one encoding parameter that is time-varied according to a watermarking pattern. The parameter affects information lost while encoding the signal. The parameter may be a quantization factor corresponding to a particular coefficient of an encoding transform. The parameter may be an element of a quantization matrix corresponding to a particular coefficient in a block DCT transform. The method may be implemented in devices with limited processing resources by means of a software update. The method enables the devices to imprint an encoded signal with a robust watermark, which may survive subsequent decompression and recompression. Alternatively, a video signal may be watermarked by modifying a magnitude of a non-dc spatial frequency component in a manner which varies with time according to a watermarking pattern. Corresponding watermark detection methods and watermarking devices also are disclosed. 1. (canceled)2. (canceled)3. (canceled)4. (canceled)5. (canceled)6. (canceled)7. (canceled)8. (canceled)9. (canceled)10. (canceled)11. (canceled)12. (canceled)13. (canceled)14. (canceled)15. (canceled)16. (canceled)17. (canceled)18. (canceled)19. (canceled)20. (canceled)21. (canceled)22. (canceled)23. (canceled)24. (canceled)25. (canceled)26. (canceled)27. (canceled)28. (canceled)29. (canceled)30. (canceled)31. A method of watermarking a video signal , the method comprising:encoding the video signal using a plurality of encoding parameters; andduring the encoding, varying a value of at least one of the parameters with time according to a watermarking pattern, the at least one parameter being a parameter that affects at least a type or an amount of information lost in encoding the signal, wherein the at least one parameter comprises two quantization factors comprising a first quantization factor corresponding to a coefficient of a horizontal spatial frequency transform and a second ...
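The core idea, varying a quantization step over time according to a watermark bit pattern, can be sketched like this; operating on a single pre-selected transform coefficient per frame, and the step values, are simplifying assumptions:

```python
def watermark_quantize(frames_coeffs, base_q, delta, pattern):
    """Quantize one chosen transform coefficient per frame with a step
    that varies with time according to the watermark bit pattern:
    bit 0 -> base_q, bit 1 -> base_q + delta. The time-varying rounding
    error is the (robust) mark imprinted on the encoded signal."""
    out = []
    for t, coeff in enumerate(frames_coeffs):
        q = base_q + delta * pattern[t % len(pattern)]
        out.append(round(coeff / q) * q)   # quantize, then dequantize
    return out
```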

Publication date: 24-10-2013

METHOD AND SYSTEM FOR SMOKE DETECTION USING NONLINEAR ANALYSIS OF VIDEO

Number: US20130279803A1
Author: Cetin Ahmet Enis
Assignee:

The present invention describes a method and a system for detection of fire and smoke using image and video analysis techniques to detect the presence of indicators of fire and smoke. The method and the system detect smoke by transforming a plurality of images forming the video captured by a camera into the Nonlinear Median filter Transform (NMT) domain, implementing an “L1”-norm based energy measure indicating the existence of smoke from the NMT domain data, detecting slowly decaying NMT coefficients, performing color analysis in low-resolution NMT sub-images, using a Markov model based decision engine to model the turbulent behavior of smoke, and fusing the above information to reach a final decision about the existence of smoke within the viewing range of the camera. 1. A computer implemented method of determining the location and presence of smoke due to fire , the method comprising:transforming a plurality of video images into Nonlinear Median filter Transform (NMT) domain, the video images having been captured by a camera;implementing an “L1”-norm based energy measure indicating the existence of smoke from the NMT domain data;detecting slowly decaying NMT coefficients;performing color analysis in low-resolution NMT subimages;using a Markov model based decision engine to model the turbulent behavior of smoke; andfusing the above information to reach a final decision.2. The method of claim 1 , wherein the Nonlinear Median (NM) filter transforms of video image frames are computed without performing any multiplication operations.3. The method of claim 1 , wherein subimages of NM transformed video data are searched for high amplitude NMT coefficients that are slowly-disappearing compared to a reference background NMT image claim 1 , said slowly disappearing NMT coefficients indicating smoke activity.4.
The method claim 1 , wherein subimages of transformed video data are searched for newly appearing regions having energy less than a reference background NMT image claim 1 , ...
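A 1-D illustration of the multiplication-free flavor of such a transform: a median filter supplies the lowpass band, the residual is the detail band, and its L1 norm is the energy measure, using only comparisons, additions and subtractions. This is a hypothetical sketch, not the patent's actual NMT:

```python
def median3(row):
    """1-D median filter of width 3 (edges replicated)."""
    padded = [row[0]] + list(row) + [row[-1]]
    return [sorted(padded[i:i + 3])[1] for i in range(len(row))]

def l1_detail_energy(row):
    """Detail signal = input minus its median-filtered lowpass;
    the L1 norm of the detail indicates high-frequency activity
    (e.g. smoke texture) without any multiplications."""
    low = median3(row)
    return sum(abs(a - b) for a, b in zip(row, low))
```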

Publication date: 31-10-2013

Method and Device for Detecting an Object in an Image

Number: US20130287254A1
Assignee: STMICROELECTRONICS (GRENOBLE 2) SAS

A method for detecting at least one object in an image including a pixel array, by means of an image processing device, including searching out the silhouette of the object in the image only if pixels of the image are at the minimum or maximum level. 1. A method for detecting an object in an image comprising a pixel array using an image processing device , the method comprising:determining whether pixels of the image are at a minimum or maximum level; andsearching out a silhouette of the object in the image only if pixels of the image are at the minimum or maximum level.2. The method of claim 1 , comprising successive steps of:searching out the object in the image; andsearching out the silhouette of the object in the image if the object has not been found in the image when searching out the object in the image.3. The method of claim 2 , wherein searching out the object comprises providing a first score and searching out the silhouette comprises providing a second score and wherein the presence of the object in the image is determined based on the first and second scores.4. The method of claim 1 , further comprising acquiring the image.5. The method of claim 4 , wherein determining whether the pixels of the image are at the minimum or maximum level comprises:acquiring at least one additional image at an exposure or under a lighting different from those of the image; anddetermining whether the pixels of the image are at the minimum or maximum level based on an analysis of the image and of the additional image.6. The method of claim 4 , wherein the image is acquired with a first exposure time t0 and wherein determining whether the pixels of the image are at the minimum or maximum level comprises:acquiring an additional image at a second exposure time t1 different from first exposure time t0;determining a first mean level G0 of the pixels of said image;determining a second mean level G1 of the pixels of the additional image; and ...
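Claim 6 can be read as a scaling test: without clipping, the mean level scales with exposure time, so G1 ≈ G0 · t1/t0, and a large deviation suggests pixels stuck at the minimum or maximum. A hedged sketch of that check (the tolerance value and flat pixel lists are assumptions):

```python
def likely_saturated(img0, img1, t0, t1, tol=0.2):
    """Compare mean levels of two images taken with exposure times
    t0 and t1. Without clipping, means scale like the exposure ratio;
    a relative deviation above `tol` hints at saturated pixels."""
    g0 = sum(img0) / len(img0)
    g1 = sum(img1) / len(img1)
    expected = g0 * t1 / t0
    return abs(g1 - expected) > tol * expected
```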

Publication date: 14-11-2013

Image Recognition of Content

Number: US20130301916A1
Assignee:

Techniques are described to employ image recognition techniques to content. In an implementation, one or more images are identified in content using a signature derived from the one or more images. Metadata associated with the content is then supplemented based on the identified one or more images. 1. A method implemented by a computing device , the method comprising:identifying an image that includes material determined to be potentially harmful to a child, the image included in one of a plurality of frames that form a segment of content;supplementing metadata associated with the content based on the identified image; andnavigating through the content using the supplemented metadata.2. A method as described in claim 1 , wherein supplementing the metadata associated with the content based on the identified image includes marking one or more frames of the content that include the identified image with metadata.3. A method as described in claim 1 , further comprising associating the supplemented metadata with one or more frames of the content that include the identified image.4. A method as described in claim 1 , further comprising blocking output of each of the plurality of frames that form the segment of content based on the identification.5. A method as described in claim 1 , wherein the content includes a plurality of segments.6. A method as described in claim 5 , further comprising supplementing the metadata of each segment of the plurality of segments that includes the identified image.7. A method as described in claim 1 , the material not including a body part.8. 
One or more computer readable storage devices comprising instructions stored thereon that claim 1 , responsive to execution by a computing device claim 1 , causes the computing device to perform operations comprising:identifying an image that includes material determined to be potentially harmful to a child, the image included in two or more of a plurality of segments of content; andnavigating through ...

Publication date: 21-11-2013

VIDEO PROCESSING APPARATUS AND METHOD FOR MANAGING TRACKING OBJECT

Number: US20130307974A1
Author: KAWANO Atsushi
Assignee: CANON KABUSHIKI KAISHA

A video processing apparatus includes a first detection unit configured to detect that a tracking target moving in a video has split into a plurality of objects, and a determination unit configured to, when the first detection unit detects that the tracking target has split into the plurality of objects, determine a number of objects included in the tracking target before splitting of the tracking target based on a number of the plurality of objects after splitting of the tracking target. 1. A video processing apparatus comprising:a first detection unit configured to detect that a tracking target moving in a video has split into a plurality of objects; anda determination unit configured to, when the first detection unit detects that the tracking target has split into the plurality of objects, determine a number of objects included in the tracking target before splitting of the tracking target based on a number of the plurality of objects after splitting of the tracking target.2. A video processing apparatus comprising:a first detection unit configured to detect that a plurality of objects moving in a video has merged; anda determination unit configured to, when the first detection unit detects that the plurality of objects has merged, determine a number of objects included in a tracking target, into which the plurality of objects has merged, based on a number of the plurality of objects before merging thereof.3. The video processing apparatus according to claim 1 , further comprising a management unit configured to manage a trajectory of the tracking target claim 1 ,wherein, when the first detection unit detects that the tracking target has split into the plurality of objects, the management unit is configured to manage the trajectory of the tracking target before splitting of the tracking target as trajectories of the plurality of objects after splitting of the tracking target.4. The video processing apparatus according to claim 1 , further comprising an ...
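The determination rule, back-filling the pre-split object count from the number of post-split objects and carrying the trajectory forward to each of them, can be sketched as follows; the `Track` class and its API are hypothetical, not from the patent:

```python
class Track:
    """Tracks a moving blob; on a detected split, back-fills the
    pre-split object count from the number of objects after the split."""
    def __init__(self):
        self.count_before_split = 1
        self.trajectory = []

    def update(self, position):
        self.trajectory.append(position)

    def on_split(self, new_positions):
        # Determination rule from the abstract: the tracking target
        # contained as many objects as it split into.
        self.count_before_split = len(new_positions)
        # Each post-split object inherits the pre-split trajectory.
        children = []
        for p in new_positions:
            c = Track()
            c.trajectory = list(self.trajectory)
            c.update(p)
            children.append(c)
        return children
```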

Publication date: 21-11-2013

CAPTURED IMAGE RECOGNITION DEVICE, CAPTURED IMAGE RECOGNITION SYSTEM, AND CAPTURED IMAGE RECOGNITION METHOD

Number: US20130308825A1
Author: Yamazaki Ryuji
Assignee: Panasonic Corporation

Provided is a captured image recognition device that enables the performance of an image recognition function to be sufficiently evinced. A field-of-view splitting estimation unit () estimates the splitting of the field of view of a camera unit () using a captured image (S). On the basis of the estimated splitting of the field of view, a candidate application selection unit () selects, from among a plurality of image recognition applications, candidates for an image recognition application that is favorable or able to execute processing with respect to a current captured image. An image recognition processing unit () executes the image recognition application selected by a user from among the candidate applications. As a result, the performance of an image recognition function can be sufficiently evinced as a result of it being possible to execute an image recognition application that is suitable to the current captured image. 110-. (canceled)11. A captured-image recognition apparatus comprising:a visual-field-ratio estimation section that estimates a visual field ratio of a target in a captured image using at least two kinds of detecting sections that detect different kinds of targets;a candidate application selection section that selects at least one candidate of an image recognition application corresponding to the estimated visual field ratio, from among a plurality of image recognition applications that recognize different kinds of targets; andan image recognition processing section that executes an image recognition application selected from among the candidate applications.12. The captured-image recognition apparatus according to claim 11 , whereinthe visual-field-ratio estimation section extracts a target in the captured image by the detecting sections and estimates the visual field ratio based on a visual field angle of a camera and a positional relationship between the target and the camera.13. 
The captured-image recognition apparatus according to claim 11 ...

Publication date: 21-11-2013

STILL IMAGE EXTRACTION APPARATUS

Number: US20130308829A1
Assignee: ADC TECHNOLOGY INC.

The present invention is a still image extraction apparatus for extracting a specific frame as a still image from a plurality of frames. The still image extraction apparatus includes: an extraction condition registration device, an extraction determination device, and an extraction device. The extraction condition registration device registers an extraction condition, which is specified by a user of the still image extraction apparatus, for extracting the still image in an extraction condition recording unit. The extraction determination device determines in a frame-by-frame manner whether or not the plurality of frames satisfy the extraction condition registered in the extraction condition recording unit. The extraction device extracts, as a still image, a frame that has been determined to satisfy the extraction condition. 1. A still image extraction apparatus for extracting a specific frame as a still image from a plurality of frames , the apparatus comprising:an extraction condition registration device configured to register an extraction condition, which is specified by a user of the still image extraction apparatus, for extracting the still image in an extraction condition recording unit;an extraction determination device configured to determine in a frame-by-frame manner whether or not the plurality of frames satisfy the extraction condition registered in the extraction condition recording unit; andan extraction device configured to extract, as a still image, a frame that has been determined to satisfy the extraction condition.2. 
The still image extraction apparatus according to claim 1 , further comprising:an exclusion condition registration device configured to register an exclusion condition, which is specified by the user of the still image extraction apparatus, for not extracting a still image in an exclusion condition recording unit;an exclusion determination device configured to determine in a frame-by-frame manner whether or not the plurality of frames ...
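Frame-by-frame checking against registered extraction and exclusion conditions amounts to a predicate filter. A sketch with conditions as plain callables (an assumption for brevity; the patent registers them in dedicated recording units):

```python
def extract_stills(frames, conditions, exclusions=()):
    """Return frames that satisfy every registered extraction condition
    and no registered exclusion condition, checked frame by frame."""
    return [f for f in frames
            if all(cond(f) for cond in conditions)
            and not any(exc(f) for exc in exclusions)]
```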

Publication date: 28-11-2013

Stationary target detection by exploiting changes in background model

Number: US20130315444A1
Assignee: Objectvideo Inc

A computer-implemented method for processing one or more video frames may include obtaining one or more video frames; generating one or more blobs using the one or more video frames; classifying the one or more blobs to produce one or more classified blobs, wherein the one or more classified blobs include one or more of a stationary target, a moving target, a target insertion, a target removal, or a local change; and constructing a list of detected targets based on the one or more classified blobs.

Publication date: 28-11-2013

Projecting location based elements over a heads up display

Number: US20130315446A1
Author: Jacob BEN TZVI
Assignee: Mishor 3d Ltd

A method including the following steps is provided: generating a three dimensional (3D) model of a scene within a specified radius from a vehicle, based on a source of digital mapping of the scene; associating a position of at least one selected LAE contained within the scene with a respective position in the 3D model; superimposing, at a specified position on a transparent screen facing a viewer and associated with the vehicle, at least one graphic indicator associated with the at least one LAE, wherein the specified position is calculated based on: the respective position of the LAE in the 3D model, the screen's geometrical and optical properties, the viewer's viewing angle, the viewer's distance from the screen, and the vehicle's position and angle within the scene, such that the viewer, the graphic indicator, and the LAE are substantially on a common line.
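The collinearity condition (viewer, graphic indicator, LAE on a common line) reduces, for a screen perpendicular to the viewing axis at distance d from the eye, to a pinhole projection by similar triangles. A hypothetical sketch of that geometry only, ignoring the screen's optical properties:

```python
def project_to_screen(lae, eye, d):
    """Project point `lae` (x, y, z in the viewer's frame, z forward)
    onto a screen at distance d from the eye, so that the eye, the
    marker and the LAE are collinear: screen = (d*x/z, d*y/z)."""
    x, y, z = (lae[i] - eye[i] for i in range(3))
    if z <= 0:
        return None            # behind the viewer: nothing to draw
    return (d * x / z, d * y / z)
```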

Publication date: 05-12-2013

OBJECT DETECTING DEVICE AND OBJECT DETECTING METHOD

Number: US20130321624A1
Assignee:

An object detecting device includes an image acquiring unit which acquires an image from a camera, a scanning interval calculating unit which calculates a scanning interval when a scanning window is scanned on the image based on a size on the image of a detection object that is detected by the detecting window, a scanning unit which scans on the image using the scanning interval that is calculated by the scanning interval calculating unit, and a detecting unit which determines whether the detection object is present within the scanned detecting window. 1. An object detecting device comprising:an image acquiring unit configured to acquire an image from a camera;a scanning interval calculating unit configured to calculate a scanning interval when a detecting window is scanned on the image based on a size of a detection object on the image detected by the detecting window;a scanning unit configured to scan on the image using the scanning interval calculated by the scanning interval calculating unit; anda detecting unit configured to determine whether the detection object is present within the scanned detecting window.2. The object detecting device according to claim 1 , wherein the scanning interval is smaller than the size of the detection object on the image in a scanning direction of the detecting window.3. The object detecting device according to claim 1 , wherein the scanning interval calculating unit includes:a position estimating unit configured to estimate a piece of information of the detection object on the image; andwherein the scanning interval calculating unit calculates the size of the detection object on the image based on a camera parameter of the camera, the size of the detection object, and the piece of position information, and calculates the scanning interval based on the calculated size on the image.4. 
The object detecting device according to claim 2 , wherein the scanning interval calculating unit includes:a position estimating unit configured to ...
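With a pinhole camera model, the object's on-image size is focal_px · real_size / distance, and claim 2 requires the scanning stride to stay below that size so the object cannot be stepped over. A sketch under those assumptions (the 50 % overlap factor is illustrative):

```python
def scan_positions(image_w, focal_px, real_size_m, distance_m, overlap=0.5):
    """Window stride derived from the detection object's projected size:
    size_px = focal_px * real_size_m / distance_m; the stride is kept
    below size_px, here at a fixed fraction of it."""
    size_px = focal_px * real_size_m / distance_m
    stride = max(1, int(size_px * overlap))
    return size_px, list(range(0, image_w - int(size_px) + 1, stride))
```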

Publication date: 05-12-2013

SURVEILLANCE INCLUDING A MODIFIED VIDEO DATA STREAM

Number: US20130322687A1

Methods provide surveillance, including a modified video data stream, with computer readable program code, when read by a processor, that is configured for receiving at an image processor a first video data stream and a second video data stream. Each of the first and second video data streams may include a target object having an assigned tracking position tag. The methods may further include extracting a first facial image of the target object from the first video data stream, determining a target object location in the second video data stream based at least in part on the tracking position tag and generating a modified video data stream including the first facial image superimposed on or adjacent to the target object location in the second video data stream. 1. A method comprising:receiving at an image processor a first video data stream and a second video data stream, each of the first and second video data streams including a target object having an assigned tracking position tag;extracting a first facial image of the target object from the first video data stream;determining a target object location in the second video data stream based at least in part on the tracking position tag; andgenerating a modified video data stream including the first facial image superimposed on or adjacent to the target object location in the second video data stream.2. The method of claim 1 , further comprising:determining that the second video data stream includes an inferior facial view of the target object based at least in part on a face detection program, the inferior facial view being less than a predetermined facial view threshold.3. The method of claim 1 , further comprising:assigning the first facial image to the tracking position tag.4. The method of claim 1 , further comprising:displaying the modified video data stream on an image display.5. The method of claim 1 , further comprising:determining the tracking position tag based at least in part on utilization of an ...

Publication date: 05-12-2013

Periodic stationary object detection system and periodic stationary object detection method

Number: US20130322688A1
Assignee: Nissan Motor Co Ltd

A periodic stationary object detection system extracts a feature point of a three-dimensional object from image data on a predetermined region of a bird's eye view image for each of multiple sub regions included in the predetermined region, calculates waveform data corresponding to a distribution of the feature points in the predetermined region on the bird's eye view image, and judges whether or not the three-dimensional object having the extracted feature point is a periodic stationary object candidate on the basis of whether or not peak information of the waveform data is equal to or larger than a predetermined threshold value.
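Judging "peak information of the waveform data ≥ threshold" can be approximated by finding local maxima of the feature-point waveform and checking that enough of them occur at near-constant spacing, as expected of a periodic stationary object such as a fence. All thresholds below are illustrative assumptions:

```python
def periodic_candidate(waveform, peak_thresh, min_peaks=3):
    """Find local maxima of the feature-point waveform that reach
    peak_thresh; the object is a periodic-stationary-object candidate
    when enough nearly equally spaced peaks are present."""
    peaks = [i for i in range(1, len(waveform) - 1)
             if waveform[i] >= peak_thresh
             and waveform[i] > waveform[i - 1]
             and waveform[i] >= waveform[i + 1]]
    if len(peaks) < min_peaks:
        return False
    gaps = [b - a for a, b in zip(peaks, peaks[1:])]
    return max(gaps) - min(gaps) <= 1   # near-constant spacing
```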

Publication date: 05-12-2013

Intelligent Logo and Item Detection in Video

Number: US20130322689A1
Author: Chris Carmichael
Assignee: Ubiquity Broadcasting Corp

Techniques to follow objects in a video. An object detector detects the object, and an object tracker follows that object even when the detectable part cannot be seen. The object can be tagged in its display. The object can be individual team members in a video showing sports. A color filter that is based on colors of a uniform of a team of the team member can be used to restrict an area of said automated object detection.

Publication date: 05-12-2013

TARGET RECOGNITION SYSTEM AND TARGET RECOGNITION METHOD EXECUTED BY THE TARGET RECOGNITION SYSTEM

Number: US20130322692A1
Author: GUAN Haike
Assignee: RICOH COMPANY, LTD.

A target recognition system and a target recognition method to recognize one or more recognition targets, operatively connected to an imaging device to capture an image of an area ahead of the target recognition system, each of which includes a recognition area detector to detect multiple recognition areas from the captured image; a recognition weighting unit to set recognition weight indicating existence probability of images of the recognition targets to the respective recognition areas detected by the recognition area detector; and a target recognition processor to recognize the one or more recognition targets in a specified recognition area based on the recognition weight set in the respective recognition area. 1. A target recognition system to recognize one or more recognition targets , operatively connected to an imaging device to capture an image of an area ahead of the target recognition system , comprising:a recognition area detector to detect multiple recognition areas from the captured image;a recognition weighting unit to weight the probability of images of the recognition targets being present in each of the respective recognition areas detected by the recognition area detector; anda target recognition processor to recognize the one or more recognition targets in a specified recognition area based on the recognition weighting given to the respective recognition areas.2. The target recognition system according to claim 1 , wherein the imaging device has a stereo imaging device to capture a stereo image including two images claim 1 ,the recognition system further comprising a parallax calculator to calculate parallax of the captured image from the two images in the stereo image,wherein the recognition area detector detects multiple recognition areas from a luminance image of one of the images in the stereo image or a parallax image having pixel values corresponding to the parallax calculated by the parallax calculator.3. The target recognition system ...

Publication date: 12-12-2013

MONITORING REMOVAL AND REPLACEMENT OF TOOLS WITHIN AN INVENTORY CONTROL SYSTEM

Number: US20130328661A1
Assignee:

An inventory control system is described that includes a tool storage device including a drawer or a tray providing a pallet, wherein the pallet includes storage locations for objects; a sensing device configured to form an image of the storage locations; and a data processor configured to determine presence or absence of the pallet and presence or absence of objects within the storage locations of the pallet using the information from the image. 1. An inventory control system comprising:a tool storage device including a drawer or a tray providing a pallet, wherein the pallet includes storage locations for objects;a sensing device configured to form an image of the storage locations; anda data processor configured to determine presence or absence of the pallet and presence or absence of objects within the storage locations of the pallet using the information from the image.2. The inventory control system of claim 1 , wherein the pallet is a removable section within the drawer or tray and is configured to group objects by housing them within the storage locations.3. The inventory control system of claim 1 , wherein upon determining the pallet is absent from the drawer or the tray claim 1 , the processor issues the pallet along with the objects stored within the pallet as being checked out.4. The inventory control system of claim 3 , wherein: identify a return of the pallet to the drawer or the tray, and', 'upon determining the pallet has been returned to the drawer or the tray, the processor determines the presence or absence of the objects within the storage sections of the pallet., 'the processor is configured to5. The inventory control system of claim 1 , further comprising a display configured to display information about the pallet and the objects stored within the pallet.6. The inventory control system of claim 5 , wherein the processor is configured to display on the display an image of the pallet and the objects within the pallet.7. An inventory control ...

Publication date: 12-12-2013

METHOD FOR DETERMINING A BODY PARAMETER OF A PERSON

Number: US20130329960A1
Assignee:

A method is described for determining a body parameter of a person outside a vehicle. The method may include capturing a first set of data of the person by a first data capturing device of the vehicle, the captured first set of data representative of a first body posture of the person, capturing a second set of data of the person by a second data capturing device of the vehicle, the captured second set of data representative of a second body posture of the person different from the first body posture, and using the first and second sets of data as input for estimation of the body parameter of the person. Use of a data capturing device of a vehicle is also described, and optionally a distance measurement system of the vehicle, for determining a body parameter of a person according to the method. 1. A method for determining a body parameter of a person outside a vehicle , the method comprising:a) capturing a first set of data of said person by a first data capturing device of said vehicle, said captured first set of data being representative of a first body posture of said person;b) capturing a second set of data of said person by a second data capturing device of said vehicle, said captured second set of data being representative of a second body posture of said person, said second body posture being different from said first body posture; andc) using said first and second sets of data as input for estimation of said body parameter of said person.2. The method according to claim 1 , wherein said body parameter comprises one or more of an arm length claim 1 , a leg length claim 1 , a body height claim 1 , an eye position and a torso length.3. The method according to claim 1 , wherein said method is performed as an iteration process by repeating (b) and optionally (c).4. 
The method according to claim 1 , wherein said estimation made in step c) comprises assumption of at least one of the following rules:a left-hand side and a right-hand side of the body of said person ...

Publication date: 19-12-2013

METHODS AND SYSTEMS FOR IDENTIFYING, MARKING, AND INVENTORYING LARGE QUANTITIES OF UNIQUE SURGICAL INSTRUMENTS

Number: US20130336554A1
Assignee: PREZIO HEALTH

An apparatus for automatically identifying a surgical instrument, the apparatus comprising a capture module, an attribute database, a comparison module, and an exporting module, the capture module comprising hardware operable to capture multiple attributes of the surgical instrument, the attribute database comprising multiple stored attributes of a plurality of reference surgical instruments, the comparison module programmed to generate a comparison score for the surgical instrument, wherein the comparison module is programmed to generate the comparison score by receiving multiple attributes captured by the capture module and comparing it to the multiple attributes stored in the attribute database, and the exporting module configured to receive and export the comparison score generated by the comparison module. 1. An apparatus for automatically identifying a surgical instrument , the apparatus comprising:a capture module comprising hardware to capture multiple attributes of the surgical instrument, wherein the capture module comprises an image capture device including at least a first camera and a second camera, wherein the first camera captures a first image of the surgical instrument, wherein the second camera captures a second image of the surgical instrument, and wherein the first and second cameras are relatively positioned to capture respective first and second perspectives of the surgical instrument;an attribute database comprising multiple stored attributes of a plurality of reference surgical instruments;a comparison module programmed to generate a comparison score for the surgical instrument, wherein the comparison module is programmed to generate the comparison score by receiving multiple attributes captured by the capture module and comparing it to the multiple attributes stored in the attribute database, and wherein the comparison module is programmed to generate the comparison score by at least comparing the first and second images of the surgical ...

More details
26-12-2013 publication date

VIDEO PROCESSING APPARATUS AND VIDEO PROCESSING METHOD

Number: US20130343604A1
Author: Adachi Keiji
Assignee: CANON KABUSHIKI KAISHA

A video processing apparatus tracks an object in a video and performs detection processing for detecting that an object in the video is a specific object such that a number of times the detection processing is performed within a predetermined period on a tracking object not detected to be the specific object is more than a number of times the detection processing is performed within the predetermined period on a tracking object detected to be the specific object. 1. A video processing apparatus comprising: a tracking unit configured to track an object in a video; and a first detection unit configured to perform detection processing for detecting that an object in the video is a specific object such that a number of times the first detection unit performs the detection processing within a predetermined period on a tracking object that the first detection unit did not detect to be the specific object is more than a number of times the first detection unit performs the detection processing within the predetermined period on a tracking object that the first detection unit detected to be the specific object. 2. The video processing apparatus according to claim 1, wherein: the first detection unit determines, in accordance with a number of times the first detection unit detected that a first tracking object is the specific object in a first period, a number of times the detection processing is performed on the first tracking object in a second period after the first period. 3.
The video processing apparatus according to claim 1, further comprising: a second detection unit configured to detect an object in the video, wherein the tracking unit tracks an object that the second detection unit detected, and the first detection unit determines, in accordance with a number of times the first detection unit detected that a second tracking object is the specific object after the second detection unit detected the second tracking object, a number of times the detection processing is performed ...
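The scheduling idea in this entry, probing a not-yet-confirmed tracking object more often than one already detected to be the specific object, can be sketched as below; the interval values and the per-period framing are illustrative assumptions.

```python
def detection_frames(confirmed: bool, period: int = 30,
                     unconfirmed_interval: int = 1,
                     confirmed_interval: int = 6) -> list:
    """Frame indices within one period at which the costly 'specific
    object' detection runs for a tracked object. An unconfirmed object
    is probed every frame; a confirmed one only every few frames."""
    interval = confirmed_interval if confirmed else unconfirmed_interval
    return list(range(0, period, interval))
```

Over a 30-frame period the unconfirmed object gets 30 detection attempts while the confirmed one gets 5, matching the claimed "more than" relation.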

More details
26-12-2013 publication date

CAMERA-BASED METHOD FOR DETERMINING DISTANCE IN THE CASE OF A VEHICLE AT STANDSTILL

Number: US20130343613A1
Assignee:

Methods for distance determination, as are used, for example, in parking assistance systems, are described. In a vehicle at standstill, the method involves detecting a first predefined event that occurs in connection with a pitching motion of the vehicle, and based on the detection of the first event, activating the camera in order to record a first and a second image of the vehicular environment and include a time reference to the pitching motion. The method also includes processing the first and the second image in order to determine a distance to the object from a displacement of the object in the field of view of the camera that has taken place between the points in time of the recording of the first and second image in response to the pitching motion. 1.-10. (canceled) 11. A camera-based method for determining a distance to an object in a vehicular environment in the case of a vehicle at a standstill, comprising: detecting a first predefined event that occurs in connection with a pitching motion of the vehicle, the first event including at least one of the following events: unlocking of a vehicle door, opening of a vehicle door, change in a weight loading upon a vehicle seat, and lowering of the vehicle; activating a camera in response to the detection of the first event in order to record a first image and a second image of the vehicular environment and include a time reference to the pitching motion; processing the first image and the second image in order to determine a distance to the object from a displacement of the object in a field of view of the camera that has taken place between a point in time of recording the first image and a point in time of recording the second image in response to the pitching motion; and determining an absolute displacement of the camera in response to the pitching motion, and a specific absolute displacement in the determination of the distance to the object. 12. The method as recited in claim 11, wherein: the first image is ...
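The core geometry here, recovering distance from the camera's known displacement during the pitching motion and the object's shift in the image, can be sketched with a small-angle pinhole model; this is an illustrative stand-in for the patent's processing, not its exact formula, and the numeric values are assumptions.

```python
import math

def distance_from_pitch(camera_drop_m: float, pixel_shift: float,
                        focal_length_px: float) -> float:
    """Estimate the distance to an object from the camera's absolute
    vertical displacement during the pitching motion (e.g. when a door
    opens or a seat is loaded) and the object's resulting pixel shift
    between the first and second image. Pinhole model: the angular
    shift is atan(pixel_shift / focal_length), and the distance is
    displacement / tan(angular_shift)."""
    angular_shift = math.atan2(pixel_shift, focal_length_px)
    return camera_drop_m / math.tan(angular_shift)
```

A 2 cm camera drop that shifts the object by 10 px with a 1000 px focal length places the object at roughly 2 m.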

More details
02-01-2014 publication date

WHITE TURBID STATE DIAGNOSTIC APPARATUS

Number: US20140002654A1
Assignee:

A white turbid state diagnostic apparatus has an imaging part installed on a vehicle and configured to convert a light signal from a periphery of the vehicle into an image signal, a region detection part configured to detect a region from the image signal, the region being constituted by pixels having brightness values over a predetermined brightness and being in a substantially circular shape having a predetermined area or more, a brightness gradient calculation part configured to calculate a brightness gradient on a line which is directed from a predetermined position in a predetermined direction based on brightness values of pixels on the line in the region, and a white turbid level calculation part configured to calculate a white turbid level of the lens based on the brightness gradient. 1. A white turbid state diagnostic apparatus , comprisingan imaging part installed on a vehicle and configured to observe a periphery of a vehicle via a lens, the imaging part having a photoelectric conversion section configured to convert a light signal of the observed periphery of the vehicle into an image signal;a region detection part configured to detect a region from the image signal, the region being constituted by pixels having brightness values over a predetermined brightness and being in a substantially circular shape having a predetermined area or more;a brightness gradient calculation part configured to calculate a brightness gradient on a line which is directed from a predetermined position in a predetermined direction based on brightness values of pixels on the line in the region; anda white turbid level calculation part configured to calculate a white turbid level of the lens based on the brightness gradient.2. 
The white turbid state diagnostic apparatus according to claim 1, wherein the imaging part has a gain adjustment section configured to adjust a gain for converting the light signal into the image signal according to brightness of the periphery of the ...
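The brightness-gradient idea in this entry can be sketched on a one-dimensional profile: a clear lens shows a steep falloff on a line leaving a bright region, while a clouded (white turbid) lens scatters light and flattens the gradient. The reference falloff value and the mapping to [0, 1] are illustrative assumptions.

```python
def white_turbid_level(profile: list, clear_drop: float = 60.0) -> float:
    """profile: brightness values sampled along a line directed outward
    from a detected bright, substantially circular region. Returns a
    turbid level in [0, 1]: 0 for a falloff at least as steep as the
    assumed clean-lens reference, approaching 1 as the gradient
    flattens."""
    drops = [max(0.0, profile[i] - profile[i + 1])
             for i in range(len(profile) - 1)]
    mean_drop = sum(drops) / len(drops)
    return max(0.0, min(1.0, 1.0 - mean_drop / clear_drop))
```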

More details
02-01-2014 publication date

BIRDS-EYE-VIEW IMAGE GENERATION DEVICE, AND BIRDS-EYE-VIEW IMAGE GENERATION METHOD

Number: US20140002660A1
Assignee: Panasonic Corporation

A birds-eye-view image generation device includes a captured image acquisition unit, an image conversion unit, a birds-eye-view image combining unit, and a joint setting unit. The joint setting unit sets any position of a rim of a vehicle image corresponding to a vehicle included in the birds-eye-view image as an end point in an overlapping imaging range in two birds-eye-view images corresponding to two imaging devices of which imaging ranges overlap each other, and sets a line which extends in any direction on an opposite side to the vehicle image from the end point between two radial directions directed to the end point from the two imaging devices, as a joint which joins two birds-eye-view images which are combined. 1. A birds-eye-view image generation device comprising:a captured image acquisition unit configured to acquire captured images which are respectively captured by a plurality of imaging devices mounted on a vehicle;an image conversion unit configured to convert the captured images acquired by the captured image acquisition unit into birds-eye-view images through a viewpoint conversion process;a birds-eye-view image combining unit configured to combine the plurality of birds-eye-view images converted by the image conversion unit; anda joint setting unit configured to set any position of a rim of a vehicle image corresponding to a vehicle included in the birds-eye-view images as an end point in an overlapping imaging range in two birds-eye-view images corresponding to two imaging devices of which imaging ranges overlap each other, and configured to set a line which extends in any direction on an opposite side to the vehicle image from the end point and between two radial directions directed to the end point from the two imaging devices, as a joint which joins two birds-eye-view images which are combined by the birds-eye-view image combining unit.2. The birds-eye-view image generation device according to claim 1 ,wherein the joint setting unit is ...

More details
02-01-2014 publication date

UNSUPERVISED LEARNING OF FEATURE ANOMALIES FOR A VIDEO SURVEILLANCE SYSTEM

Number: US20140003710A1
Assignee:

Techniques are disclosed for analyzing a scene depicted in an input stream of video frames captured by a video camera. In one embodiment, e.g., a machine learning engine may include statistical engines for generating topological feature maps based on observations and a detection module for detecting feature anomalies. The statistical engines may include adaptive resonance theory (ART) networks which cluster observed position-feature characteristics. The statistical engines may further reinforce, decay, merge, and remove clusters. The detection module may calculate a rareness value relative to recurring observations and data in the ART networks. Further, the sensitivity of detection may be adjusted according to the relative importance of recently observed anomalies. 1. A computer-implemented method for analyzing a scene , the method comprising:receiving kinematic and feature data for an object in the scene;determining, via one or more processors, a position-feature vector from the received data, the position-feature vector representing a location and one or more feature values at the location;retrieving a feature map corresponding to the position-feature vector, wherein the feature map includes one or more position-feature clusters;determining a rareness value for the object based at least on the position feature vector and the feature map; andreporting the object as anomalous if the rareness value meets given criteria.2. The method of claim 1 , further comprising claim 1 , updating the feature map using the position-feature vector.3. The method of claim 1 , wherein the feature map includes one or more adaptive resonance theory (ART) network clusters claim 1 , and wherein the rareness value is determined based on at least distance of the position-feature vector to a closest cluster and on statistical relevance of clusters less than a threshold distance from the position-feature vector.4. 
The method of claim 3 , wherein the distance to the closest cluster is a pseudo- ...
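The rareness computation described in this entry, based on the distance of a position-feature vector to the closest cluster and the cluster's statistical relevance, can be sketched as below; the (centroid, count) cluster representation and the exact formula are illustrative stand-ins for the patent's ART-based measure.

```python
import math

def rareness(vector, clusters):
    """clusters: list of (centroid, observation_count) pairs standing in
    for learned ART network clusters. Rareness grows with the Euclidean
    distance to the closest centroid and shrinks with that cluster's
    share of all observations."""
    total = sum(count for _, count in clusters)
    centroid, count = min(clusters,
                          key=lambda c: math.dist(vector, c[0]))
    distance = math.dist(vector, centroid)
    relevance = count / total
    return (1.0 - math.exp(-distance)) * (1.0 - relevance)
```

An observation near a heavily reinforced cluster scores low; one far from all clusters, or near a rarely observed one, scores high and would be reported as anomalous.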

More details
09-01-2014 publication date

PROCESSING CONTAINER IMAGES AND IDENTIFIERS USING OPTICAL CHARACTER RECOGNITION AND GEOLOCATION

Number: US20140009612A1
Author: King Henry S.
Assignee: Paceco Corp.

Embodiments include a system configured to process location information for objects in a site comprising an imaging device configured to take a picture of an object, the picture containing a unique identifier of the object; a global positioning system (GPS) component associated with the imaging device and configured to tag the image of the object with GPS location information of the object to generate a tagged image; a communications interface configured to transmit the tagged image to a server computer remote from the imaging device over an Internet Protocol (IP) network; and a processor of the server configured to perform Optical Character Recognition (OCR) on the picture and to create an indicator code corresponding to the identifier of the object, wherein the processor is further configured to create a processed result containing the indicator code and the location to locate the object within the site. 1. An apparatus , comprising:an imaging device configured to take a picture of an object, the picture containing an alphanumeric identifier of the object;a global positioning system (GPS) component associated with the imaging device and configured to tag the image of the object with GPS location information of the object to generate a tagged image;a communications interface configured to transmit the tagged image to a server computer remote from the imaging device over an Internet Protocol (IP) network; anda processor of the server configured to perform Optical Character Recognition (OCR) on the picture and to create an indicator code corresponding to the identifier of the object, wherein the processor is further configured to create a processed result containing the indicator code and the location to locate the object within a site.2. 
The apparatus of claim 1, wherein the object comprises a container at least twenty feet long and a chassis configured to carry at least one container, and wherein the tagged image comprises metadata including the identifier and ...
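The server-side step described here, running OCR on the GPS-tagged picture and pairing the resulting indicator code with the location, can be sketched as below. The dictionary layout and the `ocr` callable are hypothetical stand-ins for the real payload format and OCR engine.

```python
def build_processed_result(tagged_image: dict, ocr) -> dict:
    """tagged_image: image payload tagged with GPS metadata by the
    imaging device, e.g. {"pixels": ..., "gps": (lat, lon)}.
    ocr: a function that returns the object's alphanumeric identifier
    from the pixels (a hypothetical stand-in for a real OCR engine).
    The processed result pairs the indicator code with the location so
    the object can be found within the site."""
    code = ocr(tagged_image["pixels"])
    lat, lon = tagged_image["gps"]
    return {"indicator_code": code, "location": {"lat": lat, "lon": lon}}
```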

More details
09-01-2014 publication date

APPARATUS AND METHOD FOR DETECTING A THREE DIMENSIONAL OBJECT USING AN IMAGE AROUND A VEHICLE

Number: US20140009614A1
Assignee: HYUNDAI MOTOR COMPANY

An apparatus for detecting a three dimensional object using an image around a vehicle includes a plurality of imaging devices disposed on a front, a rear, a left side, and a right side of the vehicle; a processor configured to: collect an image of the front, the rear, the left side, and the right side of the vehicle through a virtual imaging device; generate a composite image by compounding a plurality of top view images of the image; extract a boundary pattern of the plurality of top view images in each boundary area; compare the boundary pattern of the plurality of top view images to analyze a correlation between a plurality of neighboring images in each boundary area; and detect a three dimensional object according to the correlation between the plurality of neighboring images in each boundary area. 1. An apparatus for detecting a three dimensional object using an image around a vehicle, the apparatus comprising: a plurality of imaging devices disposed on a front, a rear, a left side, and a right side of the vehicle; and a processor configured to: collect an image of the front, the rear, the left side, and the right side of the vehicle through a virtual imaging device generated using a mathematic modeling of each imaging device; generate a composite image by compounding a plurality of top view images of the collected image from the front, the rear, the left side, and the right side of the vehicle; analyze a boundary area between the plurality of top view images from the composite image to extract a boundary pattern of the plurality of top view images in each boundary area; compare the boundary pattern of the plurality of top view images to analyze a correlation between a plurality of neighboring images in each boundary area; and detect a three dimensional object disposed in each boundary area according to the correlation between the plurality of neighboring images in each boundary area. 2. The apparatus of claim 1, wherein the boundary pattern is ...
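The correlation test at the heart of this entry can be sketched with a plain Pearson correlation between the boundary patterns of two overlapping top views; the correlation measure and the threshold are illustrative assumptions.

```python
def pearson(a, b):
    """Pearson correlation between two equal-length brightness patterns."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    std_a = sum((x - mean_a) ** 2 for x in a) ** 0.5
    std_b = sum((y - mean_b) ** 2 for y in b) ** 0.5
    return cov / (std_a * std_b)

def three_d_object_in_boundary(pattern_a, pattern_b, threshold=0.8):
    """Flat road texture projects consistently into the two overlapping
    top views, so their boundary patterns correlate highly; a three
    dimensional object projects differently from the two camera
    positions and drops the correlation below the threshold."""
    return pearson(pattern_a, pattern_b) < threshold
```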

More details
09-01-2014 publication date

Lens-attached matter detector, lens-attached matter detection method, and vehicle system

Number: US20140010408A1
Assignee: Clarion Co Ltd

A lens-attached matter detector includes an edge extractor configured to create an edge image based on an input image, divide the edge image into a plurality of areas including a plurality of pixels, and extract an area whose edge intensity is a threshold range as an attention area, a brightness distribution extractor configured to obtain a brightness value of the attention area and a brightness value of a circumference area, a brightness change extractor configured to obtain the brightness value of the attention area and the brightness value of the circumference area for a predetermined time interval, and obtain a time series variation in the brightness value of the attention area based on the brightness value of the attention area, and an attached matter determiner configured to determine the presence or absence of attached matter based on the time series variation in the brightness value of the attention area.
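The time-series criterion in this entry can be sketched simply: matter stuck to the lens occludes the scene, so the attention area's brightness stays nearly constant over the observation interval while the circumference varies as the vehicle moves. Both tolerance values are illustrative assumptions.

```python
def is_attached_matter(attention_brightness: list,
                       surround_brightness: list,
                       static_tol: float = 5.0,
                       scene_change_min: float = 20.0) -> bool:
    """Brightness samples of the attention area and of its circumference
    taken at a fixed time interval. Attached matter is judged present
    when the attention area is nearly static while the circumference
    shows real scene change."""
    att_spread = max(attention_brightness) - min(attention_brightness)
    sur_spread = max(surround_brightness) - min(surround_brightness)
    return att_spread < static_tol and sur_spread > scene_change_min
```

Requiring the circumference to vary guards against the trivial case of a parked vehicle, where everything is static.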

More details
09-01-2014 publication date

Video Image Capture and Identification of Vehicles

Number: US20140010412A1
Author: Price Adam James
Assignee: Life on Show Limited

A method and apparatus for capturing, sorting and subsequently viewing an orbital image record of a vehicle, the method comprising the steps, in any suitable order, of using imaging means (11, 15) to capture an orbital moving image record of the vehicle in various orientations relative to the imaging means, storing and sorting the captured orbital image record for each vehicle by reference to a unique identifier for that vehicle externally visible to the imaging means during image capture, such as the vehicle licence or registration number, to provide a continuous image record unique to the vehicle, and thereafter selectively displaying (12) orbital images of the vehicle. 1. A method for capturing, sorting and subsequently viewing an orbital image record of a vehicle, the method comprising the steps, in any suitable order, of using imaging means (11, 15) to capture an orbital moving image record of the vehicle in various orientations relative to the imaging means, storing and sorting the captured orbital image record for each vehicle by reference to a unique identifier for that vehicle externally visible to the imaging means during image capture, such as the vehicle licence or registration number, to provide a continuous image record unique to the vehicle, and thereafter selectively displaying (12) orbital images of the vehicle. 2. A method according to claim 1, wherein the step of sorting the captured orbital image record for each vehicle by reference to a unique identifier for that vehicle involves the step of including validating algorithms by which multiple readings of the vehicle's unique identifier are extracted at varying angular positions around the vehicle, leading to a plurality of results and increasing confidence levels as to their accuracy being achieved. 3. A method according to wherein the confidence level in the number recognition is further increased by its associated algorithm rejecting or downgrading numbers perceived to be below a given size. 4.
A ...
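The validation step in claim 2, taking multiple readings of the registration number at varying angular positions and combining them to raise confidence, can be sketched as majority voting; the confidence definition and acceptance threshold are illustrative assumptions.

```python
from collections import Counter

def plate_consensus(readings: list, min_confidence: float = 0.6):
    """readings: registration-number strings recognised at varying
    angular positions during the orbital capture. Majority voting over
    the multiple readings yields the identifier and a confidence level;
    the identifier is rejected (None) below the acceptance threshold."""
    plate, votes = Counter(readings).most_common(1)[0]
    confidence = votes / len(readings)
    return (plate if confidence >= min_confidence else None), confidence
```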

More details
16-01-2014 publication date

ACTIVE PRESENCE DETECTION WITH DEPTH SENSING

Number: US20140015930A1
Author: Sengupta Kuntal
Assignee:

In vision-based authentication platforms for secure resources such as computer systems, false positives and/or false negatives in the detection of walk-away events are reduced or eliminated by incorporating depth information into tracking authenticated system operators. 1. A computer-implemented method for monitoring an operator's use of a secure system , the method comprising:(a) acquiring images with a depth-sensing camera system co-located with an operator terminal of the secure system;(b) analyzing at least some of the images to determine whether at least one face is present within a three-dimensional detection zone including a depth boundary relative to the operator terminal, and, if so, associating one of the at least one face with an operator; and(c) following association of a detected face with the operator, tracking the operator between successive ones of the images based, at least in part, on measured depth information associated with the operator to detect when the operator leaves the detection zone.2. The method of claim 1 , wherein tracking the operator comprises using the measured depth information to discriminate between the operator and background objects.3. The method of claim 1 , wherein step (b) comprises detecting faces in the images and thereafter computationally determining which claim 1 , if any claim 1 , of the detected faces are present within the detection zone.4. The method of claim 1 , wherein step (b) comprises detecting faces only within portions of the image corresponding to the detection zone.5. The method of claim 1 , wherein step (b) comprises identifying claim 1 , among a plurality of faces present within the detection zone claim 1 , the face closest to the secure system and computationally associating that face with the operator.6. The method of claim 1 , wherein step (b) comprises discriminating between faces and two-dimensional images thereof.7. The method of claim 1 , wherein step (c) comprises tracking a collection of ...
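Steps (b) and (c) of the claim, keeping only faces inside the detection zone's depth boundary and associating the closest one with the operator, can be sketched as below; the (face_id, depth) representation and the boundary value are illustrative assumptions.

```python
def associate_operator(faces: list, depth_boundary_m: float = 1.2):
    """faces: (face_id, measured_depth_m) pairs for faces detected in an
    image from the depth-sensing camera. Only faces inside the
    detection zone's depth boundary are candidates; the closest one is
    associated with the operator. Returns None when the zone is empty,
    which during tracking corresponds to a walk-away event."""
    in_zone = [(fid, depth) for fid, depth in faces
               if depth <= depth_boundary_m]
    if not in_zone:
        return None
    return min(in_zone, key=lambda f: f[1])[0]
```

Using measured depth here is what suppresses the false positives mentioned in the abstract: a bystander (or a flat photograph of a face) beyond the depth boundary never becomes the operator.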

More details
23-01-2014 publication date

SPECIFYING SEARCH CRITERIA FOR SEARCHING VIDEO DATA

Number: US20140022387A1
Assignee: 3VR Security, Inc.

A method and apparatus are described for specifying regions of interest within a two-dimensional view of visual information that comprises a series of frames. Visual changes that occur in the view are stored. A user enters search criteria that specify at least one first region of interest within the view and a visual change. A visual change may include a change in pixel values or a detection of motion of one or more objects within the view. The first search criteria are compared against the stored visual changes to identify a sequence of frames in which the specified visual change occurred within the first region of interest. The search criteria may specify multiple regions of interest, each with one or more types of visual changes. If a motion is specified, then a direction, speed, and behavior of a moving object may also be specified. 1. A machine-implemented method, comprising: storing change information about visual changes that occur in a two-dimensional view of visual information; causing a user interface to be displayed that allows a user to specify search criteria; wherein the user interface presents controls for selecting between a plurality of behavior options; wherein each behavior option of the plurality of behavior options corresponds to a pre-defined behavior; receiving, from the user through the user interface, first search criteria that includes selection of a particular behavior option of the plurality of behavior options; comparing the first search criteria against said change information to identify a sequence of frames, within the visual information, in which an object exhibits the pre-defined behavior that corresponds to the particular behavior option; wherein the method is performed by one or more computing devices. 2. The method of claim 1, wherein the plurality of behavior options include one or more of loitering, running, snooping, or swerving. 3.
The method of claim 1, wherein: the user interface presents second controls ...

More details
23-01-2014 publication date

Vehicle wheel alignment system and methodology

Number: US20140023238A1
Author: Steven W. Rogers
Assignee: Snap On Inc

A hybrid wheel alignment system and methodology use passive targets for a first pair of wheels (e.g. front wheels) and active sensing heads for another pair of wheels (e.g. rear wheels). The active sensing heads combine image sensors for capturing images of the targets with at least one spatial relationship sensor for sensing a relationship between the active sensing heads. One or both of the active sensing heads may include inclinometers or the like, for sensing one or more tilt angles of the respective sensing head. Data from the active sensing heads may be sent to a host computer for processing to derive one or more vehicle measurements, for example, for measurement of parameters useful in wheel alignment applications.

More details
30-01-2014 publication date

Real Time Threat Detection System

Number: US20140028457A1
Assignee: THERMAL MATRIX USA, INC.

A real-time threat detection system incorporating a plurality of sensors adapted to detect radiation across the majority of the electromagnetic spectrum. The system also includes an aided or automatic target recognition module which compares the data from the sensors against known radiation signatures and issues an alert when an anomalous signature is detected. The system further includes an operator station which displays sensor information allowing the operator to intervene. The sensors detect radiation which is normally emitted by persons or other bodies and display areas to the operator where normal emissions are blocked. 1. A threat detection system , comprising:a mobile operator station having a plurality of heterogeneous integrated passive sensors, the sensors adapted to detect radiation in a plurality of bands of the electromagnetic spectrum at variable distances, the plurality of sensors selected from a group consisting of a terahertz sensor, a millimeter wave sensor, and an infrared sensor;a camera; andan aided target recognition module in communication with the camera and the plurality of sensors, wherein the aided target recognition module is configured to integrate imagery from the plurality of sensors into a homogeneous image.2. The threat detection system of claim 1 , further comprising:the plurality of passive sensors including a first sensor being the terahertz sensor, a second sensor being the millimeter wave sensor, and a third sensor being the infrared sensor; andthe plurality of bands including a first band being a terahertz band, a second band being a millimeter wave band, and a third band being an infrared band;wherein the terahertz sensor configured to detect the terahertz band, the millimeter wave sensor configured to detect the millimeter wave band, the infrared sensor configured to detect the infrared band, and the camera configured to detect a visible band image.3. The threat detection system of claim 2 , wherein the aided target ...

More details
30-01-2014 publication date

DRIVING ASSISTANCE SYSTEM AND RAINDROP DETECTION METHOD THEREOF

Number: US20140028849A1
Assignee:

A driving assistance system is installed in a moving object and includes: image-capturing means to capture a surrounding image including a portion of the moving object; first edge line storing means to store a first edge line detected from a first surrounding image captured in normal conditions by the image-capturing means; and calculating means to calculate a matching degree between the first edge line and a second edge line detected from a second surrounding image currently captured by the image-capturing means. A raindrop judging means judges that a raindrop is attached to the lens unit of the image-capturing means in response to a decrease in the matching degree between the first edge line and the second edge line. 1.-13. (canceled) 14. A driving assistance system to provide various kinds of information to a driver of a moving object from an image-capturing result of surroundings of the moving object, comprising: image-capturing unit installed on the moving object and configured to capture a surrounding image including a portion of the moving object; first edge line storing unit configured to store a first edge line detected from a first surrounding image captured in advance by the image-capturing unit; edge detecting unit configured to detect a second edge line from a second surrounding image currently captured by the image-capturing unit; calculating unit configured to calculate a matching degree between the first edge line stored in the first edge line storing unit and the second edge line detected by the edge detecting unit; and raindrop judging unit configured to judge that a raindrop is attached to a lens unit of the image-capturing unit, in response to a decrease in the matching degree between the first edge line and the second edge line, wherein the calculating unit calculates a deviation degree between the first edge line stored in the first edge line storing unit and the second edge line detected by the edge detecting unit, and when the ...
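The matching-degree comparison in this entry can be sketched over sets of edge-pixel coordinates: the portion of the vehicle body in the image normally produces fixed edges, and a raindrop on the lens distorts them. The set-overlap measure and the threshold are simple illustrative stand-ins for the patent's matching and deviation degrees.

```python
def matching_degree(reference_edges: set, current_edges: set) -> float:
    """Edge-pixel coordinate sets extracted from the portion of the
    vehicle body visible in the surrounding image. The degree is the
    fraction of reference edge pixels reproduced in the current frame."""
    if not reference_edges:
        return 0.0
    return len(reference_edges & current_edges) / len(reference_edges)

def raindrop_on_lens(reference_edges, current_edges,
                     threshold: float = 0.7) -> bool:
    """A raindrop on the lens distorts the normally fixed body edges,
    so a matching degree below the threshold is judged as an attached
    raindrop."""
    return matching_degree(reference_edges, current_edges) < threshold
```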

More details
30-01-2014 publication date

LIGHT EMITTING SOURCE DETECTION DEVICE, LIGHT BEAM CONTROL DEVICE AND PROGRAM FOR DETECTING LIGHT EMITTING SOURCE

Number: US20140029791A1
Author: Kato Kenji, Mori Raise
Assignee:

A processing section as a light emitting source detection device in a light beam control system changes irradiation parameters, one of an irradiation range and a luminance of a light beam of head lamps of an own vehicle. The light beam of the head lamps is irradiated toward a light object corresponding to a light source detected in captured image data. The processing section detects whether or not the luminance of the detected light source is changed after the change of the irradiation parameters, and sets a probability value of the detected light source to a value lower than a probability value of a light source when luminance is not changed even if the irradiation parameters are changed. When the probability value of the detected light source is not less than a predetermined threshold value, the processing section determines that the detected light source is a vehicle light source. 1. A light emitting source detection device configured to detect a light emitting source in captured image data , comprising:an irradiation parameter change section configured to change irradiation parameters to change an irradiation state of a light beam of a light source of an own vehicle toward a light source, which is present in front of the light emitting source detection device, when the light source is detected in captured image data;a luminance change detection section configured to detect whether or not a luminance of the light source detected in the captured image data is changed when the irradiation parameter is changed;a probability value setting section configured to decrease a probability value of the detected light source when the luminance of the detected light source is changed more than a probability value of the detected light source when the luminance of the detected light source is not changed; anda light emitting source detection section configured to judge that the detected light source is a light emitting source as a luminous object which emits light when the ...
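The probability update described here can be sketched as follows: after deliberately changing the own head lamps' irradiation parameters toward the candidate light object, a luminance change suggests the object is merely reflecting the own beam, so the probability that it is a self-luminous source is lowered. The penalty, boost, clamp, and threshold values are illustrative assumptions.

```python
def update_light_probability(probability: float, luminance_changed: bool,
                             penalty: float = 0.4,
                             boost: float = 0.1) -> float:
    """Lower the probability of the candidate being a self-luminous
    vehicle light source when its luminance tracked the irradiation
    change (reflector-like behaviour); raise it otherwise."""
    if luminance_changed:
        probability -= penalty
    else:
        probability += boost
    return min(1.0, max(0.0, probability))

def is_vehicle_light_source(probability: float,
                            threshold: float = 0.5) -> bool:
    """Judge the candidate a light emitting source once its probability
    value is not less than the threshold."""
    return probability >= threshold
```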

More details
30-01-2014 publication date

VEHICLE LIGHT SOURCE DETECTION DEVICE, LIGHT BEAM CONTROL DEVICE AND PROGRAM OF DETECTING VEHICLE LIGHT SOURCE

Number: US20140029792A1
Author: Kato Kenji, Mori Raise
Assignee:

A vehicle light source detection device in a light beam control system detects a position of a light source that appears and is detected in captured image data. The device calculates a gradient of a road on which the own vehicle is running. The vehicle light source detection device estimates a vanishing point in the captured image data on the basis of the detected gradient of the road. The device further increases a reliability value of the detected light source when the point of the detected light source more approaches the vanishing point. When the reliability value of the detected light source is not less than a predetermined reference value, the device determines that the detected light source is a head lamp of an oncoming vehicle, and adjusts an irradiation range of the light beam of the head lamps of the own vehicle to avoid the oncoming vehicle. 1. A vehicle light source detection device mounted to an own vehicle configured to detect a light source of a vehicle in captured image data, comprising: a light source appearance detection section configured to detect a presence of a light source detected in captured image data; a gradient calculation section configured to calculate a gradient of a road on which the own vehicle is running; a vanishing point estimation section configured to estimate a vanishing point in the captured image data on the basis of at least the detected gradient of the road; a reliability value calculation section configured to more increase a magnitude of a reliability value of the detected light source when a position of the detected light source more approaches to the vanishing point in the captured image data, and the reliability value of the detected light source indicating a possibility that the detected light source is a light source of a vehicle; and a light source detection section configured to determine that the detected light source is a light source of a vehicle when the reliability value of the detected light source is not less than a ...
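The reliability rule in this entry, a value that grows as the detected light source approaches the vanishing point estimated from the road gradient, can be sketched as below; the 1 / (1 + d / scale) shape, the scale, and the reference value are illustrative assumptions, not the patent's formula.

```python
import math

def reliability_value(light_xy, vanishing_xy, scale: float = 100.0) -> float:
    """Reliability in (0, 1] that grows as the detected light source's
    image position approaches the estimated vanishing point, since
    distant oncoming head lamps appear near it. d is the pixel distance
    between the two points."""
    d = math.dist(light_xy, vanishing_xy)
    return 1.0 / (1.0 + d / scale)

def is_oncoming_headlamp(light_xy, vanishing_xy,
                         reference: float = 0.5) -> bool:
    """Judge the light source an oncoming head lamp once its reliability
    value is not less than the reference value."""
    return reliability_value(light_xy, vanishing_xy) >= reference
```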

30-01-2014 publication date

In-Video Product Annotation with Web Information Mining

Number: US20140029801A1
Assignee: NATIONAL UNIVERSITY OF SINGAPORE

A system provides product annotation in a video to one or more users. The system receives a video from a user, where the video includes multiple video frames. The system extracts multiple key frames from the video and generates a visual representation of each key frame. The system compares the visual representation of the key frame with a plurality of product visual signatures, where each visual signature identifies a product. Based on the comparison of the visual representation of the key frame and a product visual signature, the system determines whether the key frame contains the product identified by the visual signature of the product. To generate the plurality of product visual signatures, the system collects multiple training images comprising multiple expert product images obtained from an expert product repository, each of which is associated with multiple product images obtained from multiple web resources. 1. A computer method for providing product annotation in a video to one or more users, the method comprising: receiving a video for product annotation, the video comprising a plurality of video frames; extracting a plurality of key frames from the video frames; and for each key frame: generating a visual representation of the key frame; comparing the visual representation with a plurality of product visual signatures; and determining, based on the comparison, whether the key frame contains a product identified by one of the product visual signatures. 2. The method of claim 1, wherein extracting a plurality of key frames from the video comprises: extracting each of the plurality of key frames at a fixed point of the video. 3. The method of claim 1, wherein generating the visual signature of a key frame comprises: extracting a plurality of visual features from the key frame; grouping the plurality of visual features into a plurality of clusters; and generating a multi-dimensional bag-of-visual-words histogram as the visual signature of the key frame. 4.
The ...
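Claim 3 describes the visual signature as a bag-of-visual-words histogram. The quantization step can be sketched as follows; in the patented system the cluster centers would come from training on the expert product images, whereas here all names, the feature layout, and the nearest-center assignment are illustrative assumptions.

```python
import math
from collections import Counter

def nearest_cluster(feature, centers):
    """Assign a local feature vector to its nearest cluster center."""
    return min(range(len(centers)),
               key=lambda i: math.dist(feature, centers[i]))

def bow_histogram(features, centers):
    """Quantize local features against the cluster centers and count hits,
    producing one histogram bin per visual word."""
    counts = Counter(nearest_cluster(f, centers) for f in features)
    return [counts.get(i, 0) for i in range(len(centers))]
```

Two such histograms (one for a key frame, one for a product signature) can then be compared with any standard histogram distance.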

30-01-2014 publication date

METHOD AND APPARATUS FOR USE IN FORMING AN IMAGE

Number: US20140029802A1
Assignee:

A method for use in forming an image of an object comprises setting a value of an attribute of an image of an object according to a measured reflectance of the object. The image of the object thus formed may be realistic and may closely resemble the actual real-world appearance of the object. Such a method may, in particular, though not exclusively, be useful for providing a realistic image of a road surface and any road markings thereon to assist with navigation. Setting a value of an attribute of the image of the object may comprise generating an initial image of the object and adjusting a value of an attribute of the initial image of the object according to the measured reflectance of the object to form an enhanced image of the object. A method for use in navigation comprises providing a navigation system with data associated with an image formed using such a method. An image formed using such a method and a map database containing such an image are also disclosed. 1. A method for use in forming an image of an object comprising: setting a value of an attribute of an image of an object according to a measured reflectance of the object. 2. A method according to claim 1, wherein the object comprises a surface for vehicular traffic. 3. A method according to claim 1, wherein the object comprises a road marking. 4. A method according to claim 1, wherein the object comprises a road-side surface or a road-side structure. 5. A method according to claim 1, wherein the attribute of the image of the object comprises at least one of a brightness, lightness, intensity, grayscale intensity, saturation, contrast, hue and colour of the image. 6. A method according to claim 1, comprising associating at least one of a pixel, point and portion of the image of the object with a corresponding position on a surface of the object from which the reflectance is measured. 7. A method according to claim 6, comprising setting a value of an ...

06-02-2014 publication date

CARGO SENSING

Number: US20140036072A1
Assignee:

Cargo presence detection devices, systems, and methods are described herein. One cargo presence detection system includes one or more sensors positioned in an interior space of a container, and arranged to collect background image data about at least a portion of the interior space of the container and updated image data about the portion of the interior space of the container and a detection component that receives the image data from the one or more sensors and identifies if one or more cargo items are present in the interior space of the container based on analysis of the background and updated image data. 1. A cargo presence detection system , comprising:one or more sensors positioned in an interior space of a container, and arranged to collect background image data about at least a portion of the interior space of the container and updated image data about the portion of the interior space of the container; anda detection component that receives the image data from the one or more sensors and identifies if one or more cargo items are present in the interior space of the container based on analysis of the background and updated image data.2. The cargo presence detection system of claim 1 , wherein the detection component compares the background data and updated data to identify differences and then analyzes the differences to determine whether the differences represent one or more cargo items.3. The cargo presence detection system of claim 1 , wherein at least one of the one or more sensors is an active infra-red or near infra-red three dimensional sensor.4. The cargo presence detection system of claim 1 , wherein the image data provided by the one or more sensors includes at least one of: depth information and three dimensional points.5. The cargo presence detection system of claim 1 , wherein at least one of the one or more sensors is movable within the interior of the container.6. 
The cargo presence detection system of claim 5 , wherein the system includes ...
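The background/updated comparison in claim 2 reduces, in its simplest form, to counting pixels that differ between the two captures and deciding whether the changed region is large enough to represent cargo. A minimal sketch, assuming flat grayscale pixel arrays; the function name and both thresholds are illustrative, not from the patent:

```python
def cargo_present(background, updated, pixel_delta=25, min_changed=50):
    """Flag cargo when enough pixels differ between the background image
    of the empty container and the updated image.

    background, updated: equal-length sequences of grayscale pixel values.
    pixel_delta: per-pixel difference treated as a real change.
    min_changed: number of changed pixels treated as a cargo item.
    """
    changed = sum(1 for b, u in zip(background, updated)
                  if abs(b - u) > pixel_delta)
    return changed >= min_changed
```

The patented system additionally analyzes depth information and 3-D points from the sensors; this sketch shows only the difference-then-analyze structure.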

06-02-2014 publication date

METHOD FOR EVALUATING A PLURALITY OF TIME-OFFSET PICTURES, DEVICE FOR EVALUATING PICTURES, AND MONITORING SYSTEM

Number: US20140037141A1
Assignee: Hella KGaA Hueck & Co.

The invention relates to a method for evaluating a plurality of chronologically staggered images, said method comprising the following steps: 1. A method for evaluating a plurality of chronologically offset images, said method comprising the following steps: detecting a plurality of objects in a first image and storing each of the plurality of objects as tracks with a first capture time and/or a first capture location, and detecting a plurality of objects in further images and identifying each of the detected objects as an object assigned to the respective stored track, wherein the respective track is updated by the current position of the identified object and, in the respective further images, objects detected for the first time are stored with assigned tracks. 2. The method according to claim 1, comprising determining a number of stored tracks for at least one of the images. 3. The method according to claim 1, comprising: storing and outputting at least one list of objects; and using the list to establish objects crossing counting lines, to assess re-entrants into a sub-region of the images, to separately assess the object numbers in a number of sub-regions of the images, to count people standing in queues, to create heat maps, and/or to create statistics concerning residence period. 4. The method according to claim 3, comprising outputting the list in a manner chronologically staggered with respect to the storing of said list. 5. The method according to claim 3, comprising outputting a plurality of lists in a chronologically staggered manner, wherein the plurality of lists are further processed together, in such a way that the lists are used together to establish objects crossing counting lines, to assess re-entrants into a sub-region of the images, to separately evaluate the object numbers in a plurality of sub-regions of the images, to count people standing in queues, to ...

06-02-2014 publication date

NUMBER OF PERSONS MEASUREMENT DEVICE

Number: US20140037147A1
Assignee: Panasonic Corporation

A person extraction unit extracts a person from an image input into an image input unit. An attribute extraction unit obtains an attribute of the person extracted by the person extraction unit. A motion path creation unit creates a motion path of the person from positional information within the image of the person extracted by the person extraction unit. A measurement reference coordinate setting unit sets a measurement line (a first measurement line to a third measurement line) for the motion path corresponding to the person according to the attribute of the person extracted by the attribute extraction unit. A people number counting unit counts the number of people based on positional relation between the motion path of the person created by the motion path creation unit and the measurement line set within the image. 1. A people counting device comprising:an image input unit which inputs an image;a person extraction unit which extracts a person from the image input into the image input unit;a motion path creation unit which creates a motion path of the person from positional information in the image of the person extracted by the person extraction unit;a people number counting unit which counts a number of people based on a positional relation between the motion path of the person created by the motion path creation unit and a measurement reference coordinate set within the image;an attribute extraction unit which obtains an attribute of the person extracted by the person extraction unit; anda measurement reference coordinate setting unit which sets the measurement reference coordinate for the motion path corresponding to the person according to the attribute of the person extracted by the attribute extraction unit.2. The people counting device according to claim 1 ,wherein the attribute extraction unit extracts age as the attribute, andwherein when the age is determined to be lower than a first predetermined age or higher than a second predetermined age, the ...
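The core counting step, detecting that a person's motion path crosses the measurement line set for that person's attribute, can be sketched as follows. The data layout (paths as point lists, horizontal measurement lines keyed by attribute name) is an assumption for illustration; the patent's measurement reference coordinates need not be horizontal lines.

```python
def crosses_line(path, line_y):
    """A crossing occurs when consecutive path points straddle the line y = line_y."""
    return any((y0 - line_y) * (y1 - line_y) < 0
               for (_, y0), (_, y1) in zip(path, path[1:]))

def count_people(paths_with_attrs, line_for_attr):
    """Count motion paths that cross the measurement line chosen for
    each person's extracted attribute (e.g. an age class)."""
    return sum(crosses_line(path, line_for_attr[attr])
               for path, attr in paths_with_attrs)
```

Selecting the line per attribute mirrors the measurement reference coordinate setting unit: for example, a lower line for children than for adults.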

13-02-2014 publication date

NEIGHBORING VEHICLE DETECTING APPARATUS

Number: US20140044311A1
Author: TAKAHASHI Yoshihiko
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA

A neighboring vehicle detecting device according to the invention includes: a neighboring vehicle detecting part configured to detect a neighboring vehicle behind an own vehicle; a curved road detecting part configured to detect information related to a curvature radius of a curved road; a storing part configured to store a detection result of the curved road detecting part; and a processing part configured to set a detection target region behind the own vehicle based on the detection result of the curved road detecting part which is stored in the storing part and is related to the curved road behind the own vehicle, wherein the processing part detects the neighboring vehicle behind the own vehicle based on a detection result in the set detection target region by the neighboring vehicle detecting part, the neighboring vehicle traveling on a particular lane which has a predetermined relationship with a traveling lane of the own vehicle. 1.-6. (canceled) 7. A neighboring vehicle detecting device, comprising: a neighboring vehicle detecting part configured to detect a neighboring vehicle behind an own vehicle; a curved road detecting part configured to detect information related to a curvature radius of a curved road at a predetermined cycle; a storing part configured to store a detection result of the curved road detecting part; and a processing part configured to retrieve from the storing part, based on information about a vehicle speed of the own vehicle, the information related to the curvature radius of the curved road which is located a predetermined distance behind a current position of the own vehicle, and set a detection target region behind the own vehicle based on the retrieved information, wherein the processing part detects the neighboring vehicle behind the own vehicle based on a detection result in the set detection target region by the neighboring vehicle detecting part, the neighboring vehicle traveling on a particular lane which has a predetermined ...

13-02-2014 publication date

Incremental network generation providing seamless network of large geographical areas

Number: US20140044317A1

A system and method for generating a seamless road network of a large geographical area includes a plurality of GPS traces extending across a geographical area. A plurality of threads simultaneously employ entire traces to collectively generate the seamless road network. The method includes dividing the geographical area into tiles, with each trace extending across several tiles. Each thread can employ one of the traces while another thread employs another trace. Each thread employs an entire one of the traces extending across several tiles during a single step. A job scheduler and blocking table prevent threads from simultaneously processing traces having common tiles and disturbing one another. The threads employ an incremental map matching method, wherein the probe traces are compared to existing line segments of the digital map, and new line segments are created using the probe traces not matching the existing line segments.
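The incremental map-matching step, comparing probe-trace points against existing line segments of the digital map, rests on a point-to-segment distance test: points within tolerance of an existing segment are matched, while unmatched points seed new segments. A 2-D sketch; the tolerance value and function names are illustrative assumptions.

```python
import math

def point_segment_distance(p, a, b):
    """Distance from a probe point p to the line segment a-b (2-D coordinates)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.dist(p, a)  # degenerate segment: a single point
    # Project p onto the segment and clamp to the endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.dist(p, (ax + t * dx, ay + t * dy))

def matched_to_existing(p, segments, tolerance=5.0):
    """A probe point matches the existing road network if some segment lies
    within the tolerance; otherwise it would contribute a new line segment."""
    return any(point_segment_distance(p, a, b) <= tolerance for a, b in segments)
```

In the described system this test runs inside worker threads, with the job scheduler and blocking table keeping threads off traces that share tiles.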

20-02-2014 publication date

METHOD FOR DETECTING TARGET OBJECTS IN A SURVEILLANCE REGION

Number: US20140049647A1
Author: Ick Julia
Assignee: Hella KGaA Hueck & Co.

The present application presents methods and apparatuses for detecting target objects in an image sequence of a monitoring region. In some examples, such methods may include adjusting pixel values of images of the image sequence for interference components associated with at least one interfering object, generating the interference components associated with the at least one interfering object that is situated in the monitoring region, searching the image sequence for the target objects based on the adjusted pixel values, detecting a start of a predetermined sequence of motions associated with the interfering object, and computing an instantaneous position of the at least one interfering object during the predetermined sequence of motions, wherein adjusting the pixel values of the images is based upon the instantaneous position. 1.-16. (canceled) 17. A method for detecting target objects in an image sequence of a monitoring region, comprising: adjusting pixel values of images of the image sequence for interference components associated with at least one interfering object; generating the interference components associated with the at least one interfering object that is situated in the monitoring region; searching the image sequence for the target objects based on the adjusted pixel values; detecting a start of a predetermined sequence of motions associated with the interfering object; and computing an instantaneous position of the at least one interfering object during the predetermined sequence of motions, wherein adjusting the pixel values of the images is based upon the instantaneous position. 18. The method of claim 17, wherein the instantaneous position of the at least one interfering object is computed based upon a movement model, wherein the movement model includes at least one model parameter. 19. The method of claim 18, wherein at least one of the at least one model parameters is determined during an initialization phase as an initial value. 20.
The ...

20-02-2014 publication date

METHOD AND SYSTEM FOR DETECTING SEA-SURFACE OIL

Number: US20140050355A1
Author: COBB Wesley Kenneth
Assignee: BEHAVIORAL RECOGNITION SYSTEMS, INC.

A behavioral recognition system may include both a computer vision engine and a machine learning engine configured to observe and learn patterns of behavior in video data. Certain embodiments may be configured to detect and evaluate the presence of sea-surface oil on the water surrounding an offshore oil platform. The computer vision engine may be configured to segment image data into detected patches or blobs of surface oil (foreground) present in the field of view of an infrared camera (or cameras). A machine learning engine may evaluate the detected patches of surface oil to learn to distinguish between sea-surface oil incident to the operation of an offshore platform and the appearance of surface oil that should be investigated by platform personnel. 1. A computer-implemented method for analyzing a scene depicted in an input stream of video frames, the method comprising: for one or more of the video frames, identifying one or more foreground blobs in the video frames, wherein each foreground blob corresponds to one or more contiguous pixels of the video frame determined to depict sea-surface oil; and evaluating the one or more foreground blobs to derive expected patterns of observations of sea-surface oil. 2. The method of claim 1, wherein the input stream of video frames is generated by combining filtered video frames generated by three or more long-wavelength infrared (LWIR) cameras whose signals are filtered by band-pass filters allowing light in respective wavelength ranges to pass. 3. The method of claim 2, wherein the wavelength ranges are substantially 8-9 micrometers, 8-11.5 micrometers, and 8-13 micrometers. 4. The method of claim 2, wherein the signals of the LWIR cameras are further filtered by one or more polarizing filters. 5. The method of claim 2, wherein the LWIR cameras are mounted to observe an area surrounding an offshore oil platform. 6. The method of claim 2, wherein combining the video frames includes determining ...

20-02-2014 publication date

MULTI-MODE VIDEO EVENT INDEXING

Number: US20140050356A1

Multi-mode video event indexing includes determining a quality of object distinctiveness with respect to images from a video stream input. A high-quality analytic mode is selected from multiple modes and applied to video input images via a hardware device to determine object activity within the video input images if the determined level of detected quality of object distinctiveness meets a threshold level of quality, else a low-quality analytic mode is selected and applied to the video input images via a hardware device to determine object activity within the video input images, wherein the low-quality analytic mode is different from the high-quality analytic mode. 1. A method for multi-mode video event indexing , the method comprising:determining by a processor a level of density of foreground object activity detected with respect to video input images from a video stream input relative to an entirety of the image;in response to the determined level of density of the detected foreground object activity meeting a threshold density value, the processor selecting from a plurality of video analytics modes and applying an object tracking-based analytic mode to the detected foreground object activity of the video input images to track a foreground object; andin response to the determined level of density of the detected foreground object activity not meeting the threshold density value, the processor selecting from the plurality of video analytics modes and applying a non-object tracking-based analytic mode to the detected foreground object activity of the video input images to determine object movement from extracted foreground object appearance attributes without tracking the foreground object.2. The method of claim 1 , further comprising:integrating computer-readable program code into a computer system comprising the processor, a computer readable memory and a computer readable storage medium, wherein the computer readable program code is embodied on the computer ...
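Claim 1's mode switch can be sketched as a single threshold test on foreground density. This follows the claim's literal wording (density meeting the threshold selects the tracking-based mode); the threshold value and mode names are illustrative assumptions.

```python
def select_mode(foreground_density, threshold=0.3):
    """Pick the analytic mode from the density of detected foreground
    object activity relative to the whole image (claim 1 wording).

    foreground_density: fraction of the image occupied by foreground activity.
    threshold: illustrative density threshold, not a value from the patent.
    """
    return "tracking" if foreground_density >= threshold else "non_tracking"
```

In the non-tracking branch, object movement is then determined from extracted foreground appearance attributes rather than by following individual objects.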

20-02-2014 publication date

ROUTE CHANGE DETERMINATION SYSTEM AND METHOD USING IMAGE RECOGNITION INFORMATION

Number: US20140050362A1
Assignee: PLK TECHNOLOGIES CO., LTD.

Provided is a route change determination system and method using image recognition information, which is capable of extracting position information having high precision similar to that of a high-precision DGPS device, while using a low-precision GPS device, in order to determine a change of a traveling route. 1. A route change determination system using image recognition information, comprising: a GPS module; an image recognition module having a line recognition function; a road map storage module configured to store road map information and route change possible section information for changing a route of a vehicle; a road map receiving module configured to receive the road map information; and an information processing module configured to determine whether the route is changed or not, based on line recognition information acquired through the image recognition module and the route change possible section information. 2. The route change determination system of claim 1, wherein the road map information stored in the road map storage module comprises line characteristic information, the information processing module further comprises an information matching unit configured to calculate a traveling lane by matching the line recognition information to the line characteristic information, and when the image recognition module recognizes that a part of the vehicle is positioned over a route change line, the information matching unit determines that a lane change for the route change is being performed. 3. The route change determination system of claim 1, wherein the road map information stored in the road map storage module comprises line characteristic information, the information processing module further comprises an information matching unit configured to calculate a traveling lane by matching the line recognition information to the line characteristic information, and when the image recognition module recognizes that the entire vehicle departs from the route ...

06-03-2014 publication date

MOTION-VALIDATING REMOTE MONITORING SYSTEM

Number: US20140064560A1
Assignee:

A method of autonomously monitoring a remote site, including the steps of locating a primary detector at a site to be monitored; creating one or more geospatial maps of the site using an overhead image of the site; calibrating the primary detector to the geospatial map using a detector-specific model; detecting an object in motion at the site; tracking the moving object on the geospatial map; and alerting a user to the presence of motion at the site. In addition, thermal image data from infrared cameras, rather than optical/visual image data, is used to create detector-specific models and geospatial maps in substantially the same way that optical cameras and optical image data would be used. 1. A method of autonomously tracking an object in motion at a remote site, including: providing a first detector for a site to be monitored, wherein the first detector is adapted to capture terrain data of a first portion of the site; defining a three-dimensional (3D) terrain model modeling substantially all terrain of the site, the 3D terrain model associating 3D coordinates with geospatial locations of the site; detecting a change in a portion of the terrain of the first portion of the site based on the terrain data captured by the first detector by evaluating one or more dynamically-updated pixels; determining that the change in the portion of the terrain corresponds to an object in motion; determining a first location of the object in motion as defined by a first set of 3D coordinates of the 3D terrain model; determining an expected second location of the object in motion defined by a second set of 3D coordinates; directing the first detector to change the set of detector parameters in preparation for detection of the object in motion at the expected second location, prior to the arrival of the object in motion, thereby causing the object in motion to remain within a field of detection of the first detector; causing the first detector to detect the object in motion at the ...

06-03-2014 publication date

Object Information Derived from Object Images

Number: US20140064561A1
Assignee: NANT HOLDINGS IP LLC

Search terms are derived automatically from images captured by a camera equipped cell phone, PDA, or other image capturing device, submitted to a search engine to obtain information of interest, and at least a portion of the resulting information is transmitted back locally to, or nearby, the device that captured the image.

06-03-2014 publication date

INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD

Number: US20140064570A1
Author: Miyakoshi Hidehiko
Assignee: TOSHIBA TEC KABUSHIKI KAISHA

In accordance with one embodiment, an information processing apparatus includes an area specification module configured to specify a sufficiency area meeting a specified temperature condition from a thermography representing the temperature distribution in an image capturing area, an extraction module configured to extract a feature amount from a specified area in a captured image of the image capturing area based on the sufficiency area specified by the area specification module, and an object recognition module configured to recognize an object contained in the captured image using the feature amount extracted by the extraction module. 1. An information processing apparatus, comprising: an area specification module configured to specify a sufficiency area meeting a specified temperature condition from a thermography representing the temperature distribution in an image capturing area; an extraction module configured to extract a feature amount from a specified area in a captured image of the image capturing area based on the sufficiency area specified by the area specification module; and an object recognition module configured to recognize an object contained in the captured image using the feature amount extracted by the extraction module. 2. The information processing apparatus according to claim 1, wherein the area specification unit specifies an area of which at least one of a shape or an arrangement position of an area meeting the temperature condition meets a specified condition as a sufficiency area. 3. The information processing apparatus according to claim 1, wherein the area specification unit specifies the sufficiency area as an elimination area eliminated from an object subjected to feature amount extraction; the extraction unit extracts the feature amount from an area in which the elimination area is eliminated in the captured image. 4. The information processing apparatus according to claim 1, wherein the area specification unit specifies the ...

13-03-2014 publication date

DRIVER ASSISTANCE SYSTEM FOR VEHICLE

Number: US20140071285A1
Assignee: DONNELLY CORPORATION

A driver assistance system for a vehicle includes a forward facing camera and a processor operable to process image data captured by the camera. Responsive to processing of captured image data, the driver assistance system is operable to determine a lane along which the vehicle is traveling and to detect oncoming vehicles approaching the vehicle in another lane that is to the right or left of the determined lane along which the vehicle is traveling. The driver assistance system is operable to control, at least in part, a light beam emanating from a headlamp of the vehicle and adjusts the light beam emanating from the headlamp to limit directing beam light towards the eyes of a driver of the detected oncoming vehicle. Responsive to processing of captured image data, the driver assistance system is operable to provide lane departure warning to a driver of the vehicle. 1. A driver assistance system for a vehicle , said driver assistance system comprising:a forward facing camera disposed at a windshield of a vehicle equipped with said driver assistance system, said forward facing camera having a forward field of view through the windshield of the equipped vehicle;wherein said forward facing camera is operable to capture image data;a processor operable to process captured image data;wherein, responsive to processing of captured image data, said driver assistance system is operable to determine a lane along which the equipped vehicle is traveling;wherein, responsive to processing of captured image data, said driver assistance system is operable to detect oncoming vehicles approaching the equipped vehicle in another lane that is to the right or left of the determined lane along which the equipped vehicle is traveling;wherein said driver assistance system is operable to determine whether a detected oncoming vehicle is in a lane to the right of the determined lane along which the equipped vehicle is traveling or in a lane to the left of the determined lane along which the 
...

13-03-2014 publication date

SEMANTIC REPRESENTATION MODULE OF A MACHINE LEARNING ENGINE IN A VIDEO ANALYSIS SYSTEM

Number: US20140072206A1
Assignee:

A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames. 1. A method for processing data describing a scene depicted in a sequence of video frames, the method comprising: receiving input data describing one or more objects detected in the scene, wherein the input data includes at least a classification for each of the one or more objects; identifying one or more primitive events, wherein each primitive event provides a semantic value describing a behavior engaged in by at least one of the objects depicted in the sequence of video frames and wherein each primitive event has an assigned primitive event symbol; generating, for one or more objects, a primitive event symbol stream which includes the primitive event symbols corresponding to the primitive events identified for a respective object; generating, for one or more objects, a phase space symbol stream, wherein the phase space symbol stream describes a trajectory for a respective object through a phase space domain; combining the primitive event symbol stream and the phase space symbol stream for each respective object to form a first vector representation of that object; and passing the first vector representations to a machine learning engine configured to identify patterns of behavior for each object classification from the first vector representation. This application is a continuation of U.S. patent ...

20-03-2014 publication date

METHOD OF DETECTING CAMERA TAMPERING AND SYSTEM THEREOF

Number: US20140078307A1
Assignee: SAMSUNG TECHWIN CO., LTD.

A method of detecting camera tampering and a system therefor are provided. The method includes: performing at least one of the following operations: (i) detecting a size of a foreground in an image, and determining whether a first condition, that the size exceeds a first reference value, is satisfied, (ii) detecting change of a sum of the largest pixel value differences among pixel value differences between adjacent pixels in selected horizontal lines of the image, according to time, and determining whether a second condition, that the change lasts for a predetermined time period, is satisfied, and (iii) adding up a plurality of global motion vectors with respect to a plurality of images, and determining whether a third condition, that a sum of the global motion vectors exceeds a second reference value, is satisfied; and determining occurrence of camera tampering if at least one of the corresponding conditions is satisfied. 1. A method of detecting camera tampering from at least one image captured by at least one camera, the method comprising: performing at least one of the following operations: (i) detecting a size of a foreground region in an image, and determining whether a first condition, that the size exceeds a first reference value, is satisfied; (ii) detecting change of a sum of the largest pixel value differences among pixel value differences between adjacent pixels in selected horizontal lines of the image, according to time, and determining whether a second condition, that the change lasts for a predetermined time period, is satisfied; and (iii) adding up a plurality of global motion vectors with respect to a plurality of images, and determining whether a third condition, that a sum of the plurality of global motion vectors exceeds a second reference value, is satisfied; and determining that camera tampering has occurred if the first condition is satisfied when operation (i) is performed, if the second condition is satisfied when operation (ii) is ...
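The three conditions in the abstract combine disjunctively: any one of them being satisfied signals tampering (e.g. covering, defocusing, or moving the camera). A sketch of that decision; all parameter names and reference values are illustrative assumptions, and the upstream measurements (foreground size, edge-sum change duration, summed global motion) are taken as given.

```python
def detect_tampering(foreground_size, size_ref,
                     edge_change_duration, min_duration,
                     global_motion_sum, motion_ref):
    """Declare tampering if any of the three conditions holds:
    (i) the foreground region is abnormally large,
    (ii) the change in the horizontal-line edge sum persists long enough,
    (iii) the accumulated global motion vectors exceed the reference value.
    """
    cond_foreground = foreground_size > size_ref
    cond_edges = edge_change_duration >= min_duration
    cond_motion = global_motion_sum > motion_ref
    return cond_foreground or cond_edges or cond_motion
```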

27-03-2014 publication date

Distance Measurement by Means of a Camera Sensor

Number: US20140085428A1
Author: Stählin Ulrich
Assignee: Continental Teves AG & Co. oHG

The invention relates to a distance-determining method by means of a camera sensor, wherein a distance between the camera sensor and a target object is determined on the basis of camera information, which method is defined by the fact that the camera information comprises a spatial extent of the region covered by the target object on a light sensor in the camera sensor.

1. A distance determination method by means of a camera sensor, comprising the steps of capturing a piece of camera information as a basis for determining a distance from the camera sensor to a target object, in that the camera information is related to a physical extent of a region covered by the target object on a photo sensor in the camera sensor.
2. The method as claimed in claim 1, further comprising in that the camera information related to a physical extent of a region covered by the target object on a photo sensor in the camera sensor is logically combined with a further piece of information about the physical extent of the target object.
3. The method as claimed in claim 1, further comprising in that the camera sensor is provided in the form of a mono camera sensor.
4. The method as claimed in, further comprising in that the camera sensor is provided in the form of a stereo camera sensor.
5. The method as claimed in, further comprising in that the camera information related to a physical extent of a region covered by the target object on the photo sensor is detected in the camera sensor along a horizontal axis.
6. The method as claimed in, further comprising in that the camera information related to a physical extent of a region covered by the target object on the photo sensor is detected in the camera sensor along a vertical axis.
7. The method as claimed in, further comprising in that the camera information related to a physical extent is an area of the region covered by the target object on the photo sensor in the camera sensor.
8. The method as claimed in, further comprising in that the further ...
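The principle behind claim 1 (distance from the target's physical extent on the photo sensor) is the classic pinhole-camera relation Z = f · W / w, where W is the target's known real-world width and w its extent on the sensor. The sketch below is my own illustration of that relation; the parameter names and the example figures (car width, focal length, pixel pitch) are illustrative assumptions, not values from the patent.

```python
def distance_from_extent(real_width_m, focal_length_mm, extent_px, pixel_pitch_mm):
    # Similar triangles: a target of real width W at distance Z projects
    # to w = f * W / Z on the sensor, so Z = f * W / w. The extent on the
    # sensor is the pixel count times the physical pixel pitch.
    extent_on_sensor_mm = extent_px * pixel_pitch_mm
    return real_width_m * focal_length_mm / extent_on_sensor_mm

# Example (assumed values): a 1.8 m wide car imaged through a 6 mm lens
# spans 200 pixels on a sensor with 3 um pixel pitch, i.e. 0.6 mm on the
# sensor, giving a distance of 1.8 * 6 / 0.6 = 18 m.
```

This is why the claims distinguish horizontal and vertical axes: whichever physical extent of the target is known (width or height) fixes which axis of the sensor footprint to measure.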
