Total found: 7480. Displayed: 100.
Publication date: 25-10-2012

Method and system for identifying a license plate

Number: US20120269398A1
Assignee: Xerox Corp

A license plate localization method and system based on a combination of a top-down texture analysis and a bottom-up connected component. An image with respect to a vehicle captured by an image capturing unit can be processed in order to locate and binarize a busy area. A black run with respect to the binarized image can be analyzed and classified and one or more objects (connected components) can be generated based on the black run classification. The objects can be further classified in accordance with their size utilizing a run-length based filter to filter out a non-text object. The leftover objects can then be spatially clustered and the uniformity and linearity of the clustered objects can be examined based on a linearity test. The clustered objects can be rejected if they fail the linearity test and the detected objects can further be matched with a plate edge characteristic in order to locate a license plate.
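The black-run classification at the heart of this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the patented implementation; the function names and the stroke-width limits in `filter_text_runs` are assumptions:

```python
def black_runs(row):
    """Return (start, length) for each run of 1s (black pixels) in a binary row."""
    runs, start = [], None
    for i, px in enumerate(row):
        if px and start is None:
            start = i
        elif not px and start is not None:
            runs.append((start, i - start))
            start = None
    if start is not None:
        runs.append((start, len(row) - start))
    return runs

def filter_text_runs(runs, min_len=2, max_len=6):
    """Keep runs whose length is plausible for character strokes (text-like)."""
    return [r for r in runs if min_len <= r[1] <= max_len]

row = [0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0]
runs = black_runs(row)          # [(1, 3), (6, 8), (15, 2)]
text = filter_text_runs(runs)   # the 8-pixel run is too wide and is dropped
```

Run-length filtering suits plate text because characters produce narrow, regularly sized black runs, while large solid blobs (shadows, bumper regions) do not.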

Publication date: 07-11-2013

Methods and systems for optimized parameter selection in automated license plate recognition

Number: US20130294653A1
Assignee: Xerox Corp

A system and method for automatically recognizing license plate information, the method comprising receiving an image of a license plate, and generating a plurality of image processing data sets, wherein each image processing data set of the plurality of image processing data sets is associated with a score of a plurality of scores by a scoring process comprising determining one or more image processing parameters, generating the image processing data set by processing the image using the one or more image processing parameters, generating the score based on the image processing data, and associating the image processing data set with the score.
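The scoring process described above is essentially a parameter sweep: process the image under each candidate parameter set, score the result, and keep the best-scoring set. A minimal sketch, with a stand-in `score_candidate` in place of the patent's undisclosed scoring function:

```python
from itertools import product

def score_candidate(image, params):
    # Hypothetical scorer: in the patent this would be derived from the image
    # processed with `params` (e.g. OCR confidence); this is a toy stand-in.
    return -abs(params["threshold"] - 128) - abs(params["blur"] - 3)

def best_parameters(image, grid):
    """Try every combination in `grid`, score each, return (score, params)."""
    candidates = [dict(zip(grid, values)) for values in product(*grid.values())]
    scored = [(score_candidate(image, p), p) for p in candidates]
    return max(scored, key=lambda sp: sp[0])

grid = {"threshold": [96, 128, 160], "blur": [1, 3, 5]}
best_score, best_params = best_parameters(None, grid)
# best_params -> {"threshold": 128, "blur": 3}
```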

Publication date: 27-02-2014

Region refocusing for data-driven object localization

Number: US20140056520A1
Assignee: Xerox Corp

A system and method are provided for segmenting an image. The method includes computing an image signature for an input image. One or more similar images are identified from a first set of images, based on the image signature of the input image and image signatures of images in the first set of images. The similar image or images are used to define a cropped region of the input image and a second image signature is computed, this time for the cropped region. One or more similar images are identified from a second set of images, based on the cropped image signature and the image signatures of images in the second set of images. The input image is segmented based on a segmentation map of at least one of the similar images identified in the second set of images.
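The retrieval step, finding the stored images whose signatures are closest to the query signature, can be sketched with a simple cosine-similarity search. The three-component signatures here are toy stand-ins for real image signatures (e.g. Fisher vectors):

```python
import math

def cosine(a, b):
    """Cosine similarity between two signature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def most_similar(query_sig, dataset):
    """dataset: list of (name, signature); returns the best-matching name."""
    return max(dataset, key=lambda item: cosine(query_sig, item[1]))[0]

first_set = [("car", [0.9, 0.1, 0.0]), ("cat", [0.1, 0.8, 0.1])]
best = most_similar([0.8, 0.2, 0.0], first_set)
print(best)  # -> car
```

In the patented two-pass scheme, this search runs once on the full image against the first set, then again on the cropped region against the second set.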

Publication date: 27-03-2014

Detecting a label from an image

Number: US20140086483A1
Assignee: Alibaba Group Holding Ltd

Determining a label from an image is disclosed, including: obtaining an image; determining a first portion of the image associated with a special mark; determining a second portion of the image associated with a label based at least in part on the first portion of the image associated with the special mark; and applying character recognition to the second portion of the image associated with the label to determine a value associated with the label.

Publication date: 04-01-2018

VEHICLE LOCALIZATION SYSTEM AND VEHICLE LOCALIZATION METHOD

Number: US20180003505A1

A vehicle localization system is provided which localizes a system-equipped vehicle. The vehicle localization system determines a position of the system-equipped vehicle on a map using a map matching technique. The vehicle localization system also calculates a variation in arrangement of feature points (e.g., edge points) of a roadside object around the system-equipped vehicle in a captured image and corrects the calculated position of the system-equipped vehicle on the map using the variation in arrangement of the feature points. This ensures a required accuracy in localizing the system-equipped vehicle. 1. A vehicle localization system which works to localize on a road a system-equipped vehicle in which this system is mounted comprising:a vehicle position calculator which calculates a vehicle position that is a position of the system-equipped vehicle on a map using a map-matching technique;a feature detector which detects a given roadside object existing around the system-equipped vehicle and produces an output indicative thereof;a feature extractor which analyzes the output from the feature detector to extract feature points of the roadside object which are arranged in a lengthwise direction of the road;a variation calculator which calculates a variation in arrangement of the feature points in a lateral direction of the system-equipped vehicle; anda corrector which corrects the vehicle position, as calculated by the vehicle position calculator, based on the variation, as calculated by the variation calculator.2. A vehicle localization system as set forth in claim 1 , wherein the variation calculator obtains from the map a boundary line of the road extending in the lengthwise direction of the road and determines the variation in arrangement of the feature points using distances between the boundary line and the respective feature points in the lateral direction of the system-equipped vehicle.3. A vehicle localization system as set forth in claim 1 , further ...
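One plausible reading of the correction step: measure how consistently the roadside feature points line up laterally against the map's boundary line, and, when they do, use their mean offset to correct the map-matched position. The sketch below is an assumption-laden illustration, not the claimed method; the correction rule and the `max_spread` gate are invented for the example:

```python
def lateral_variation(feature_xs, boundary_x):
    """Spread (std dev) and mean of feature-point lateral distances
    to the map boundary line."""
    d = [x - boundary_x for x in feature_xs]
    mean = sum(d) / len(d)
    spread = (sum((v - mean) ** 2 for v in d) / len(d)) ** 0.5
    return spread, mean

def correct_position(map_x, feature_xs, boundary_x, max_spread=0.5):
    """Hypothetical rule: trust the roadside feature points only when
    they line up consistently, then subtract their mean offset."""
    spread, mean_offset = lateral_variation(feature_xs, boundary_x)
    if spread <= max_spread:
        return map_x - mean_offset
    return map_x

corrected = correct_position(10.0, [3.0, 3.1, 2.9], boundary_x=0.0)
# corrected is approximately 7.0 (mean lateral offset 3.0 subtracted)
```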

Publication date: 07-01-2021

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, IMAGE PROCESSING PROGRAM, AND RECORDING MEDIUM STORING PROGRAM

Number: US20210004596A1
Assignee: FUJIFILM Corporation

Provided are an image processing apparatus, an image processing method, an image processing program, and a recording medium for the program, which make it possible to understand a unique way of spending time of a certain user at an event. 1. An image processing apparatus comprising:a reading device that reads, from a storage device in which a plurality of images captured at the same event are stored, the plurality of images; andan attribute detecting device that detects a first attribute obtained from a first image group captured by a first user, which is different from a second attribute obtained from a second image group captured by second users of which the number is larger than the number of the first user, among the plurality of images read by the reading device.2. The image processing apparatus according to claim 1 , further comprising:a subject detecting device that detects a subject from each of the plurality of images,wherein the attribute detecting device detects the first attribute obtained from a first subject detected from the first image group, which is different from the second attribute obtained from a second subject detected from the second image group.3. The image processing apparatus according to claim 2 ,wherein the number of the second subjects is larger than the number of the first subject.4. The image processing apparatus according to claim 1 , further comprising:a main subject detecting device that detects a main subject from each of the plurality of images,wherein the attribute detecting device detects the first attribute obtained from a first main subject detected from the first image group, which is different from the second attribute obtained from a second main subject detected from the second image group.5. The image processing apparatus according to claim 4 ,wherein the number of the second main subjects is larger than the number of the first main subject.6. The image processing apparatus according to claim 1 ,wherein the first ...

Publication date: 07-01-2021

REAL TIME OBJECT SURFACE IDENTIFICATION FOR AUGMENTED REALITY ENVIRONMENTS

Number: US20210004599A1
Assignee: Microsoft Technology Licensing, LLC

This disclosure describes how to identify objects in an augmented reality environment. More specifically, the various systems and methods described herein describe how an augmented reality device can recognize objects within a real world environment, determine where the object is located, and also identify the various surfaces of the object in real time or substantially real time. 120.-. (canceled)21. A method for recognizing a real world object for an augmented reality environment , comprising:receiving a plurality of images of a real world environment, each of the plurality of images being taken from a different viewpoint;analyzing the plurality of images to identify an object and a type of the object contained within the plurality of images, wherein identifying the object comprises associating an object label with a pixel or a group of pixels associated with the object, and wherein the object label identifies the type of the object;analyzing the plurality of images to identify a location of one or more edges of the object within the plurality of images, wherein the one or more edges of the object are associated with one or more surfaces of the object in the real world environment;projecting the determined location of the one or more surfaces of the object within the real world environment to a corresponding location in an enhanced depth map of an augmented reality environment;associating the object label to the corresponding location in the enhanced depth map; andgenerating a hologram within the enhanced depth map, wherein the hologram interacts with the object in the augmented reality environment based, at least in part, on the object label.22. The method of claim 21 , wherein projecting the determined location of the one or more surfaces of the object within the real world environment to the corresponding location in the enhanced depth map is performed using reverse ray tracing.23. 
The method of claim 21 , further comprising:based on the one or more surfaces of ...

Publication date: 07-01-2021

ROAD ENVIRONMENT MONITORING DEVICE, ROAD ENVIRONMENT MONITORING SYSTEM, AND ROAD ENVIRONMENT MONITORING PROGRAM

Number: US20210004612A1
Author: MISAWA Hideaki, Muto Kenji

A server device includes: a data collection unit that collects vehicle behavior data; a scene extraction unit that extracts, from the collected vehicle behavior data, driving scenes corresponding to the behavior of the vehicle and a scene feature amount of each of the driving scenes; an abnormality detection unit that calculates a degree of abnormality which represents an extent of deviation of the scene feature amount of each of the extracted driving scenes relative to a driving model, and detects a driving scene including a location of an abnormality using the calculated degree of abnormality; a section determination unit that extracts driving scenes satisfying a predetermined condition from among the detected driving scene and a plurality of driving scenes continuous to the driving scene, and determines, as an abnormal behavior section, a time range defined by the total continuation time of the extracted driving scenes; an image request unit that requests, from the vehicle, one or more captured images according to the determined abnormal behavior section; and a display control unit that performs control to display the images acquired from the vehicle according to the request. 1. 
A road environment monitoring device comprising:a data collection unit that collects vehicle behavior data which represents behavior of a vehicle and with which time and position are associated;a scene extraction unit that extracts, from the vehicle behavior data collected by the data collection unit, driving scenes corresponding to the behavior of the vehicle and a scene feature amount of each of the driving scenes;an abnormality detection unit that calculates a degree of abnormality which represents an extent of deviation of the scene feature amount of each of the driving scenes extracted by the scene extraction unit relative to a driving model which represents a characteristic of typical vehicle behavior data, and detects a driving scene including a location of an abnormality using the ...
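The degree-of-abnormality and section-determination steps can be illustrated with a z-score against the driving model and a merge of contiguous flagged scenes. The threshold and the "longest contiguous run" rule are assumptions for the sketch:

```python
def abnormality_degree(feature, model_mean, model_std):
    """Z-score style deviation of a scene feature from the driving model."""
    return abs(feature - model_mean) / model_std

def abnormal_section(scenes, model_mean, model_std, threshold=3.0):
    """scenes: list of (duration_s, feature). Returns the total continuation
    time of the longest contiguous run of abnormal scenes (hypothetical rule)."""
    flags = [abnormality_degree(f, model_mean, model_std) > threshold
             for _, f in scenes]
    best, run = 0.0, 0.0
    for (dur, _), flagged in zip(scenes, flags):
        run = run + dur if flagged else 0.0
        best = max(best, run)
    return best

scenes = [(2.0, 0.1), (1.5, 2.9), (1.0, 3.2), (2.0, 0.2)]
print(abnormal_section(scenes, model_mean=0.0, model_std=0.5))  # -> 2.5
```

The resulting time range is what the image request unit would use to ask the vehicle for captured images.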

Publication date: 04-01-2018

Systems and methods of using z-layer context in logic and hot spot inspection for sensitivity improvement and nuisance suppression

Number: US20180005367A1

Systems and methods for removing nuisance data from a defect scan of a wafer are disclosed. A processor receives a design file corresponding to a wafer having one or more z-layers. The processor receives critical areas of the wafer and instructs a subsystem to capture corresponding images of the wafer. Defect locations are received and the design file is aligned with the defect locations. Nuisance data is identified using the potential defect location and the one or more z-layers of the aligned design file. The processor then removes the identified nuisance data from the one or more potential defect locations. 1. A method for removing nuisance data comprising:receiving, at a processor, a design file corresponding to a wafer, the design file having one or more z-layers;receiving, at the processor, one or more critical areas of the wafer;instructing an image data acquisition subsystem to capture one or more images corresponding to the one or more critical areas of the wafer;receiving, at the processor, one or more potential defect locations in the one or more images corresponding to the one or more critical areas of the wafer;aligning, using the processor, the design file with the one or more potential defect locations corresponding to the one or more critical areas of the wafer;identifying, using the processor, nuisance data in the one or more potential defect locations based on each potential defect location and the one or more z-layers of the aligned design file; andremoving, using the processor, the identified nuisance data from the one or more potential defect locations.2. The method of claim 1 , further comprising:analyzing the design file, using the processor, to determine the one or more critical areas of the wafer based on pre-determined design rules.3. The method of claim 1 , wherein the nuisance data is identified based on whether the location of each potential defect location is proximal to pattern data in each z-layer of the aligned design file.4. The ...
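Claim 3 suggests nuisance is judged by whether a potential defect is proximal to pattern data in the aligned z-layers. Under that reading, a reported defect far from any drawn pattern in every z-layer is nuisance and is dropped. A geometric sketch, with rectangles standing in for pattern data and a fixed proximity margin (both assumptions):

```python
def near_pattern(defect, rects, margin=1.0):
    """True if `defect` (x, y) lies within `margin` of any pattern rectangle."""
    x, y = defect
    for x0, y0, x1, y1 in rects:
        if x0 - margin <= x <= x1 + margin and y0 - margin <= y <= y1 + margin:
            return True
    return False

def remove_nuisance(defects, z_layers, margin=1.0):
    """Keep only defects that sit near pattern data in at least one z-layer."""
    return [d for d in defects
            if any(near_pattern(d, layer, margin) for layer in z_layers)]

z_layers = [[(0, 0, 10, 10)], [(20, 20, 30, 30)]]   # one rect list per z-layer
defects = [(5, 5), (50, 50), (21, 25)]
print(remove_nuisance(defects, z_layers))  # -> [(5, 5), (21, 25)]
```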

Publication date: 03-01-2019

IMAGE DATA PROCESSING METHOD, ELECTRONIC DEVICE AND STORAGE MEDIUM

Number: US20190005674A1
Author: Chang Chifeng

Embodiments of the present disclosure provide an image data processing method, an image data processing apparatus, an electronic device and a storage medium. The image data processing method includes: during a preset first collecting time period, collecting regularly a plurality of first user images corresponding to a target user, and extracting first part image data in each of the plurality of first user images; recording position information of the first part image data in each of the plurality of first user images on a display interface; if it is determined that the target user is at a preset stable state, performing a statistical processing to obtain a total movement times corresponding to the first part image data within a preset statistical time period; and if the total movement times reaches a preset threshold, performing an image processing on second part image data on a current display interface. 1. An image data processing method , comprising:during a preset first collecting time period, collecting regularly a plurality of first user images corresponding to a target user, and extracting first part image data in each of the plurality of first user images;recording position information of the first part image data in each of the plurality of first user images on a display interface;when it is determined that the target user is at a preset stable state according to the position information of the first part image data in each of the plurality of first user images on the display interface, performing a statistical processing to obtain a total movement times corresponding to the first part image data within a preset statistical time period; andwhen the total movement times corresponding to the first part image data reaches a preset threshold, performing an image processing on second part image data on a current display interface.2. The image data processing method according to claim 1 , wherein the first part image data comprises image data of a face organ.3. 
...
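The statistical step above reduces to counting how often the tracked part (e.g. a face organ region) shifts by more than some pixel distance between samples, then comparing the count to the preset threshold. The distance metric and thresholds here are assumptions:

```python
def count_movements(positions, min_shift=5.0):
    """Count samples where the tracked part moved more than `min_shift`
    pixels since the previous sample (hypothetical stability rule)."""
    moves = 0
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        if ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 > min_shift:
            moves += 1
    return moves

track = [(100, 100), (101, 100), (120, 100), (120, 130)]
total = count_movements(track)   # 2 shifts exceed 5 px
trigger = total >= 2             # reaching the preset threshold triggers
                                 # the image processing on the display
```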

Publication date: 03-01-2019

VEHICLE DETERMINATION APPARATUS, VEHICLE DETERMINATION METHOD, AND COMPUTER READABLE MEDIUM

Number: US20190005814A1
Assignee: Mitsubishi Electric Corporation

A direction identification unit () identifies a traveling direction in which a surrounding vehicle travels in a partial region of a region indicated by image information () obtained by photographing using a camera (). A feature amount acquisition unit () acquires a reference feature amount () being a feature amount computed from a reference image corresponding to the identified traveling direction. A vehicle determination unit () computes an image feature amount being a feature amount of the image information of the partial region and compares the computed image feature amount with the acquired reference feature amount (), thereby determining whether the surrounding vehicle is present in the partial region. 110-. (canceled)11. A vehicle determination apparatus comprising:processing circuitry to:identify a traveling direction in which a surrounding vehicle travels in a partial region of a region indicated by image information obtained by photographing a range including a parallel lane with a same traveling direction as a traveling direction of a vehicle and an opposite lane with a traveling direction opposite to the traveling direction of the vehicle by a camera mounted on the vehicle, based on a position of the partial region in a horizontal direction of the region indicated by the image information;acquire a reference feature amount being a feature amount computed from a reference image corresponding to the identified traveling direction; andcompute an image feature amount being a feature amount of the image information of the partial region and compare the computed image feature amount with the acquired reference feature amount, thereby determining whether or not the surrounding vehicle is present in the partial region.12. A vehicle determination apparatus comprising:processing circuitry to:identify a traveling direction in which a surrounding vehicle travels in a partial region of a region indicated by image information obtained by photographing by a camera; ...
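Claim 11 keys the traveling direction off the partial region's horizontal position in the frame, then compares the region's feature amount with a direction-specific reference. A toy sketch: the left/right split assumes right-hand traffic and a forward-facing camera, and the two-component features with a cosine comparison are stand-ins:

```python
def traveling_direction(region_center_x, image_width):
    """Left half of the frame ~ opposite lane, right half ~ parallel lane
    (assumes right-hand traffic)."""
    return "oncoming" if region_center_x < image_width / 2 else "parallel"

def vehicle_present(region_feature, references, direction, threshold=0.9):
    """Compare the region's feature vector with the reference for that
    direction; high similarity means a vehicle is present."""
    ref = references[direction]
    dot = sum(a * b for a, b in zip(region_feature, ref))
    norm = (sum(a * a for a in region_feature) ** 0.5) * \
           (sum(b * b for b in ref) ** 0.5)
    return dot / norm >= threshold

refs = {"oncoming": [1.0, 0.0], "parallel": [0.0, 1.0]}
d = traveling_direction(100, 640)              # -> "oncoming"
print(vehicle_present([0.95, 0.05], refs, d))  # -> True
```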

Publication date: 20-01-2022

SYSTEMS AND METHODS FOR IDENTIFYING A SERVICE QUALIFICATION OF A MULTI-UNIT BUILDING BASED ON EXTERIOR ACCESS

Number: US20220019793A1
Assignee: VERIZON PATENT AND LICENSING INC.

A device may receive building location information associated with a multi-unit building. The device may obtain an image that depicts the multi-unit building. The device may process, using a building analysis model, the image to identify exterior access features of the multi-unit building. The building analysis model may be trained based on a plurality of historical images of other exterior access features. The device may determine, using a scoring system and based on a configuration of exterior access features that are identified by the building analysis model, an exterior accessibility score of the unit. The device may perform, based on the exterior accessibility score, an action associated with qualifying the unit for installation of a service that involves access, from the unit, to an exterior of the multi-unit building. 1. A method , comprising:receiving, by a device, a service request to qualify a unit of a multi-unit building to receive a service;using, by the device and based on building location information in the service request, a geographical information system to obtain an image that depicts a facade of the multi-unit building;processing, by the device and using a building analysis model, the image to identify exterior access features of the multi-unit building;determining, by the device and based on a configuration of exterior access features that are identified by the building analysis model, an exterior accessibility score of the unit; 'wherein the service qualification metric is associated with a capability of receiving the service within the unit; and', 'determining, by the device and based on the exterior accessibility score, a service qualification metric for the unit,'}performing, by the device, an action associated with the service qualification metric.2. 
The method of claim 1 , wherein the building location information comprises an address of the multi-unit building and the image is associated with a street view of the multi-unit building from ...
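The scoring system itself is not disclosed at this level of detail; a weighted sum over detected exterior-access features is one natural shape for it. The feature names, weights, and qualification threshold below are purely illustrative:

```python
# Hypothetical feature weights; the patent's actual scoring system is not
# disclosed in this abstract, so these numbers are purely illustrative.
WEIGHTS = {"balcony": 3, "fire_escape": 2, "ground_floor_door": 2, "window": 1}

def accessibility_score(detected_features):
    """Exterior accessibility score as a weighted sum of detected features."""
    return sum(WEIGHTS.get(f, 0) for f in detected_features)

def qualifies(detected_features, minimum=4):
    """Service qualification metric: does the score clear the bar?"""
    return accessibility_score(detected_features) >= minimum

print(accessibility_score(["balcony", "window"]))  # -> 4
print(qualifies(["balcony", "window"]))            # -> True
print(qualifies(["window"]))                       # -> False
```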

Publication date: 20-01-2022

IMAGE PROCESSING SYSTEM, APPARATUS, METHOD, AND STORAGE MEDIUM

Number: US20220019835A1
Author: Nanaumi Yoshihito

Character recognition processing is executed on a document image, and a candidate segmentation point is specified in a character string of a recognition result of the character recognition processing. In response to a user specifying a desired position on the document image displayed, a character string corresponding to the specified position is set as an output target and the candidate segmentation point is displayed. In a case where the candidate segmentation point is operated by the user, the character string set as the output target is changed to a character string obtained by segmentation based on the operated candidate segmentation point. 1. A system comprising:a recognition unit configured to execute character recognition processing on a document image;a candidate segmentation unit configured to specify a candidate segmentation point in a character string of a recognition result of the character recognition processing;a display unit configured to display the document image, set a character string corresponding to a position specified by a user on the displayed document image as an output target, and display the candidate segmentation point; anda change unit configured to change, in a case where the displayed candidate segmentation point is operated by the user, the character string set as the output target to a character string based on the candidate segmentation point.2. The system according to claim 1 , further comprising:a text segmentation unit configured to segment the character string of the recognition result, using a regular expression definition which associates a regular expression with a parameter relating to a whitespace character,wherein the candidate segmentation unit specifies the candidate segmentation point in the character string of the recognition result and in a character string obtained by segmentation performed by the text segmentation unit.3. The system according to claim 1 , wherein the display unit sets the character string ...
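Claim 2 mentions regular-expression definitions tied to whitespace characters. A minimal sketch of proposing candidate segmentation points in a recognized string and cutting the output-target string at a point the user picks (the character class used here is an assumption):

```python
import re

def candidate_points(text):
    """Offsets of plausible segmentation points: after runs of whitespace
    and common separators (a stand-in for the patent's regex definitions)."""
    return [m.end() for m in re.finditer(r"[\s,/:-]+", text)]

def segment_at(text, point):
    """Output-target string obtained by cutting at a chosen candidate point."""
    return text[:point].rstrip(" ,/:-"), text[point:]

line = "Invoice No: 12345 2021-07-20"
points = candidate_points(line)         # one offset per separator run
head, tail = segment_at(line, points[1])
print(head, "|", tail)                  # -> Invoice No | 12345 2021-07-20
```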

Publication date: 11-01-2018

IMAGE PROCESSING APPARATUS, IMAGE PICKUP APPARATUS, AND IMAGE PROCESSING METHOD

Number: US20180012058A1
Author: Takahashi Riuma

Provided is an image processing apparatus, including: an acquisition unit configured to acquire information on a layer boundary in tomographic structure of a current subject to be inspected; a determination unit configured to determine a depth range relating to a current en-face image of the subject to be inspected based on information indicating a depth range relating to a past en-face image of the subject to be inspected and the information on the layer boundary; and a generation unit configured to generate the current en-face image through use of data within the depth range relating to the current en-face image among pieces of three-dimensional data acquired for the current subject to be inspected. 1. An image processing apparatus , comprising:an acquisition unit configured to acquire information on a layer boundary in tomographic structure of a current subject to be inspected;a determination unit configured to determine a depth range relating to a current en-face image of the subject to be inspected based on information indicating a depth range relating to a past en-face image of the subject to be inspected and the information on the layer boundary; anda generation unit configured to generate the current en-face image through use of data within the depth range relating to the current en-face image among pieces of three-dimensional data acquired for the current subject to be inspected.2. An image processing apparatus according to claim 1 , wherein the image processing apparatus is configured to use information on a predetermined layer boundary as the information indicating the depth range relating to the past en-face image.3. An image processing apparatus according to claim 1 , wherein the image processing apparatus is configured to use a distance from a predetermined layer boundary as the information indicating the depth range relating to the past en-face image.4. An image processing apparatus according to claim 1 , wherein the image processing apparatus is ...

Publication date: 09-01-2020

VEHICULAR ELECTRONIC DEVICE AND OPERATION METHOD THEREOF

Number: US20200012282A1

Disclosed is an operation method of a vehicular electronic device, including receiving at least one image data from at least one camera installed in a vehicle, by at least one processor, generating a common feature map based on the image data using a convolutional neural network (CNN), by the at least one processor, and providing the common feature map to each of an object detection network, a bottom network, and a three dimensional network, by the at least one processor. 1. An operation method of a vehicular electronic device , the method comprising:receiving at least one image data from at least one camera installed in a vehicle, by at least one processor;generating a common feature map based on the image data using a convolutional neural network (CNN), by the at least one processor; andproviding the common feature map to each of an object detection network, a bottom network, and a three-dimensional network, by the at least one processor.2. The method of claim 1 , wherein the CNN includes a plurality of convolutional layers and at least one pooling layer.3. The method of claim 1 , further comprising extracting a first feature map for detecting an object based on the common feature map using the object detection network claim 1 , by the at least one processor.4. The method of claim 3 , further comprising:predicting a bounding box of the object based on the first feature map, by the at least one processor; andpredicting a type of the object based on the first feature map, by the at least one processor.5. The method of claim 1 , further comprising extracting a second feature map for detecting a bottom based on the common feature map using the bottom network claim 1 , by the at least one processor.6. The method of claim 5 , further comprising:performing upsampling on the second feature map, by the at least one processor; andpredicting a free space and a bottom point of the object based on the upsampled second feature map, by the at least one processor.7. 
The method of ...

Publication date: 14-01-2021

METHOD AND SYSTEM FOR 3D CORNEA POSITION ESTIMATION

Number: US20210012105A1
Assignee: Tobii AB

There is provided a method, system, and non-transitory computer-readable storage medium for performing three-dimensional, 3D, position estimation for the cornea center of an eye of a user, using a remote eye tracking system, wherein the position estimation is reliable and robust also when the cornea center moves over time in relation to an imaging device associated with the eye tracking system. This is accomplished by generating, using, and optionally also updating, a cornea movement filter, CMF, in the cornea center position estimation. 1) A method for performing three-dimensional , 3D , position estimation for the cornea center of an eye of a user , using a remote eye tracking system , when the cornea center moves over time in relation to an imaging device associated with the eye tracking system , the method comprising:generating, using processing circuitry associated with the eye tracking system, a cornea movement filter, CMF, comprising an estimated initial 3D position and an estimated initial 3D velocity of the cornea center of the eye at a first time instance; a first two-dimensional, 2D, glint position in an image captured at a second time instance by applying the cornea movement filter, CMF, wherein the predicted first glint position represents a position where a first glint is predicted to be generated by a first illuminator associated with the eye tracking system; and', 'a second 2D glint position in an image captured at the second time instance by applying the cornea movement filter, CMF, wherein the predicted second glint position represents a position where a glint is predicted to be generated by a second illuminator associated with the eye tracking system;, 'predicting, using the processing circuitry identifying at least one first candidate glint in a first image captured by the imaging device at the second time instance wherein the first image comprises at least part of the cornea of the eye and at least one glint generated by the first illuminator; 
...

Publication date: 14-01-2021

SEGMENTING VIDEO STREAM FRAMES

Number: US20210012114A1

The present invention extends to methods, systems, and computer program products for segmenting video stream frames. In one aspect, video stream frames are segmented into more relevant segments (e.g., including roadway) and less relevant segments (e.g., not including roadway). Different segments can be handled differently. For example, more relevant segments can be processed to identify vehicles, identify events, etc. and less relevant segments may be ignored. Accordingly, resources can be utilized more efficiently. In one aspect, a binary mask is generated from object data in one or more frames. The binary mask is applied to further frames blocking out less relevant frame segments in the further frames. 1. A method comprising:accessing a frame from a camera video stream; detecting objects of a plurality of different object types in the frame including a first instance of an object type and a first instance of another object type;', 'assigning a different color to different objects in the frame based on object type, including assigning a first color to the first instance of the object type and assigning a second color to the first instance of the other object type;', 'determining the first instance of the object type is within the first instance of the other object type; and', 're-assigning the second color to the first instance of the object type;, 'generating a first object color mask from contents of the frame, includingaccessing another frame from the camera video stream; detecting other objects of the plurality of the different object types in the other frame including a second instance of the object type and a second instance of the other object type;', 'assigning a different color to different objects in the other frame based on the object type, including assigning the first color to the second instance of the object type and assigning a second color to the second instance of the other object type;', 'determining the second instance of the object type is ...
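The binary-mask step can be sketched directly: derive a mask from labeled objects in one frame, then apply it to further frames so only the more relevant segments are processed. Small nested lists of labels and pixel values stand in for real frames:

```python
def roadway_mask(label_frame, relevant_label="road"):
    """Binary mask: True where a labeled object marks a more relevant segment."""
    return [[cell == relevant_label for cell in row] for row in label_frame]

def apply_mask(frame_pixels, mask):
    """Block out (zero) less relevant segments; process only what survives."""
    return [[px if keep else 0 for px, keep in zip(prow, mrow)]
            for prow, mrow in zip(frame_pixels, mask)]

labels = [["sky", "sky"], ["road", "road"]]
pixels = [[7, 8], [5, 6]]
mask = roadway_mask(labels)
print(apply_mask(pixels, mask))  # -> [[0, 0], [5, 6]]
```

Reusing one mask across further frames is what saves resources: the expensive vehicle/event detection runs only on the unmasked segments.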

10-01-2019 publication date

Regulation method, terminal equipment and non-transitory computer-readable storage medium for automatic exposure control of region of interest

Number: US20190012776A1
Author: Kai Liu

Provided are a regulation method, terminal equipment, and non-transitory computer-readable storage medium for automatic exposure control (AEC) of a region of interest. In the method, a luminance histogram of each color channel in a region of interest is obtained based on statistics on luminance of a plurality of sub-region blocks in the region of interest; a first luminance of the each color channel is determined according to the luminance histogram of the each color channel in the region of interest and the corresponding relationship between the luminance and the number of the sub-region blocks; a reference luminance is determined based on the first luminance of the each color channel, the reference luminance corresponding to a reference color channel; a target luminance corresponding to the present AEC luminance is obtained; and a luminance regulation is performed on the reference color channel according to the target luminance.
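The per-channel regulation described above can be sketched roughly as follows (Python; taking the median of the sub-block luminances as the "first luminance" and the brightest channel as the reference are simplifying assumptions — the patent derives both from the histogram's luminance-to-block-count relationship):

```python
from statistics import median

def first_luminance(block_lumas):
    """Per-channel 'first luminance', here approximated as the median of the
    sub-region block luminances (assumption; the patent uses the histogram)."""
    return median(block_lumas)

def regulate(channels, target):
    """Pick a reference channel (here: the brightest) and return the gain
    that moves its luminance toward the target AEC luminance."""
    firsts = {name: first_luminance(blocks) for name, blocks in channels.items()}
    ref_channel = max(firsts, key=firsts.get)
    gain = target / firsts[ref_channel]
    return ref_channel, gain

# Toy ROI: luminance of four sub-region blocks per color channel.
channels = {"R": [90, 110, 100, 100], "G": [120, 130, 125, 125], "B": [60, 70, 65, 65]}
ref, gain = regulate(channels, target=100)
```

A gain below 1.0 means the reference channel is brighter than the target and exposure should be reduced.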

14-01-2021 publication date

LEARNING DATASET CREATION METHOD AND DEVICE

Number: US20210012524A1
Assignee:

Provided are a method and a device that can efficiently generate a training dataset. Object information is associated with a visual marker; a training dataset generation jig that is configured from a base part and a marker is used, said base part being provided with an area that serves as a guide for positioning a target object and said marker being fixed on the base part; the target object is positioned using the area as a guide, and in this condition an image group of the entire object including the marker is acquired; the object information that was associated with the visual marker is acquired from the acquired image group; a reconfigured image group is generated from this image group by performing a concealment process on a region corresponding to the visual marker or the training dataset generation jig; a bounding box is set in the reconfigured image group on the basis of the acquired object information; information relating to the bounding box, the object information, and estimated target object position information and posture information are associated with a captured image; and a training dataset for performing object recognition and position/posture estimation for the target object is generated.

1.-29. (canceled)

30. A training dataset generation method for conducting object recognition and a position or a posture estimation of an object, the method comprising:
associating object information of an object to a visual marker;
acquiring an image group of the object including the visual marker; and
calculating posture information of the object or position information of the object or both, using a base portion of the object and the image group.

The present invention relates to an automated method for generating a training dataset in object recognition and position/posture estimation by machine learning.

Conventionally, robots equipped with artificial intelligence (hereinafter referred to as "AI") have been used as a tool for automation of operations in factories ...

09-01-2020 publication date

OBJECT DETECTION APPARATUS, CONTROL METHOD IMPLEMENTED BY OBJECT DETECTION APPARATUS, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR STORING PROGRAM

Number: US20200012847A1
Assignee: FUJITSU LIMITED

An object detection apparatus includes: a camera configured to capture an image of an object; one or more sensor devices, each of which is configured to detect an environmental change; and a processor configured to (a) execute a determining process that includes, when any one of the one or more sensor devices detects an environmental change, detecting a search starting point of the object based on at least one of a time corresponding to the detection and detection information from the sensor device, and (b) execute an entry registering process that includes registering an entry with reference information when the object is detected, the entry including at least one of the time and the detection information and a direction in which the object is detected, wherein the determining process is configured to determine the direction toward which the camera is to be turned based on the reference information.

1. An object detection apparatus comprising:
a camera configured to capture an image of an object;
one or more sensor devices, each of the one or more sensor devices being configured to detect an environmental change; and
a processor configured to
execute a determining process that includes, in a case where any one of the one or more sensor devices detects an environmental change, detecting a search starting point of the object based on at least one of a time corresponding to the detection and detection information from the sensor device, the search starting point corresponding to a direction toward which the camera is turned,
execute an entry registering process that includes registering an entry with reference information when the object is detected, the entry including at least one of the time and the detection information and a direction in which the object is detected,
wherein the determining process is configured to determine the direction toward which the camera is to be turned based on the reference information.

2.
The object detection apparatus according ...

09-01-2020 publication date

APPARATUS AND METHOD FOR DETECTING FALLING OBJECT

Number: US20200012871A1
Assignee: LG ELECTRONICS INC.

Disclosed are an apparatus and a method for detecting a fallen object which adjust a passenger's seat to easily pick up the fallen object in the vehicle. A fallen object detecting apparatus according to one embodiment of the present disclosure is an apparatus for detecting a fallen object in a vehicle which includes a camera configured to generate at least one image of an inside of the vehicle; an image identifier configured to identify at least one passenger and at least one object from the image, and to determine a location of the fallen object in response to a falling of the object in the vehicle; and a controller configured to provide the determined location of the fallen object via at least one component located in the vehicle and adjust a seat of the passenger based on the location of the fallen object and a condition of the passenger.

1. An apparatus for detecting a fallen object in a vehicle, the apparatus comprising:
a camera configured to generate at least one image of an inside of the vehicle;
an image identifier configured to identify at least one passenger and at least one object from the image, and to determine a location of the fallen object in response to a falling of the object in the vehicle; and
a controller configured to provide the determined location of the fallen object via at least one component located in the vehicle and adjust a seat of the passenger based on the location of the fallen object and a condition of the passenger.

2. The fallen object detecting apparatus according to claim 1, wherein the controller is configured to set a range in which the passenger is capable of picking up the object based on the physical condition of the passenger including at least one of arm length, leg length, or hand length, and in response to the location of the object being out of the set range, adjust at least one of position, height, or angle of the seat of the passenger.

3.
The fallen object detecting ...

03-02-2022 publication date

OPERATION CONTROL DISPLAY METHOD AND APPARATUS BASED ON VIRTUAL SCENE

Number: US20220032186A1
Assignee:

An operation control display method is provided to be applied to a computing device. The method includes: obtaining position information of a target virtual object in the virtual scene, the target virtual object being a virtual object controlled by a terminal; determining, based on the position information and at least one of virtual elements in the virtual scene, an element type of a target virtual element corresponding to the target virtual object; and displaying, in a control display region in the virtual scene, a target operation control corresponding to the element type of the target virtual element, the target operation control being configured to control the target virtual object to interact with the target virtual element.

1. An operation control display method based on a virtual scene, applied to a computing device, the method comprising:
obtaining position information of a target virtual object in the virtual scene, the target virtual object being a virtual object controlled by a terminal;
determining, based on the position information and at least one of virtual elements in the virtual scene, an element type of a target virtual element corresponding to the target virtual object; and
displaying, in a control display region in the virtual scene, a target operation control corresponding to the element type of the target virtual element, the target operation control being configured to control the target virtual object to interact with the target virtual element.

2. The method according to claim 1, wherein determining the element type of the target virtual element comprises:
determining, based on the position information, a position index of a region indicated by the position information;
obtaining, based on the position index, a region type of the region from a map index table corresponding to the virtual scene, the map index table including a position index of each region in the virtual scene and a region type of the each region; and
determining an element type ...
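The position-index lookup chain in claim 2 (position → region index → region type → operation control) can be sketched as follows (Python; the grid cell size, table contents, and control names are hypothetical examples, not from the patent):

```python
def position_index(x, y, cell_size=10):
    """Map scene coordinates to the index of the region that contains them."""
    return (int(x // cell_size), int(y // cell_size))

# Hypothetical map index table: region index -> region type.
MAP_INDEX_TABLE = {(0, 0): "water", (0, 1): "cliff", (1, 0): "plain"}

# Region type -> operation control to display for the element there.
ELEMENT_CONTROLS = {"water": "swim_button", "cliff": "climb_button", "plain": None}

def control_for(x, y):
    """Resolve which target operation control to show at a scene position."""
    region_type = MAP_INDEX_TABLE.get(position_index(x, y), "plain")
    return ELEMENT_CONTROLS[region_type]

ctrl = control_for(3.0, 15.0)   # falls in region (0, 1), a cliff
```

A `None` control simply means nothing extra is displayed for that region type.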

03-02-2022 publication date

CONTEXT BASED MEDIA CURATION

Number: US20220036079A1
Assignee:

A media curation system configured to perform operations that include: capturing an image at a client device, wherein the image includes a depiction of an object; identifying an object category of the object based on the depiction of the object within the image; accessing media content associated with the object category within a media repository; generating a presentation of the media content; and causing display of the presentation of the media content within the image at the client device.

1. A method comprising:
causing display of an image that comprises a plurality of image features at a client device;
identifying an object depicted by the image based on the plurality of image features of the image;
selecting a category based on the identified object, the category corresponding with one or more media tags;
accessing a set of media items from within a media repository based on the one or more media tags that correspond with the category; and
causing display of a presentation of the set of media items from the media repository at the client device.

2. The method of claim 1, wherein the causing display of the presentation of the set of media items further comprises:
causing display of a notification in response to the accessing the set of media items from within the repository;
receiving an input that selects the notification; and
causing display of the presentation of the set of media items in response to the input that selects the notification.

3. The method of claim 2, wherein the notification comprises a display of an indication of a number of media items among the set of media items.

4. The method of claim 1, wherein the presentation of the set of media items comprises a display of the set of media items in a horizontal array.

5. The method of claim 1, wherein the presentation of the set of media items comprises a display of the set of media items in a vertical array.

6. The method of claim 1, wherein the identifying the object depicted by the image based on the ...

03-02-2022 publication date

PEOPLE DETECTION AND TRACKING WITH MULTIPLE FEATURES AUGMENTED WITH ORIENTATION AND SIZE BASED CLASSIFIERS

Number: US20220036109A1
Assignee:

This disclosure describes techniques to detect an object. The techniques include operations comprising: receiving an image captured by an overhead camera; identifying a region of interest (ROI) of a plurality of regions within the image; selecting an object classifier from a plurality of object classifiers based on a position of the identified ROI relative to the overhead camera; applying the selected object classifier to the identified ROI; and detecting presence of the object within the ROI in response to applying the selected object classifier to the identified ROI.

1. A system for detecting an object in an image, the system comprising:
a processor; and
a memory for storing one or more instructions that, when executed by the processor, configure the processor to perform operations comprising:
receiving an image captured by an overhead camera;
identifying a region of interest (ROI) of a plurality of regions within the image;
selecting an object classifier from a plurality of object classifiers based on a position of the identified ROI relative to the overhead camera, a first of the plurality of object classifiers being configured to detect a first feature of the object, and a second of the plurality of object classifiers being configured to detect a second feature of the object;
applying the selected object classifier to the identified ROI; and
detecting presence of the object within the ROI in response to applying the selected object classifier to the identified ROI.

2. The system of claim 1, wherein the operations further comprise:
associating a first region of the plurality of regions with a first subset of the object classifiers; and
associating a second region of the plurality of regions with a second subset of the object classifiers.

3. The system of claim 2, wherein the object classifiers are configured to detect human objects, wherein the first region corresponds to a first rotation relative to the overhead camera and is within a ...
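Position-dependent classifier selection can be sketched like this (Python; the distance threshold and the two feature classifiers — top-of-head near the optical axis, head-and-shoulders toward the periphery — are illustrative assumptions about why an overhead camera would switch classifiers, not the patent's exact rule):

```python
import math

def select_classifier(roi_center, image_center, near_radius):
    """Choose a classifier based on where the ROI sits relative to the
    overhead camera's optical axis (approximated by the image center)."""
    dist = math.dist(roi_center, image_center)
    # Directly under the camera only the top of the head is visible;
    # farther out, the torso comes into view at an angle.
    return "top_of_head" if dist <= near_radius else "head_and_shoulders"

near = select_classifier(roi_center=(320, 240), image_center=(320, 240), near_radius=100)
far = select_classifier(roi_center=(600, 240), image_center=(320, 240), near_radius=100)
```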

03-02-2022 publication date

Systems and Methods to Optimize Imaging Settings for a Machine Vision Job

Number: US20220036586A1
Assignee:

Methods and systems for optimizing one or more imaging settings for a machine vision job are disclosed herein. An example method includes detecting, by one or more processors, an initiation trigger that initiates the machine vision job. The example method further includes, responsive to detecting the initiation trigger, capturing, by an imaging device, a first image of a target object in accordance with a first configuration of the one or more imaging settings. The example method further includes, responsive to capturing the first image of the target object, automatically adjusting, by the one or more processors, the one or more imaging settings to a second configuration that includes at least one different imaging setting from the first configuration; and capturing, by the imaging device, a second image of the target object in accordance with the second configuration of the one or more imaging settings.

1. A method for optimizing one or more imaging settings for a machine vision job, the method comprising:
detecting, by one or more processors, an initiation trigger that initiates the machine vision job;
responsive to detecting the initiation trigger, capturing, by an imaging device, a first image of a target object in accordance with a first configuration of the one or more imaging settings;
responsive to capturing the first image of the target object, automatically adjusting, by the one or more processors, the one or more imaging settings to a second configuration that includes at least one different imaging setting from the first configuration; and
capturing, by the imaging device, a second image of the target object in accordance with the second configuration of the one or more imaging settings.

2. The method of claim 1, wherein the one or more imaging settings include one or more of (i) an aperture size, (ii) an exposure length, (iii) an ISO value, (iv) a focus value, (v) a gain value, or (vi) an illumination control.

3. ...
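The capture-adjust-capture loop generalizes to sweeping a set of candidate configurations and keeping the best one. A minimal Python sketch (the settings, scoring rule, and stand-in capture function are all hypothetical — a real job would score sharpness, decode rate, etc. on actual images):

```python
from itertools import product

def sweep_configs(base, exposures, gains):
    """Enumerate candidate configurations, each differing from the last in
    at least one imaging setting (here: exposure and gain)."""
    for exposure, gain in product(exposures, gains):
        yield dict(base, exposure_ms=exposure, gain=gain)

def run_job(capture, score, configs):
    """Capture one image per configuration and keep the best-scoring one."""
    return max(configs, key=lambda cfg: score(capture(cfg)))

# Stand-ins for the sketch: the fake 'image' is just its configuration,
# scored by how close exposure * gain lands to a notional ideal of 32.
capture = lambda cfg: cfg
score = lambda img: -abs(img["exposure_ms"] * img["gain"] - 32)
best = run_job(capture, score, sweep_configs({"aperture": 2.8}, [8, 16, 32], [1, 2, 4]))
```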

17-01-2019 publication date

SIMULATING IMAGE CAPTURE

Number: US20190019021A1
Assignee:

The present disclosure relates to simulating the capture of images. In some embodiments, a document and a camera are simulated using a three-dimensional modeling engine. In certain embodiments, a plurality of images are captured of the simulated document from a perspective of the simulated camera, each of the plurality of images being captured under a different set of simulated circumstances within the three-dimensional modeling engine. In some embodiments, a model is trained based at least on the plurality of images which determines at least a first technique for adjusting a set of parameters in a separate image to prepare the separate image for optical character recognition (OCR).

1. A computer-implemented method for simulating the capture of images, comprising:
simulating a document and a camera using a three-dimensional modeling engine;
capturing a plurality of images of the simulated document from a perspective of the simulated camera, each of the plurality of images being captured under a different set of simulated circumstances within the three-dimensional modeling engine;
training a model based at least on the plurality of images, wherein the trained model determines at least a first technique for adjusting a set of parameters in a separate image to prepare the separate image for optical character recognition (OCR).

2. The computer-implemented method of claim 1, wherein the simulated circumstances include at least one of: lighting; background; and camera pose.

3. The computer-implemented method of claim 2, wherein the camera pose includes yaw, pitch, roll, and height.

4. The computer-implemented method of claim 1, further comprising:
determining, based on the trained model, whether a quality of the separate image can be improved to an acceptable level for the OCR, wherein the quality of the separate image is based on one or more of the set of parameters.

5. The computer-implemented method of claim 4, wherein determining whether the ...

17-01-2019 publication date

METHOD AND APPARATUS FOR DETECTING A VEHICLE IN A DRIVING ASSISTING SYSTEM

Number: US20190019041A1
Assignee:

The disclosure discloses a method for detecting a vehicle in a driving assisting system. The method for detecting a vehicle in a driving assisting system includes: obtaining an image to be detected, and determining the positions of lane lines in the image to be detected; determining a valid area in the image to be detected, according to the positions of the lane lines, and the velocity of the present vehicle; and determining a detected vehicle in the valid area according to T preset weak classifiers, and thresholds corresponding to the respective weak classifiers, wherein T is a positive integer.

1. A method for detecting a vehicle in a driving assisting system, the method comprises:
obtaining an image to be detected, and determining positions of lane lines in the image to be detected;
determining a valid area in the image to be detected, according to the positions of the lane lines, and a velocity of a present vehicle; and
determining a detected vehicle in the valid area according to T preset weak classifiers, and thresholds corresponding to respective weak classifiers, wherein T is a positive integer.

2. The method according to claim 1, wherein determining the valid area in the image to be detected according to the positions of the lane lines, and the velocity of the present vehicle comprises:
determining an upper boundary of the valid area in the image to be detected according to points at which the lane lines vanish; and
determining a lower boundary of the valid area in the image to be detected according to the velocity of the present vehicle.

3. The method according to claim 2, wherein the lower boundary of the valid area in the image to be detected is determined according to the equation:

d = k * v;

wherein d is a distance between a lower boundary of the image to be detected and the lower boundary of the valid area, k is a weight coefficient, and v is the velocity of the present vehicle.

5. The method according to claim 4, further comprising: ...
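The valid-area construction follows directly from the claims: the upper boundary comes from the lane-line vanishing point, and the lower boundary sits at distance d = k * v from the image's bottom edge. A small sketch (the value of the weight coefficient k and the pixel units are assumptions for illustration):

```python
def valid_area(image_height, vanish_row, velocity, k=2.0):
    """Upper boundary from the lane-line vanishing point; lower boundary
    placed d = k * v (pixels) above the bottom edge of the image, so the
    faster the vehicle moves, the higher (farther ahead) the search area ends."""
    d = k * velocity                 # distance between image bottom and lower boundary
    lower = image_height - d
    return vanish_row, lower

upper, lower = valid_area(image_height=720, vanish_row=300, velocity=60)
```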

17-01-2019 publication date

SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR ANALYZING JPEG IMAGES FOR FORENSIC AND OTHER PURPOSES

Number: US20190019282A1
Assignee:

Forensic method for identifying forged documents. For each of a stream of incoming JPEG images, using a processor configured for determining whether a JPEG image is a replacement forgery by determining whether a first portion of the individual image which resides at a known location (known as likely to be replaced by a forger) within the individual JPEG image has been replaced, including: computing an indicator, face-DJPG, for the first portion at the known location; computing an indicator, aka nonface-DJPG, for a second portion of the individual image which resides at a comparison location within the JPEG image known as unlikely to be replaced by a forger; and determining whether face-DJPG and nonface-DJPG fulfill a predetermined logical criterion and deciding whether the individual JPEG image is a replacement forgery accordingly.

1. A forensic method for identifying at least some forged documents, the method comprising, for each of a stream of incoming JPEG images from among which forgeries are to be identified:
using a processor configured for determining whether at least one individual JPEG image in said stream is a replacement forgery by determining whether a first portion of said individual image which resides at a known location (known as a location likely to be replaced by a forger) within at least the individual JPEG image has or has not been doctored aka replaced, including:
computing a double-compression indicator, aka face-DJPG aka F1, for the first portion at said known location;
computing a double-compression indicator, aka nonface-DJPG aka F2, for a second portion of said individual image which resides at a comparison location within the individual JPEG image which is known as a location unlikely to be replaced by a forger; and
determining whether face-DJPG and nonface-DJPG fulfill at least one predetermined logical criterion and deciding whether or not the individual JPEG image is a replacement forgery accordingly.

2.
A method according to wherein said ...
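One plausible instance of the "predetermined logical criterion" can be sketched as follows (Python; the patent does not state the criterion, so the specific rule here — the surrounding document shows a double-compression trace while the replaced region does not — and the 0.5 threshold are assumptions for illustration):

```python
def is_replacement_forgery(face_djpg, nonface_djpg, threshold=0.5):
    """Flag a forgery when the likely-replaced region shows no double-
    compression trace while the comparison region does: a pasted-in patch
    (compressed once) inside a twice-compressed page breaks the pattern."""
    face_doubled = face_djpg >= threshold
    rest_doubled = nonface_djpg >= threshold
    return rest_doubled and not face_doubled

flag = is_replacement_forgery(face_djpg=0.1, nonface_djpg=0.9)
```

When both regions show the same compression history, the document is consistent and no forgery is flagged.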

16-01-2020 publication date

METHOD, APPARATUS, AND SYSTEM FOR MAPPING VULNERABLE ROAD USERS

Number: US20200019627A1
Author: STENNETH Leon
Assignee:

An approach is provided for mapping vulnerable road users. The approach, for example, involves receiving sensor data from at least one vehicle indicating a presence of at least one vulnerable road user. The sensor data indicates a detected location of the at least one vulnerable road user. The approach also involves map matching the detected location of the at least one vulnerable road user to at least one road node, link, and/or a segment thereof of a geographic database. The approach further involves generating a vulnerable road user attribute to indicate a probability of the presence of the at least one vulnerable road user based on the sensor data. The approach further involves storing the vulnerable road user attribute in the geographic database as an attribute of the at least one road node, link, and/or segment.

1. A computer-implemented method for generating vulnerable road user data comprising:
receiving sensor data from at least one vehicle indicating a presence of at least one vulnerable road user, wherein the sensor data indicates a detected location of the at least one vulnerable road user;
map matching the detected location of the at least one vulnerable road user to at least one road node, at least one road link, a segment of the at least one road link, or a combination thereof of a geographic database;
generating a vulnerable road user attribute to indicate a probability of the presence of the at least one vulnerable road user based on the sensor data; and
storing the vulnerable road user attribute in the geographic database as an attribute of the at least one road node, the at least one road link, the segment, or a combination thereof.

2. The method of claim 1, further comprising:
processing map data for the at least one road node, the at least one road link, the segment, at least one other road node, at least one other road link, another segment, or a combination thereof to identify at least one map feature indicative of the presence of the at least ...
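A simple way to turn map-matched reports into a per-link probability attribute is to take the fraction of vehicle passes that reported a vulnerable road user (Python sketch; the estimator and the link identifiers are illustrative assumptions, not the patent's method):

```python
from collections import Counter

def vru_attributes(observations, passes_per_link):
    """Per road link: probability of a vulnerable road user, estimated as
    the fraction of vehicle passes over that link that reported one."""
    hits = Counter(observations)
    return {link: hits[link] / passes_per_link[link] for link in passes_per_link}

# Each entry is the map-matched link id of one VRU report from a vehicle.
observations = ["link_a", "link_a", "link_b"]
attrs = vru_attributes(observations, {"link_a": 4, "link_b": 10})
```

The resulting dictionary is what would be stored in the geographic database as the link's VRU attribute.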

16-01-2020 publication date

DISPLAY CONTROL APPARATUS, DISPLAY CONTROL METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM

Number: US20200020081A1
Author: Ogawa Seiji
Assignee:

A display control apparatus according to the present invention includes: an axis detection unit configured to detect, based on an image, a rotation axis about which a celestial body included in the image as an object rotates in response to the rotation of the earth; and a control unit configured to perform control such that a partial area of the image is displayed as a display area, and rotation display is performed by changing the display area while changing an angle corresponding to the display area around a position corresponding to the rotation axis detected by the axis detection unit.

1. A display control apparatus comprising at least one memory and at least one processor which function as:
an axis detection unit configured to detect, based on an image, a rotation axis about which a celestial body included in the image as an object rotates in response to the rotation of the earth; and
a control unit configured to perform control such that a partial area of the image is displayed as a display area, and rotation display is performed by changing the display area while changing an angle corresponding to the display area around a position corresponding to the rotation axis detected by the axis detection unit.

2. The display control apparatus according to claim 1, wherein
the at least one memory and at least one processor further function as a first determination unit configured to determine whether the image is an image captured in the Northern Hemisphere of the earth or an image captured in the Southern Hemisphere of the earth based on the image, and
the control unit performs control such that the rotation display in a first rotation direction is performed in a case where the first determination unit determines that the image is the image captured in the Northern Hemisphere, and the rotation display in a second rotation direction is performed in a case where the first determination unit determines that the image is the image captured in the Southern Hemisphere.

3.
The ...

21-01-2021 publication date

3D OBJECT SENSING SYSTEM

Number: US20210019907A1
Author: BUGOVICS JOZSEF
Assignee:

A 3D object sensing system includes an object positioning unit, an object sensing unit, and an evaluation unit. The object positioning unit has a rotatable platform and a platform position sensing unit. The object sensing unit includes two individual sensing systems which each have a sensing area. A positioning unit defines a positional relation of the individual sensing systems to one another. The two individual sensing systems sense object data of object points of the 3D object and provide the object data to the evaluation unit. The evaluation unit includes respective evaluation modules for each of the at least two individual sensing systems, an overall evaluation module and a generation module.

1.-5. (canceled)

7. The 3D object sensing system according to claim 6, further comprising a housing, said object positioning unit being located inside said housing.

8. The 3D object sensing system according to claim 6, further comprising an underfloor scanner for sensing object data of an interior space of the 3D object and making the object data of the interior space available to said evaluation unit for inclusion when generating the digital image.

9. The 3D object sensing system according to claim 6, further comprising an interior equipment scanner for sensing object data of an interior space of the 3D object and making the object data of the interior space available to said evaluation unit for inclusion when generating the digital image.

10. The 3D object sensing system according to claim 6, further comprising a comparison module including a database with data relating to a normative digital image, said comparison module being configured for performing a comparison between the digital basic image and the normative digital image and generating a digital difference image.

The invention relates to a 3D object sensing system for providing a digital image of the 3D object to be detected. From the state of the art it is basically known how to sense spatial ...

16-01-2020 publication date

Automatic Focusing Method and Apparatus Based on Region of Interest

Number: US20200021747A1
Assignee:

An automatic focusing method and apparatus comprise the following steps: acquiring a target image that has been divided into blocks; acquiring the definition of each block, respectively; acquiring normalized central coordinates and a normalized size of a region of interest on the target image; respectively calculating a full width at half maximum coefficient in the horizontal direction and the vertical direction according to the normalized size; calculating a weight value of each block using a two-dimensional discrete Gaussian function according to the normalized central coordinates and the full width at half maximum coefficient; calculating a normalized overall definition of the target image according to the weight value and definition of each block; and focusing according to the normalized overall definition. The method and apparatus can automatically calculate a mask of the region of interest, thereby avoiding the occupying of storage space required when storing ROI mask data.

1. An automatic focusing method based on a region of interest, comprising the following steps:
acquiring a target image that has been divided into blocks;
acquiring the definition of each block, respectively;
acquiring normalized central coordinates and a normalized size of a region of interest on the target image;
respectively calculating a full width at half maximum coefficient in the horizontal direction and the vertical direction according to the normalized size;
calculating a weight value of each block using a two-dimensional discrete Gaussian function according to the normalized central coordinates and the full width at half maximum coefficient;
calculating a normalized overall definition of the target image according to the weight value and definition of each block;
focusing according to the normalized overall definition.

2. The method according to claim 1, wherein the step of respectively calculating a full width at half maximum coefficient in the horizontal direction and ...
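The weighting steps above can be sketched end-to-end (Python; equating the Gaussian's full width at half maximum with the normalized ROI size in each direction is an assumption about the patent's coefficient — the FWHM-to-sigma conversion itself is the standard FWHM = 2*sqrt(2*ln 2)*sigma):

```python
import math

# Standard conversion: sigma = FWHM / (2 * sqrt(2 * ln 2)).
FWHM_TO_SIGMA = 1.0 / (2.0 * math.sqrt(2.0 * math.log(2.0)))

def overall_definition(definitions, roi_center, roi_size):
    """Weighted sharpness ('definition') over a grid of block definitions,
    weighted by a 2-D Gaussian centred on the ROI whose FWHM in each
    direction equals the normalized ROI size (assumption)."""
    n_rows, n_cols = len(definitions), len(definitions[0])
    cx, cy = roi_center                     # normalized [0, 1] coordinates
    sx = max(roi_size[0] * FWHM_TO_SIGMA, 1e-9)
    sy = max(roi_size[1] * FWHM_TO_SIGMA, 1e-9)
    total_w = total = 0.0
    for r in range(n_rows):
        for c in range(n_cols):
            x = (c + 0.5) / n_cols          # block centre, normalized
            y = (r + 0.5) / n_rows
            w = math.exp(-0.5 * (((x - cx) / sx) ** 2 + ((y - cy) / sy) ** 2))
            total_w += w
            total += w * definitions[r][c]
    return total / total_w                  # normalized overall definition

# Sharp centre block inside a blurry frame: ROI-weighted definition should
# sit well above the plain mean, since no stored ROI mask is needed.
defs = [[1.0, 1.0, 1.0], [1.0, 9.0, 1.0], [1.0, 1.0, 1.0]]
sharpness = overall_definition(defs, roi_center=(0.5, 0.5), roi_size=(0.4, 0.4))
```

A focus sweep would repeat this for each lens position and pick the position maximizing `sharpness`.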

10-02-2022 publication date

Target Object Tracking Method and Apparatus, and Storage Medium

Number: US20220044417A1
Assignee: SHENZHEN SENSETIME TECHNOLOGY CO., LTD.

The present disclosure relates to a target object tracking method and apparatus, an electronic device, and a storage medium. The method includes: obtaining a first reference image of a target object; determining time information and location information of the target object in an image to be analyzed according to the first reference image, the image to be analyzed including the time information and the location information; determining a trajectory of the target object according to the time information and the location information of the target object; and generating tracking information for tracking the target object according to the trajectory of the target object. Embodiments of the present disclosure obtain highly-accurate tracking information of the target object according to the trajectory of the target object determined in the image to be analyzed by using the first reference image of the target object, such that the success rate of target object tracking is improved. 117-. (canceled)18. 
A target object tracking method , comprising:obtaining a first reference image of a target object and determining identification information of the target object;determining time information and location information of the target object in an image to be analyzed according to the first reference image, the image to be analyzed comprising the time information and the location information;determining a trajectory of the target object according to the time information and the location information of the target object; andgenerating tracking information for tracking the target object according to the trajectory of the target object and the identification information of the target object, when it is unable to detect the target object in an identification image library according to the first reference image of the target object, determining a second reference image of the target object in the image to be analyzed, identification images in the identification image library comprising ...
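The trajectory step described above — ordering the time and location information recovered from the analyzed images — can be sketched as follows. This is a simplified illustration; the `Detection` fields and the shape of the tracking information are assumptions, not the patent's data model:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    # time information and location information extracted from an
    # image to be analyzed (field names are illustrative)
    timestamp: float
    location: tuple  # e.g. (x, y) in map coordinates

def build_trajectory(detections):
    """Order matched detections by time to form the target's trajectory."""
    return sorted(detections, key=lambda d: d.timestamp)

def tracking_info(trajectory, target_id):
    """Summarize the trajectory into tracking information for the target."""
    if not trajectory:
        return {"id": target_id, "last_seen": None, "path": []}
    return {
        "id": target_id,
        "last_seen": trajectory[-1].timestamp,
        "path": [d.location for d in trajectory],
    }
```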

Publication date: 10-02-2022

ANALYZING SCREEN COVERAGE OF A TARGET OBJECT

Number: US20220044437A1
Author: Gimpelson Kenneth
Assignee: Weta Digital Limited

Embodiments provide multi-angle screen coverage analysis. In some embodiments, a system obtains at least one image, where the at least one image is a computer graphics generated image, and where the at least one image comprises at least one target object. The system determines screen coverage information for the at least one target object, where the screen coverage information is based on a portion of a screen that is covered by the at least one target object. The system determines depth information for the at least one target object. The system determines an asset detail level for the at least one target object based on the screen coverage information and the depth information, where the asset detail level is adjustable based on the screen coverage information. The system then stores the asset detail level in a database. 1. A computer-implemented method performed by one or more digital processors for multi-angle screen coverage analysis , the method comprising:obtaining at least one image, wherein the at least one image is a computer graphics generated image, and wherein the at least one image comprises at least one target object;determining screen coverage information for the at least one target object, wherein the screen coverage information is based on a portion of a screen that is covered by the at least one target object;determining depth information for the at least one target object;determining an asset detail level for the at least one target object based on the screen coverage information and the depth information, wherein the asset detail level is adjustable based on the screen coverage information; andstoring the asset detail level in a database.2. The method of claim 1 , wherein different versions of the at least one target object appears in different images.3. 
The method of claim 1, wherein each instance of the at least one target object in different images has a different asset detail level based on screen coverage information and depth information ...
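As a rough illustration of how screen coverage and depth might jointly drive an adjustable asset detail level, here is a hedged sketch; the score formula and all threshold values are invented for illustration and do not come from the disclosure:

```python
def asset_detail_level(coverage, depth, max_depth=100.0):
    """Pick an asset detail level from screen coverage (0..1, fraction of the
    screen covered by the object) and depth (distance from the camera).
    Thresholds and the score formula are illustrative, not the patent's."""
    if coverage <= 0.0 or depth >= max_depth:
        return "proxy"  # effectively invisible: cheapest representation
    # shrink the coverage score as the object recedes from the camera
    score = coverage * (1.0 - depth / max_depth)
    if score > 0.25:
        return "high"
    if score > 0.05:
        return "medium"
    return "low"
```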

Publication date: 24-01-2019

HUMAN FLOW ANALYSIS METHOD, HUMAN FLOW ANALYSIS APPARATUS, AND HUMAN FLOW ANALYSIS SYSTEM

Number: US20190026560A1
Author: Nishikawa Yuri, OZAWA Jun
Assignee:

A human flow analysis apparatus includes a movement information acquirer that acquires movement information, the movement information representing a history of movement within a predetermined space by multiple persons moving within the predetermined space, an associated-nodes extractor that, based on the movement information, extracts at least two persons assumed to be moving in association with each other, an association information identifier that identifies association information, the association information indicating what association the extracted at least two persons have with each other, a node fusion determiner that, based on the identified association information, determines whether to group the at least two persons together, and a behavior predictor that predicts a behavior of the at least two persons who have been determined to be grouped together. 1. A human flow analysis method for a human flow analysis apparatus , the human flow analysis method comprising:acquiring movement information, the movement information representing a history of movement within a predetermined space by a plurality of persons moving within the predetermined space;extracting, based on the acquired movement information, at least two persons assumed to be moving in association with each other;identifying association information, the association information indicating what association the extracted at least two persons have with each other;determining, based on the identified association information, whether to group the at least two persons together; andpredicting a behavior of the at least two persons who have been determined to be grouped together.2. The human flow analysis method according to claim 1 ,wherein the extracting includes extracting the at least two persons whose distance from each other has been less than or equal to a predetermined distance for a predetermined period of time.3. The human flow analysis method according to claim 2 ,wherein the extracting includes ...
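The extraction criterion in claim 2 — two persons whose mutual distance stays below a threshold for a minimum period of time — can be sketched as a simple scan over synchronized tracks. The thresholds and the track representation are illustrative assumptions:

```python
import math

def moving_in_association(track_a, track_b, max_dist=1.5, min_duration=10.0):
    """track_* : lists of (t, x, y) samples at common timestamps.
    Returns True when the two persons stayed within max_dist of each other
    for at least min_duration seconds (threshold values are illustrative)."""
    run_start = None  # start time of the current "close together" run
    best = 0.0        # longest close-together duration seen so far
    for (t, xa, ya), (_, xb, yb) in zip(track_a, track_b):
        close = math.hypot(xa - xb, ya - yb) <= max_dist
        if close and run_start is None:
            run_start = t
        elif not close:
            run_start = None
        if run_start is not None:
            best = max(best, t - run_start)
    return best >= min_duration
```

Persons passing this test would then be candidates for grouping, subject to the association-information check described in the abstract.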

Publication date: 28-01-2021

LANE LINE POSITIONING METHOD AND APPARATUS, AND STORAGE MEDIUM THEREOF

Number: US20210025713A1
Author: MA Yanhai

This disclosure is directed to a lane line positioning method and apparatus. The method includes obtaining inertial information, target traveling information, and first position information of a vehicle. The inertial information comprises information measured by an inertial measurement unit of the vehicle. The target traveling information comprises traveling information of the vehicle acquired at a first moment. The first position information comprises a position of the vehicle at the first moment. The method includes determining second position information according to the target traveling information and the first position information and determining third position information of the vehicle at a second moment based on the inertial information of the vehicle and the second position information. The method includes determining a position of a lane line in a map according to the third position information and relative position information. 1. A method for positioning a lane line , comprising:obtaining inertial information, target traveling information, and first position information of a vehicle, the inertial information comprising information measured by an inertial measurement unit of the vehicle, the target traveling information comprising traveling information of the vehicle acquired at a first moment, and the first position information comprising a position of the vehicle at the first moment;determining second position information according to the target traveling information and the first position information;determining third position information of the vehicle at a second moment based on the inertial information of the vehicle and the second position information, the second moment being later than the first moment; anddetermining a position of a lane line in a map according to the third position information and relative position information, the relative position information indicating a relative position between a detected lane line and the vehicle.2. 
The ...
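The two propagation steps — second position information from the traveling information acquired at the first moment, then third position information refined with the inertial measurements — can be sketched with a flat-ground dead-reckoning model. This is a deliberate simplification (no filter, no bias handling), and all function names are assumptions:

```python
import math

def propagate(position, speed, heading, dt):
    """Second position information: advance the first position using the
    traveling information (speed, heading) acquired at the first moment."""
    x, y = position
    return (x + speed * math.cos(heading) * dt,
            y + speed * math.sin(heading) * dt)

def imu_correct(position, heading, accel, gyro_rate, dt):
    """Third position information: refine with IMU measurements
    (a deliberately simplified strap-down update, not the patent's filter)."""
    x, y = position
    heading = heading + gyro_rate * dt  # integrate yaw rate
    # integrate forward acceleration into a displacement correction
    dx = 0.5 * accel * dt * dt
    return (x + dx * math.cos(heading), y + dx * math.sin(heading)), heading
```

The resulting third position, combined with the detected lane line's relative position, would place the lane line in the map frame.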

Publication date: 24-01-2019

CMOS IMAGE SENSOR ON-DIE MOTION DETECTION USING INTER-PIXEL MESH RELATIONSHIP

Number: US20190026901A1
Assignee:

Techniques for motion detection are presented. An image sensor for motion detection includes a plurality of analog comparators and a two-dimensional pixel array including a plurality of rows of pixels and a plurality of columns of pixels. Each pixel is configured to convert an optical signal on the pixel into an analog signal. The two-dimensional pixel array is organized into a plurality of groups of pixels each associated with a combined group signal determined based on the analog signals from pixels in the group of pixels. Each analog comparator includes two inputs and is used to compare combined group signals generated by two groups of pixels of the plurality of groups of pixels during a same time period to generate a 1-bit inter-pixel digital signal, where each of the two groups of pixels is coupled to a corresponding input of the two inputs of the each analog comparator. 1. An image sensor comprising: a two-dimensional pixel array characterized by a plurality of rows of pixels and a plurality of columns of pixels, wherein each pixel is configured to convert an optical signal on the pixel into an analog signal, and the two-dimensional pixel array is organized into a plurality of groups of pixels, each group of pixels associated with a combined group signal determined based on the analog signals from pixels in the group of pixels; and a plurality of analog comparators, each analog comparator comprising two inputs and configured to compare combined group signals generated by two groups of pixels of the plurality of groups of pixels during a same time period to generate a 1-bit inter-pixel digital signal, each of the two groups of pixels coupled to a corresponding input of the two inputs of the each analog comparator. 2. The image sensor of claim 1, wherein the combined group signal generated by a group of pixels comprises a sum or an average of the analog signals generated by pixels in the group of pixels. 3. The image sensor of claim 1, further comprising a ...
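The comparator mesh can be simulated in software to show the idea: group signals are tile sums, each comparator yields a 1-bit result for a pair of groups, and motion shows up as a change in the bit pattern between frames. The tile size and the neighbour pairing below are illustrative, not the patent's mesh layout:

```python
def group_sums(frame, gw, gh):
    """Combined group signal: sum of pixel values in each gw x gh tile."""
    rows, cols = len(frame), len(frame[0])
    sums = []
    for gy in range(0, rows, gh):
        for gx in range(0, cols, gw):
            sums.append(sum(frame[y][x]
                            for y in range(gy, gy + gh)
                            for x in range(gx, gx + gw)))
    return sums

def comparator_bits(sums):
    """1-bit inter-pixel signals: compare each neighbouring pair of group
    signals, as the analog comparators would during the same time period."""
    return [1 if a > b else 0 for a, b in zip(sums, sums[1:])]

def motion_detected(frame_prev, frame_cur, gw=2, gh=2):
    """Motion flips at least one comparator bit between frames."""
    return (comparator_bits(group_sums(frame_prev, gw, gh))
            != comparator_bits(group_sums(frame_cur, gw, gh)))
```

Because only relative comparisons are kept, uniform illumination changes that scale all groups together tend not to flip the bits, which is one attraction of a comparator mesh.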

Publication date: 28-01-2021

METHOD AND APPARATUS FOR METHOD FOR DYNAMIC MULTI-SEGMENT PATH AND SPEED PROFILE SHAPING

Number: US20210026358A1
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC

The present application relates to determining a location of an object in response to a sensor output, generating a first vehicle path in response to the location of the object and a map data, determining an undrivable area within the first vehicle path, generating a waypoint outside of the undrivable area, generating a second vehicle path from a first point on the first vehicle path to the waypoint and a third vehicle path from the waypoint to a second point on the first vehicle path such that the second vehicle path and the third vehicle path are outside of the undrivable area, generating a control signal in response to the second vehicle path and the third vehicle path, and controlling a vehicle in response to the control signal such that the vehicle follows the second vehicle path and the third vehicle path. 1. An apparatus comprising: a sensor operative to detect an object within a field of view; a vehicle controller operative to control a vehicle in response to a control signal; a memory operative to store a map data; and a processor for generating a first vehicle path in response to the object and the map data, for determining an undrivable area within the first vehicle path, for generating a waypoint outside of the undrivable area, for generating a second vehicle path from a first point on the first vehicle path to the waypoint and a third vehicle path from the waypoint to a second point on the first vehicle path such that the second vehicle path and the third vehicle path are outside of the undrivable area, predicting a first lateral acceleration in response to the second vehicle path and a second lateral acceleration in response to the third vehicle path and for generating a control signal in response to the second vehicle path, the first lateral acceleration, the third vehicle path and the second lateral acceleration and coupling the control signal to the vehicle controller. 2.
The apparatus of wherein the sensor includes a camera operative to capture an image ...
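The path-splitting step — replacing the blocked segment of the first vehicle path with two segments through a waypoint — can be sketched with plain list operations. The representation of the path and of the blocked region is an assumption for illustration:

```python
def detour(path, blocked, waypoint):
    """path     : list of (x, y) points on the first vehicle path
    blocked  : (i0, i1) index range of points inside the undrivable area
    waypoint : point generated outside the undrivable area
    Returns the second vehicle path (into the waypoint) and the third
    vehicle path (back onto the first path), skipping the blocked segment."""
    i0, i1 = blocked
    first_point = path[i0 - 1]   # last drivable point before the area
    second_point = path[i1 + 1]  # first drivable point after the area
    second_path = [first_point, waypoint]
    third_path = [waypoint, second_point]
    return second_path, third_path
```

A real implementation would smooth these segments and check the predicted lateral accelerations mentioned in the claim before emitting a control signal.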

Publication date: 28-01-2021

METHOD AND DEVICE FOR FACE SELECTION, RECOGNITION AND COMPARISON

Number: US20210027045A1
Author: ZHENG Dandan
Assignee:

Methods, systems, and devices, including computer programs encoded on computer storage media, for selecting a target face are provided. One of the methods includes: obtaining at least one facial area including one or more faces in an image taken by a camera; determining, based on the image, a spatial distance between each of the one or more faces and the camera; and selecting, based on the spatial distance, the target face from the one or more faces. 1. A method for selecting a target face , comprising:obtaining at least one facial area including one or more faces in an image taken by a camera;determining, based on the image, a spatial distance between each of the one or more faces and the camera; andselecting, based on the spatial distance, the target face from the one or more faces.2. The method according to claim 1 , wherein the obtaining at least one facial area comprises:determining one or more facial areas comprised in the image in a camera collection region of the camera; andselecting the at least one facial area in the image corresponding to an effective collection region of the camera from the one or more facial areas, wherein the effective collection region is a portion of the camera collection region and has a same corresponding relationship with the camera collection region as the corresponding relationship between an image in a reduced field of view of the camera with the image, the reduced field of view of the camera being obtained after an original resolution of the camera is reduced by a predetermined ratio.3. 
The method according to claim 1 , wherein the spatial distance comprises at least one or any combination of the following:a depth-dimension distance formed by a first projection of a distance between the face and the camera in a depth direction of a coordinate system, the depth direction being a first direction perpendicular to an imaging region of the camera;a horizontal-dimension distance formed by a second projection of the distance between ...
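Selecting the target face by spatial distance, combining the depth-dimension distance with the horizontal and vertical projections described in the claim, might look like the following sketch; the dictionary keys are illustrative, not the patent's data structures:

```python
import math

def select_target_face(faces):
    """faces: list of dicts with a 'depth' key (depth-dimension distance,
    metres from the camera along its optical axis) and optional 'dx', 'dy'
    horizontal/vertical offsets. Selects the face whose combined spatial
    distance to the camera is smallest."""
    def spatial_distance(f):
        return math.sqrt(f["depth"] ** 2
                         + f.get("dx", 0.0) ** 2
                         + f.get("dy", 0.0) ** 2)
    return min(faces, key=spatial_distance)
```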

Publication date: 28-01-2021

DEFINING BOUNDARY FOR DETECTED OBJECT

Number: US20210027075A1
Assignee: FORD GLOBAL TECHNOLOGIES, LLC

A computer includes a processor and a memory storing instructions executable by the processor to receive data from a sensor specifying a plurality of points, the points including a plurality of first points that describe an object; define a boundary surrounding the first points while minimizing a volume of space that is both (i) contained by the boundary and (ii) identified as unoccupied; and actuate a component with respect to a vehicle based on the boundary. 1. A computer comprising a processor and a memory storing instructions executable by the processor to:receive data from a sensor specifying a plurality of points, the points including a plurality of first points that describe an object;define a boundary surrounding the first points while minimizing a volume of space that is both (i) contained by the boundary and (ii) identified as unoccupied; andactuate a component with respect to a vehicle based on the boundary.2. The computer of claim 1 , wherein the points are described in the sensor data as three-dimensional points claim 1 , and the instructions further include to project the points into two-dimensional horizontal space before defining the boundary.3. The computer of claim 1 , wherein the instructions further include to identify space as unoccupied upon determining that the sensor has an unobstructed view through the space to one of the points.4. The computer of claim 1 , wherein the boundary is a rectangular bounding box.5. The computer of claim 1 , wherein the instructions further include to generate a convex hull surrounding the first points claim 1 , and to define the boundary as a rectangular bounding box based on the convex hull.6. The computer of claim 5 , whereinthe instructions further include to generate a plurality of candidate rectangular bounding boxes based on the convex hull;defining the boundary includes selecting a first candidate rectangular bounding box from the candidate rectangular bounding boxes; andof the candidate rectangular ...
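The claimed pipeline — convex hull of the object points, candidate rectangular bounding boxes derived from the hull, and selection of the tightest one — corresponds to the classic minimum-area rotated rectangle. A self-contained sketch (2-D points assumed already projected into horizontal space, per claim 2; function names are mine, not the patent's):

```python
import math

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def min_area_bounding_box(points):
    """Candidate rectangles are axis-aligned boxes in the frame of each hull
    edge; the minimum-area candidate is returned as (area, edge_angle)."""
    hull = convex_hull(points)
    best = (float("inf"), 0.0)
    n = len(hull)
    for i in range(n):
        x0, y0 = hull[i]
        x1, y1 = hull[(i + 1) % n]
        theta = math.atan2(y1 - y0, x1 - x0)
        c, s = math.cos(-theta), math.sin(-theta)
        # rotate the hull so this edge is horizontal, then box it
        xs = [c*x - s*y for x, y in hull]
        ys = [s*x + c*y for x, y in hull]
        area = (max(xs) - min(xs)) * (max(ys) - min(ys))
        if area < best[0]:
            best = (area, theta)
    return best
```

The optimality argument behind restricting candidates to hull edges is the standard rotating-calipers result: a minimum-area enclosing rectangle has one side collinear with a hull edge.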

Publication date: 28-01-2021

SYSTEM, METHOD, AND COMPUTER-READABLE MEDIUM FOR MANAGING POSITION OF TARGET

Number: US20210027481A1
Assignee: OBAYASHI CORPORATION

A system for managing a position of a target stores identification information for identifying a target to be managed in association with position information indicating a position of the target. The system further obtains an image from an image capture device attached to a mobile device and obtains the image captured by the image capture device at an image capture position and image capture position information indicating the image capture position. The system further locates the position of the target included in the image using the image capture position information. The system further stores the position of the target in association with the identification information of the target. 1. A system for managing a position of a target , the system comprising circuitry configured to:store identification information for identifying a target to be managed in association with position information indicating a position of the target;obtain an image from an image capture device attached to a mobile device;obtain the image captured by the image capture device at an image capture position and image capture position information indicating the image capture position;locate the position of the target included in the image using the image capture position information; andstore the position of the target in association with the identification information of the target.2. The system according to claim 1 , wherein the circuitry is further configured to:determine a usage state of the target based on the image; andstore the usage state in association with the identification information and the position of the target.3. The system according to claim 1 , wherein the circuitry is further configured to:store the image capture position as the position of the mobile device.4. 
The system according to claim 3 , wherein the circuitry is further configured to stop image capture by the image capture device when the position of the mobile device is included in an image capture prohibited region. ...

Publication date: 30-01-2020

Trailer Cargo Monitoring Apparatus for a Vehicle

Number: US20200031284A1
Author: ONICA Dan
Assignee:

A trailer cargo monitoring apparatus () for a vehicle () includes a sensor arrangement () and a calculation unit (). The sensor arrangement () is configured to capture image data of a cargo () on or within a trailer () coupled to the vehicle (). The calculation unit () is configured to analyze the captured image data and therefrom to determine an actual position of the cargo () to monitor the cargo (). The calculation unit () is further configured to inform the driver of the vehicle (), when the actual position of the cargo () differs from a predefined position by more than a predefined threshold. 1. A trailer cargo monitoring apparatus for a vehicle , comprising:a sensor arrangement; anda calculation unit;wherein the sensor arrangement is configured to capture actual image data of a cargo within a trailer coupled to the vehicle;wherein the calculation unit is configured to analyze the actual image data and based thereon to determine an actual position of the cargo so as to monitor the cargo,wherein the calculation unit is configured to compare the actual position of the cargo with a predefined position, andwherein the calculation unit is configured to inform a driver of the vehicle when the actual position of the cargo differs from the predefined position by more than a predefined threshold.2. The trailer cargo monitoring apparatus according to claim 1 , wherein the sensor arrangement is arranged at a rear end of the vehicle facing toward the trailer.3. The trailer cargo monitoring apparatus according to claim 1 ,wherein the sensor arrangement is configured to capture reference image data of the cargo on or within the trailer before or when the vehicle starts driving, andwherein the calculation unit is configured to determine the predefined position from the reference image data, and to determine a position shift of the cargo by the comparing of the actual position with the predefined position.4. 
The trailer cargo monitoring apparatus according to claim 1 , wherein ...
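The driver-alert condition — actual position differing from the predefined position by more than a predefined threshold — reduces to a displacement check. A trivial sketch with illustrative units and threshold value:

```python
import math

def cargo_shifted(actual, predefined, threshold=0.10):
    """Compare the actual cargo position (from the analyzed image data)
    with the predefined position; True means the driver should be informed.
    Positions are (x, y) in metres; the threshold is illustrative."""
    dx = actual[0] - predefined[0]
    dy = actual[1] - predefined[1]
    return math.hypot(dx, dy) > threshold
```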

Publication date: 01-02-2018

DISPLAY APPARATUS

Number: US20180032830A1
Assignee: LG ELECTRONICS INC.

A display apparatus is disclosed. The display apparatus includes: a display unit configured to display an image; an input unit configured to receive an input from a user; and a controller configured to display a thumbnail image corresponding to a first region of an omnidirectionally captured image and acquire one or more images respectively corresponding to one or more regions which are different from the first region in the omnidirectionally captured image based on a type of the input. 1. A display apparatus comprising:a display unit configured to display an image;an input unit configured to receive an input from a user; anda controller configured to control the display unit to display a thumbnail image corresponding to a first region of an omnidirectionally captured image and to acquire, in response to the input, one or more images respectively corresponding to one or more regions which are different from the first region in the omnidirectionally captured image.2. The display apparatus of claim 1 , wherein the controller is configured to control the display unit to display claim 1 , in response to the input claim 1 , one or more thumbnail images respectively corresponding to the one or more images.3. The display apparatus of claim 2 , wherein the controller is configured to control the display unit to display a gallery including the thumbnail image corresponding to the first region of the omnidirectionally captured image and a thumbnail image of a general image claim 2 , and claim 2 , in response to the input claim 2 , to additionally display the one or more thumbnail images in the gallery.4. 
The display apparatus of claim 2, wherein, if the input is a first type input, the controller is configured to control the display unit to display one or more thumbnail images respectively corresponding to one or more regions in a same horizontal plane of the omnidirectionally captured image as the first region, and, if the input is a second type input, ...

Publication date: 01-02-2018

LIGHT LINE IMAGER-BASED IC TRAY POCKET DETECTION SYSTEM

Number: US20180033137A1
Assignee: Delta Design, Inc.

A system for detecting a status of a pocket of a tray includes a tray having a plurality of pockets that hold an integrated circuit device, a vision mechanism, a light line generator, a reflective device, and a controller. The vision mechanism images the tray along a first optical axis. The light line generator emits a light line along a second optical axis. The reflective device reflects the light line onto the tray along a third optical axis. The third optical axis has a different angle relative to the first optical axis than an angle between the first optical axis and the second optical axis. The controller receives an image of the tray from the vision mechanism, detects the light line reflected onto the tray along the third optical axis, and determines a status of a pocket based on the detected light line along the third optical axis. 1. A system for detecting a status of a pocket of a tray , the system comprising:a tray comprising a plurality of pockets, each of the plurality of pockets being configured to hold an integrated circuit device;a vision mechanism configured to image the tray along a first optical axis;a light line generator configured to emit a light line along a second optical axis;a reflective device configured to reflect the light line onto the tray along a third optical axis, the third optical axis having a different angle relative to the first optical axis than an angle between the first optical axis and the second optical axis; and receive an image of the tray from the vision mechanism;', 'detect the light line reflected onto the tray along the third optical axis; and', 'determine a status of a pocket based on the detected light line along the third optical axis., 'a controller configured to2. The system of claim 1 , wherein the plurality of pockets is arranged in a plurality of rows and a plurality of columns and the light line along the third optical axis is reflected along a row of the plurality of rows.3. 
The system of claim 2 , wherein ...

Publication date: 17-02-2022

AUTOMATED LICENSE PLATE RECOGNITION SYSTEM AND RELATED METHOD

Number: US20220051042A1
Assignee:

Systems, methods, devices and computer readable media for determining a geographical location of a license plate are described herein. A first image of a license plate is acquired by a first image acquisition device of a camera unit and a second image of the license plate is acquired by a second image acquisition device of the camera unit. A three-dimensional position of the license plate relative to the camera unit is determined based on stereoscopic image processing of the first image and the second image. A geographical location of the camera unit is obtained. A geographical location of the license plate is determined from the three-dimensional position of the license plate relative to the camera unit and the geographical location of the camera unit. Other systems, methods, devices and computer readable media for detecting a license plate and identifying a license plate are described herein. 1. An automated license plate recognition system comprising: a camera unit comprising: a first image acquisition device for acquiring at least a first image of a license plate; and a second image acquisition device for acquiring at least a second image of the license plate; at least one processing unit; and at least one non-transitory computer-readable memory having stored thereon program instructions executable by the at least one processing unit for: obtaining the first image and the second image of the license plate; determining a three-dimensional position of the license plate relative to the camera unit based on stereoscopic image processing of the first image and the second image; obtaining a geographical location of the camera unit; determining a geographical location of the license plate from the three-dimensional position of the license plate relative to the camera unit and the geographical location of the camera unit; and outputting the geographical location of the license plate. 2. The ...
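The core geometry — plate depth from stereo disparity, then offsetting the camera unit's geographical location by the plate's relative position — can be sketched for a rectified stereo pair. The pinhole-camera and flat local ENU simplifications, and all parameter names, are assumptions, not the patent's method:

```python
def plate_position(u_left, u_right, v, focal_px, baseline_m, cx, cy):
    """Triangulate the plate from its horizontal pixel coordinates in the
    first (left) and second (right) images of a rectified stereo pair.
    Returns (X, Y, Z) in metres in the camera-unit frame."""
    disparity = u_left - u_right
    z = focal_px * baseline_m / disparity  # depth from disparity
    x = (u_left - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return x, y, z

def plate_geolocation(camera_geo, plate_cam):
    """Offset the camera unit's geographical location (local east, north, up)
    by the plate's relative position; assumes the camera looks due north
    along +Z with image y pointing down (a toy convention for illustration)."""
    east, north, up = camera_geo
    x, y, z = plate_cam
    return east + x, north + z, up - y
```

A production system would instead rotate the camera-frame offset by the unit's measured heading and attitude before adding it to the geographic fix.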

Publication date: 17-02-2022

IMAGE PROCESSING METHOD, IMAGE PROCESSING APPARATUS, AND PROGRAM

Number: US20220051387A1
Author: Kobayashi Hiroyuki
Assignee: NEC Corporation

An image processing apparatus includes an abnormal image generator that generates an abnormal image by inserting abnormal data into a learning normal image based on priorities set for each abnormal data, a priority setting unit that inputs the abnormal image to a model that has learned to eliminate the abnormal data from the abnormal image and newly sets the priority of the abnormal data inserted into the learning normal image based on the difference between an output image outputted from the model and the learning normal image, and a learning unit that learns the model so that the difference between the output image and leaning normal image is reduced. 1. An image processing method comprising:generating an abnormal image by inserting abnormal data into a learning normal image based on priorities set for each abnormal data;inputting the abnormal image to a model that has learned to eliminate the abnormal data from the abnormal image and newly setting the priority of the abnormal data inserted into the learning normal image based on a difference between an output image outputted from the model and the learning normal image; andlearning the model so that the difference between the output image and the learning normal image is reduced.2. The image processing method according to claim 1 , whereinthe abnormal image is generated by more preferentially inserting the abnormal data as the priority of the abnormal data has a larger value, andthe priority of the abnormal data inserted into the learning normal image is newly set such that the priority has a larger value as the difference between the output image and the learning normal image is larger.3. The image processing method according to claim 2 , whereinthe priority of the abnormal data inserted into the learning normal image is newly set by setting a priority factor having a larger value as the difference between the output image and the learning normal image is larger and multiplying the priority by the priority ...
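The priority mechanics described above — insert abnormal data preferentially by priority, then newly set the priority in proportion to the model's output difference — can be sketched as follows; the weighted-sampling call and the priority-factor formula use illustrative constants, not the patent's:

```python
import random

def pick_abnormal(abnormal_pool, priorities, rng=random.Random(0)):
    """More-preferentially pick abnormal data whose priority is larger."""
    return rng.choices(abnormal_pool, weights=priorities, k=1)[0]

def update_priority(priority, diff, base=1.0, gain=2.0):
    """Newly set the priority: the priority factor grows with the difference
    between the model output and the learning normal image, so hard-to-remove
    abnormal data is inserted more often in later epochs (constants are
    illustrative)."""
    factor = base + gain * diff
    return priority * factor
```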

Publication date: 01-02-2018

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

Number: US20180035235A1
Author: Funakoshi Masanobu
Assignee:

An information processing apparatus acquires information about designation of a position of a virtual viewpoint related to a virtual viewpoint image generated based on image capturing by a plurality of cameras, and decides, based on the information, a virtual listening point for generating an audio signal based on sound pickup at a plurality of sound pickup points. 1. An information processing apparatus comprising:an acquisition unit configured to acquire information about designation of a position of a virtual viewpoint related to a virtual viewpoint image generated based on image capturing by a plurality of cameras; anda decision unit configured to decide, based on the information acquired by the acquisition unit, a virtual listening point for generating an audio signal based on sound pickup at a plurality of sound pickup points.2. The apparatus according to claim 1 , wherein the acquisition unit acquires the information about the designation of the position of the virtual viewpoint and information about designation of a line-of-sight direction of the virtual viewpoint.3. The apparatus according to claim 1 , wherein the decision unit decides a position of the virtual listening point based on the information acquired by the acquisition unit.4. The apparatus according to claim 3 , wherein the decision unit decides a listening direction of the virtual listening point based on the information acquired by the acquisition unit.5. The apparatus according to claim 1 , wherein the decision unit decides claim 1 , as the position of the virtual listening point claim 1 , almost the same position as the position of the virtual viewpoint specified from the information acquired by the acquisition unit.6. The apparatus according to claim 1 , wherein the decision unit decides claim 1 , as the position of the virtual listening point claim 1 , a position away claim 1 , in the line-of-sight direction of the virtual viewpoint claim 1 , from the position of the virtual viewpoint ...

Publication date: 31-01-2019

DATA ENTRY FROM SERIES OF IMAGES OF A PATTERNED DOCUMENT

Number: US20190034717A1
Assignee:

The present disclosures provide methods of optical character recognition for a patterned document having one static element and one information field. Systems and methods are disclosed to identify in each of a current and a previous image of a series of images of an original document overlapping with each other, a corresponding plurality of base points, wherein each base point is associated with one textural artifact in each of the current image and the previous image using an OCR text of the current image; identify parameters of a coordinate transformation converting coordinates of the previous image into coordinates of the current image; associate a part of the OCR text with a cluster of a plurality of clusters of symbol sequences; identify a median string representing the cluster of symbol sequences; and produce a resulting OCR text representing at least a portion of the original document. 1. A method , comprising:identifying, by a processing device, in each of a current image and a previous image of a series of images of an original document wherein the current image at least partially overlaps with the previous image, a corresponding plurality of base points, wherein each base point is associated with at least one textural artifact of a plurality of textual artifacts in each of the current image and the previous image using an OCR text of the current image;identifying, using coordinates of matching base points in the current image and the previous image, parameters of a coordinate transformation converting coordinates of the previous image into coordinates of the current image;associating, using the coordinate transformation, at least part of the OCR text with a cluster of a plurality of clusters of symbol sequences, wherein the symbol sequences are produced by processing one or more previously received images of the series of images;identifying, for each cluster, a median string representing the cluster of symbol sequences; andproducing, using the median ...
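The median-string step — choosing a representative for each cluster of OCR'd symbol sequences — is commonly approximated by the set median: the cluster member minimizing total edit distance to the rest. A self-contained sketch (this approximation is an assumption; the patent text shown here does not specify the algorithm):

```python
def edit_distance(a, b):
    """Levenshtein distance between two symbol sequences (rolling-row DP)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

def median_string(cluster):
    """Set-median approximation: the member of the cluster minimizing the
    total edit distance to all symbol sequences in the cluster."""
    return min(cluster,
               key=lambda s: sum(edit_distance(s, t) for t in cluster))
```

Because each cluster collects OCR readings of the same field across overlapping frames, the set median tends to cancel per-frame recognition errors, which is how the series of images improves on any single shot.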

31-01-2019 publication date

OUTPUT CONTROL DEVICE, INFORMATION OUTPUT SYSTEM, OUTPUT CONTROL METHOD, AND PROGRAM

Number: US20190035105A1
Assignee: NEC Corporation

An output control device includes a determination unit configured to determine whether or not a person is a specific person, a processing unit configured to acquire position information of the person, and an output control unit configured to cause a first output device located in the vicinity of the person to output information according to the person on the basis of personal information about the person determined to be the specific person by the determination unit and the position information acquired by the processing unit, wherein the output control unit is configured to cause a second output device to output the information output by the first output device in continuation with the output of the first output device if the person has moved from the vicinity of the first output device to the vicinity of the second output device. 1. An output control device , comprising:a determination unit configured to determine whether or not a person is a specific person;a processing unit configured to acquire position information of the person; andan output control unit configured to cause a first output device located in the vicinity of the person to output information according to the person on a basis of personal information about the person determined to be the specific person by the determination unit and the position information acquired by the processing unit,wherein the output control unit is configured to cause a second output device to output the information output by the first output device in continuation with the output of the first output device if the person has moved from the vicinity of the first output device to the vicinity of the second output device.2. The output control device according to claim 1 ,wherein the information changes over time, andwherein the output control unit is configured to cause the second output device to output the information output by the first output device from a first reproduction position according to a second reproduction ...

31-01-2019 publication date

ANALYSIS APPARATUS, ANALYSIS METHOD, AND STORAGE MEDIUM

Number: US20190035106A1
Assignee: NEC Corporation

Provided is an analysis apparatus including a person extraction unit that analyzes video data to extract a person, a time calculation unit that calculates a continuous appearance time period for which the extracted person has been continuously present in a predetermined area and a reappearance time interval until the extracted person reappears in the predetermined area for each extracted person, and an inference unit that infers a characteristic of the extracted person on the basis of the continuous appearance time period and the reappearance time interval.

1. An analysis apparatus comprising
a processor configured to:
analyze video data to extract a person;
calculate a time period for which the extracted person has been continuously present in a predetermined area and a time interval between a first point in time when the extracted person disappears from the predetermined area and a second point in time when the extracted person reappears in the predetermined area; and
infer a characteristic of the extracted person on the basis of the time period and the time interval.
2. The analysis apparatus according to claim 1, wherein the processor is configured to infer the characteristic of the person on the basis of a relationship between the time period and the time interval.
3. The analysis apparatus according to claim 1, wherein the processor is configured to:
count the number of times each characteristic is inferred, the characteristic being inferred in correspondence with each person; and
calculate reliability of the inferred characteristic on the basis of the number of times each characteristic is inferred, the characteristic being inferred in correspondence with a certain person.
4. The analysis apparatus according to claim 1, wherein the processor is configured to infer the characteristic of the person on the basis of correspondence information in which a pair of the time period and the time interval is associated with a characteristic.
5. The analysis apparatus ...
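Claim 4's "correspondence information in which a pair of the time period and the time interval is associated with a characteristic" can be sketched as a simple rule table. All thresholds and labels below are invented for illustration:

```python
# Correspondence table: ((min_stay_s, max_stay_s), (min_gap_s, max_gap_s), label).
# Ranges and labels are illustrative assumptions, not values from the patent.
CORRESPONDENCE = [
    ((8 * 3600, float("inf")), (0, 24 * 3600), "staff"),
    ((0, 3600), (7 * 24 * 3600, float("inf")), "occasional visitor"),
]

def infer_characteristic(stay_s, gap_s):
    """Look up a (continuous appearance time, reappearance interval) pair."""
    for (s_lo, s_hi), (g_lo, g_hi), label in CORRESPONDENCE:
        if s_lo <= stay_s <= s_hi and g_lo <= gap_s <= g_hi:
            return label
    return "regular visitor"  # fallback when no rule matches
```

Someone present nine hours a day and back the next day would land in the first rule; a half-hour stay repeated only weekly in the second.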

30-01-2020 publication date

INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD

Number: US20200034636A1
Author: NISHIMURA Kazuya
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA

An information processing apparatus includes: a first acquisition unit configured to acquire a plurality of photographed images with photographing location information of the plurality of photographed images; an extraction unit configured to extract, from the photographed images, a plurality of object images containing an object preset as an extraction object; a clustering unit configured to cluster the object images into a plurality of clusters; a second acquisition unit configured to acquire, from map information, at least one name of at least one facility present around each photographing location of the object images; and an application unit configured to apply, to the object images belonging to an intended cluster included in the clusters, a label of a specific name of a specific facility satisfying an application condition among the at least one name of the at least one facility acquired by the second acquisition unit for the object images. 1. An information processing apparatus comprising:a first acquisition unit configured to acquire a plurality of photographed images together with photographing location information of the plurality of photographed images;an extraction unit configured to extract, from the photographed images, a plurality of object images containing an object preset as an extraction object;a clustering unit configured to cluster the object images into a plurality of clusters;a second acquisition unit configured to acquire, from map information, at least one name of at least one facility present around each photographing location of the object images; andan application unit configured to apply, to the object images belonging to an intended cluster included in the clusters, a label of a specific name of a specific facility, the specific name of the specific facility satisfying an application condition among the at least one name of the at least one facility acquired by the second acquisition unit for the object images.2. The information ...
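The "application condition" for deciding which nearby facility name labels a cluster could, for example, be a majority vote over the cluster's photographing locations. A sketch under that assumption (`min_ratio` is an invented parameter):

```python
from collections import Counter

def label_for_cluster(nearby_names_per_image, min_ratio=0.5):
    """Return the facility name seen near at least min_ratio of the
    cluster's photographing locations, or None if no name qualifies.
    One plausible application condition; the threshold is an assumption."""
    counts = Counter(n for names in nearby_names_per_image for n in set(names))
    name, hits = counts.most_common(1)[0]
    return name if hits / len(nearby_names_per_image) >= min_ratio else None
```

Here `nearby_names_per_image` is one list of facility names per object image, taken from the map information around that image's photographing location.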

30-01-2020 publication date

METHOD FOR CLASSIFYING A TRAFFIC SIGN, OR ROAD SIGN, IN AN ENVIRONMENT REGION OF A MOTOR VEHICLE, COMPUTATIONAL APPARATUS, DRIVER ASSISTANCE SYSTEM AND MOTOR VEHICLE

Number: US20200034641A1
Author: Sergeev Nikolai
Assignee: VALEO SCHALTER UND SENSOREN GMBH

The invention relates to a method for classifying a traffic sign in an environment region of a motor vehicle as a traffic sign sticker located on an industrial or commercial vehicle or as a stationary traffic sign. In the method, at least one first image of the environment region, captured by a camera of the motor vehicle, is received and the traffic sign is recognized in the at least one first image, wherein a geometric dimension (D′, D′) of the traffic sign in the first image is determined on the basis of said first image, a first reference dimension (Dmin, Dmax), which is characteristic of a stationary traffic sign, is prescribed for the captured traffic sign, a first position (Pmin, Pmax) of the traffic sign in the environment region is estimated based on the geometric dimension (D′, D′) of the traffic sign in the first image and on the basis of the first reference dimension (Dmin, Dmax), and the traffic sign is classified as the traffic sign sticker or as the stationary traffic sign based on the estimated first position (Pmin, Pmax). The invention additionally relates to a computational apparatus, a driver assistance system and a motor vehicle.

1. A method for classifying a traffic sign in an environment region of a motor vehicle as one of a traffic sign sticker located on an industrial or commercial vehicle or as a stationary traffic sign, the method comprising:
receiving at least one first image of the environment region, captured by a camera of the motor vehicle;
detecting the traffic sign in the at least one first image;
determining a geometric dimension of the traffic sign in the first image on the basis of said first image;
prescribing a first reference dimension characteristic of a stationary traffic sign for the captured traffic sign;
estimating a first position of the traffic sign in the environment region based on the geometric dimension of the traffic sign in the ...
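The position estimate in the method above follows from the pinhole camera relation: an object of real size D appearing d pixels wide at focal length f (expressed in pixels) lies roughly f·D/d metres away. Evaluating this once with the minimum and once with the maximum standard sign size (Dmin, Dmax) yields the position range used for classification. A minimal sketch (the numeric values in the test are invented):

```python
def estimated_distance_m(focal_px, real_size_m, image_size_px):
    # Pinhole model: apparent size scales inversely with distance.
    return focal_px * real_size_m / image_size_px

def distance_range(focal_px, d_min_m, d_max_m, image_size_px):
    """Range of plausible distances assuming the sign's true diameter
    lies between the minimum and maximum standard sizes (Dmin, Dmax)."""
    return (estimated_distance_m(focal_px, d_min_m, image_size_px),
            estimated_distance_m(focal_px, d_max_m, image_size_px))
```

If both plausible distances put the sign closer than any roadside mounting position could be, the sticker hypothesis becomes more likely; the claims make this decision from the estimated position.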

30-01-2020 publication date

FACE DIRECTION ESTIMATION DEVICE AND FACE DIRECTION ESTIMATION METHOD

Number: US20200034981A1
Author: TORAMA Ryosuke
Assignee: Mitsubishi Electric Corporation

A face direction estimation device includes a face image acquiring unit acquiring a shot face image, a face detecting unit detecting the face position in the face image, a face organ detecting unit detecting face organs in the detected face position, a switching determining unit evaluating the detected face organs and switching between first and second face direction estimating methods in accordance with the evaluation, a first face direction estimating unit estimating the face direction in accordance with a positional relationship among the detected face organs when the switching determining unit switches to the first face direction estimating method, and a second face direction estimating unit calculating a face movement amount on the basis of the detected face position and estimating the face direction in accordance with the movement amount when the switching determining unit switches to the second face direction estimating method.

1.-7. (canceled)
8. A face direction estimation device comprising:
a processor to execute a program; and
a memory to store the program which, when executed by the processor, performs processes of:
acquiring a face image generated by shooting an image of a face;
detecting a position of the face from the face image;
detecting face organs from the face image in the position of the face;
performing evaluation of the face organs, and switching between a first face direction estimating method and a second face direction estimating method on a basis of a result of the evaluation;
estimating a direction of the face on a basis of a positional relationship among the face organs when the switching is performed to be switched to the first face direction estimating method; and
calculating an amount of movement of the face on a basis of the position of the face and estimating the face direction on a basis of the amount of movement when the switching is performed to be switched to the second face direction estimating method, wherein the ...

04-02-2021 publication date

MONITORING METHOD, APPARATUS AND SYSTEM, ELECTRONIC DEVICE, AND COMPUTER READABLE STORAGE MEDIUM

Number: US20210034881A1

Embodiments of a surveillance method, apparatus, system, electronic device and computer-readable storage medium are provided. In the surveillance method, a non-visible light image and a target image are obtained, wherein the target image is generated from a visible light signal captured during a capture period of the non-visible light image. The method then detects whether an object is present in the non-visible light image. When an object is detected in the non-visible light image, a second location area of the object in a visible light image is determined according to a first location area of the object in the non-visible light image, such that surveillance of the object is implemented based on the visible light image, wherein the visible light image is an image determined based on the target image. Compared to the relevant art, the embodiments propose to use the result of the object detection performed on the non-visible light image to determine the result of the object detection performed for the visible light image corresponding to the non-visible light image, which guarantees that the object detection result of the visible light image also has a high accuracy and thereby guarantees the effect of intelligent surveillance.

1. A surveillance method, comprising:
obtaining a non-visible light image and a target image, wherein the target image is generated from a visible light signal captured during a capture period of the non-visible light image;
detecting an object in the non-visible light image; and
when an object is detected in the non-visible light image, determining a second location area of the object in a visible light image according to a first location area of the object in the non-visible light image, such that surveillance of the object is implemented based on the visible light image, wherein the visible light image is an image determined based on the target image.
2. The method according to claim 1, wherein the visible light image is the target image; ...
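Determining the second location area from the first, as described above, amounts to mapping the rectangle through the registration between the two sensors. A minimal sketch, assuming the simplest case of a pure scale-and-shift registration (a real system may need a full homography):

```python
def map_rect(rect, scale_x, scale_y, dx=0.0, dy=0.0):
    """Map (x, y, w, h) from the non-visible-light frame into the
    visible-light frame, assuming a scale-and-shift registration."""
    x, y, w, h = rect
    return (x * scale_x + dx, y * scale_y + dy, w * scale_x, h * scale_y)
```

For axis-aligned rectangles only the corner position picks up the offset; width and height scale.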

04-02-2021 publication date

METHODS AND APPARATUS TO COUNT PEOPLE IN IMAGES

Number: US20210034883A1

Example apparatus disclosed herein include a memory and a processor to execute instructions to identify a first set of face rectangles and a second set of face rectangles in a frame pair of image data corresponding to a media environment, the first set of face rectangles corresponding to a first image sensor and the second set of face rectangles corresponding to a second image sensor, remove first face rectangles from the first set of face rectangles and the second set of face rectangles when the first face rectangles are determined to correspond to false positive face detections, group second face rectangles that remain in the first set of face rectangles and the second set of face rectangles after removal of the first face rectangles to form groups of face rectangles, and generate a count of people identified in the media environment based on a number of the groups.

1. An apparatus comprising:
memory; and
a processor to execute instructions to:
identify a first set of face rectangles and a second set of face rectangles in a frame pair of image data corresponding to a media environment, the first set of face rectangles corresponding to a first image sensor and the second set of face rectangles corresponding to a second image sensor;
remove first ones of the face rectangles from the first set of face rectangles and the second set of face rectangles when the first ones of the face rectangles are determined to correspond to false positive face detections;
group second ones of the face rectangles that remain in the first set of face rectangles and the second set of face rectangles after removal of the first ones of the face rectangles to form groups of face rectangles; and
generate a count of people identified in the media environment based on a number of the groups.
2. The apparatus of claim 1, wherein the processor is to remove an overlap face rectangle from the second set of face rectangles when the overlap face rectangle is located in an overlap region of ...
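The "group face rectangles, count groups" step can be sketched with a greedy IoU-based grouping; the threshold and the greedy (non-transitive-closure) strategy are assumptions, not taken from the claims:

```python
def iou(a, b):
    # a, b: rectangles as (x1, y1, x2, y2); intersection-over-union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def count_people(rects, threshold=0.5):
    """Greedily merge rectangles whose IoU with any group member exceeds
    the threshold; the number of groups is the people count."""
    groups = []
    for r in rects:
        for g in groups:
            if any(iou(r, m) >= threshold for m in g):
                g.append(r)
                break
        else:
            groups.append([r])
    return len(groups)
```

Rectangles of the same person seen by both sensors overlap heavily and collapse into one group, while distinct people stay in separate groups.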

08-02-2018 publication date

SYSTEMS AND METHODS FOR MONITORING UNMANNED VEHICLES

Number: US20180039838A1

Aspects relate to methods, systems, and devices for monitoring unmanned vehicles. Methods include receiving, by a processor, a captured image of an observed unmanned vehicle, the captured image including measured data, comparing the measured data with an unmanned vehicle database, determining a status of the observed unmanned vehicle, and generating an indicator regarding the status of the observed unmanned vehicle.

1. A computer implemented method for monitoring unmanned vehicles, the method comprising:
receiving, by a processor, a captured image of an observed unmanned vehicle, the captured image including measured data;
comparing the measured data with an unmanned vehicle database;
determining a status of the observed unmanned vehicle; and
generating an indicator regarding the status of the observed unmanned vehicle.
2. The computer implemented method of claim 1, further comprising capturing the captured image of the observed unmanned vehicle with an image capture device.
3. The computer implemented method of claim 1, wherein the unmanned vehicle database comprises unmanned vehicle identification information, registration information, registered routing information, ownership and/or operator information.
4. The computer implemented method of claim 1, wherein the measured data comprises a measured distance and a measured angle.
5. The computer implemented method of claim 1, wherein, when the status of the observed unmanned vehicle is that the observed unmanned vehicle is unauthorized, the method further comprises generating a report of an unauthorized unmanned vehicle.
6. The computer implemented method of claim 1, wherein, when the status of the observed unmanned vehicle is that the observed unmanned vehicle is authorized, the method further comprises generating a notification that the observed unmanned vehicle is authorized.
7. The computer implemented method of claim 1, wherein the processor is a ...
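The compare-and-determine steps reduce to a lookup of the observed vehicle against the database. A minimal sketch; the database schema, vehicle IDs, and indicator messages are invented for illustration:

```python
# Hypothetical registration database keyed by a vehicle identifier
# read from the captured image (e.g. a marking on the airframe).
UV_DATABASE = {"DRN-001": {"registered": True, "operator": "Acme Deliveries"}}

def vehicle_status(vehicle_id, measured_distance_m, measured_angle_deg):
    """Return (status, indicator) for an observed unmanned vehicle."""
    record = UV_DATABASE.get(vehicle_id)
    if record and record["registered"]:
        return "authorized", f"{vehicle_id} operated by {record['operator']}"
    return ("unauthorized",
            f"unregistered vehicle at {measured_distance_m} m, "
            f"{measured_angle_deg} deg")
```

The unauthorized branch corresponds to the report of claim 5, the authorized branch to the notification of claim 6.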

04-02-2021 publication date

IMAGE CAPTURE DEVICE WITH EXTENDED DEPTH OF FIELD

Number: US20210037187A1

An image capture device having a first integrated sensor lens assembly (ISLA), a second ISLA, and an image processor is disclosed. The first and second ISLAs may each include a respective optical element, and the two optical elements have different depths of field. The first and second ISLAs may each include a respective image sensor configured to capture respective images. The image processor may be electrically coupled to the first ISLA and the second ISLA. The image processor may be configured to obtain a focused image based on a first image and a second image. The focused image may have an extended depth of field. The extended depth of field may be based on the depth of field of each respective optical element.

1. An image capture device comprising:
a first integrated sensor lens assembly (ISLA) comprising:
a first optical element; and
a first image sensor of a first type configured to capture a first image via the first optical element, wherein the first image sensor has a first depth of field based on a distance between the first optical element and the first image sensor;
a second ISLA comprising:
a second optical element; and
a second image sensor of a second type, wherein the second type is different than the first type, the second image sensor configured to capture a second image via the second optical element, wherein the second image sensor has a second depth of field based on a distance between the second optical element and the second optical sensor, wherein the second depth of field is less than the first depth of field; and
an image processor electrically coupled to the first ISLA and the second ISLA, the image processor configured to obtain a focused image in a low light condition based on the first image and the second image, wherein the focused image has an extended depth of field based on the first depth of field and the second depth of field.
2. The image capture device of claim 1, wherein the first depth of field is from about 0.6 m to infinity.
3. The image ...
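Obtaining a focused image with extended depth of field from two differently focused images is classically done by per-pixel (or per-region) sharpness selection, i.e. focus stacking. A deliberately crude sketch on grayscale images stored as 2D lists (a real pipeline would align the images and use a better sharpness measure):

```python
def local_contrast(img, x, y):
    # Absolute difference from the 4-neighbour mean: a crude sharpness cue.
    h, w = len(img), len(img[0])
    nbrs = [img[j][i] for j, i in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
            if 0 <= j < h and 0 <= i < w]
    return abs(img[y][x] - sum(nbrs) / len(nbrs))

def fuse(img_a, img_b):
    """Per-pixel pick of the sharper of two same-size grayscale images."""
    return [[img_a[y][x]
             if local_contrast(img_a, x, y) >= local_contrast(img_b, x, y)
             else img_b[y][x]
             for x in range(len(img_a[0]))]
            for y in range(len(img_a))]
```

Pixels with more local structure (in-focus regions) win the comparison, so the fused result keeps the in-focus content of both sources.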

24-02-2022 publication date

SERVER, ELECTRONIC DEVICE, AND CONTROL METHODS THEREFOR

Number: US20220058375A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

A server and an electronic device for identifying a fake image are provided. The server includes a memory storing an artificial intelligence model trained to identify a fake image; and a processor connected to the memory, and configured to identify whether an image is the fake image by inputting the image to the artificial intelligence model, wherein the artificial intelligence model is a model trained based on an original image and a sample fake image, each including information about a landmark of a face area.

1. A server comprising:
a memory storing an artificial intelligence model trained to identify a fake image; and
a processor connected to the memory configured to identify whether an image is the fake image by inputting the image to the artificial intelligence model,
wherein the artificial intelligence model is a model trained based on an original image and a sample fake image, each including information about a landmark of a face area.
2. The server of claim 1, wherein the artificial intelligence model is trained based on the original image in which a first pixel value of a first pixel corresponding to a first landmark in a first face area included in the original image is adjusted to a first predetermined pixel value and based on the sample fake image in which a second pixel value of a second pixel corresponding to a second landmark in a second face area included in the sample fake image is adjusted to a second predetermined pixel value.
3. The server of claim 1, wherein the artificial intelligence model is trained based on at least one of a range of a color value of the face area included in each of a plurality of original images and a plurality of sample fake images, and a difference in a brightness value between a forehead area of the face area and a cheek area of the face area.
4. The server of claim 1, wherein the artificial intelligence model is one of a plurality of artificial intelligence models stored in the memory,
wherein each of the ...

24-02-2022 publication date

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

Number: US20220058691A1
Authors: ONOUE Yohji, Saiki Ryo

An information processing apparatus for distributing distribution information to a user apparatus, the information processing apparatus executing an information processing method comprising: specifying, based on image capturing data acquired by an image capturing device of a vehicle, a state of a person associated with the user apparatus at the time of image capturing; and distributing, to the user apparatus, distribution information selected from a plurality of pieces of distribution information based on the specified state of the person.

1. An information processing apparatus for distributing distribution information to a user apparatus, the information processing apparatus executing an information processing method comprising:
specifying, based on image capturing data acquired by an image capturing device of a vehicle, a state of a person associated with the user apparatus at the time of image capturing; and
distributing, to the user apparatus, distribution information selected from a plurality of pieces of distribution information based on the specified state of the person.
2. The apparatus according to claim 1, wherein the information processing method further comprises
specifying a vehicle to be instructed to perform image capturing of the person associated with the user apparatus, and
instructing the specified vehicle to perform image capturing.
3. The apparatus according to claim 2, wherein
the information processing method further comprises acquiring information concerning a network to which the vehicle and the user apparatus are connected, and
in the instructing, the vehicle connected to the same network as the network of the user apparatus is instructed to perform image capturing.
4. The apparatus according to claim 3, wherein the state of the person includes at least one of an operation executed by the person, a facial expression of the person, clothing of the person, and the number of people acting with the person.
5. The apparatus ...

06-02-2020 publication date

INFORMATION PROCESSING METHOD, INFORMATION PROCESSING APPARATUS, AND RECORDING MEDIUM

Number: US20200042803A1
Author: YAMAGUCHI Takuya

An information processing method includes: acquiring a first object detection result obtained by use of an object detection model to which sensing data from a first sensor is input, and a second object detection result obtained by use of a second sensor; determining a degree of agreement between the first object detection result and the second object detection result in a specific region in a sensing space of the first sensor and the second sensor; and selecting the sensing data as learning data for the object detection model, according to the degree of agreement obtained in the determining.

1. An information processing method, comprising:
acquiring a first object detection result obtained by use of an object detection model to which sensing data from a first sensor is input, and a second object detection result obtained by use of a second sensor;
determining a degree of agreement between the first object detection result and the second object detection result in a specific region in a sensing space of the first sensor and the second sensor; and
selecting the sensing data as learning data for the object detection model, according to the degree of agreement obtained in the determining.
2. The information processing method according to claim 1, further comprising:
selecting the second object detection result as correct data for learning the object detection model, according to the degree of agreement.
3. The information processing method according to claim 1, wherein
the specific region is further a region that is in accordance with an object to be detected in the object detection model.
4. The information processing method according to claim 3, wherein
the object to be detected is a vehicle, and
the region that is in accordance with the object to be detected is a region corresponding to a road in the sensing space.
5. The information processing method according to claim 3, wherein
the object to be detected is a person, and
the region that is in accordance with the object to ...
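One simple way to realize the "degree of agreement" in the specific region is the fraction of first-sensor detections matched by a second-sensor detection with sufficient IoU; frames with low agreement could then be kept as learning data. This is a sketch of one plausible measure, not the patent's exact definition:

```python
def iou(a, b):
    # Rectangles as (x1, y1, x2, y2); intersection-over-union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def agreement(dets_a, dets_b, iou_threshold=0.5):
    """Fraction of model detections (sensor A) matched by a reference
    detection (sensor B) above the IoU threshold, within the region."""
    if not dets_a:
        return 1.0  # nothing to disagree about
    matched = sum(1 for a in dets_a
                  if any(iou(a, b) >= iou_threshold for b in dets_b))
    return matched / len(dets_a)
```

A frame with `agreement(...) < 0.5`, say, flags a disagreement between the model and the second sensor and is a natural candidate for retraining data, with the second result as the correct data of claim 2.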

15-02-2018 publication date

ELEMENT PROVIDED WITH PORTION FOR POSITION DETERMINATION AND MEASURING METHOD

Number: US20180045603A1

A method for measuring a position of a target surface provided with portions for position determination thereon, wherein a diffuse reflectance of the target surface is 0.1% or less, and a diffuse reflectance of the portions for position determination is 5% or more, and wherein the target surface is configured such that a tangential plane at any point on the target surface where each of the portions for position determination is installed forms an arbitrary angle between 15 degrees and 75 degrees inclusive with a certain direction, the method including the steps of illuminating the target surface with parallel light in the certain direction; determining positions of border lines of the plural portions for position determination from an image of the target surface; and determining the position of the target surface from the positions of the border lines of the plural portions for position determination. 1. A method for measuring a position of a target surface provided with portions for position determination thereon , wherein a diffuse reflectance of the target surface is 0.1% or less , and a diffuse reflectance of the portions for position determination is 5% or more , andwherein the target surface is configured such that a normal to a tangential plane at any point on the target surface where each of the portions for position determination is installed forms an arbitrary angle between 15 degrees and 75 degrees inclusive with a certain direction,the method including the steps of:illuminating the target surface with parallel light in the certain direction;determining positions of border lines of the plural portions for position determination from an image of the target surface; anddetermining the position of the target surface from the positions of the border lines of the plural portions for position determination.2. A method according to claim 1 , wherein the target surface is a surface of an element provided with a first plane and a second plane forming an angle ...

07-02-2019 publication date

ELECTRONIC DEVICE AND METHOD FOR CONTROLLING OF THE SAME

Number: US20190045135A1
Assignee: LG ELECTRONICS INC.

The present invention is related to an electronic device and a method for controlling the electronic device. According to the present invention, if a Point Of Interest (POI) to be enlarged is selected from a camera image, resizing is performed with a predetermined resizing speed while the POI is being zoom-processed to be displayed at the center of a screen, and thereby an arbitrary area may be enlarged or decreased, providing an effect of smoothly enlarging or decreasing the POI.

1. An electronic device, comprising:
a first camera;
a display; and
a controller configured to:
cause the display to display a preview image obtained from the first camera; and
cause the display to display a zoom image of the preview image when a point of interest (POI) is selected from the preview image, wherein the zoom image is obtained by resizing the POI included in the preview image in a stepwise manner according to a predetermined resizing speed and with respect to a resizing area associated with the POI, wherein the resizing speed is changed according to an input.
2. The electronic device of claim 1, wherein the controller is further configured to:
select the POI in response to a touch point input received at the display while the preview image is displayed; and
cause the display to display a guide indicating the selected POI on the preview image.
3. The electronic device of claim 2, wherein the POI comprises a subject selected according to the touch point input.
4. The electronic device of claim 3, wherein the selected subject comprises a moving object, and wherein the controller is further configured to change position of the POI by tracking movement of the moving object.
5. The electronic device of claim 4, wherein the controller is further configured to change the resizing speed according to speed of the movement of the moving object.
6. The electronic device of claim 4, wherein the controller is further configured to recalculate the resizing area according to ...
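The stepwise resizing toward a screen-centred POI can be sketched as linear interpolation of both the crop centre and the zoom factor; here the number of steps stands in for the "resizing speed" (all parameter choices are assumptions):

```python
def zoom_steps(frame_w, frame_h, poi_cx, poi_cy, target_scale, steps):
    """Return successive crop rectangles (cx, cy, w, h) that move the POI
    to the frame centre while interpolating the zoom factor per step."""
    out = []
    for i in range(1, steps + 1):
        t = i / steps                        # progress along the zoom
        scale = 1 + (target_scale - 1) * t   # interpolated zoom factor
        cx = frame_w / 2 + (poi_cx - frame_w / 2) * t
        cy = frame_h / 2 + (poi_cy - frame_h / 2) * t
        out.append((cx, cy, frame_w / scale, frame_h / scale))
    return out
```

Rendering each crop scaled back to the full frame produces the smooth stepwise enlargement the abstract describes; fewer steps means a faster resizing speed.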

18-02-2021 publication date

IMAGE CAPTURE APPARATUS AND CONTROL METHOD THEREOF

Number: US20210051265A1
Author: Kimura Masafumi

An image capture apparatus detects a subject in a captured image. The image capture apparatus further recognizes its user based on an eyeball image of the user. The image capture apparatus then selects a main subject area from among the detected subject areas, based on information regarding subjects captured in the past and stored in association with the recognized user.

1. An image capture apparatus comprising:
one or more processors which, when executing a program stored in a memory, function as:
a subject detection unit configured to perform subject detection processing on a captured image;
a recognition unit configured to recognize a user of the image capture apparatus based on an eyeball image of the user; and
a selection unit configured to select a main subject area from subject areas detected by the subject detection unit, based on information stored in association with the user recognized by the recognition unit, out of information regarding subjects captured in the past and stored in association with users.
2. The image capture apparatus according to claim 1, wherein the selection unit is configured to select the main subject area out of the subject areas detected by the subject detection unit and corresponding to the information regarding subjects captured in the past.
3. The image capture apparatus according to claim 1, wherein the one or more processors further function as a line of sight detection unit configured to detect a position in the image at which a user gazes based on the eyeball image.
4. The image capture apparatus according to claim 3, wherein the selection unit is configured to select the main subject area by considering a position detected by the line of sight detection unit in addition to the information regarding subjects captured in the past.
5. The image capture apparatus according to claim 3, wherein the selection unit is configured to select the main subject area, based on the information regarding subjects captured in the past ...

14-02-2019 publication date

AREA OCCUPANCY DETERMINING DEVICE

Number: US20190047439A1
Assignee:

Various aspects of this disclosure provide an area occupancy determining device. The device may include a memory configured to store at least one occupancy grid of a predetermined region, and a processor. The processor may be configured to generate the occupancy grid of the predetermined region. The occupancy grid includes a plurality of grid cells, each grid cell framed by respective grid cell frame lines. At least some of the grid cells have been assigned information about the occupancy of the region represented by the respective grid cell. The processor may further be configured to dynamically update the occupancy grid, thereby successively generating a plurality of updated occupancy grids. Each updated occupancy grid is moved relative to the previous occupancy grid such that an origin coordinate of the updated occupancy grid is positioned on a contact point of grid cell frame lines of adjacent grid cells. 1. An area occupancy determining device, the device comprising: a memory configured to store at least one occupancy grid of a predetermined region; and a processor configured to: generate the occupancy grid of the predetermined region, the occupancy grid comprising a plurality of grid cells, each grid cell framed by respective grid cell frame lines, and at least some of the grid cells having been assigned information about the occupancy of the region represented by the respective grid cell; and dynamically update the occupancy grid to successively generate a plurality of updated occupancy grids, wherein each updated occupancy grid is moved relative to the previous occupancy grid such that an origin coordinate of the updated occupancy grid is positioned on a contact point of grid cell frame lines of adjacent grid cells. 2. The device of claim 1, wherein the grid cells of the occupancy grid have substantially the same size. 3. The device of claim 1, wherein the grid cells of the occupancy grid have a cell size in the range from about 5 cm by 5 cm to ...
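The moving-origin update in claim 1 can be illustrated with a short sketch, assuming a square grid, a fixed cell size, and an "unknown" fill value of 0.5 for newly exposed cells; the names `snap_origin` and `shift_grid` are hypothetical, not from the patent.

```python
def snap_origin(x, y, cell_size):
    """Snap a world coordinate onto the nearest contact point of grid
    cell frame lines (a cell corner), per the claimed origin placement."""
    return (round(x / cell_size) * cell_size,
            round(y / cell_size) * cell_size)

def shift_grid(grid, dx_cells, dy_cells, unknown=0.5):
    """Move the occupancy grid by whole cells; newly exposed cells
    receive an 'unknown' occupancy value."""
    n = len(grid)
    shifted = [[unknown] * n for _ in range(n)]
    for r in range(n):
        for c in range(n):
            sr, sc = r + dy_cells, c + dx_cells
            if 0 <= sr < n and 0 <= sc < n:
                shifted[r][c] = grid[sr][sc]
    return shifted
```

Because the origin always lands on a cell corner, the update is a whole-cell shift and no cell contents need to be resampled.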

03-03-2022 publication date

METHOD AND APPARATUS FOR EXTRACTING GEOGRAPHIC LOCATION POINTS SPATIAL RELATIONSHIP

Number: US20220067372A1
Assignee:

The present application discloses a method and apparatus for extracting a geographic location point spatial relationship, and relates to the field of big data technologies. A specific implementation solution is as follows: determining geographic location point pairs included in real-scene images by performing signboard recognition on the real-scene images collected by terminal devices; acquiring at least two real-scene images collected by the same terminal device and including the same geographic location point pair; and determining a spatial relationship of the same geographic location point pair by using shooting parameters of the at least two real-scene images. The geographic location point spatial relationship extracted through the present application has higher accuracy and a higher coverage rate. 1. A method for extracting a geographic location point spatial relationship, comprising: determining geographic location point pairs comprised in real-scene images by performing signboard recognition on the real-scene images collected by terminal devices; acquiring at least two real-scene images collected by the same terminal device and comprising the same geographic location point pair; and determining a spatial relationship of the same geographic location point pair by using shooting parameters of the at least two real-scene images. 2. The method according to claim 1, wherein the determining geographic location point pairs comprised in real-scene images by performing signboard recognition on the real-scene images collected by terminal devices comprises: acquiring the real-scene images collected by the terminal devices; performing signboard discrimination on the real-scene images to screen out real-scene images comprising at least two signboards; and performing signboard text recognition on the real-scene images comprising at least two signboards to determine geographic location point pairs comprised in the real-scene images. 3. The method according to claim 2, wherein before ...

03-03-2022 publication date

VEHICLE EXTERNAL ENVIRONMENT RECOGNITION APPARATUS

Number: US20220067393A1
Author: OKUBO Toshimi
Assignee:

A vehicle external environment recognition apparatus to be applied to a vehicle includes one or more processors and one or more memories configured to be coupled to the one or more processors. The one or more processors are configured to: calculate three-dimensional positions of respective blocks in a captured image; group the blocks to put any two or more of the blocks that have the three-dimensional positions differing from each other within a predetermined range in a group and thereby determine three-dimensional objects; identify each of a preceding vehicle of the vehicle and a sidewall on the basis of the determined three-dimensional objects; and track the preceding vehicle. The one or more processors are configured to determine, upon tracking the preceding vehicle, whether the preceding vehicle to track is to be hidden by the sidewall on the basis of a border line between a blind region and a viewable region.

03-03-2022 publication date

MASK WEARING STATUS ALARMING METHOD, MOBILE DEVICE AND COMPUTER READABLE STORAGE MEDIUM

Number: US20220068109A1
Assignee:

A mask wearing status alarming method, a mobile device, and a computer readable storage medium are provided. The method includes: performing a face detection on an image to determine face areas each including a target determined as a face; determining a mask wearing status of the target in each face area; confirming the mask wearing status of the target in each face area using a trained face confirmation model to remove the face areas comprising the target being mistakenly determined as the face and determining a face pose in each of the remaining face areas to remove the face areas with the face pose not meeting a preset condition, in response to determining the mask wearing status as a not-masked-well status or a unmasked status; and releasing an alert corresponding to the mask wearing status of the target in each of the remaining face areas.

25-02-2021 publication date

INTERACTIVE ATTRACTION SYSTEM AND METHOD FOR OBJECT AND USER ASSOCIATION

Number: US20210055793A1
Author: LIN YU-JEN
Assignee:

A system of an amusement park attraction includes an optical sensor configured to detect light and provide optical data based on the detected light, and a controller having circuitry communicatively coupled to the optical sensor. The controller is configured to receive the optical data, process the optical data to detect a first movement of a user and a second movement of a handheld or wearable object, detect a correlation between the first movement and the second movement, and associate the handheld or wearable object with the user based on the correlation. 1. A system of an amusement park attraction, the system comprising: an optical sensor configured to detect light and provide optical data based on the detected light; and a controller comprising circuitry communicatively coupled to the optical sensor and configured to: receive the optical data; process the optical data to detect movement of a plurality of users and movement of a plurality of handheld or wearable objects; identify a first handheld or wearable object and a second handheld or wearable object based on the movement of the plurality of handheld or wearable objects; compare a first movement of the first handheld or wearable object with the movement of the plurality of users to detect a first correlation between the first movement and a second movement of a first user of the plurality of users; compare a third movement of the second handheld or wearable object with the movement of the plurality of users to detect a second correlation between the third movement and a fourth movement of a second user of the plurality of users; determine the first handheld or wearable object is possessed by the first user based on the first correlation; and determine the second handheld or wearable object is possessed by the second user based on the second correlation. 2. The system of claim 1, wherein the optical sensor and the controller are configured to cooperate to detect the plurality of handheld or ...
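The movement correlation that drives the object-user association can be sketched roughly as follows. Pearson correlation over sampled speed signals and the 0.8 threshold are illustrative assumptions, not the patent's specific technique, and the names `pearson` and `associate` are hypothetical.

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equal-length signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def associate(objects, users, threshold=0.8):
    """Pair each handheld object with the user whose motion signal
    correlates best, if that correlation exceeds a threshold."""
    pairs = {}
    for oid, omove in objects.items():
        best = max(users, key=lambda uid: pearson(omove, users[uid]))
        if pearson(omove, users[best]) >= threshold:
            pairs[oid] = best
    return pairs
```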

14-02-2019 publication date

A SYSTEM AND METHOD FOR DETECTING A PERSON INTERESTED IN A TARGET

Number: US20190050630A1

A system for detecting a person being interested in a target includes a camera configured to capture an image of a location in front of a target area, a memory configured to store instructions and position information with respect to the target area, and a processor that executes the instructions to perform operations including receiving the image from the camera, detecting a face of a person within the image, detecting a face direction in accordance with the detected face, and determining whether the person looks at the target area in accordance with the detected face direction and the stored position information with respect to the target area. 1. A system for detecting a person being interested in a target, comprising: a camera configured to capture an image of a location in front of a target area; a memory configured to store instructions and position information including an outline of the target area; and a processor that executes the instructions to perform operations including: receiving the image from the camera; detecting a face of a person within the image; detecting a face direction in accordance with the detected face; determining whether the person looks at the target area in accordance with the detected face direction and the stored position information with respect to the target area; and determining that the person looks at the target area when the detected face direction is positioned within the outline position of the target area. 2. (canceled) 3. The system according to claim 1, wherein the operations further include: calculating a line of sight, corresponding to the face direction, of the person by performing image processing on the detected face. 4. The camera system according to claim 3, wherein the operations further include: calculating an angle between the detected line of sight and a base line when viewed from a predetermined direction; calculating two reference angles based on the base line and an edge of the target area defined by ...

03-03-2022 publication date

CONTROL APPARATUSES, PHOTOGRAPHING APPARATUSES, MOVABLE OBJECTS, CONTROL METHODS, AND PROGRAMS

Number: US20220070362A1
Assignee: SZ DJI Technology Co., Ltd.

A control apparatus for controlling a photographing system. The photographing system includes: a ranging sensor to measure a distance of each of a plurality of to-be-photographed objects, each object associated with one of a plurality of regions, the plurality of to-be-photographed objects including a first target object, the first target object being associated with a first region and a first distance; and a photographing apparatus. The control apparatus is configured to: cause the photographing apparatus to perform focus control on the first target object based on the first distance; control the photographing apparatus to obtain a plurality of images; determine, based on the plurality of images, whether a second target object is in the first region, the second target object being a moving object in the first region; and cause the photographing apparatus to perform different focus controls on the first region based on whether or not the second target object is present in the first region.

25-02-2021 publication date

SYSTEMS AND METHODS FOR SELF-LEARNING A FLOORPLAN LAYOUT USING A CAMERA SYSTEM

Number: US20210056309A1
Author: Mathwig Jeffrey Dean
Assignee:

An embodiment of the present invention is directed to a system and method for self-learning a floorplan layout. An embodiment of the present invention is directed to implementing a camera system in a location to learn, create and maintain changes to a current floor plan. The camera system may include multiple cameras positioned at strategic locations throughout a defined area. An embodiment of the present invention may determine the direction and velocity of an individual's path of travel. Over a period of time, an embodiment of the present invention may systematically create, maintain and update the floor plan. The location may include various areas, including branch locations, banks, merchants, restaurants, office space, entrance ways (e.g., lobbies), common areas, defined areas within a public space or an outdoor space, etc. 1. A method for self-learning a floorplan layout, the method comprising the steps of: in an information processing apparatus comprising at least one computer processor: receiving, from a first image capture device located at a facility, a first image; receiving, from a second image capture device located at the facility, a second image; identifying one or more stationary objects located at the facility; recognizing, in the first image and the second image, a mobile entity relative to the one or more stationary objects with a set of known attributes; determining vector data associated with the mobile entity based on the first image and the second image; and responsive to the vector data, automatically generating floor layout data identifying placement of the one or more stationary objects located at the facility. 2. The method of claim 1, wherein the first image comprises a first plurality of images or a first video. 3. The method of claim 1, wherein the second image comprises a second plurality of images or a second video. 4. The method of claim 1, wherein the mobile entity is an individual at the facility. 5. The method of claim 1, wherein ...

14-02-2019 publication date

NEIGHBORHOOD ALERT MODE FOR TRIGGERING MULTI-DEVICE RECORDING, MULTI-CAMERA MOTION TRACKING, AND MULTI-CAMERA EVENT STITCHING FOR AUDIO/VIDEO RECORDING AND COMMUNICATION DEVICES

Number: US20190051143A9
Assignee:

The present embodiments relate to improvements to audio/video (A/V) recording and communication devices, including improved approaches to using a neighborhood alert mode for triggering multi-device recording, to a multi-camera motion tracking process, and to a multi-camera event stitching process to create a series of “storyboard” images for activity taking place across the fields of view of multiple cameras, within a predetermined time period, for the A/V recording and communication devices. 1. A method for a video security system installed at a property , the video security system comprising a first camera installed at a first location at the property and a second camera installed at a second location at the property , wherein the video security system is associated with a client device , the method comprising:receiving first image data from the first camera of a first source of motion that is within a field of view of the first camera, wherein the first image data is associated with a first time stamp indicating the time when the first image data was recorded;receiving second image data from the second camera of a second source of motion that is within a field of view of the second camera, wherein the second image data is associated with a second time stamp indicating the time when the second image data was recorded;determining whether the second time stamp is within a predetermined amount of time after the first time stamp;when the second time stamp is within the predetermined amount of time after the first time stamp, creating composite image data comprising the first image data followed by the second image data; andtransmitting the composite image data to the client device.2. The method of claim 1 , wherein the predetermined amount of time is three minutes.3. The method of claim 1 , wherein the predetermined amount of time depends on a distance between the first camera and the second camera.4. 
The method of claim 1 , wherein the predetermined amount of time ...
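The timestamp-window rule in claim 1 can be sketched as follows. The clip representation and the three-minute default mirror the claims, but chaining more than two clips into one composite event is an assumption of this sketch, and `stitch_events` is a hypothetical name.

```python
def stitch_events(clips, window_s=180):
    """Chain clips from different cameras into composite events when
    each clip starts within `window_s` seconds (three minutes by
    default, per claim 2) of the previous one.
    Each clip is (timestamp_seconds, camera_id)."""
    clips = sorted(clips)
    events, current = [], [clips[0]]
    for clip in clips[1:]:
        if clip[0] - current[-1][0] <= window_s:
            current.append(clip)
        else:
            events.append(current)
            current = [clip]
    events.append(current)
    return events
```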

25-02-2021 publication date

VIDEO OBJECT DETECTION

Number: US20210056710A1
Assignee:

A method for video object detection includes detecting an object in a first video frame, and selecting a first interest point and a second interest point of the object. The first interest point is in a first region of interest located at a first corner of a box surrounding the object. The second interest point is in a second region of interest located at a second corner of the box. The second corner is diagonally opposite the first corner. A first optical flow of the first interest point and a second optical flow of the second interest point are determined. A location of the object in a second video frame is estimated by determining, in the second video frame, a location of the first interest point based on the first optical flow and a location of the second interest point based on the second optical flow. 1. A video processor, comprising: an object detection circuit configured to detect an object in a first video frame; and an object propagation circuit configured to: select a first interest point of the object from a first set of interest points based on a first optical flow associated with the first interest point of the object, the first interest point disposed in a first region of interest located at a first corner of a box surrounding the object; select a second interest point of the object from a second set of interest points based on a second optical flow associated with the second interest point of the object, the second interest point disposed in a second region of interest located at a second corner of the box surrounding the object, wherein the second corner is diagonally opposite the first corner; and estimate a location of the object in a second video frame by determining a location of the first interest point in the second video frame based on the first optical flow and determining a location of the second interest point in the second video frame based on the second optical flow. 2. The video processor of claim 1, wherein the object propagation ...
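Propagating a box from the optical flow of two diagonally opposite corner points, as claimed, can be sketched in a few lines. The flat tuple representation and the function name are illustrative assumptions; real flow vectors would come from a sparse optical-flow tracker.

```python
def propagate_box(box, flow_tl, flow_br):
    """Estimate an object's box in the next frame from the optical
    flow of two diagonally opposite corner interest points.
    box = (x1, y1, x2, y2); flows are (dx, dy) displacements."""
    x1, y1, x2, y2 = box
    return (x1 + flow_tl[0], y1 + flow_tl[1],
            x2 + flow_br[0], y2 + flow_br[1])
```

Tracking only two diagonal corners is enough to recover both translation and change of scale of an axis-aligned box.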

13-02-2020 publication date

METHOD AND APPARATUS FOR OBTAINING VEHICLE LOSS ASSESSMENT IMAGE, SERVER AND TERMINAL DEVICE

Number: US20200050867A1
Assignee:

Embodiments of the application provide a method, apparatus, server, and terminal device for obtaining a vehicle loss assessment image. A computer-implemented method for obtaining a vehicle loss assessment image comprises: receiving video data of a damaged vehicle; detecting one or more video images in the video data to identify a damaged portion in the one or more video images; classifying the one or more video images into one or more candidate image classification sets of the damaged portion based on the identified damaged portion; and selecting a vehicle loss assessment image from the one or more candidate image classification sets according to a screening condition. 1. A computer-implemented method for obtaining a vehicle loss assessment image comprising:receiving video data of a damaged vehicle;detecting one or more video images in the video data to identify a damaged portion in the one or more video images;classifying the one or more video images into one or more candidate image classification sets of the damaged portion based on the identified damaged portion; andselecting a vehicle loss assessment image from the one or more candidate image classification sets according to a screening condition.2. The computer-implemented method for obtaining a vehicle loss assessment image according to claim 1 , wherein the one or more determined candidate image classification sets comprises:a close-up image set including one or more video images displaying the damaged portion and a component image set including one or more video images displaying a vehicle component to which the damaged portion belongs.3. The computer-implemented method for obtaining a vehicle loss assessment image according to claim 2 , wherein classifying one or more video images into the close-up image set comprises:in response to determining that a ratio of an area of the damaged portion to that of a video image including the damaged portion is greater than a first preset ratio, classifying the video ...
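The area-ratio test of claim 3 can be sketched as follows. The 0.1 threshold and the two-way close-up/component split are illustrative assumptions; the patent only requires comparing the ratio against a first preset ratio.

```python
def classify_frame(damage_area, frame_area, close_up_ratio=0.1):
    """Assign a video frame to the close-up image set when the damaged
    portion fills more than a preset fraction of the frame, otherwise
    to the component image set."""
    ratio = damage_area / frame_area
    return "close_up" if ratio > close_up_ratio else "component"
```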

22-02-2018 publication date

Image Target Tracking Method and System Thereof

Number: US20180053318A1
Author: Xiao Jingjing
Assignee:

An image target tracking method and a system thereof are provided in the present disclosure. The image target tracking method includes the following steps: determining a relative position between a target and a camouflage interference in an image; generating a prediction trajectory according to the relative position between the target and the camouflage interference in the image; and correlating an observation sample position with the prediction trajectory to generate a correlation result, and determining whether the target is blocked and tracking the target according to the correlation result. Throughout the process, the prediction trajectory is generated based on the determined relative position between the target and the camouflage interference, and the prediction trajectory is correlated to determine whether the target is blocked and to accurately track the target. 1. An image target tracking method, comprising: determining a relative position between a target and a camouflage interference in an image; generating a prediction trajectory according to the relative position between the target and the camouflage interference in the image; and correlating an observation sample position with the prediction trajectory to generate a correlation result, and determining whether the target is blocked and tracking the target according to the correlation result. 2. The image target tracking method according to claim 1, wherein the step of correlating the observation sample position with the prediction trajectory to generate the correlation result, and determining whether the target is blocked and tracking the target according to the correlation result comprises: acquiring the observation sample position in real time, and correlating the observation sample position with the prediction trajectory; and if a first correlation coefficient between the observation sample position and the target is greater than a second correlation coefficient between the observation sample ...
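The blocked/visible decision driven by the two correlation coefficients can be sketched as follows. The return format and the name `update_track` are illustrative assumptions for the sketch.

```python
def update_track(corr_target, corr_camouflage, observation, prediction):
    """Decide between observation and prediction: when the observation
    sample correlates more strongly with the target than with the
    camouflage interference, trust the observation; otherwise treat
    the target as blocked and fall back to the predicted trajectory."""
    if corr_target > corr_camouflage:
        return "visible", observation
    return "blocked", prediction
```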

13-02-2020 publication date

QUEUE INFORMATION ANALYZING METHOD AND RELATED IMAGE ANALYZING APPARATUS

Number: US20200051270A1
Assignee:

A queue information analyzing method is applied to an image analyzing apparatus. A monitoring image captured by the image analyzing apparatus has a triggering area. The queue information analyzing method includes identifying a first candidate object that stays within the triggering area, forming a sampling range via the first candidate object, determining whether a second candidate object staying within the sampling range belongs to a queue of the first candidate object, and acquiring an amount and an accumulated time of candidate objects in the queue. 1. A queue information analyzing method applied to an image analyzing apparatus, a monitoring image acquired by the image analyzing apparatus having a triggering area, the queue information analyzing method comprising: identifying a first candidate object staying within the triggering area; forming a sampling range via the first candidate object; determining whether a second candidate object staying within the sampling range belongs to a queue of the first candidate object; and acquiring an amount and a staying time of candidate objects in the queue according to a determination result of the second candidate object. 2. The queue information analyzing method of claim 1, further comprising: acquiring a first accumulated time of the first candidate object staying within the triggering area to compare the first accumulated time with a first time threshold; and setting the first candidate object as a line-up object in the queue when the first accumulated time is greater than or equal to the first time threshold. 3. The queue information analyzing method of claim 1, further comprising: acquiring a second accumulated time of the second candidate object staying within the sampling range to compare the second accumulated time with a first time threshold; and setting the second candidate object as a line-up object in the queue when the second accumulated time is greater than or equal to the first time threshold. 4. The queue information ...
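The dwell-time rule of claims 2 and 3 can be sketched as follows. The track representation (object id mapped to entry time) and the 5-second threshold are illustrative assumptions, and `update_queue` is a hypothetical name.

```python
def update_queue(tracks, now, time_threshold=5.0):
    """Mark tracked candidates as line-up objects once their
    accumulated stay inside the sampling range reaches a threshold.
    `tracks` maps object id -> time the object entered the range;
    returns the queue length and the queue members."""
    queue = [oid for oid, entered in tracks.items()
             if now - entered >= time_threshold]
    return len(queue), queue
```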

21-02-2019 publication date

AUGMENTED REALITY DISPLAY METHOD BASED ON A TRANSPARENT DISPLAY DEVICE AND AUGMENTED REALITY DISPLAY DEVICE

Number: US20190058860A1
Author: Jou Ming-Jong, Wang Limin

An augmented reality display method based on a transparent display device and a device are disclosed. The transparent display device is disposed between a user and an object. The method includes: determining a first three-dimensional coordinate location of a user viewpoint relative to the transparent display device; determining a second three-dimensional coordinate location of the object relative to the transparent display device; determining a position of the object on a viewing region of the transparent display device according to the first and the second three-dimensional coordinate locations; determining a displaying position of the auxiliary information of the object according to the position of the object; and controlling the device to display the auxiliary information of the object at the displaying position. Accordingly, the technical problem of the user's view being blocked when the auxiliary information of the object is displayed is avoided. 1. An augmented reality display method based on a transparent display device, wherein the transparent display device is disposed between a user and an object, and the augmented reality display method comprises: determining a first three-dimensional coordinate location of a viewpoint of a user relative to the transparent display device; recognizing the object in a field at the back side of the transparent display device, and obtaining auxiliary information of the object; determining a second three-dimensional coordinate location of the object relative to the transparent display device; determining a position of the object on a viewing region of the transparent display device according to the first three-dimensional coordinate location and the second three-dimensional coordinate location, wherein the viewing region is the region in which the object is viewed from the viewpoint of the user through the transparent display device; determining a displaying position of the auxiliary information of the object on the transparent display device ...
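Determining where the object appears on the viewing region amounts to intersecting the viewpoint-to-object line with the display surface. A minimal sketch, assuming the transparent display lies in the plane z = 0 with the viewpoint at z < 0 and the object at z > 0; the name `display_position` is hypothetical.

```python
def display_position(viewpoint, obj):
    """Intersect the line from the user's viewpoint to the object
    with the display plane z = 0, yielding the on-screen position."""
    vx, vy, vz = viewpoint
    ox, oy, oz = obj
    t = -vz / (oz - vz)  # parameter where the line crosses z = 0
    return (vx + t * (ox - vx), vy + t * (oy - vy))
```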

20-02-2020 publication date

POPULATING DATA FIELDS IN ELECTRONIC DOCUMENTS

Number: US20200057801A1
Assignee: Accenture Global Solutions Limited

Examples of systems and methods for automatic population of electronic documents are described. In an example, a digital base document having the information to be populated in a data field of the electronic document may be obtained. From the digital base document, a data item to provide the information may be extracted. Further, for the digital base document, a similarity score may be computed with respect to each document type defined in predefined mapping data, the predefined mapping data including, for each document type, a weight associated with data items occurring in the document type, the weight being assigned based on the importance of the data item to the document. Based on the similarity score, a document type of the digital base document may be identified. Further, based on a position of the data item in the digital base document and the identified document type, the data field may be populated. 1. A computing system comprising: a processor; a receiver coupled to the processor to receive an electronic document having a data field to be populated with information; a data extractor coupled to the processor to: obtain a digital base document having the information to be populated in the data field; and extract a data item from the digital base document to provide the information, using a visual analytic tool, the data item comprising at least one of text data and image data, wherein the text data comprises a keyword, the keyword being consistent across multiple base documents of a given document type; compute, for the digital base document, a similarity score with respect to each document type defined in predefined mapping data, the predefined mapping data including, for each document type, a weight associated with data items occurring in that document type, the weight being assigned based on an importance of the data item to the document type; and identify a document type corresponding to the digital base document, based on the similarity ...
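The weighted similarity scoring described in the abstract can be sketched as follows. Normalizing each score by the sum of the type's weights is an assumption of this sketch, since the patent does not fix a formula; the mapping contents are hypothetical.

```python
def similarity(doc_items, mapping):
    """Score a document against each known document type: sum the
    weights of the type's keywords found among the extracted items,
    normalized by the type's total weight."""
    scores = {}
    for doc_type, weights in mapping.items():
        total = sum(weights.values()) or 1.0
        hit = sum(w for item, w in weights.items() if item in doc_items)
        scores[doc_type] = hit / total
    return scores

def identify(doc_items, mapping):
    """Pick the document type with the highest similarity score."""
    scores = similarity(doc_items, mapping)
    return max(scores, key=scores.get)
```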

20-02-2020 publication date

Custom Recommendations Application for Creating Cards

Number: US20200057884A1
Assignee: Planet Art, LLC

A server including a processor to receive an electronic photo having at least one face from a user and compare the electronic photo with a template having a design element, and a computer implemented algorithm. The processor compares the electronic photo to the template and determines if the face is overlapped by the design element or if the face is cropped out of the photo slot. The processor presents the template combined with the electronic photo to the user only if the design element of the template does not overlap the face in the electronic photo. Multiple templates are compared to the electronic photo, and the templates are displayed based on a priority using criteria. 1. A non-transient computer readable medium including instructions executable by an electronic processor for creating a customized greeting card , comprising instructions for:a processor to receive an electronic photo having a face image and a background image that extends beyond the face image from a user, wherein the face image comprises a subset of the electronic photo and is less than the whole electronic photo;the processor to compare the face image and a background image of the electronic photo with a greeting card template having a design element;the processor to determine if the face image of the electronic photo is overlapped by the design element when compared with the design element; andthe processor to present the greeting card template combined with the electronic photo including the face image and a background image in a photo slot to the user only if the design element of the greeting card template does not overlap the face image in the electronic photo.2. The non-transient computer readable medium as specified in further including instructions for the processor to present the greeting card template combined with the electronic photo in the photo slot to the user only if the face image is not cropped out of the photo slot.3. 
The non-transient computer readable medium as specified ...
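The overlap and crop checks that gate template presentation can be sketched with axis-aligned rectangles. The rectangle representation and helper names are illustrative assumptions for this sketch.

```python
def rects_overlap(a, b):
    """Axis-aligned overlap test; rectangles are (x1, y1, x2, y2)."""
    return not (a[2] <= b[0] or b[2] <= a[0] or
                a[3] <= b[1] or b[3] <= a[1])

def inside(inner, outer):
    """True when `inner` lies entirely within `outer`."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1] and
            inner[2] <= outer[2] and inner[3] <= outer[3])

def template_ok(face, design_elements, photo_slot):
    """Present a template only when no design element overlaps the
    face and the face is not cropped out of the photo slot."""
    if not inside(face, photo_slot):
        return False
    return all(not rects_overlap(face, d) for d in design_elements)
```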

20-02-2020 publication date

FLOW LINE COMBINING DEVICE, FLOW LINE COMBINING METHOD, AND RECORDING MEDIUM

Number: US20200057892A1
Author: OSHIMA Akiko
Assignee: NEC Corporation

The present invention provides a technique for enhancing the added value of flow line information. The flow line combining device is provided with: an acquisition unit for acquiring first flow line information indicating a trail of positions determined by using a first method and second flow line information indicating a trail of positions determined by using a second method which is different from the first method; a determination unit for assessing overlap in the trails respectively indicated by the acquired first flow line information and the second flow line information; and a combining unit for generating third flow line information which combines the first flow line information and second flow line information if the trail overlap assessed by the determination unit meets a predetermined condition.

1. A flow line synthesis device comprising: at least one memory configured to store instructions; and at least one processor executing the instructions to perform: acquiring first flow line information representing a trajectory of a position determined by a first method, and second flow line information representing a trajectory of a position determined by a second method different from the first method; determining overlapping of trajectories respectively represented by the acquired first flow line information and second flow line information; and generating third flow line information which is acquired by synthesizing the first flow line information and the second flow line information, and in which overlapping of trajectories satisfies a predetermined condition.
2. The flow line synthesis device according to claim 1, wherein the at least one processor performs determining the overlapping by comparing a position included in each of the first flow line information and the second flow line information, and a point of time at the position.
3. The flow line synthesis device according to claim 1, wherein the first flow line information includes first additional information, the ...
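The overlap-then-merge logic of claims 1–2 can be sketched in a few lines, assuming trajectories are simple lists of (time, x, y) samples; the function names, the distance threshold, and the tie-breaking rule are our own illustrative choices, not the patent's:

```python
# Minimal sketch of the flow-line synthesis idea: two trajectories
# (lists of (t, x, y) samples) are merged only if they overlap in
# both time and position. All names and thresholds are illustrative.

def overlaps(traj_a, traj_b, max_dist=1.0):
    """Claim 2: compare positions and the points of time at those positions."""
    times_b = {t: (x, y) for t, x, y in traj_b}
    for t, x, y in traj_a:
        if t in times_b:
            bx, by = times_b[t]
            if ((x - bx) ** 2 + (y - by) ** 2) ** 0.5 <= max_dist:
                return True
    return False

def synthesize(traj_a, traj_b, max_dist=1.0):
    """Claim 1: emit a third flow line only when the overlap condition holds."""
    if not overlaps(traj_a, traj_b, max_dist):
        return None
    merged = {t: (x, y) for t, x, y in traj_a}
    merged.update({t: (x, y) for t, x, y in traj_b})  # second method wins on ties
    return [(t, *merged[t]) for t in sorted(merged)]

a = [(0, 0.0, 0.0), (1, 1.0, 0.0)]
b = [(1, 1.2, 0.1), (2, 2.0, 0.2)]
third = synthesize(a, b)
```

Here `synthesize` returns `None` when the overlap condition fails, mirroring the claim's requirement that the third flow line is generated only under a predetermined condition.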

Publication date: 01-03-2018

IMAGE PROCESSING METHOD AND APPARATUS FOR X-RAY IMAGING DEVICE

Number: US20180061067A1
Author: Qu Yanling, Wang Dejun
Assignee:

This disclosure presents an image processing method and a related X-ray imaging device. The method comprises: calculating a relative displacement between two first images that are already in auto registration as a first displacement vector; calculating a difference between position information fed back by a position sensor on the X-ray imaging device when imaging exposure is performed on the two first images respectively as a second displacement vector; calculating a first error of the first displacement vector relative to the second displacement vector; calculating a registration level corresponding to the first error in accordance with a pre-stored training model which is a mathematical distribution model of second errors between a plurality of third displacement vectors and a plurality of corresponding fourth displacement vectors; and labeling the registration level on the two first images that are already in auto registration.

1. An image processing method for an X-ray imaging device, comprising the following steps: calculating a relative displacement between two first images that are already in auto registration as a first displacement vector; calculating a difference between position information fed back by a position sensor on the X-ray imaging device when imaging exposure is performed on the two first images respectively as a second displacement vector; calculating a first error of the first displacement vector relative to the second displacement vector; calculating a registration level corresponding to the first error in accordance with a pre-stored training model, the registration level representing a degree of accuracy of the auto registration performed on the two first images, wherein the training model is a mathematical distribution model of second errors between a plurality of third displacement vectors and a plurality of corresponding fourth displacement vectors, each third displacement vector representing a displacement vector between two registered ...
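A rough sketch of the error computation and grading described above, assuming the "registration level" is read off the empirical distribution of stored training errors; the 1/3 and 2/3 cut points, the level names, and all numbers are invented for illustration:

```python
# Illustrative sketch of the error check: compare the registration-derived
# displacement with the sensor-derived displacement, then grade the error
# against a pre-stored distribution of training errors. All numbers are made up.

def vec_error(first, second):
    """First error: magnitude of (first displacement - second displacement)."""
    return sum((a - b) ** 2 for a, b in zip(first, second)) ** 0.5

def registration_level(err, training_errors, levels=("high", "medium", "low")):
    """Grade err by where it falls in the stored training-error distribution."""
    n = len(training_errors)
    rank = sum(1 for e in training_errors if e <= err) / n
    if rank <= 1 / 3:
        return levels[0]
    if rank <= 2 / 3:
        return levels[1]
    return levels[2]

first_vec = (10.2, -3.9)      # from auto registration of the two images
second_vec = (10.0, -4.0)     # from the position sensor readings
err = vec_error(first_vec, second_vec)
level = registration_level(err, training_errors=[0.1, 0.2, 0.3, 0.5, 0.8, 1.2])
```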

Publication date: 01-03-2018

FAST MULTI-OBJECT DETECTION AND TRACKING SYSTEM

Number: US20180061076A1
Assignee:

The present invention relates to a fast multi-object detection and tracking system. According to the system of the present invention, only a few frames need to be detected; the other frames are split into different sizes of steady-motion segments in a binary way, which the system can then predict accurately. The system helps to achieve high tracking speed with multiple persons in high-definition videos while also gaining high accuracy.

1. A multi-object detection and tracking system, comprising:
a) a preprocess unit which is configured to select a plurality of first sampled frames to divide a sequence of images into frame cells;
b) a global object detector which is configured to perform object detection on the whole image of the first sampled frames, and give out global detection results;
c) a frame sampling unit which is configured to select at least one second sampled frame in a frame cell;
d) a local object detector which is configured to perform object detection on the region-of-interest of the second sampled frames, and output local detection results;
e) a data association unit which is configured to align the global detection results and the local detection results with existing trajectories by object similarity.
2. The system of claim 1, wherein the first sampled frames are selected by uniformly sampling the sequence of images with a predetermined interval N, N≧1.
3. The system of claim 1, wherein the frame cell is composed of two of the first sampled frames and all the frames between them.
4. The system of claim 1, wherein the frame cells are sequentially processed for object tracking.
5. The system of claim 1, wherein the global object detector and the local object detector are the same in terms of classifiers.
6. The system of claim 1, wherein the second sampled frames splitting each frame cell into different sizes of steady motion segments are determined according to the motion attributes of the tracking object.
7. The system ...
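The binary splitting of a frame cell into steady-motion segments might look like the following sketch, where `detect` stands in for the (expensive) per-frame object detector and `steady` is an assumed speed-based criterion; the real system's motion attributes are not specified here, so every name and threshold is illustrative:

```python
# Sketch of the binary frame-splitting idea: full detection runs only on
# sparsely sampled frames; a cell is split at its midpoint until the motion
# inside each segment looks steady enough to interpolate. Purely illustrative.

def steady(pos_a, pos_b, frames_apart, max_speed=4.0):
    """Treat a segment as steady motion if the average speed is bounded."""
    dist = ((pos_a[0] - pos_b[0]) ** 2 + (pos_a[1] - pos_b[1]) ** 2) ** 0.5
    return dist / max(frames_apart, 1) <= max_speed

def split_cell(detect, lo, hi, out):
    """Recursively choose second sampled frames inside the cell [lo, hi]."""
    if hi - lo <= 1 or steady(detect(lo), detect(hi), hi - lo):
        return
    mid = (lo + hi) // 2
    out.append(mid)            # run the local detector on this frame
    split_cell(detect, lo, mid, out)
    split_cell(detect, mid, hi, out)

# Fake detector: the object moves quickly in the first half of the cell,
# then stays put, so only the first half should be subdivided.
positions = {f: (min(f, 8) * 10.0, 0.0) for f in range(17)}
sampled = []
split_cell(positions.get, 0, 16, sampled)
```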

Publication date: 04-03-2021

INTELLIGENT CABLING AND CONNECTION VALIDATION

Number: US20210065348A1

A method is disclosed to ensure that components in a complex system are correctly connected together. In one embodiment, such a method captures a visual image of a system made up of multiple components connected together with cables. The method analyzes the visual image to determine connections between the components. The method further builds a current model that represents the connections between the components. This current model is then compared to a previous model to find differences between the current model and the previous model. If differences exist, the method notifies a user of the differences. This may assist the user in identifying any incorrect connections between the components. A corresponding apparatus and computer program product are also disclosed.

1. A method to ensure that components in a complex system are correctly connected together, the method comprising: capturing a visual image of a system comprising a plurality of components connected together with cables; analyzing the visual image to determine connections between the components; building a current model that represents the connections between the components; comparing the current model to a previous model to find differences between the current model and the previous model; and notifying a user of the differences.
2. The method of claim 1, wherein the previous model reflects a connective state of the system prior to the current model.
3. The method of claim 1, wherein the previous model reflects a connective state of a default or ideal system.
4. The method of claim 1, wherein analyzing the visual image comprises identifying the cables that are utilized between the components.
5. The method of claim 1, wherein analyzing the visual image comprises identifying ports that are utilized on the components.
6. The method of claim 1, wherein analyzing the visual image comprises identifying the components.
7. The method of claim 1, wherein notifying the user comprises notifying the user of incorrect ...
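The model-building and comparison steps can be illustrated with sets of normalized connections; the component and port names, and the representation of a "model" as a set of endpoint pairs, are hypothetical choices of ours:

```python
# Sketch of the model-comparison step: a "model" is just the set of cable
# connections (component, port) <-> (component, port) recovered from image
# analysis; differences against the previous model flag possibly incorrect
# cabling. All names are illustrative.

def normalize(conn):
    """Store each cable's two endpoints in a canonical order."""
    return tuple(sorted(conn))

def build_model(connections):
    return {normalize(c) for c in connections}

def diff_models(current, previous):
    return {
        "added": current - previous,
        "removed": previous - current,
    }

previous = build_model([
    (("switch1", "p1"), ("server1", "eth0")),
    (("switch1", "p2"), ("server2", "eth0")),
])
current = build_model([
    (("switch1", "p1"), ("server1", "eth0")),
    (("switch1", "p3"), ("server2", "eth0")),   # cable moved to the wrong port
])
delta = diff_models(current, previous)
```

An empty `added`/`removed` pair would mean no notification is needed; anything else is reported to the user.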

Publication date: 04-03-2021

METHOD AND SYSTEM FOR CALCULATING SPATIAL COORDINATES OF REGION OF INTEREST, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM

Number: US20210065397A1
Assignee: VTouch Co., Ltd.

A method includes acquiring information on an in-image coordinate point of a region of interest contained in each of a plurality of images respectively photographed by a plurality of image modules; specifying, with reference to information on a position where at least one of the plurality of image modules is installed and information on an in-image coordinate point of a target region of interest contained in an image photographed by the at least one image module, a candidate figure containing a position where the target region of interest is located in a reference space; and specifying the position where the target region of interest is located in the reference space, with reference to a positional relationship between a first candidate figure of the target region of interest corresponding to a first image module and a second candidate figure of the target region of interest corresponding to a second image module.

1. A method for calculating a spatial coordinate point of a region of interest, the method comprising the steps of: acquiring information on an in-image coordinate point of a region of interest contained in each of a plurality of images respectively photographed by a plurality of image modules; specifying, with reference to information on a position where at least one of the plurality of image modules is installed and information on an in-image coordinate point of a target region of interest contained in an image photographed by the at least one image module, a candidate figure containing a position where the target region of interest is located in a reference space; and specifying the position where the target region of interest is located in the reference space, with reference to a positional relationship between a first candidate figure of the target region of interest corresponding to a first image module and a second candidate figure of the target region of interest corresponding to a second image module.
2. The method of claim 1, wherein in the step of ...
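If each "candidate figure" is taken to be a 3D ray from an image module's install position through the region of interest (one plausible reading of the claim, not the patent's stated definition), the final specifying step reduces to finding where two rays come closest:

```python
# Sketch: each camera yields a candidate figure, modeled here as a 3D ray
# p + t*d from the install position p along direction d; the target position
# is taken at the midpoint of the shortest segment between the two rays.

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def closest_point_between_rays(p1, d1, p2, d2):
    """Midpoint of the shortest segment between rays p1+t*d1 and p2+s*d2."""
    r = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b          # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = add(p1, scale(d1, t))
    q2 = add(p2, scale(d2, s))
    return scale(add(q1, q2), 0.5)

# Two cameras at known install positions, rays toward the same point (1, 1, 2).
point = closest_point_between_rays((0, 0, 0), (1, 1, 2), (2, 0, 0), (-1, 1, 2))
```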

Publication date: 17-03-2022

INFORMATION PROCESSING DEVICE AND PROGRAM

Number: US20220083947A1
Author: KURATA Masachika
Assignee: TOSHIBA TEC KABUSHIKI KAISHA

An information processing device includes a processor configured to identify, based on a captured image obtained by an imaging device that images a predetermined area, a worker present in the area, and recognize, based on the captured image, an action of the worker identified. The processor is further configured to record, if the action recognized is predetermined work content, correlating with identification information for identifying the worker, a work achievement including a type of the work content and information indicating a date and time when work of the work content was performed, and to output information based on the work achievement recorded.

1. An information processing device comprising a processor configured to: identify, based on a captured image obtained by an imaging device that images a predetermined area, a worker present in the predetermined area; recognize, based on the captured image, an action of the worker identified; record, if the action recognized is predetermined work content, correlating with identification information for identifying the worker identified, a work achievement including a type of the work content and information indicating a date and time when work of the work content was performed; and output information based on the work achievement recorded.
2. The device of claim 1, wherein the information output based on the work achievement, in a visualized state, is a number of times of each type of the work content performed in a predetermined period.
3. The device of claim 1, wherein the output information is the work achievements of a plurality of the workers in a comparable state.
4. The device of claim 1, wherein the processor is further configured to: calculate, based on the work achievement, for each type of the work content, a number of times of work per unit time of the worker; and output a ...

Publication date: 28-02-2019

Localization-Aware Active Learning for Object Detection

Number: US20190065908A1
Assignee:

A system and method for active learning in which a sensor obtains data from a scene comprising a set of images containing objects. A memory stores active learning data including an object detector trained for detecting objects in images. A processor in communication with the memory is configured to detect a semantic class and a location of at least one object in an image selected from the set of images using the object detector, to produce a detection metric as a combination of an uncertainty of the object detector about the semantic class of the object in the image (classification) and an uncertainty of the object detector about the location of the object in the image (localization). An output interface or display-type device in communication with the processor displays the image for human labeling when the detection metric is above a threshold.

1. An active learning system, comprising: an input interface to receive a set of images of a scene from a sensor; a memory to store active learning data that includes an object detector trained for detecting objects in images; a processor in communication with the input interface and the memory, configured to detect a semantic class and a location of at least one object in an image selected from the set of images using the object detector to produce a detection metric as a combination of an uncertainty of the object detector about the semantic class of the object in the image and an uncertainty of the object detector about the location of the object in the image; and an output interface in communication with the processor, to display the image for human labeling when the detection metric is above a threshold.
2. The active learning system of claim 1, wherein the object detector detects the location of the at least one object in the image by: generating multiple boxes of different scales and aspect ratios over each image for the set of the images; and comparing, for each box, pixels within each image ...
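One way to read the combined detection metric, sketched under our own assumptions (entropy for the classification uncertainty, box spread for the localization uncertainty; the patent does not fix these particular choices, and the weight and threshold are invented):

```python
# Sketch of the selection rule: combine the detector's classification
# uncertainty (entropy of class scores) with a localization uncertainty
# (instability of the predicted box), and send the image to a human
# labeler when the combined metric exceeds a threshold. Illustrative only.

import math

def class_uncertainty(probs):
    """Entropy of the class probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def loc_uncertainty(boxes):
    """Spread of candidate boxes for the same object (mean abs deviation)."""
    n = len(boxes)
    mean = [sum(b[i] for b in boxes) / n for i in range(4)]
    return sum(abs(b[i] - mean[i]) for b in boxes for i in range(4)) / (4 * n)

def needs_label(probs, boxes, w=1.0, threshold=1.0):
    metric = class_uncertainty(probs) + w * loc_uncertainty(boxes)
    return metric > threshold

confident = needs_label([0.98, 0.01, 0.01], [(10, 10, 50, 50), (10, 10, 50, 50)])
uncertain = needs_label([0.4, 0.35, 0.25], [(10, 10, 50, 50), (18, 4, 60, 44)])
```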

Publication date: 28-02-2019

METHOD, APPARATUS, TERMINAL AND SYSTEM FOR MEASURING TRAJECTORY TRACKING ACCURACY OF TARGET

Number: US20190066334A1
Author: Gu Yu, Tang Xiaojun
Assignee:

Described herein are a method, apparatus, terminal, and system for measuring the trajectory tracking accuracy of a target. With each of them, measuring the trajectory tracking accuracy of the target includes: determining location information of the actual tracking trajectory of the target; comparing the location information of the actual tracking trajectory with location information of the target trajectory to determine a variance between the two; and determining the tracking accuracy of the target based on the variance.

1. A method for measuring a trajectory tracking accuracy of a target, comprising: determining a location information of an actual tracking trajectory of the target; comparing the location information of the actual tracking trajectory with a location information of a target trajectory to determine a variance between the location information of the actual tracking trajectory and the location information of the target trajectory; and determining the tracking accuracy of the target based on the variance.
2. The method for measuring the trajectory tracking accuracy of the target of claim 1, wherein determining the location information of the actual tracking trajectory of the target comprises: acquiring image data of the actual tracking trajectory of the target, and determining the location information of the actual tracking trajectory of the target based on the image data.
3. The method for measuring the trajectory tracking accuracy of the target of claim 1, further comprising: establishing a world coordinate system, a camera coordinate system and an image coordinate system, before determining the location information of the actual tracking trajectory of the target.
4. The method for measuring the trajectory tracking accuracy of the target of claim 3, wherein determining the location information of the actual tracking ...
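The variance-and-accuracy computation of claim 1 can be sketched with 2D trajectory points; the mapping from variance to an accuracy score is our own illustrative choice, not something the claims specify:

```python
# Sketch of the accuracy measure: per-sample deviation between the actual
# tracking trajectory and the target trajectory, summarized as a variance
# (mean squared positional error here). Names and data are illustrative.

def trajectory_variance(actual, target):
    """Mean squared distance between corresponding trajectory points."""
    assert len(actual) == len(target)
    total = 0.0
    for (ax, ay), (tx, ty) in zip(actual, target):
        total += (ax - tx) ** 2 + (ay - ty) ** 2
    return total / len(actual)

def tracking_accuracy(actual, target, scale=1.0):
    """Map variance to a score in (0, 1]; smaller variance -> higher accuracy."""
    return 1.0 / (1.0 + scale * trajectory_variance(actual, target))

target = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
actual = [(0.0, 0.1), (1.1, 0.0), (2.0, -0.1)]
acc = tracking_accuracy(actual, target)
```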

Publication date: 11-03-2021

MOBILE WORK MACHINE WITH OBJECT DETECTION USING VISION RECOGNITION

Number: US20210072764A1
Author: Kean Michael G.
Assignee:

A method of controlling a mobile work machine on a worksite includes receiving an indication of an object detected on the worksite, determining a location of the object relative to the mobile work machine, receiving an image of the worksite, correlating the determined location of the object to a portion of the image, evaluating the object by performing image processing of the portion of the image, and generating a control signal that controls the mobile work machine based on the evaluation.

1. A method of controlling a mobile work machine on a worksite, the method comprising: receiving an indication of an object detected on the worksite; determining a location of the object relative to the mobile work machine; receiving an image of the worksite; correlating the determined location of the object to a portion of the image; evaluating the object by performing image processing of the portion of the image; and generating a control signal that controls the mobile work machine based on the evaluation.
2. The method of claim 1, wherein receiving the indication of the object comprises: transmitting a detection signal; receiving reflections of the detection signal; and detecting the object based on the received reflections.
3. The method of claim 2, wherein evaluating the object comprises determining a likelihood that the detection of the object comprises a false positive detection.
4. The method of claim 2, wherein the detection signal comprises a radio frequency (RF) signal.
5. The method of claim 4, wherein the RF signal comprises a radar signal.
6. The method of claim 1, wherein the determined location of the object is correlated to a portion of the image based on a mounting location of the camera on the mobile work machine and a field of view of the camera.
7. The method of claim 1, wherein receiving an image comprises receiving a time-series of images from a camera, and further comprising visually tracking a location of the object in a plurality of subsequently ...

Publication date: 11-03-2021

Custom Recommendations Application for Creating Cards

Number: US20210073519A1
Assignee:

A server including a processor to receive an electronic photo having at least one face from a user and compare the electronic photo with a template having a design element, and a computer-implemented algorithm. The processor compares the electronic photo to the template and determines if the face is overlapped by the design element or if the face is cropped out of the photo slot. The processor presents the template combined with the electronic photo to the user only if the design element of the template does not overlap the face in the electronic photo. Multiple templates are compared to the electronic photo, and the templates are displayed based on a priority using criteria.

1. A non-transitory computer-readable medium including instructions executable by an electronic processor for creating a customized greeting card, comprising instructions for:
a processor to process an electronic photo having a face image of a person and a background image that extends beyond the face image, wherein the face image comprises a subset of the electronic photo and is less than the whole electronic photo, and wherein the background image does not comprise any portion of the face image;
the processor to compare the face image of the electronic photo with a plurality of greeting card templates each having a design element;
for each greeting card template of the plurality of greeting card templates, the processor to:
compare the electronic photo with respect to the greeting card template to determine multiple positions of the electronic photo with respect to the greeting card template where the respective design element does not overlap the face image of the electronic photo;
determine a leftmost position and a rightmost position of the electronic photo with respect to the greeting card template without having a portion of the respective design element overlap a portion of the face image of the electronic photo; and
display the whole electronic photo combined with the greeting card template ...
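The core compatibility test, trying candidate horizontal photo positions against the design element, can be sketched with axis-aligned rectangles; the coordinates, the step size, and the (x, y, w, h) box convention are illustrative assumptions:

```python
# Sketch of the overlap test: a template placement is acceptable only if
# the design element rectangle does not overlap the detected face rectangle,
# tried at several horizontal positions of the photo. Rectangles are
# (x, y, w, h); all numbers are made up.

def rects_overlap(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def valid_offsets(face, design, slot_width, photo_width, step=10):
    """Horizontal photo offsets where the design element misses the face."""
    good = []
    for dx in range(0, slot_width - photo_width + 1, step):
        fx, fy, fw, fh = face
        if not rects_overlap((fx + dx, fy, fw, fh), design):
            good.append(dx)
    return good

face = (20, 30, 40, 40)            # face box inside the photo
design = (0, 0, 50, 120)           # design element on the template's left edge
offsets = valid_offsets(face, design, slot_width=300, photo_width=200)
```

The leftmost and rightmost acceptable positions of the claim would then be `min(offsets)` and `max(offsets)`; an empty list means the template is not shown for this photo.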

Publication date: 15-03-2018

IMAGE DISPLAY SYSTEM

Number: US20180074778A1
Assignee:

An image display system comprises: a plurality of display devices; an image processing unit for receiving input of a plurality of input video signals and generating an output video signal for each display device in accordance with a layout for the display devices from the input video signals; a pattern signal generation unit for generating pattern signals indicating a plurality of different test pattern images; a selector for receiving input of the output video signals and the pattern signals and selecting and outputting either the output video signals or the pattern signals; an imaging device for capturing an image of the test pattern images displayed on the respective display devices; and a control device for analyzing the captured image captured by the imaging device and generating control information for controlling the image processing unit, based on analysis results.

1. An image display system, comprising: a plurality of display devices arranged in an arbitrary layout; an image processing unit for receiving input of a plurality of input video signals and generating an output video signal for each display device in accordance with the layout from the input video signals; a pattern signal generation unit for generating pattern signals respectively indicating a plurality of different test pattern images; a selector for receiving input of the output video signals and the pattern signals and selecting and outputting either the output video signals or the pattern signals; an imaging device for capturing an image of the test pattern images displayed on the respective display devices; and a control device for analyzing the captured image captured by the imaging device and generating control information for controlling the image processing unit, based on analysis results.
2. The image display system according to claim 1, wherein the control device displays a user interface screen on which first objects indicating the respective display devices are displayed, based on ...

Publication date: 07-03-2019

SELF-LEARNING SPATIAL RECOGNITION SYSTEM

Number: US20190073788A1
Author: Idrisov Renat
Assignee:

A method includes detecting a first object entering a first video frame of a plurality of video frames of a view of a geolocation and determining, from the plurality of video frames, that the first object has stopped in an area of the geolocation for at least a threshold amount of time. The method also includes detecting the first object leaving a second video frame of the plurality of video frames, and identifying, by a computer processing device, the area of the geolocation as a region of interest based on detecting the first object leaving.

1. A method, comprising: detecting a first object entering a first video frame of a plurality of video frames of a view of a geolocation; detecting the first object leaving a second video frame of the plurality of video frames; and identifying, by a computer processing device, an area of the geolocation as a region of interest based on the detecting the first object leaving.
2. The method of claim 1, comprising: generating, based on the region of interest, a map comprising the geolocation.
3. The method of claim 2, comprising: determining that a second object occupies the region of interest; and marking the region of interest on the map as occupied.
4. The method of claim 2, comprising: determining that the region of interest has not been occupied for a predetermined amount of time; and removing the region of interest from the map.
5. The method of claim 1, wherein the region of interest comprises a parking spot, wherein the first object comprises a vehicle, and wherein the method further comprises determining, from the plurality of video frames, that the first object has stopped in the area of the geolocation for at least a threshold amount of time.
6. The method of claim 1, comprising: receiving the plurality of video frames from a live video source.
7. The method of claim 6, wherein the plurality of video frames are received from the live video source via a publish-subscribe communication system.
8. ...
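The enter/stop/leave rule (claims 1 and 5) can be sketched as a small scan over per-frame positions; the stop threshold, the proximity tolerance, and the use of `None` to mean "object has left the frame" are assumptions of ours:

```python
# Sketch of the region-of-interest rule: an object that enters the view,
# stays in one area for at least `min_stop` frames, and then moves on or
# leaves marks that area as a region of interest (e.g. a parking spot).

def find_regions_of_interest(track, min_stop=3, eps=1.0):
    """track: per-frame (x, y) positions, or None once the object has left."""
    rois, run = [], []
    for pos in track:
        if pos is not None and (not run or
                (abs(pos[0] - run[-1][0]) <= eps and abs(pos[1] - run[-1][1]) <= eps)):
            run.append(pos)
        else:
            if len(run) >= min_stop:
                rois.append(run[-1])   # object stopped here, then left
            run = [pos] if pos is not None else []
    if len(run) >= min_stop:
        rois.append(run[-1])
    return rois

# Vehicle drives in, parks near (5, 5) for four frames, then leaves the frame.
track = [(0, 0), (2, 2), (5, 5), (5.1, 5.0), (5.0, 5.1), (5.1, 5.1), None]
rois = find_regions_of_interest(track)
```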

Publication date: 15-03-2018

Method of Estimating Relative Motion Using a Visual-Inertial Sensor

Number: US20180075609A1
Author: He Hongsheng
Assignee:

A method of determining translational motion of a moving object within a field of view of a camera includes: providing an imaging device oriented to capture a moving object within a field of view from a point of view of the device; accelerating the central point of the imaging device around a line of sight; processing visual data from the imaging device on a processing unit to determine a visual optical flow or feature flow in the field of view of the device; measuring an acceleration of the camera around the line of sight; and determining a translational velocity of a moving object within the field of view of the imaging device based on the determined visual optical flow of the field of view and the measured acceleration of the point of view of the imaging device.

1. A method of determining translational motion of a moving object within a field of view of a camera, the method comprising: providing an imaging device oriented to capture a moving object within a field of view from a point of view of the device; accelerating the central point of the imaging device around a line of sight; processing visual data from the imaging device on a processing unit to determine a visual optical flow or feature flow in the field of view of the device; measuring an acceleration of the camera around the line of sight; and determining a translational velocity of a moving object within the field of view of the imaging device based on the determined visual optical flow of the field of view and measured acceleration of the point of view of the imaging device.
2. The method of claim 1, further comprising measuring the acceleration of the imaging device around the line of sight with an inertial measurement unit associated with the imaging device.
3. The method of claim 2, wherein the central point of the imaging device is accelerated on a turntable such that the imaging device is accelerated around the central point of the imaging device.
4. The method of claim 1, wherein the imaging device ...

Publication date: 15-03-2018

MODEL-BASED THREE-DIMENSIONAL HEAD POSE ESTIMATION

Number: US20180075611A1
Assignee:

One embodiment of the present invention sets forth a technique for estimating a head pose of a user. The technique includes acquiring depth data associated with the head of the user and initializing each particle included in a set of particles with a different candidate head pose. The technique further includes performing one or more optimization passes that include performing at least one iterative closest point (ICP) iteration for each particle and performing at least one particle swarm optimization (PSO) iteration. Each ICP iteration includes rendering the three-dimensional reference model based on the candidate head pose associated with the particle and comparing the three-dimensional reference model to the depth data. Each PSO iteration comprises updating a global best head pose associated with the set of particles and modifying at least one candidate head pose. The technique further includes modifying a shape of the three-dimensional reference model based on the depth data.

1. A non-transitory computer-readable medium including instructions that, when executed by a processor, cause the processor to perform the steps of: obtaining depth data associated with a head of a user; performing at least one iterative closest point (ICP) iteration for each particle included in a set of particles, wherein each ICP iteration comprises rendering a three-dimensional reference model based on a candidate head pose associated with the particle, comparing the three-dimensional reference model to the depth data to determine at least one error value, and modifying the candidate head pose associated with the particle based on the at least one error value; performing at least one particle swarm optimization (PSO) iteration, comprising updating a global best head pose associated with the set of particles, and modifying at least one candidate head pose associated with the set of particles based on the global best head pose; and modifying a shape of the three- ...
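A single PSO iteration over candidate poses, in the spirit of the claim (update the global best, then modify candidate poses toward it), might be sketched as follows; the three-angle pose parameterization, the pull factor, and the jitter are illustrative, and the per-particle ICP refinement is omitted:

```python
# Sketch of one PSO pass over candidate head poses: each particle carries a
# pose vector whose error would, in the real method, come from the ICP
# residual against the depth data; here a synthetic error function stands in.

import random

def pso_iteration(particles, error_fn, pull=0.5, rng=random.Random(0)):
    """particles: list of pose vectors (here 3 rotation angles)."""
    best = min(particles, key=error_fn)      # update the global best pose
    updated = []
    for p in particles:
        jitter = [rng.uniform(-0.01, 0.01) for _ in p]
        # pull each candidate pose toward the global best, with small jitter
        updated.append([x + pull * (g - x) + j
                        for x, g, j in zip(p, best, jitter)])
    return best, updated

true_pose = [0.1, -0.2, 0.3]
error_fn = lambda p: sum((a - b) ** 2 for a, b in zip(p, true_pose))
swarm = [[0.0, 0.0, 0.0], [0.2, -0.1, 0.2], [0.5, 0.5, 0.5]]
best, swarm2 = pso_iteration(swarm, error_fn)
```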

Publication date: 24-03-2022

SKELETON RECOGNITION METHOD, COMPUTER-READABLE RECORDING MEDIUM STORING SKELETON RECOGNITION PROGRAM, SKELETON RECOGNITION SYSTEM, LEARNING METHOD, COMPUTER-READABLE RECORDING MEDIUM STORING LEARNING PROGRAM, AND LEARNING DEVICE

Number: US20220092302A1
Author: Asayama Yoshihisa
Assignee: FUJITSU LIMITED

A computer-implemented method of skeleton recognition, the method including: acquiring, from a distance image of an object, a learning model that recognizes heat map images obtained by projecting likelihoods of a plurality of joint positions of the object from a plurality of directions; inputting a distance image to be processed to the learning model and acquiring heat map images in each of the plurality of directions; calculating three-dimensional coordinates regarding the plurality of joint positions of the object, using the heat map images in each of the plurality of directions and information that indicates a relative positional relationship of the plurality of directions; and outputting a skeleton recognition result that includes the three-dimensional coordinates regarding the plurality of joint positions.

1. A computer-implemented method of skeleton recognition, the method comprising: acquiring, from a distance image of an object, a learning model that recognizes heat map images obtained by projecting likelihoods of a plurality of joint positions of the object from a plurality of directions; inputting a distance image to be processed to the learning model and acquiring heat map images in each of the plurality of directions; calculating three-dimensional coordinates regarding the plurality of joint positions of the object, using the heat map images in each of the plurality of directions and information that indicates a relative positional relationship of the plurality of directions; and outputting a skeleton recognition result that includes the three-dimensional coordinates regarding the plurality of joint positions.
2. The computer-implemented method according to claim 1, wherein the calculating includes: calculating, based on the heat map images, two-dimensional coordinates of the joint positions of the object in a case of viewing the object from each of the plurality of directions; and calculating the three-dimensional coordinates by using the two-dimensional ...

Publication date: 24-03-2022

IMAGE PROCESSING METHOD AND DEVICE, ELECTRONIC APPARATUS AND STORAGE MEDIUM

Number: US20220092325A1
Author: Li Qing, XU Qingsong
Assignee: HANGZHOU GLORITY SOFTWARE LIMITED

An image processing method includes obtaining an input image, the input image includes object regions, and each object region includes at least one object; identifying the object regions to obtain object region labeling frames; establishing a first coordinate system based on the input image, the object region labeling frames are located in the first coordinate system; mapping the object region labeling frames from the first coordinate system to a second coordinate system according to a reference value to obtain first region labeling frames corresponding to the object region labeling frames, the first region labeling frames are located in the second coordinate system; performing expansion processing on the first region labeling frames to obtain expanded second region labeling frames; and mapping the second region labeling frames from the second coordinate system to the first coordinate system according to the reference value to obtain table region labeling frames. 1. An image processing method, adapted to an image processing device, the image processing method comprising: obtaining an input image, wherein the input image comprises a plurality of object regions, and each object region of the plurality of object regions comprises at least one object; identifying the object regions in the input image to obtain a plurality of object region labeling frames corresponding to the object regions one-to-one; establishing a first coordinate system based on the input image, wherein the object region labeling frames are located in the first coordinate system; mapping the object region labeling frames from the first coordinate system to a second coordinate system according to a reference value to obtain a plurality of first region labeling frames corresponding to the object region labeling frames one-to-one, wherein the first region labeling frames are located in the second coordinate system; performing expansion processing on the first region labeling frames to obtain a plurality of ...
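The map-expand-map-back pipeline described in this abstract can be sketched in a few lines. The axis-aligned box format `(x1, y1, x2, y2)`, the scalar reference value, and the margin are illustrative assumptions, not parameters taken from the patent itself.

```python
# Sketch of the frame mapping and expansion idea: map object region labeling
# frames into a second coordinate system by a reference value, expand them,
# and map back to obtain table region labeling frames.

def to_second_system(box, reference):
    """Map a labeling frame from the image (first) coordinate system
    to the second coordinate system by the reference value."""
    return tuple(v / reference for v in box)

def expand(box, margin):
    """Expansion processing: grow the frame by `margin` on every side."""
    x1, y1, x2, y2 = box
    return (x1 - margin, y1 - margin, x2 + margin, y2 + margin)

def to_first_system(box, reference):
    """Map an expanded frame back to the image coordinate system."""
    return tuple(v * reference for v in box)

object_frames = [(100, 100, 300, 200), (320, 100, 500, 200)]
reference = 10.0
table_frames = [
    to_first_system(expand(to_second_system(f, reference), margin=1), reference)
    for f in object_frames
]
print(table_frames)  # each frame grown by margin * reference = 10 px per side
```

A margin of one unit in the second coordinate system corresponds to `reference` pixels in the image, which is the point of doing the expansion in the reduced system.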

More
Publication date: 24-03-2022

Concept for Generating Training Data and Training a Machine-Learning Model for Use in Re-Identification

Number: US20220092348A1
Assignee:

Examples relate to a concept for generating training data and training a machine-learning model for use in re-identification. A computer system for generating training data for training a machine-learning model for use in re-identification comprising processing circuitry configured to obtain media data, the media data comprising a plurality of samples representing a person, an animal or an object. The processing circuitry is configured to process the media data to identify tuples of samples that represent the same person, animal or object. The processing circuitry is configured to generate the training data based on the identified tuples of samples that represent the same person, animal or object. 1. A computer system for generating training data for training a machine-learning model for use in re-identification, the computer system comprising processing circuitry configured to: obtain media data, the media data comprising a plurality of samples representing a person, an animal or an object; process the media data to identify tuples of samples that represent the same person, animal or object; and generate the training data based on the identified tuples of samples that represent the same person, animal or object. 2. The computer system according to claim 1, wherein each sample is associated with secondary information characterizing the sample or the person, animal or object represented by the sample, and wherein the processing circuitry is configured to identify the tuples of samples that represent the same person, animal or object based on the secondary information being associated with the respective samples. 3. The computer system according to claim 2, wherein, if the media data comprises a sequence of image samples, the secondary information characterizes a position of the respective sample within the sequence of image samples, wherein the tuples of samples that represent the same person, animal or ...
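The tuple-generation step can be illustrated with a minimal sketch: group samples by an identity label and emit all same-identity pairs as training data. The sample IDs and identity labels here are made-up placeholders; a real system would derive identity from the secondary information the claims describe.

```python
# Minimal sketch of generating re-identification training tuples:
# group media samples by identity, then pair up samples of the same identity.
from collections import defaultdict
from itertools import combinations

samples = [("img_0", "person_a"), ("img_1", "person_b"),
           ("img_2", "person_a"), ("img_3", "person_a")]

by_identity = defaultdict(list)
for sample_id, identity in samples:
    by_identity[identity].append(sample_id)

# Training data: all pairs of samples that represent the same person/animal/object.
training_pairs = [pair for ids in by_identity.values()
                  for pair in combinations(ids, 2)]
print(training_pairs)  # [('img_0', 'img_2'), ('img_0', 'img_3'), ('img_2', 'img_3')]
```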

More
Publication date: 05-03-2020

IMAGE ANALYSIS APPARATUS, IMAGE ANALYSIS METHOD, AND RECORDING MEDIUM

Number: US20200074612A1
Author: Iwamatsu Yosuke
Assignee: NEC Corporation

The present invention reduces the amount of data outputted while maintaining the accuracy of an analysis process with a small delay amount, by an image analysis apparatus provided with: a deduction unit that deduces a second quality concerning an object in a second image, which is different from a first image associated with object data relating to an object to be inputted, on the basis of a first quality concerning the object relating to the object data and on the basis of the state of the object in the second image, the state being obtained by using a state model for deducing the position and the size of the object from the object data, while using a quality model for deducing the second quality concerning the object; and a determination unit that determines whether to use the object data for analysis on the basis of the deduced second quality. 1. An image analysis apparatus comprising: deduction unit configured to deduce second quality regarding an object in a second image different from a first image related to object data regarding the object, the object being to be input to the image analysis apparatus, the deducing being performed by using a state of the object in the second image, the state being acquired by using a state model for deducing a position and a size of the object from the object data, using first quality regarding the object related to the object data, and using a quality model for deducing the second quality; and determination unit configured to determine whether or not to use the object data for analysis, based on the deduced second quality. 2. The image analysis apparatus according to claim 1, wherein the state model is a model for deducing a position, a size, and a direction of the object, and the deduction unit deduces the second quality, based on at least any of the position, the size, and the direction of the object in the second image. 3. The image analysis apparatus according to claim 1, wherein the determination unit determines to use the ...
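The deduce-then-decide flow can be sketched with toy stand-ins: a state model predicting position and size in the next image, a quality model that degrades the first quality when the predicted object drifts off-frame or shrinks, and a threshold playing the role of the determination unit. All models, constants, and field names here are illustrative assumptions.

```python
# Toy sketch: deduce "second quality" for an object in the second image
# from a state model and a quality model, then decide whether to use the
# object data for analysis.

def state_model(obj):
    # Deduce position and size of the object in the second image
    # (here: a simple constant-velocity prediction).
    x, y = obj["pos"]
    vx, vy = obj["vel"]
    return {"pos": (x + vx, y + vy), "size": obj["size"]}

def quality_model(first_quality, state, frame_w=640, frame_h=480):
    # Second quality degrades when the predicted object leaves the frame
    # or becomes small.
    x, y = state["pos"]
    in_frame = 0 <= x < frame_w and 0 <= y < frame_h
    size_factor = min(1.0, state["size"] / 64)
    return first_quality * size_factor * (1.0 if in_frame else 0.2)

obj = {"pos": (600, 200), "vel": (50, 0), "size": 64}
second_quality = quality_model(0.9, state_model(obj))
use_for_analysis = second_quality >= 0.5  # the determination step
print(second_quality, use_for_analysis)
```

Discarding object data whose deduced second quality falls below the threshold is what reduces the output volume while keeping the analysis accurate.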

More
Publication date: 24-03-2022

METHOD TO ADAPT AUDIO PROCESSING BASED ON USER ATTENTION SENSING AND SYSTEM THEREFOR

Number: US20220095074A1
Assignee:

A method may include capturing an image at a camera included at an information handling system, the camera coupled to a vision system. A position of a user relative to a display device may be determined based on analysis of the image by the vision system. The method may further include adjusting properties of an audio signal provided to a speaker based on the position of the user. 1. An information handling system comprising: a display device; a first speaker; a camera to capture an image, the camera coupled to a vision system; a sensor hub coupled to the vision system; and a software service to: determine a position of a user relative to the display device based on analysis of the image by the vision system; and adjust properties of a first audio signal provided to the first speaker based on the position of the user. 2. The information handling system of claim 1, further comprising: a time of flight proximity sensor coupled to the sensor hub, the time of flight proximity sensor to determine a distance from the display device to the user, wherein the software service is further to adjust properties of the first audio signal based on the distance. 3. The information handling system of claim 1, further comprising: a first microphone, wherein the software service is further to adjust properties of a second audio signal received from the first microphone based on the position of the user. 4. The information handling system of claim 3, further comprising a second microphone, wherein the software service is to determine the position of the user relative to the display device further based on the second audio signal and a third audio signal received from the second microphone. 5. The information handling system of claim 1, wherein the software service is further to: determine a gaze direction of the user based on the analysis; and adjust properties of the first audio signal based on the gaze direction. 6. The information handling system of claim 1, wherein the adjusting comprises modifying amplitude of the first audio signal. 7. The ...
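One way to "adjust properties of an audio signal based on the position of the user" is to steer stereo gains toward the user's horizontal offset. This is a generic constant-power pan-law sketch under the assumption that the vision system reports an offset in [-1, 1]; nothing here is the patent's actual implementation.

```python
# Sketch: derive left/right speaker gains from the user's horizontal position
# relative to the display, as reported by a vision system.
import math

def stereo_gains(user_offset):
    """Constant-power pan law.

    user_offset: -1.0 = user far left, 0.0 = centered, +1.0 = far right.
    Returns (left_gain, right_gain) steering the audio image toward the user.
    """
    angle = (user_offset + 1) / 2 * math.pi / 2  # map offset to [0, pi/2]
    return math.cos(angle), math.sin(angle)

left, right = stereo_gains(0.0)  # centered user -> equal gains (~0.707 each)
print(round(left, 3), round(right, 3))
```

A distance estimate from the time-of-flight sensor (claim 2) could scale the overall amplitude the same way, on top of the panning.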

More
Publication date: 18-03-2021

FACIAL RECOGNITION BASED AUTO ZOOM

Number: US20210081648A1
Assignee:

A smart device having a photo processing system, and a related program product and method for processing photos. The photo processing system includes: a detector that detects when a photo is displayed on the smart device; an auto capture system that captures a viewer image from a front facing camera on the smart device in response to detecting that the photo is being displayed; a facial matching system that determines whether the viewer image matches any face images in the photo; and an auto zoom system that enlarges and displays a matched face image from the photo. 1. A smart device having a photo processing system, comprising: a detector that detects when a photo is displayed on the smart device; an auto capture system that captures a viewer image from a front facing camera on the smart device in response to detecting that the photo is being displayed; a facial matching system that determines whether the viewer image matches any face images in the photo; and an auto zoom system that enlarges and displays a matched face image from the photo. 2. The smart device of claim 1, wherein the detector analyzes the photo using a face detection algorithm to determine whether the photo comprises a group photo. 3. The smart device of claim 2, wherein the auto capture system, facial matching system and auto zoom system are activated only when the photo comprises a group photo. 4. The smart device of claim 1, wherein the facial matching system uses facial recognition. 5. The smart device of claim 1, wherein the auto zoom system places a bounding box around the matched face image and then enlarges an area of the photo in the bounding box. 6. The smart device of claim 1, wherein the facial matching system determines whether the viewer image includes two captured face images and, in response to a determination of two captured face images, the facial matching system determines whether the two captured face images match two face images in the photo. 7. The smart device ...
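The auto-zoom step itself (place a bounding box around the matched face, enlarge that area) reduces to a crop plus a scale. The sketch below uses a toy nested-list "image" and nearest-neighbour enlargement so it stays self-contained; the box coordinates and scale factor are invented for illustration.

```python
# Sketch of the auto-zoom step: crop the matched face's bounding box and
# enlarge it. The "image" is a toy 2-D list of pixels.

def zoom_to_face(image, box, scale=2):
    """Crop the bounding box (left, top, right, bottom) around the matched
    face and enlarge it by nearest-neighbour pixel repetition."""
    left, top, right, bottom = box
    crop = [row[left:right] for row in image[top:bottom]]
    return [[px for px in row for _ in range(scale)]
            for row in crop for _ in range(scale)]

image = [[(r, c) for c in range(8)] for r in range(6)]  # toy 8x6 "photo"
zoomed = zoom_to_face(image, (2, 1, 5, 4))  # 3x3 face region -> 6x6
print(len(zoomed), len(zoomed[0]))  # 6 6
```

In practice the crop box would come from the face detector, and the resize would be handled by an image library rather than list manipulation.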

More
Publication date: 05-03-2020

SURVEILLANCE APPARATUS, SURVEILLANCE METHOD, AND STORAGE MEDIUM

Number: US20200077050A1
Assignee:

An apparatus acquires a video image from a surveillance camera and displays an image based on the video image on a display. The apparatus records the video image acquired from the surveillance camera in a recording unit. The apparatus includes an abnormality detection unit configured to detect an abnormality from the video image, an attention period determination unit configured to determine an attention period based on a period from a start to an end of a detected abnormality, and a displaying unit configured to display a video image acquired from the surveillance camera as it is on a display until the abnormality being detected ends, and, when the abnormality being detected ends, acquire, from a recording unit, a recorded video image recorded in a period corresponding to the attention period and play back the acquired recorded video image. 1. A surveillance apparatus comprising: an acquisition unit configured to acquire a video image captured by an image capturing apparatus; a displaying unit configured to display an image based on the video image on a display apparatus; a recording unit configured to record the video image; a detecting unit configured to detect an abnormality from the video image; a determination unit configured to determine an attention period based on a period from a start to an end of the abnormality detected by the detecting unit; and a control unit configured to control the displaying unit such that the video image acquired by the acquisition unit is displayed, as the video image is, by the displaying unit until the detection of the abnormality ends, while when the detection of the abnormality ends, a recorded video image recorded in the attention period is acquired from the recording unit and played back by the displaying unit. 2. The surveillance apparatus according to claim 1, further comprising an operation accepting unit configured to accept an instruction to end the playback of the recorded video image, wherein the displaying unit ...
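The attention-period logic (derive a period from the abnormality's start and end, then pull the matching recorded frames for playback) can be sketched as follows. The pre-roll padding, timestamps, and frame representation are illustrative assumptions, not details from the patent.

```python
# Toy sketch: determine an attention period from an abnormality's span and
# select the recorded frames to play back once detection ends.

def attention_period(abnormality_start, abnormality_end, pre_roll=5.0):
    """Attention period based on the period from the start to the end of the
    abnormality, padded with a short lead-in before detection started."""
    return (max(0.0, abnormality_start - pre_roll), abnormality_end)

def frames_to_play(recording, period):
    """Select recorded frames whose timestamps fall inside the period."""
    start, end = period
    return [frame for t, frame in recording if start <= t <= end]

recording = [(t, f"frame{t}") for t in range(0, 30, 5)]  # (timestamp, frame)
period = attention_period(abnormality_start=10, abnormality_end=20)
print(period, frames_to_play(recording, period))
```

Until the abnormality ends, the apparatus keeps showing the live feed; the playback above only starts at the end of detection, which is why the period is computed retrospectively.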

More